Ethics As People, Not Compliance

Ellie Dickinson • March 5, 2026



Why Ethical AI Keeps Failing in Organisations 


As AI is adopted by more organisations, it has become clear that guardrails are needed for its implementation… Welcome, ethical AI. 


Ethical AI is the practice of designing, deploying and governing AI systems in ways that are fair, accountable, transparent, and aligned with human values. It is typically implemented through technical safeguards and internal policies, like the document HR sent you about using ChatGPT that you skimmed. 


Ethical AI matters because it prevents the perpetuation of bias and helps ensure that AI systems are fair. But here is what organisations often miss: ethical AI should be embedded into an organisation, not added on as a tick-box exercise. Failing to embed it creates the familiar pattern of distrust in AI systems. 


The Compliance Trap 


When an organisation attempts to implement ethical AI, it often falls into ‘The Compliance Trap’. This happens when ethical AI is not embedded into the workflow but is instead bolted on as an afterthought, or purely for compliance, so genuine transformation cannot occur. 


But why do we keep seeing this happen? Organisations fall into the compliance trap because the initial implementation of ethical AI creates a false sense of control. Policies are written, frameworks are approved, and sign-offs are completed; everyone settles into a comfortable routine again. But nothing really changes. 


Compliance doesn’t change how: 

  • Teams make decisions, 
  • Trade-offs are prioritised, or 
  • Pressure is handled in delivery 


This results in ethical AI being something to get through, instead of something to work with. 

 

Ethical AI as a change problem 



So, if ethical AI keeps failing when it is treated as a compliance issue, that is because it is really a change problem. 

Implementing AI reshapes the way people in the organisation view authority, accountability and trust. This should be expected when changing the fundamentals of a workplace. But it is important to remember that this is a human and organisational change, not a technical one, and that the solution lies with the people. 


These failures are more likely to occur when organisations: 

  • don’t prepare people for these shifts, 
  • don’t redesign decision-making, and 
  • don’t align incentives with responsible behaviour. 



 

Where Ethical AI breaks down during transformation 


So what is it about people and organisations that makes this fail? 


Firstly, there is often a distance from leadership, which delegates ethics downwards and creates a culture where ethics is perceived as less important. Leaders approve an AI strategy but then fail to model ethical decision-making, so when ethics becomes someone else’s responsibility, there are no frameworks or guidelines to work from, and it gets passed over. 


Next, there is the issue of delivery pressure. Teams are rewarded for the scale and speed of a project, whether that reward is commission, promotions or just praise, so taking things slower often puts teams at a disadvantage. But ethics thrives when time is taken to slow things down. 


As with many other aspects of AI adoption, there are capability gaps when it comes to ethical AI. People are expected to do the right thing, especially where ethics is concerned; however, little more than an AI policy is provided as training, leaving gaps in what people are prepared to handle. 


Finally, and possibly one of the biggest factors, is late engagement: ethical considerations are discussed only after key decisions have already been made, leaving little to no room to adapt or account for them. This not only means that ethics isn’t considered on that project; it also gives leeway for ethics to be sidelined on other projects as well. 

 

Ethics as a transformation capability 


Looking forward, ethics must be treated as a transformation capability, meaning the workflow itself must be transformed for ethical AI to succeed. 


Ethical AI works when it is treated as: 

  • A leadership capability: ethics shapes how problems are defined. 
  • A delivery capability: ethical reflection is built into delivery systems. 
  • A cultural norm: everyone shares an understanding of how ethical AI should work and feels able to challenge AI-driven decisions. 


Making this transformation changes ethical AI from an external control into just the way things are done. 

 

Practical checklist


Leadership and direction 
  • Leaders discuss the ethical trade-offs in AI decisions 
  • Accountability for AI outcomes is clear and owned 
  • Ethics is framed as value creation 

Design and delivery 
  • Ethical considerations are included from the beginning 
  • Assumptions about users, data and ‘normal’ behaviour are challenged 

Ways of working 
  • Teams are empowered to question AI outputs 
  • Escalation paths can be used without penalty 

Culture and capability 
  • Training is given on role-specific ethical risks 
  • Challenge is rewarded 
  • Ethics is discussed in real scenarios 

 

What changes when ethics is treated as change 


When these changes have been made and the checklist has been completed, ethical AI will allow organisations to thrive. People will have greater trust in AI-driven processes and decisions, leading to greater adoption because they now believe in and understand the systems. 


When more people are comfortable using ethical AI, it reduces reputational risk and the need to rework or reverse previous decisions. This turns ethics from a blocker into an enabler of transformation. 



 

Want to learn more? Join our Leaders Advisory Board here, or explore our other articles.

