The Psychology of AI Adoption

Eden Smith • December 3, 2025

Why Teams Resist and How Leaders Can Drive Real Buy-In 

AI transformation isn’t failing because of technology.


It’s failing because of foundational maturity and people.


Across industries, organisations are investing heavily in AI tools, automation, and data-driven decision-making, yet many of these projects struggle to scale or gain traction. 


If we put governance to one side for a moment, the common thread isn’t a lack of capability, budget, or ambition; it’s resistance, uncertainty, and behavioural friction inside teams. 


To move from experimentation to real impact, organisations must treat AI adoption as a human change journey, not a technical upgrade. Understanding the psychology behind resistance is the first step. Leading for trust, confidence, and clarity is the second. 


Why Teams Resist AI 


Resistance to AI rarely stems from a dislike of technology. It comes from deep-rooted psychological triggers, many of which are completely natural. 


1. Fear of Job Loss or Skill Redundancy 

AI triggers anxiety about being replaced, automated, or left behind. Even when leaders insist AI is there to "enhance, not replace," employees may still interpret new tools as a threat, especially if communication is unclear or inconsistent. 


2. Loss Aversion 

People are more motivated to avoid losses than to gain equivalent benefits. AI often feels like a loss of control, autonomy, or expertise, which makes even minor changes feel disproportionately risky. 


3. Overwhelm and Cognitive Load 

AI can feel abstract, complex, and inaccessible. When employees don’t understand how it works or how it helps their role, their natural response is avoidance. 


4. Identity and Professional Pride 

For many people, their value comes from experience and intuition.
AI challenges that by introducing automation and data-driven insights that appear to "know better." This can create subtle defensiveness. 


5. Habit and Comfort with the Familiar 

Behavioural science shows that humans default to what feels familiar, even when it’s inefficient. Old processes feel safe because they’re predictable. 


These barriers are not signs of poor culture or resistance to innovation. They’re psychological, not personal. And that means leaders need to adopt strategies rooted in behavioural change, not technical adoption. 

 

Turning Resistance into Confidence and Engagement 


Successful AI adoption isn’t driven by technology teams; it’s driven by leadership. 


1. Start With Purpose, Not Platforms 

Teams need to understand why AI matters, not just what it does.
Leaders should frame AI in terms of problems solved, stress reduced, service improved, and opportunities created. 

When people see the purpose, they see the value. 


2. Remove Fear Through Transparency 

The more ambiguous AI feels, the more threatening it becomes.
Be explicit about:
• which tasks will change
• how roles will evolve
• what new skills will matter
• how employees will be supported 

Clarity shrinks fear. 


3. Involve Employees Early 

AI adoption fails when people feel it’s being "done to them."
It succeeds when employees help design, test, and improve solutions. 

Early involvement increases ownership and reduces anxiety. 


4. Build AI Literacy for Everyone 

AI is no longer a specialist skill.
Workforces need essential AI literacy:
• what AI can and can’t do
• how to question outputs
• how to apply it ethically
• how it affects workflows 

This boosts confidence and builds trust. 


5. Show Quick, Human-Centred Wins 

Behavioural science tells us that people change when they see fast, meaningful impact.
Small, practical use cases, not grand 5-year blueprints, create momentum. 


6. Celebrate Human + AI Collaboration, Not Competition 

Shift the narrative away from replacement.
Highlight cases where AI enhances expertise, reduces admin, improves wellbeing, or elevates decision-making. 

Culture follows the stories leaders choose to amplify. 

 

Creating a Culture Where AI Becomes a Natural Part of Work 


Lasting AI adoption is cultural, not technical. It emerges when: 

• employees trust the purpose
• leaders communicate clearly and consistently
• the organisation builds literacy, not fear
• people experience real improvements in their work
• learning becomes more important than perfection 


The future belongs to organisations that understand AI is as much about human behaviour as it is about algorithms. 


Teams don’t resist AI because they don’t care.


They resist because they haven’t been shown how AI cares for them.


Leaders who bridge that gap, with empathy, transparency, and inclusion, will unlock the full power of AI transformation. 


👉 Get in touch to explore how we can support your AI transformation journey. 
Let’s build a future where people and AI work better together. 

