HUMAN + AI - Turning Strategy Into Action

Christa Swain • October 17, 2025

In today’s boardrooms, the conversation is no longer whether AI will reshape work - but how fast.

On 15th October, cross-functional business leaders gathered for the first event in the HUMAN + AI Series, a collaboration between Eden Smith and Corndel, designed to demystify AI strategy and help organisations move from intention to meaningful action.


This first session was a candid, insight-rich discussion about what it takes to build trust, drive adoption, and enable every part of an organisation to thrive with AI - not just the tech teams.


Why AI Success Starts with People

Erik Schwartz, Chief AI Officer at AI Expert, opened with a clear message:


“AI is only as strong as the leadership behind it.”

In a live poll of 27 leaders, most revealed they are still in the early stages of AI adoption. Many have experimented with tools like Copilot, but few have moved into structured implementation. Erik shared powerful case studies where targeted AI initiatives streamlined workflows and delivered measurable business impact.


His call to action was simple but potent:

  • Build leadership AI literacy early.
  • Start small but show results fast.
  • Use hackathons and prototype projects to turn theory into momentum.

“Put something tangible in front of your executives,” he urged. “AI adoption accelerates when people can see and feel the value.”

Embedding Data and AI into Organisational DNA

Helen Blaikie shared how Aston University overcame silos, data hoarding, and cultural resistance to create a mature data and AI strategy for 2030.


Key pillars of their success:

  • Leadership sponsorship and clear performance measures
  • A robust data governance framework
  • Organisation-wide upskilling (over 600 trained colleagues)
  • A relentless focus on trust and quality

By aligning data and AI initiatives directly with business objectives, the university didn’t just modernise - it transformed how decisions are made.


The Human Experience of AI

Helen Matthews tackled one of the most pressing realities: people’s fears and expectations around AI.


📊 65% of employees fear job loss.
📊 45% resist change.
📊 91% want responsible AI policies.


Matthews highlighted how starting with “why” is essential. AI strategy isn’t just about algorithms - it’s about trust, transparency, and storytelling. By mapping workforce capabilities, tailoring training, and leveraging early adopters, organisations can turn anxiety into agency.


She also outlined a practical maturity model: start with foundational awareness, tailor training to function, then continuously refine. A particularly resonant insight: use the apprenticeship levy to fund AI learning programmes - removing one of the biggest adoption barriers.


The Leadership Panel: Turning Insight into Impact

A dynamic panel session explored how leaders can practically navigate the intersection of people, talent, and technology.


Key insights:

  • Use AI tools to empower employees to self-assess skills and career paths.
  • Start with one well-defined pain point to build trust and credibility.
  • Involve frontline employees early to ensure solutions solve real business problems.
  • Encourage co-creation spaces and flexible policies to adapt fast.

The message was consistent: AI adoption is not a spectator sport. It’s a collective, cross-functional effort that demands experimentation, communication, and strong leadership.


Top Action Points for Leaders

  1. Build AI literacy at the top and cascade it down.
  2. Align AI strategy with business objectives - not the other way around.
  3. Start small, show value fast, then scale.
  4. Invest in data governance, trust, and culture.
  5. Equip people to experiment with AI tools and co-create solutions.
  6. Communicate, measure, celebrate - repeatedly.


This was just Part 1 of the HUMAN + AI Series. The conversations were raw, practical, and inspiring - setting the stage for the next event, where we’ll dive deeper into human capability building and AI readiness at scale.





Christa Swain • December 3, 2025

Executive Summary: AI, Ethics, and Human-Centred Design

Our recent Leaders Advisory Board event - designed in partnership with Corndel - featured three engaging sessions exploring how AI impacts human cognition, customer experience, and fairness. Here's what we learnt.


1. Think or Sink - Are We Using AI to Enhance or Reduce Cognitive Ability?

Speaker: Rosanne Werner, CEO at XcelerateIQ and former Transformation Lead at Coca-Cola

Rosanne opened the day with an interactive, thought-provoking session that firmly positioned AI:

“AI should be your sparring partner, not your substitute for thinking.”

Her research revealed a striking insight: 83% of people using LLMs couldn’t recall what they wrote, compared with just 11% using traditional search. The message? It’s not about avoiding AI, but using it in ways that strengthen thinking, not outsource it.

Rosanne explained how our brains form engrams - memory footprints that enable creativity and critical thinking. Over-reliance on AI risks weakening these pathways, reducing retention and problem-solving ability.

She introduced the Mind Over Machine Toolkit, six strategies for using AI as a thinking partner:

  • Provide Context First - frame the problem before asking AI.
  • Use AI as a Challenger - stress-test ideas and uncover blind spots.
  • Iterative Co-Creation - collaborate, refine, and evaluate.
  • Document Your Thinking - keep reasoning visible.
  • Reflective Prompts - support reflection rather than replace judgment.
  • Sparring Partner - test assumptions and explore risks.

Rosanne summed it up with a simple rule: use Sink for low-value, repetitive tasks, and Think for strategic, creative decisions.


2. Designing Chatbots with Human-Centred AI

Speaker: Sarah Schlobohm, Fractional Chief AI Officer

Sarah brought a practical perspective, drawing on experience implementing AI across sectors - from banking and cybersecurity to rail innovation.

She began with a relatable question: “Who’s been frustrated by a chatbot recently?” Almost every hand went up.

Through a real-world example (redacted out of politeness), Sarah illustrated how chatbots fail when designed with the wrong priorities. The chatbot in question optimised for deflection and containment, but lacked escape routes, sentiment detection, and escalation paths - turning a simple purchase into a multi-day ordeal.

“Don’t measure success by how well the chatbot performs for the bot - measure it by how well it performs for the human.”

Sarah introduced principles for better chatbot design:

  • Human-Centred Design - focus on user needs and emotional impact.
  • Systems Thinking - consider the entire process, not just chatbot metrics.
  • Escalation Triggers - negative sentiment, repeated failures, high-value intents.
  • Context Awareness - detect when a task moves from routine to complex and route accordingly.

The takeaway? Automation should remove friction from the whole system - not push it onto the customer.


3. Responsible AI and Bias in Large Language Models

Speaker: Sarah Wyer, Professional Development Expert in AI Ethics at Corndel

“When we create AI, we embed our values within it.”

Sarah shared her journey tackling gender bias in large language models, from GPT-2 through to GPT-5, and highlighted why responsible AI matters. AI systems reflect human choices: what data we use, how we define success, and who decides what is fair.

Real-world examples brought this to life: facial recognition systems failing to recognise darker skin tones, credit decisions disadvantaging women, and risk assessment tools perpetuating racial bias. Even today, LinkedIn engagement patterns show gender bias. Sarah made the point that simple actions - like testing prompts such as “Women can…” or “Men can…” - can reveal hidden disparities and spark vital conversations.

To address these issues, Sarah introduced the D.R.I.F.T framework, a practical guide for organisations:

  • D - Diversity: build diverse teams to challenge bias.
  • R - Representative Data: ensure datasets reflect all user groups.
  • I - Independent/Internal Audit: test outputs regularly.
  • F - Freedom: create a culture where employees can challenge AI decisions.
  • T - Transparency: share processes without exposing proprietary code.

Wrapping up the final session - before we opened the floor to panel questions and debate - Sarah stepped through the DRIFT framework to explore how we each address AI bias within our own organisations.


Shared Themes Across All Sessions

  • AI is powerful, but context matters.
  • Human oversight and ethical design are critical.
  • Use AI to augment thinking, not replace it.
  • Measure success by human outcomes, not just automation metrics.

We've had great feedback on this event series - especially on the quality of the speakers and the opportunity for meaningful conversation and debate across functions. There's definitely more in the events plan for 2026! If you'd like to be part of the conversation, please visit our LAB events page to register your interest.