In an AI World, People Still Make the Difference

Jez Clark • June 25, 2025

The Rise of AI and a Call to Action

In my last article I wrote about how our own human intelligence, coupled with power skills, can allow us to complement emerging technology and get the most from it. 


However, the more I read and observe how AI is being used across industries, the more I feel compelled to issue both a warning and a call to action. Now, more than ever, we all - individuals and organisations alike - have a duty to nurture the next generation of talent, consultants, and digital professionals. 


Redefining Entry into Industry 


We must create meaningful pathways into industry and invest in training that develops not only technical expertise but also the human skills that AI cannot replicate. 

As we transition, resilience and adaptability are key skills to look for and nurture in your talent. At the same time, it is the responsibility of our leaders to reduce fear and to take the lead in reshaping - and communicating - the opportunities AI creates for early-career professionals.


Consulting in a Time of Volatility and Automation 


This year has seen continued volatility across sectors, with consulting firms being asked to do more with less - deliver innovation, improve efficiency, and solve complex problems within tighter budgets. 

Clients are demanding tailored, high-impact solutions, often rooted in data, AI, and advanced analytics. 


Using AI as a Tool, Not a Crutch 


Many companies like ours have embraced AI as both a tool and a catalyst. We've invested in capabilities that help us drive smarter decision-making, automate routine processes, and unlock new business models for our clients. 

But while AI has undoubtedly made us more efficient, it has also forced us to rethink the role of human insight and emotional intelligence. 


The Risk of Losing Core Human Skills 


If we’re not careful, we risk losing core human skills such as critical thinking, creativity, and interpersonal communication. 

 
AI is not capable, in my opinion, of understanding people, culture, and the impact of change - factors that ultimately pave the way through complex transformation. 


Meeting New Customer Expectations in the AI Era 


As a small consultancy, we are seeing a very different set of expectations from our customers - driven by the rise of generative AI and automation, and with it, a rise in the “art of the possible.” 


Human-Centric AI: Making It Make Sense 


So, what’s the answer? 
In my view, we must focus on higher-impact, human-centric AI solutions. That means helping our customers and communities of people to Make AI Make Sense.
We are focusing on ethics, efficiency, and empathy in the teams we engage with. 


Challenging Tech for Tech’s Sake 


We’re trying to have conversations where we question tech for tech’s sake.
Let’s make it meaningful - and frame all conversations through the lens of transformation and the impact it will have on people. 


Leading with Understanding, Not Just Technology 


By leading with understanding - not just technology - we can highlight case studies where success hinges on human connection rather than algorithms. This is best delivered through workshops that emphasise soft skills as essential to achieving outcomes, with technology supporting the process rather than driving it. 


Putting People First: The True Competitive Advantage 


As a nimble business with PEOPLE at our core, we’re committed to supporting the next generation of talent and contributing to the wider conversation. We believe this approach will help cut through the noise and ground AI’s role in real-world impact. 


The Real Lesson: People Still Make the Difference 


What I’ve personally learned over the last six months is this: no matter how much AI automation we implement, it’s not what sets us apart in the data & AI space.
Our success still lies in our ability to put people first, build relationships, and get to the heart of challenges - online or, even better, face to face. 


A Collective Focus on Community and Connection 


That’s where we will continue to focus our time and efforts for now. Hopefully many of you will agree, and we can all support each other as we continue to build great communities that share challenges and successes in a safe environment. 


Christa Swain • December 3, 2025

Executive Summary: AI, Ethics, and Human-Centred Design

Our recent Leaders Advisory Board event - designed in partnership with Corndel - featured three engaging sessions that explored how AI impacts human cognition, customer experience, and fairness. Here's what we learnt.

1. Think or Sink – Are We Using AI to Enhance or Reduce Cognitive Ability?

Speaker: Rosanne Werner, CEO at XcelerateIQ & ex Transformation Lead at Coca-Cola

Rosanne opened the day with an interactive and thought-provoking session, firmly positioning AI: “AI should be your sparring partner, not your substitute for thinking.”

Her research revealed a striking insight: 83% of people using LLMs couldn’t recall what they wrote, compared to just 11% using traditional search. The message? It’s not about avoiding AI, but using it in ways that strengthen thinking, not outsource it.

Rosanne explained how our brains form engrams - memory footprints that enable creativity and critical thinking. Over-reliance on AI risks weakening these pathways, reducing retention and problem-solving ability. She introduced the Mind Over Machine Toolkit, six strategies to use AI as a thinking partner:

- Provide Context First – Frame the problem before asking AI.
- Use AI as a Challenger – Stress-test ideas and uncover blind spots.
- Iterative Co-Creation – Collaborate, refine, and evaluate.
- Document Your Thinking – Keep reasoning visible.
- Reflective Prompts – Support reflection, not replace judgment.
- Sparring Partner – Test assumptions and explore risks.

Rosanne summed it up with a simple rule: use Sink for low-value, repetitive tasks, and Think for strategic, creative decisions.

2. Designing Chatbots with Human-Centred AI

Speaker: Sarah Schlobohm, Fractional Chief AI Officer

Sarah brought a practical perspective, drawing on experience implementing AI across sectors - from banking and cybersecurity to rail innovation. She began with a relatable question: “Who’s been frustrated by a chatbot recently?” Almost every hand went up.

Through a real-world example (redacted out of politeness), Sarah illustrated how chatbots can fail when designed with the wrong priorities. The chatbot optimised for deflection and containment, but lacked escape routes, sentiment detection, and escalation paths - turning a simple purchase into a multi-day ordeal.

“Don’t measure success by how well the chatbot performs for the bot - measure it by how well it performs for the human.”

Sarah introduced principles for better chatbot design:

- Human-Centred Design – Focus on user needs and emotional impact.
- Systems Thinking – Consider the entire process, not just chatbot metrics.
- Escalation Triggers – Negative sentiment, repeated failures, high-value intents.
- Context Awareness – Detect when a task moves from routine to complex and route accordingly.

The takeaway? Automation should remove friction from the whole system - not push it onto the customer.
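To make those escalation triggers concrete, here is a minimal sketch of what the routing decision could look like in code. It is an illustration only: the Conversation record, the thresholds, and the HIGH_VALUE_INTENTS list are all invented for the example, not anything Sarah presented.

```python
# Illustrative sketch: hand-off logic based on the escalation triggers above
# (negative sentiment, repeated failures, high-value intents).
# All names and thresholds are assumptions made for this example.
from dataclasses import dataclass, field

# Hypothetical intents too valuable or complex to leave with the bot.
HIGH_VALUE_INTENTS = {"cancel_contract", "complaint", "refund_over_limit"}

@dataclass
class Conversation:
    failed_turns: int = 0                                   # bot replies the user rejected or repeated
    sentiment_scores: list = field(default_factory=list)    # -1.0 (negative) to 1.0 (positive)
    intents: list = field(default_factory=list)             # detected intents, most recent last

def should_escalate(convo: Conversation,
                    max_failures: int = 2,
                    sentiment_floor: float = -0.3) -> bool:
    """Route to a human when any escalation trigger fires."""
    if convo.failed_turns >= max_failures:
        return True                                          # repeated failures
    if convo.sentiment_scores and min(convo.sentiment_scores) < sentiment_floor:
        return True                                          # clearly negative sentiment
    if any(intent in HIGH_VALUE_INTENTS for intent in convo.intents):
        return True                                          # high-value or complex request
    return False

# Example: two failed turns and growing frustration -> hand off to a person.
print(should_escalate(Conversation(failed_turns=2,
                                   sentiment_scores=[0.1, -0.5],
                                   intents=["track_order"])))  # True
```

The point of a sketch like this mirrors the principle Sarah closed on: judge the hand-off from the human’s side of the conversation, not the bot’s containment metrics.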
3. Responsible AI and Bias in Large Language Models

Speaker: Sarah Wyer, Professional Development Expert in AI Ethics at Corndel

“When we create AI, we embed our values within it.”

Sarah shared her journey tackling gender bias in large language models, from GPT-2 through to GPT-5, and highlighted why responsible AI matters. AI systems reflect human choices - what data we use, how we define success, and who decides what is fair.

Real-world examples brought this to life: facial recognition systems failing to recognise darker skin tones, credit decisions disadvantaging women, and risk assessment tools perpetuating racial bias. Even today, LinkedIn engagement patterns show gender bias! Sarah made the point that simple actions - like testing prompts such as “Women can…” or “Men can…” - can reveal hidden disparities and spark vital conversations.
To address these issues, Sarah introduced the D.R.I.F.T framework, a practical guide for organisations:

- D – Diversity: Build diverse teams to challenge bias.
- R – Representative Data: Ensure datasets reflect all user groups.
- I – Independent/Internal Audit: Test outputs regularly.
- F – Freedom: Create a culture where employees can challenge AI decisions.
- T – Transparency: Share processes without exposing proprietary code.

Wrapping up the final session - before we opened the floor to panel questions and debate - Sarah created the opportunity to discuss how we address AI bias within our organisations by stepping through the DRIFT framework.

Shared Themes Across All Sessions

- AI is powerful, but context matters.
- Human oversight and ethical design are critical.
- Use AI to augment thinking, not replace it.
- Measure success by human outcomes, not just automation metrics.

We've had such great feedback from this event series - especially around the quality of speakers and the opportunity to have meaningful conversation and debate outside of functions. Definitely more in the events plan for 2026! If you'd like to be part of the conversation, please navigate to our LAB events page to register your interest.