The Hidden Factor Behind Successful AI Adoption

Eden Smith • September 2, 2025

The Role of Data-Literate Boards

Artificial intelligence is reshaping industries, driving efficiencies, and unlocking new opportunities for growth. Yet for all the excitement, many AI projects fail to move beyond pilots or struggle to deliver real business value. Why? More often than not, it's not the technology holding organisations back; it's the lack of data literacy at the leadership level.


When boards and senior executives don’t fully understand the fundamentals of data, its limitations, or its potential, they can’t make the informed decisions needed to scale AI responsibly and effectively. Data literacy at the top table is the hidden factor that often determines whether AI adoption becomes a success story or a costly misstep.


Why Leadership Buy-In Is the Starting Point


For AI initiatives to succeed, they must be backed by strong leadership. Boards set the vision, allocate resources, and define the metrics of success. Without their buy-in, projects risk being siloed in IT departments, underfunded, or disconnected from the broader business strategy.


Data literacy equips leaders with the ability to ask the right questions:

  • What data do we actually have, and is it reliable?
  • How do biases in the data or models impact outcomes? (See the sketch after this list.)
  • What risks and ethical considerations need to be addressed?
  • How do we measure ROI beyond short-term cost savings?
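
To make the bias question concrete, here is a minimal, hypothetical sketch in Python of the kind of check a board might ask its teams to run: comparing a model's approval rates across groups. The function, group names, and toy records are illustrative assumptions, not drawn from any real system.

    from collections import defaultdict

    def approval_rates_by_group(decisions):
        """Share of positive model decisions per group.

        `decisions` is an iterable of (group, approved) pairs.
        """
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        return {g: approvals[g] / totals[g] for g in totals}

    # Toy records, fabricated purely for illustration.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    for group, rate in sorted(approval_rates_by_group(decisions).items()):
        print(f"{group}: {rate:.0%} of decisions approved")
    # group_a: 67% of decisions approved
    # group_b: 33% of decisions approved

A gap like this doesn't prove bias by itself, but it is exactly the kind of disparity a data-literate board should expect its teams to surface and explain.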

When leaders can engage with these questions confidently, AI projects are more likely to be aligned with business goals, better governed, and more impactful. Importantly, data-literate boards can also communicate AI’s purpose and progress more clearly to investors, regulators, and employees, creating trust and transparency.


Bridging the Gap Across Departments


Successful AI adoption isn’t just about boardrooms and data science teams. It requires cross-departmental understanding and collaboration. Finance teams must interpret AI-driven forecasts. HR leaders need to assess AI’s role in workforce planning. Operations teams have to trust predictive models for supply chain or safety decisions.


This is where data literacy becomes an organisational competency, not just an executive one. By fostering a shared language around data and AI, organisations can avoid the common pitfalls of misalignment and mistrust. Employees who understand the basics of how AI works, and how it applies to their roles, are more likely to adopt it confidently, rather than resist it out of fear or uncertainty.


Creating this culture of literacy requires investment in training, but also in accessible tools and processes. Dashboards, visualisations, and natural language interfaces can make data insights easier for non-technical stakeholders to grasp, empowering everyone to participate in AI-driven decision-making.
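
As a small illustration of that last point, a "natural language" layer can be as simple as turning two numbers into a sentence. The sketch below is hypothetical; the function name, metric, and figures are invented for illustration.

    def summarise_kpi(name, current, previous):
        """Turn a metric's movement into a plain-English sentence."""
        if previous == 0:
            return f"{name} is {current:,.0f}; there is no prior period to compare."
        change = (current - previous) / previous
        direction = "up" if change >= 0 else "down"
        return (f"{name} is {direction} {abs(change):.1%} on last period "
                f"({previous:,.0f} -> {current:,.0f}).")

    print(summarise_kpi("Monthly active users", 41_200, 38_750))
    # Monthly active users is up 6.3% on last period (38,750 -> 41,200).

Production tools add retrieval, charting, and guardrails on top of this idea, but the principle is the same: meet non-technical stakeholders in their own language.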


Data Literacy as a Strategic Advantage


As AI adoption accelerates, the competitive edge won’t come from technology alone. Cloud platforms, algorithms, and machine learning frameworks are increasingly commoditised. The real differentiator will be how effectively organisations can embed AI into their strategy and operations, and that starts with data-literate leadership.


Boards that embrace data literacy don't just understand the risks and opportunities of AI; they model a culture of curiosity, accountability, and informed decision-making that cascades through the business. They are better positioned to balance innovation with governance, to spot new opportunities early, and to build trust with stakeholders inside and outside the organisation.

In this sense, data literacy at the board table isn't just a hidden factor behind AI adoption; it's a fundamental requirement for sustainable, long-term success in the data-driven economy.

