The Iceberg of AI Adoption and What You Need to Address

Matt Smith • March 27, 2025

Demand for AI adoption is surging and the hype is real; AI is the shiny, exciting new product. But before you (or the board) go head-on into buying off-the-shelf products that promise the world, ask yourself this…  

“Are my data foundations in place to enable this AI advancement… or will the great polar bear (in this case your shiny AI project) sink?!” 

The demand for AI adoption is real: the global artificial intelligence market is projected to reach hundreds of billions of US dollars by the end of the year, and an eye-watering 92% of companies plan to increase their AI investments over the next three years. 

The EU AI Act 

The European Union’s Artificial Intelligence Act (EU AI Act), which entered into force on 1 August 2024 and has a two-year implementation period across the member states, represents a pioneering effort to regulate AI technologies, ensuring they are safe and uphold fundamental rights.  

The EU AI Act sets a comprehensive framework for AI regulation, emphasizing a risk-based approach, transparency, and robust governance structures. As AI technologies evolve, the Act is expected to adapt, reflecting ongoing dialogues among policymakers, industry leaders, and civil society. 

Concerns have recently been raised that potential amendments to the Act could water down the rules by making certain provisions voluntary and exempting major tech companies from specific regulations. These debates underscore the dynamic nature of AI governance and the balance between fostering innovation and ensuring safety. 

Surge in specialised hiring as AI Adoption takes root

In 2025, the AI and data hiring landscape will see a continued surge in demand for skilled professionals, with a focus on AI leadership, ethical AI development, and specialised roles like Generative AI Engineers and Computer Vision Engineers, alongside a rise in AI-powered automation and data-driven decision-making. 

Foundational ecosystem to leverage successful AI adoption 

AI adoption across the enterprise looks nailed on, but are your data foundations in place to train genuinely accurate models and leverage the true power of AI? 

These foundations work synergistically to create a strong ecosystem for successful AI adoption. Here’s how: 

  1. Data Strategy & Single Source of Truth: The key to it all. A clear data strategy ensures that relevant data is collected and used effectively. A single source of truth eliminates duplication and inconsistencies, enabling the training of accurate, reliable AI models. 
  2. Data Governance & Quality: The “un-sexy” yet vital part. High-quality, well-governed data ensures that AI systems are accurate and free from bias. Governance policies establish trust by complying with legal and ethical standards, which is key to AI’s acceptance and success. A minimal sketch of what a simple quality gate can look like follows this list. 
  3. Infrastructure: Modern data infrastructure facilitates the storage and processing of massive datasets at high speed, which is essential for training and deploying AI models efficiently. 
  4. FAIR Principles: By making data findable, accessible, interoperable, and reusable, businesses can streamline data preparation for AI. These principles also enhance collaboration and innovation. 
  5. Skilled Personnel: Data and AI specialists interpret the data, develop models, and refine them over time. They bridge the gap between technical possibilities and business objectives. 
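To make the data governance and quality point concrete, here is a minimal sketch of the kind of automated quality gate that could sit in front of model training. It assumes Python with pandas, and the file and column names (customer_records.csv, customer_id, last_updated) are entirely hypothetical; treat it as an illustration of the idea rather than a prescribed implementation.

```python
import pandas as pd

# Hypothetical "single source of truth" extract; file and column names are illustrative.
df = pd.read_csv("customer_records.csv", parse_dates=["last_updated"])

# Duplication check: one record per real-world entity.
duplicate_ids = df.duplicated(subset=["customer_id"]).sum()

# Completeness check: share of missing values in each column.
missing_share = df.isna().mean().sort_values(ascending=False)

# Freshness check: share of records not updated in over a year.
stale_share = (pd.Timestamp.now() - df["last_updated"]).dt.days.gt(365).mean()

print(f"Duplicate customer_ids: {duplicate_ids}")
print(f"Worst column for missing values: {missing_share.index[0]} ({missing_share.iloc[0]:.1%})")
print(f"Records older than a year: {stale_share:.1%}")

# Simple governance rules: block model training until the basics are met.
assert duplicate_ids == 0, "Resolve duplicate records before training"
assert missing_share.max() < 0.05, "Too many missing values to train reliable models"
```

The thresholds are arbitrary; the point is that duplication, completeness and freshness are checked, and recorded, before any model ever sees the data.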

When combined, these foundations ensure that the AI ecosystem is reliable, scalable, and adaptable, enabling businesses to extract actionable insights and drive innovation. It’s like building a house – you need a strong foundation before you can construct something stable and impactful.  

We at Eden Smith Group get it, and we can support you through our consulting and staffing services throughout the adoption cycle. Please contact me, Matt Smith, or one of the team to discuss how we can support your business goals. 

BEFORE you go…

How Ready Is Your Organisation for AI? Take 10 Minutes to Find Out. 

We’ve launched the AI Readiness Survey to uncover where businesses really stand on their AI journey – and how they stack up against industry peers. 

This quick, powerful survey evaluates readiness across five critical areas: 

✅Organisation 
✅AI Risk Management 
✅Operations & Performance 
✅Data 
✅Evaluation 

Whether or not you’re directly responsible for AI, your input matters. We’re calling on business leaders across all industries to help paint a clear picture of AI maturity in the real world. 

By taking part, you’ll get access to:  

A snapshot of where you sit on the AI Readiness Scale 
A tailored deep dive into your results with our experts 
Early access to our AI Readiness White Paper & Benchmarking Insights 

Take the survey now: https://www.surveymonkey.com/r/ES_AIR  

Christa Swain • December 3, 2025
Executive Summary: AI, Ethics, and Human-Centred Design

Our recent Leaders Advisory Board event, designed in partnership with Corndel, featured three engaging sessions that explored how AI impacts human cognition, customer experience, and fairness. Here's what we learnt:

1. Think or Sink – Are We Using AI to Enhance or Reduce Cognitive Ability?

Speaker: Rosanne Werner, CEO at XcelerateIQ & ex Transformation Lead at Coca-Cola

Rosanne opened the day with an interactive and thought-provoking session, firmly positioning AI: “AI should be your sparring partner, not your substitute for thinking.”

Her research revealed a striking insight: 83% of people using LLMs couldn’t recall what they wrote, compared to just 11% using traditional search. The message? It’s not about avoiding AI, but using it in ways that strengthen thinking, not outsource it.

Rosanne explained how our brains form engrams, memory footprints that enable creativity and critical thinking. Over-reliance on AI risks weakening these pathways, reducing retention and problem-solving ability. She introduced the Mind Over Machine Toolkit, six strategies to use AI as a thinking partner:

Provide Context First – Frame the problem before asking AI.
Use AI as a Challenger – Stress-test ideas and uncover blind spots.
Iterative Co-Creation – Collaborate, refine, and evaluate.
Document Your Thinking – Keep reasoning visible.
Reflective Prompts – Support reflection, not replace judgment.
Sparring Partner – Test assumptions and explore risks.

Rosanne summed it up with a simple rule: use Sink for low-value, repetitive tasks, and Think for strategic, creative decisions.

2. Designing Chatbots with Human-Centred AI

Speaker: Sarah Schlobohm, Fractional Chief AI Officer

Sarah brought a practical perspective, drawing on experience implementing AI across sectors, from banking and cybersecurity to rail innovation. She began with a relatable question: “Who’s been frustrated by a chatbot recently?” Almost every hand went up.

Through a real-world example (redacted out of politeness), Sarah illustrated how chatbots can fail when designed with the wrong priorities. The chatbot optimised for deflection and containment, but lacked escape routes, sentiment detection, and escalation paths, turning a simple purchase into a multi-day ordeal.

“Don’t measure success by how well the chatbot performs for the bot; measure it by how well it performs for the human.”

Sarah introduced principles for better chatbot design:

Human-Centred Design – Focus on user needs and emotional impact.
Systems Thinking – Consider the entire process, not just chatbot metrics.
Escalation Triggers – Negative sentiment, repeated failures, high-value intents.
Context Awareness – Detect when a task moves from routine to complex and route accordingly.

The takeaway? Automation should remove friction from the whole system, not push it onto the customer.

3. Responsible AI and Bias in Large Language Models

Speaker: Sarah Wyer, Professional Development Expert in AI Ethics at Corndel

“When we create AI, we embed our values within it.”

Sarah shared her journey tackling gender bias in large language models, from GPT-2 through to GPT-5, and highlighted why responsible AI matters. AI systems reflect human choices: what data we use, how we define success, and who decides what is fair. Real-world examples brought this to life: facial recognition systems failing to recognise darker skin tones, credit decisions disadvantaging women, and risk assessment tools perpetuating racial bias. Even today, LinkedIn engagement patterns show gender bias!

Sarah made the point that simple actions, like testing prompts such as “Women can…” or “Men can…”, can reveal hidden disparities and spark vital conversations.
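As a rough illustration of that kind of probe, the sketch below samples completions for the two prompts and prints them side by side. It assumes the Hugging Face transformers library and the openly available GPT-2 checkpoint (the model generation where Sarah's research began), not any specific tooling used in her work.

```python
from transformers import pipeline, set_seed

# Small, openly available model used purely for illustration.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled completions reproducible

for prompt in ["Women can", "Men can"]:
    completions = generator(
        prompt,
        max_new_tokens=15,
        do_sample=True,
        num_return_sequences=5,
    )
    print(f"\nPrompt: {prompt!r}")
    for completion in completions:
        print(" -", completion["generated_text"])
```

Reading the two sets of completions side by side is often enough to surface skewed associations and start the conversation; a more rigorous audit would score many prompt pairs and demographic groups systematically.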
To address these issues, Sarah introduced the D.R.I.F.T framework, a practical guide for organisations:

D – Diversity: Build diverse teams to challenge bias.
R – Representative Data: Ensure datasets reflect all user groups.
I – Independent/Internal Audit: Test outputs regularly.
F – Freedom: Create a culture where employees can challenge AI decisions.
T – Transparency: Share processes without exposing proprietary code.

Wrapping up the final session, before we opened the floor to panel questions and debate, Sarah created the opportunity to discuss how we address AI bias within our organisations by stepping through the DRIFT framework.

Shared Themes Across All Sessions

AI is powerful, but context matters.
Human oversight and ethical design are critical.
Use AI to augment thinking, not replace it.
Measure success by human outcomes, not just automation metrics.

We've had such great feedback from this event series, especially around the quality of speakers and the opportunity to have meaningful conversation and debate outside of functions. Definitely more in the events plan for 2026! If you'd like to be part of the conversation, please navigate to our LAB events page to register your interest.