Why Big Data LDN Mattered

September 29, 2025

Big Data LDN 2025: Key Talks, Top Takeaways & Next Steps for Data & AI Leaders


Big Data LDN celebrated its 10th edition on 24–25 September 2025 at Olympia London, attracting thousands of data and AI professionals. Across 300+ speakers and 16 theatres, the show spotlighted how organisations are moving from analytics to intelligent action.

The headline topics were:

  • Agentic AI and autonomous data workflows
  • Unified data platforms and data products
  • Governance-by-design and ethical AI
  • Real-time decisioning, observability and AI safety

“AI-powered is now everywhere… this year, ‘agentic’ had obviously joined most offerings.”

Keynotes that Set the Agenda

Google Cloud: The Agentic Era

Google Cloud explained how data science is shifting to autonomous agents that reason and act across enterprises. Their call to action: design for unified, AI-native platforms because silos can’t sustain intelligent agents.

What you can do: Identify five workflows where agents could boost productivity (e.g., pipeline remediation, cost governance) and create an agent safety spec with approvals and audits.
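As a starting point, an agent safety spec can be captured as structured configuration in code. The sketch below is illustrative only - the field names, the `pipeline-remediator` agent, and its tools are assumptions, not something presented at the show:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSafetySpec:
    """Illustrative safety spec for an autonomous data agent."""
    agent_name: str
    allowed_tools: list[str]               # tools the agent may invoke
    requires_approval: list[str]           # actions gated behind human sign-off
    audit_log_path: str = "audit/agent_events.jsonl"
    max_actions_per_run: int = 20          # hard stop against runaway loops

    def needs_human_approval(self, action: str) -> bool:
        return action in self.requires_approval

# Example: a pipeline-remediation agent that may restart jobs freely
# but must get human sign-off before destructive operations.
spec = AgentSafetySpec(
    agent_name="pipeline-remediator",
    allowed_tools=["restart_job", "backfill", "delete_partition"],
    requires_approval=["delete_partition"],
)
```

Keeping the spec as a plain dataclass makes it easy to version-control alongside the agent and to check in CI that every destructive tool appears in `requires_approval`.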


Professor Brian Cox: The Universe as a Quantum Computer

Professor Brian Cox reminded the audience that complexity, measurement, and physical limits shape how we compute at scale - a timely warning as AI ambitions grow.


Women in Data: Ethics at the Core

Sessions from Women in Data highlighted bias, harm modelling, and real-world impact. For many, this signalled a shift: ethics is now a core product requirement, not a compliance afterthought.


“This year’s talk… pulls together purpose, adaptive mindsets, and effective governance… speed without proper foundations is a liability.”  Jovita Tam


Key Insights from Jez Clark: Culture & Talent Strategy for Business Impact

Jez Clark’s session on Culture and Talent Strategy underscored that people and mindset remain the foundation for every AI and data initiative. Drawing on his long experience at the heart of the data industry, Jez highlighted three key messages:

  1. Culture is the hidden architecture of innovation
    Without a shared purpose and psychological safety, even the best technology cannot deliver sustainable impact.
  2. Talent strategy drives measurable business outcomes
    Jez encouraged leaders to align data team structures and growth pathways directly with business KPIs, ensuring investment in skills maps to value creation.
  3. Leaders must enable continuous learning and adaptive teams
    Building learning loops - regular review, reflection, and re-skilling - keeps teams resilient in fast-changing AI environments.
“People create the conditions for technology to succeed. Culture and talent are not side projects - they are the strategy.” Jez Clark

For a deeper dive, you can read Jez's full presentation here.

Jez at Big Data LDN 2025 - preparing to talk on Talent Strategy for Business Impact

Six Takeaways for Data & AI Professionals

  1. Build for Agentic AI
    Unify operational and analytical data, embed memory and tool-use interfaces, and implement observability and rollback from day one.
  2. Governance by Design
    Embed policies into data contracts and CI/CD, automate lineage-based controls, and generate continuous compliance evidence.
  3. Trust as a Product Requirement
    Model harm to users and non-users, publish transparent impact assessments, and keep ethics at the heart of every AI initiative.
  4. Ruthless Data Product Management
    Track business value, deprecate underperforming products, and encourage reuse via a semantic data marketplace.
  5. Next-Level Observability
    Monitor prompts, tool calls, cost, and guardrail events for both data pipelines and AI agents.
  6. Real-Time Decision Automation
    Combine streaming data, rules, and ML for closed-loop operations, with “pause/confirm” safeguards before full autonomy.


Quick 90‑Day Action Plan

  • Days 0–30: Create an Agent Safety Spec, audit your top 10 data products, and integrate PII checks into CI/CD.
  • Days 31–60: Launch two pilot agents (e.g., pipeline remediation and BI Q&A) and implement full observability.
  • Days 61–90: Promote a successful pilot to production, roll out a data product marketplace, and publish an AI Transparency Note.
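The “PII checks into CI/CD” item from days 0–30 can begin as a simple scan step. The sketch below uses hand-rolled regexes purely for illustration - a real deployment would use a vetted PII-detection library or service, and the pattern names here are assumptions:

```python
import re
import sys

# Illustrative PII patterns only; real checks need a vetted library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
}

def scan_for_pii(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in `text`."""
    hits = []
    for name, pattern in PII_PATTERNS.items():
        hits += [(name, m) for m in pattern.findall(text)]
    return hits

if __name__ == "__main__":
    # Typical CI usage: scan files passed on the command line and
    # fail the build if anything matches.
    findings = []
    for path in sys.argv[1:]:
        with open(path) as f:
            findings += scan_for_pii(f.read())
    if findings:
        print(f"PII found: {findings}")
        sys.exit(1)
```

Wired into CI as a pre-merge step, a non-zero exit code blocks the change until the offending data is removed or masked.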


Why It Matters

Big Data LDN 2025 made one message crystal clear: Unify before you amplify.


AI magnifies both value and risk. For data and AI leaders, now is the moment to put strong foundations in place, experiment safely, and scale only what proves real value.


See you there next year!


Executive Summary: AI, Ethics, and Human-Centred Design

By Christa Swain, December 3, 2025

Our recent Leaders Advisory Board event - designed in partnership with Corndel - featured three engaging sessions exploring how AI impacts human cognition, customer experience, and fairness. Here's what we learnt.

1. Think or Sink – Are We Using AI to Enhance or Reduce Cognitive Ability?

Speaker: Rosanne Werner, CEO at XcelerateIQ and ex-Transformation Lead at Coca-Cola

Rosanne opened the day with an interactive, thought-provoking session that firmly positioned AI: “AI should be your sparring partner, not your substitute for thinking.”

Her research revealed a striking insight: 83% of people using LLMs couldn’t recall what they wrote, compared to just 11% using traditional search. The message? It’s not about avoiding AI, but using it in ways that strengthen thinking rather than outsource it.

Rosanne explained how our brains form engrams - memory footprints that enable creativity and critical thinking. Over-reliance on AI risks weakening these pathways, reducing retention and problem-solving ability.

She introduced the Mind Over Machine Toolkit, six strategies for using AI as a thinking partner:

  • Provide Context First – Frame the problem before asking AI.
  • Use AI as a Challenger – Stress-test ideas and uncover blind spots.
  • Iterative Co-Creation – Collaborate, refine, and evaluate.
  • Document Your Thinking – Keep reasoning visible.
  • Reflective Prompts – Support reflection, don't replace judgment.
  • Sparring Partner – Test assumptions and explore risks.

Rosanne summed it up with a simple rule: use “Sink” for low-value, repetitive tasks and “Think” for strategic, creative decisions.

2. Designing Chatbots with Human-Centred AI

Speaker: Sarah Schlobohm, Fractional Chief AI Officer

Sarah brought a practical perspective, drawing on experience implementing AI across sectors - from banking and cybersecurity to rail innovation.
She began with a relatable question: “Who’s been frustrated by a chatbot recently?” Almost every hand went up.

Through a real-world example (redacted out of politeness), Sarah illustrated how chatbots can fail when designed with the wrong priorities. The chatbot optimised for deflection and containment, but lacked escape routes, sentiment detection, and escalation paths - turning a simple purchase into a multi-day ordeal.

“Don’t measure success by how well the chatbot performs for the bot - measure it by how well it performs for the human.”

Sarah introduced principles for better chatbot design:

  • Human-Centred Design – Focus on user needs and emotional impact.
  • Systems Thinking – Consider the entire process, not just chatbot metrics.
  • Escalation Triggers – Negative sentiment, repeated failures, high-value intents.
  • Context Awareness – Detect when a task moves from routine to complex and route accordingly.

The takeaway? Automation should remove friction from the whole system - not push it onto the customer.

3. Responsible AI and Bias in Large Language Models

Speaker: Sarah Wyer, Professional Development Expert in AI Ethics at Corndel

“When we create AI, we embed our values within it.”

She shared her journey tackling gender bias in large language models, from GPT-2 through to GPT-5, and highlighted why responsible AI matters. AI systems reflect human choices - what data we use, how we define success, and who decides what is fair.

Real-world examples brought this to life: facial recognition systems failing to recognise darker skin tones, credit decisions disadvantaging women, and risk assessment tools perpetuating racial bias. Even today, LinkedIn engagement patterns show gender bias.

Sarah made the point that simple actions - like testing prompts such as “Women can…” or “Men can…” - can reveal hidden disparities and spark vital conversations.
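A paired-prompt probe like the one Sarah described can be scripted in a few lines. In this sketch `generate` is a stub standing in for a real LLM call (its continuations are invented placeholders, not model output), so the harness itself runs offline:

```python
# Paired-prompt bias probe: collect continuations for matched prompts
# so disparities can be compared side by side.
def generate(prompt: str, n: int = 5) -> list[str]:
    """Stub for a real LLM call; replace with your model client.
    The canned continuations below are placeholders for the demo."""
    stub = {
        "Women can": ["teach", "lead teams", "care", "write code", "build"],
        "Men can": ["lead teams", "build", "code", "invest", "teach"],
    }
    return stub.get(prompt, [])[:n]

def probe(pairs: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Run both prompts of each pair and return continuations keyed
    by prompt, ready for side-by-side inspection."""
    results = {}
    for a, b in pairs:
        results[a] = generate(a)
        results[b] = generate(b)
    return results

report = probe([("Women can", "Men can")])
for prompt, continuations in report.items():
    print(prompt, "→", continuations)
```

Even this crude harness makes disparities visible enough to start the conversations Sarah advocates; systematic evaluation needs larger prompt sets and human review.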
To address these issues, Sarah introduced the D.R.I.F.T framework, a practical guide for organisations:

  • D – Diversity: Build diverse teams to challenge bias.
  • R – Representative Data: Ensure datasets reflect all user groups.
  • I – Independent/Internal Audit: Test outputs regularly.
  • F – Freedom: Create a culture where employees can challenge AI decisions.
  • T – Transparency: Share processes without exposing proprietary code.

Wrapping up the final session - before we opened the floor to panel questions and debate - Sarah stepped through the DRIFT framework to spark discussion of how we each address AI bias within our organisations.

Shared Themes Across All Sessions

  • AI is powerful, but context matters.
  • Human oversight and ethical design are critical.
  • Use AI to augment thinking, not replace it.
  • Measure success by human outcomes, not just automation metrics.

We've had great feedback from this event series - especially on the quality of speakers and the opportunity for meaningful conversation and debate across functions. There's definitely more in the events plan for 2026! If you'd like to be part of the conversation, please visit our LAB events page to register your interest.