What Happens When You Put Enterprise Leaders in a Room and Ask: "Who Actually Controls Your AI?"
Last week, Brim collaborated with Capgemini on their Make AI Real event — bringing together technical leaders from some of the most recognised organisations in the world to tackle a question that keeps surfacing in every serious AI conversation:
How do you actually control AI as it scales across an organisation?
The room included leaders from Unilever, the Ministry of Defence, Dual Group, the Tony Blair Institute, and other major UK enterprises. The discussion wasn't about whether AI is powerful. That debate is over. The conversation was about what happens next, and specifically, what breaks when you try to move AI from isolated pilots into core operations.
The Same Problems, Regardless of Sector
Our CEO Kenny Alegbe spoke on a panel alongside Dr. Lucy Mason (Capgemini), Ben Byford (Machine Ethics), and Biju Mukund (Unilever), and the themes that emerged were strikingly consistent across sectors — FMCG, defence, government, insurance:

Who owns the data? Most organisations are discovering that the AI tools they've adopted don't give them real ownership of their data. Information flows into platforms but doesn't flow back out in a way they can control, audit, or redirect.
Who is accountable for decisions? As AI starts influencing operational workflows — not just generating summaries or drafts — the question of accountability becomes urgent. If an AI system makes a recommendation that affects customers, compliance, or revenue, who is responsible?
How do you prevent AI fragmentation? One of the most common patterns we're seeing is teams across the same organisation adopting different AI tools independently. The result is fragmented data, inconsistent outputs, and no unified governance. Several attendees described exactly this challenge within their own businesses.
How do you make AI do real work? There's a growing frustration with AI that generates output but doesn't execute. Tools that summarise, suggest, and draft — but never actually complete a workflow end-to-end. The consensus in the room was clear: the next phase of enterprise AI needs to be about execution, not just assistance.
Why This Matters
These aren't niche concerns confined to the most regulated or risk-averse industries. They're the central challenges of deploying AI at scale in any organisation. What was particularly striking was hearing leaders from some of the largest companies in the UK — organisations with significant resources and technical capability — describe the same gaps.
If enterprises of that scale are grappling with AI control and ownership, mid-market businesses face the same challenges with fewer resources to navigate them.

What We're Building at Brim
This is exactly the problem we're focused on at Brim. We build AI systems that businesses truly own and control, designed for real execution with governance built in from day one.
That means AI that connects into your existing tools and workflows rather than replacing them. AI that keeps your data under your control rather than locking it into another platform. And AI that actually completes work — not just generates suggestions that a human still has to act on.
We call this Specific Intelligence: AI that is built around how your business actually operates, with the governance and oversight to scale it responsibly.
What's Next
Events like this — bringing leaders into a practical, no-hype conversation about what AI adoption actually looks like — are how the industry moves forward. We were proud to collaborate with the Capgemini team on making it happen.
If your organisation is navigating these same challenges, or if you're a consultancy looking to bring your clients into this kind of conversation, we'd love to hear from you. Get in touch.