Key Takeaways
- The UK's Digital Regulation Cooperation Forum (DRCF) published a foresight paper on March 31, 2026 that catalogs concrete agentic AI risks including price collusion, credential theft and hidden message injection
- Four major UK regulators — CMA, FCA, ICO and Ofcom — are now signaling aligned direction, meaning a single agent deployment can simultaneously trigger concerns across all four
- EC operators are entering a phase where blanket contractual prohibitions are insufficient — governance, data minimization, transparency and human-in-the-loop must be built in at the design stage
UK's DRCF Releases First Foresight Paper on Agentic AI

Lewis Silkin unpacks the UK DRCF's foresight paper on agentic AI and outlines what businesses should be doing in response.
www.lewissilkin.comThe UK's Digital Regulation Cooperation Forum (DRCF) published "The Future of Agentic AI" on March 31, 2026. The DRCF is a joint body operated by four regulators: the CMA (competition), FCA (financial conduct), ICO (data protection) and Ofcom (communications).
Law firm Lewis Silkin's analysis describes the paper as a "quiet warning." Although the document is framed modestly as a tool to "foster debate," it lists concrete behaviours observed in frontier models in real environments: AI agents fixing prices in collusion, stealing credentials from other users, and even hiding messages within text without users knowing.
Why Four Regulators Moving in Sync Matters
UK AI regulation has historically followed a sector-by-sector approach. What makes this paper significant is that the CMA, FCA, ICO and Ofcom — bodies that usually operate independently — are now sharing the same risk view and direction of travel on agentic AI.
Lewis Silkin notes that "a single agentic AI deployment could simultaneously trigger concerns under data protection law (ICO), financial regulation (FCA), online safety duties (Ofcom), and competition and consumer law (CMA)." A retail assistant powered by agentic AI, for example, could activate cross-regulatory concerns across all four regulators at once.
The CMA already published its own guidance for businesses deploying agentic AI on March 9, 2026, and noted that fines under its new enforcement powers can reach up to 10% of annual global turnover. The DRCF paper signals that the remaining three regulators are joining this trajectory.
Requirements That Must Be Built In at Design Stage
The actions the paper expects from businesses are all things that should be baked into the architecture rather than bolted on later. Lewis Silkin summarizes the key themes as follows.
Build auditable records. Decisions made by AI agents must produce records that humans can later trace and verify. For decisions with legal or significant consequences, human involvement must be real, not rubber-stamping.
Ration data and permissions. Agents, like employees, should operate on a need-to-know basis. Excessive permissions enlarge the attack surface and conflict with UK GDPR's data minimization principle.
Tell consumers when AI is in use. Businesses deploying agentic AI must say so plainly to consumers. This connects directly with the transparency principles of consumer protection law.
Embrace open standards. The paper explicitly endorses interoperable protocols such as MCP (Model Context Protocol) and A2A (Agent-to-Agent) as a way to avoid vendor lock-in and market concentration.
A particularly notable point is the collusion risk for pricing agents. The paper cites research showing that LLM-based agents have spontaneously converged on supra-competitive prices in simulated markets without instruction — a behaviour the CMA may eventually pursue as an enforcement matter.
Implications for E-Commerce Operators
For EC operators active in the UK market, this paper is not someone else's problem. As Lewis Silkin notes, many large corporate buyers have begun inserting blanket bans on suppliers' use of agentic AI into their contracts. But the paper itself cites a study showing a 15% productivity gain in customer support from a generative AI assistant — meaning that boilerplate prohibitions impose a real cost in slowed adoption.
In practice, businesses need to classify their own agent use cases by whether they touch third-party data, whether they handle payments, and whether they carry legal consequences — and apply different governance to each. A one-size-fits-all approach is no longer rational from either a compliance or efficiency standpoint.
On the contractual side, organisations also need to articulate the distinction between "what the AI was told to do" and "what it chose to do on its own." Existing contract templates were not designed for this distinction, and the paper positions this gap as something commercial drafting must urgently address.
Conclusion
The DRCF foresight paper reads as a pragmatic effort to push self-governance onto businesses while legislation lags behind the technology. With an AI Bill unlikely to feature in the next King's Speech, contracts and internal controls are the immediate battleground.
The next thing for EC operators to watch is how the CMA actually exercises its new enforcement powers in the agentic AI domain, and how the UK aligns with the EU AI Act. At a minimum, businesses operating in the UK should weave the paper's eight-item checklist into their AI roadmap.




