Key Takeaways
- Purchases made by AI agents are legally enforceable under the US UETA/E-SIGN "electronic agent" provisions, but liability allocation for unauthorized or erroneous orders remains unresolved
- The 2024 EU Product Liability Directive reform classifies AI software as a "product," extending strict liability to defects arising from post-market self-learning
- No jurisdiction has established dispute resolution rules for "AI-initiated purchases," making contract clause reviews and audit infrastructure urgent priorities for e-commerce businesses
When an AI Agent Makes a Purchase, Who Bears Legal Liability?
Imagine a consumer tells their AI shopping agent: "Get everything ready for next week's camping trip." The agent selects a tent, sleeping bag, and cooler, completing an order totaling $550. But the tent that arrives is a one-person model, not the four-person size the consumer expected. The consumer demands a return, but the seller insists they shipped exactly what was ordered.
So who absorbs the loss? The consumer, the AI agent provider, or the seller? As agentic commerce scales, these liability gaps are becoming impossible to ignore. In February 2026, Clifford Chance warned that "the liability gap created by agentic AI may not be covered by existing contracts." That same month, Canadian law firm Torys published five questions every in-house counsel should be asking about agentic commerce.
This article examines the current state of legal liability for AI agent purchasing through three lenses: contract law, product liability, and consumer protection.
The Contract Law Perspective -- Legal Status of "Electronic Agents"
US: UETA and E-SIGN Recognize "Electronic Agents"
The bottom line is that contracts formed by AI agents are enforceable under current law.
The Uniform Electronic Transactions Act (UETA) grants legal effect to records and signatures created by "electronic agents." UETA's prefatory note explicitly states that the actions of machines "programmed and used by people will bind the user of the machine." As Proskauer Rose detailed in its April 2025 analysis, Section 14 of UETA provides that a contract can be formed "even if no individual was aware of or reviewed the electronic agents' actions or the resulting terms and agreements." The federal E-SIGN Act provides a parallel framework, and every US state except New York, plus the District of Columbia, has adopted UETA.
However, there is a critical catch. Both UETA and E-SIGN only govern the formation requirements of contracts. Substantive questions -- the validity of the agreement, rescission for mistake, and unconscionability defenses -- are left to traditional contract law. In other words, the fact that "an AI agent's contract is validly formed" says nothing about whether "its terms are fair."
The Risk of Ultra Vires Acts
One of the thorniest legal issues in agentic commerce is what happens when an AI agent exceeds its authority. If a consumer instructs the agent to "stay under $70" but the agent purchases a $200 item, is that contract binding?
Under traditional agency law, acts beyond the scope of an agent's authority are treated as unauthorized agency and generally do not bind the principal unless ratified. But AI agents are not legal "agents" in the traditional sense. As research from Stanford Law School's CodeX points out, AI agents do not fit neatly within existing agency law frameworks. While their actions as electronic agents are attributed to the user, imposing fiduciary duties or duties of care on the AI itself is not feasible under current law.
This legal gap is generating new categories of chargeback disputes.
The EU Product Liability Directive -- Strict Liability Expands to AI Software
The 2024 Reform's Core: Software Is a "Product"
While contract law addresses responsibility between users and providers, product liability law focuses on defects in the AI software itself. The most significant legal development in this area is the reform of the EU Product Liability Directive (PLD) 2024/2853.
Adopted on October 23, 2024 and entering into force on December 8 of that year, the directive expanded the definition of "product" to include software. AI systems, firmware, applications, and digital manufacturing files now fall under the same strict liability (no-fault liability) regime as traditional physical goods. EU member states must transpose the directive into national law by December 9, 2026.
According to Gibson Dunn's analysis, this reform impacts agentic commerce in three significant ways.
First, liability extends to post-market self-learning defects. The reformed PLD holds manufacturers responsible for defects arising from an AI system's ability to learn and acquire new features after being placed on the market. If an AI shopping agent develops inappropriate recommendation patterns through operational learning and causes consumer harm, the provider may be held liable.
Second, a reversal of the burden of proof has been introduced. Where victims face "excessive difficulties" in proving defectiveness or causation due to an AI system's technical complexity, courts may presume both defectiveness and causation. As DLA Piper notes, this presumption significantly elevates litigation risk for AI providers.
Third, disclosure obligations are imposed. When a claimant demonstrates the "plausibility of the claim," manufacturers are required to disclose relevant information. This mechanism compels providers to offer a degree of transparency into the inner workings of otherwise opaque AI systems.
The AI Liability Directive Withdrawal and the Resulting Gap
Notably, the EU formally withdrew the AI Liability Directive (AILD) in February 2025. According to Norton Rose Fulbright, the withdrawal resulted from a failure to reach legislative agreement on fault-based liability. This leaves a void -- no unified framework exists between the PLD's strict liability regime and member states' general tort law to address AI-related negligence.
| Issue | Liable Party (Current Law) | Unresolved Challenge |
|---|---|---|
| Contract Formation | User (UETA/E-SIGN) | Voidability of ultra vires agent acts |
| Erroneous/Unintended Purchases | User bears primary risk | Scope of mistake/unconscionability defenses |
| Damage from AI Defect | AI Provider (EU PLD) | Burden of proof for post-learning defects |
| Unlawful Personal Data Processing | Controller (GDPR) | Identifying controller in agent-to-agent data sharing |
| Chargebacks & Disputes | No existing rules apply | Liability allocation framework for new dispute types |
Consumer Protection and Data Privacy -- GDPR and ICO Developments
The legal liability landscape of agentic commerce extends beyond contract and product liability law. AI agents process vast amounts of personal data during the purchasing process, making intersection with data protection regulation unavoidable.
In February 2026, the UK Information Commissioner's Office (ICO) published its early views on the data protection implications of agentic AI. The core message is clear: regardless of how autonomously an AI agent operates, legal responsibility for data processing lies with the controller -- the organization that deploys the system and determines the purposes and means of processing. Greater technical autonomy does not reduce legal accountability.
Spain's data protection authority (AEPD) went further, publishing a 71-page technical and legal guidance covering AI agent memory risks, prompt injection attacks, and the implications for automated decision-making under Article 22 of the GDPR. In agentic commerce, where agents accumulate and share purchase histories and preference data, tension with GDPR's data minimization principle is inevitable.
These trust and security challenges demand clarification of legal responsibilities alongside, not after, the development of technical frameworks.
The Japanese Law Perspective -- Civil Code, E-Commerce Law, and Future Challenges
How does Japan fit into this picture? The short answer: no regulation specifically addressing AI agent purchasing currently exists.
Japanese civil law does not recognize AI as a legal entity. Damages caused by an AI agent's actions must be pursued under Article 709 of the Civil Code (tort liability), which requires proof of the developer's or operator's intent or negligence. As the Chambers and Partners 2025 guide explains, Japan has not enacted AI-specific liability legislation.
The "Act on Promotion of Research and Development and Utilization of AI-related Technology" (AI Promotion Act), enacted on May 28, 2025, focuses on innovation promotion and does not establish specific liability allocation rules. The Ministry of Economy, Trade and Industry (METI) has convened a committee examining how civil liability applies to AI-related incidents through hypothetical cases, but agentic commerce-specific issues have not yet been addressed.
Meanwhile, Japan's Electronic Consumer Contract Act permits rescission of declarations of intent due to "operational errors" under certain conditions. Whether this provision could be applied by analogy when an AI agent places an order contrary to the consumer's intent remains untested. Applying a law designed for human operational errors to AI behavior is legally uncharted territory.
For e-commerce businesses, the practical priority is to review AI agent-related contract clauses using METI's "Checklist for AI Use and Development Contracts" (published February 2025) as a reference. Liability allocation, governance structures, and data processing scope must not be left ambiguous.
Three Actions E-Commerce Businesses Should Take Now
There is no time to wait for legislation to catch up. Building on the five questions Torys recommends in-house counsel ask, consider the following actions.
Add AI agent clauses to your terms of service. Current terms are written assuming human operators. Explicitly define liability scope, cancellation conditions, and dispute resolution processes for orders placed via AI agents. As Clifford Chance warns, agentic AI capabilities are being released faster than contracts can evolve, and unmodified agreements may leave risk allocation unfairly skewed.
Build audit trails for agent transactions. A system that records the decision-making process behind each AI agent order is essential for identifying liability when disputes arise. Without a trust layer, chargeback and return processing costs will disproportionately fall on merchants.
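As a concrete illustration of what such an audit trail might capture, here is a minimal Python sketch. All field names, the spending-limit check, and the tamper-evident hashing scheme are illustrative assumptions, not a prescribed standard; the point is that each record ties the agent's order back to the human instruction and the authority granted, which is exactly the evidence needed in an ultra vires dispute.

```python
import json
import hashlib
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AgentOrderRecord:
    """One audit entry per AI-agent-initiated order (field names are illustrative)."""
    order_id: str
    user_instruction: str       # what the human actually asked for
    agent_rationale: str        # the agent's stated reason for this purchase
    item_sku: str
    price_usd: float
    spending_limit_usd: float   # the authority the user granted the agent
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def within_authority(self) -> bool:
        """Flag potential ultra vires purchases before they become disputes."""
        return self.price_usd <= self.spending_limit_usd

    def to_log_line(self) -> str:
        """Serialize with a content hash so later tampering is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        return json.dumps({"record": asdict(self), "sha256": digest})

# Example: the camping-trip scenario from the introduction
record = AgentOrderRecord(
    order_id="ord-001",
    user_instruction="Get everything ready for next week's camping trip",
    agent_rationale="Selected 4-person tent matching party size in user profile",
    item_sku="TENT-4P",
    price_usd=550.0,
    spending_limit_usd=600.0,
)
print(record.within_authority())  # True: the price is inside the granted limit
print(record.to_log_line())
```

Appending each serialized line to write-once storage gives merchants a replayable record of who instructed what, what authority the agent had, and why it chose the item, which is the factual basis any chargeback or return dispute will turn on.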
Prepare for EU PLD compliance. Businesses offering AI-powered services in the EU market need to prepare for the December 2026 transposition deadline. Security measures addressing defects arising from AI self-learning and readiness for disclosure obligations should be prioritized early.
Conclusion
AI agent contracts are legally enforceable, yet liability allocation for the problems they create remains inadequately addressed in every jurisdiction worldwide. The EU PLD's expanded strict liability, the UETA/E-SIGN "electronic agent" concept, and Japan's Civil Code and Electronic Consumer Contract Act -- each provides a piece of the puzzle, but no comprehensive framework for agentic commerce yet exists.
Legal gaps are best filled before disputes arise, not after. Reviewing contract clauses and establishing audit infrastructure are measures that can begin today, without waiting for regulators to act.