Agentic commerce—autonomous AI agents transacting on behalf of users—is projected to capture an estimated $190–$385 billion of U.S. e‑commerce spending by 2030, making secure agent payments a commercial priority.
The trust layer is a multi‑pillar architecture that prevents fraud, enforces consent and enables auditability. Verifiable identity authenticates the agent and the human or organization it represents. Authority defines an agent’s permitted actions and spending limits. Reputation records a transparent history of behavior for future validation. Accountability traces actions to enable liability and dispute resolution.
Security combines cryptographic protections and system controls to resist tampering and replay attacks. Together these elements aim to restore consumer and merchant confidence so agents can act autonomously at scale.
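The pillars above can be mapped onto a single authorization check. The sketch below is purely illustrative (none of these names come from a published specification): identity is the principal the agent represents, authority is a set of permitted actions plus a spend cap, and accountability is a log of every decision.

```python
from dataclasses import dataclass, field

@dataclass
class Mandate:
    principal: str              # human/org the agent acts for (identity)
    allowed_actions: set        # e.g. {"purchase"} (authority)
    spend_limit_cents: int      # hard cap across the mandate (authority)
    spent_cents: int = 0
    audit_log: list = field(default_factory=list)  # accountability

    def authorize(self, action: str, amount_cents: int) -> bool:
        ok = (action in self.allowed_actions
              and self.spent_cents + amount_cents <= self.spend_limit_cents)
        if ok:
            self.spent_cents += amount_cents
        self.audit_log.append((action, amount_cents, ok))  # trace every decision
        return ok

m = Mandate("alice@example.com", {"purchase"}, spend_limit_cents=10_000)
print(m.authorize("purchase", 4_000))   # True: within scope and limit
print(m.authorize("purchase", 7_000))   # False: would exceed the cap
print(m.authorize("refund", 100))       # False: action never granted
```

Real deployments would back each field with the cryptographic proofs discussed below; the point here is only how the pillars compose into one enforceable object.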
Several complementary standards and vendor initiatives form the operational trust stack. The Agentic Commerce Protocol (ACP), championed by Stripe and OpenAI, introduces Shared Payment Tokens for limited‑use, programmable payments and agent discoverability.
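ACP's actual token format is not reproduced here, but the "limited‑use, programmable" idea can be sketched with hypothetical fields: a credential scoped to one merchant, one amount cap, and one time window, consumed on first use.

```python
import time
import secrets

def issue_token(merchant_id: str, max_amount_cents: int, ttl_seconds: int) -> dict:
    # Illustrative field names only; ACP defines its own token schema.
    return {
        "token": secrets.token_urlsafe(16),
        "merchant_id": merchant_id,
        "max_amount_cents": max_amount_cents,
        "expires_at": time.time() + ttl_seconds,
        "used": False,
    }

def redeem(tok: dict, merchant_id: str, amount_cents: int) -> bool:
    ok = (not tok["used"]
          and tok["merchant_id"] == merchant_id
          and amount_cents <= tok["max_amount_cents"]
          and time.time() < tok["expires_at"])
    if ok:
        tok["used"] = True   # single use: a captured token cannot be replayed
    return ok

tok = issue_token("merchant_42", max_amount_cents=5_000, ttl_seconds=300)
print(redeem(tok, "merchant_42", 4_999))  # True: in scope, under cap, fresh
print(redeem(tok, "merchant_42", 4_999))  # False: token already spent
```

The programmability lies in the constraints: a compromised token is worth at most one purchase, at one merchant, under one cap.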
Google’s Agent Payments Protocol (AP2) focuses on cryptographic user mandates to prove explicit authorization across payment rails. Visa’s Trusted Agent Protocol (TAP) provides cryptographic verification at the browsing edge, using signed HTTP messages to prevent impersonation and replay.
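TAP's signed HTTP messages target two properties: the request cannot be altered in transit, and a captured request cannot be replayed. The sketch below uses a symmetric HMAC for brevity (TAP itself would rely on asymmetric signatures verified against an agent's registered key), with a timestamp and a consumed nonce providing the replay protection.

```python
import hmac, hashlib, secrets, time

SEEN_NONCES = set()   # in production, a shared store with expiry
MAX_SKEW = 120        # seconds of allowed clock drift

def sign(key: bytes, method: str, path: str, body: str) -> dict:
    nonce = secrets.token_hex(8)
    ts = str(int(time.time()))
    msg = "\n".join([method, path, ts, nonce, body]).encode()
    return {"ts": ts, "nonce": nonce,
            "sig": hmac.new(key, msg, hashlib.sha256).hexdigest()}

def verify(key: bytes, method: str, path: str, body: str, hdr: dict) -> bool:
    if abs(time.time() - int(hdr["ts"])) > MAX_SKEW:
        return False                      # stale: outside freshness window
    if hdr["nonce"] in SEEN_NONCES:
        return False                      # replay: nonce already consumed
    msg = "\n".join([method, path, hdr["ts"], hdr["nonce"], body]).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(hdr["sig"], expected):
        return False                      # tampered method, path, or body
    SEEN_NONCES.add(hdr["nonce"])
    return True

key = b"shared-demo-key"
hdr = sign(key, "POST", "/checkout", '{"sku": "abc", "qty": 1}')
print(verify(key, "POST", "/checkout", '{"sku": "abc", "qty": 1}', hdr))  # True
print(verify(key, "POST", "/checkout", '{"sku": "abc", "qty": 1}', hdr))  # False: replay
```

Binding the signature to method, path, timestamp, nonce, and body is what makes impersonation and replay detectable at the merchant edge.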
Protocols, pilots and the emerging ecosystem
Mastercard’s Agent Pay extends tokenization with dynamic Agentic Tokens to register and authenticate agentic transactions. A proposed ERC‑8004 standard outlines on‑chain registries for identity, reputation (audit trail) and validation, integrating zero‑knowledge proofs and trusted execution environments (TEEs) for privacy‑preserving attestations. Memory layers from vendors such as Mem0 and GibsonAI add persistent context to improve agent reliability.
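The audit-trail property an ERC‑8004‑style registry would enforce on-chain can be illustrated off-chain with a hash-chained log: each entry commits to its predecessor, so history is verifiable but cannot be silently rewritten. This is a hypothetical sketch, not the proposed standard's interface.

```python
import hashlib, json

def append_entry(chain: list, record: dict) -> None:
    # Each entry commits to the previous hash, forming a tamper-evident chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"prev": prev, "record": entry["record"]},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False   # a rewritten record breaks every later link
        prev = entry["hash"]
    return True

trail = []
append_entry(trail, {"agent": "agent-1", "outcome": "settled"})
append_entry(trail, {"agent": "agent-1", "outcome": "disputed"})
print(verify_chain(trail))                     # True: intact history
trail[0]["record"]["outcome"] = "reversed"     # tamper with past behavior
print(verify_chain(trail))                     # False: tampering detected
```

On-chain, consensus replaces the local `verify_chain` check, but the reputational guarantee is the same: past behavior can be audited, not edited.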
Real‑world demonstrations are already underway. A live transaction using a Mastercard Agentic Token in partnership with PayOS moved the concept from pilot toward commercialization, and Cloudflare’s Web Bot Auth integrations with card networks have been used to cryptographically separate legitimate agents from malicious bot traffic.
Adaptive risk platforms like Oscilar and governance tools such as AgenticTrust layer machine‑learning detection and behavioral controls atop cryptographic proofs to catch anomalies in real time.
Scalability and regulatory alignment are principal constraints. Layer‑2 scaling and edge‑efficient cryptography are cited as mechanisms to handle large volumes without prohibitive latency. Auditable on‑chain trails and explicit cryptographic mandates address AML, consumer protection and consent requirements. However, the trust layer must also interoperate across competing standards to avoid fragmentation as major payment networks and technology providers converge.
