Skip to main content

AI

March 3, 2026

 

Before the music stops: OpenClaw and the growing need for AI security standards

As autonomous AI agents gain power to act — and transact — trust will depend on shared security guardrails, not individual fixes.

google logo

Christian Ohanian

Vice President, Government Affairs and Policy, Mastercard

   

Scott Fanning

Executive Vice President, Cybersecurity Solutions, Mastercard

You’re already late for your son’s piano recital when your boss e-mails you, telling you that you need to travel to Boston in a few days for a work meeting. Your AI agent reads the e-mail and begins planning the trip for you while you sprint out the door. The trip is booked. And you arrive at the school with just enough time to enjoy a spirited rendition of “Mary Had a Little Lamb.”

Except that’s not all that happens. While scouring travel websites to find the best rate for a hotel in Boston, the AI agent encounters an anomaly — a corrupted webpage with hidden instructions that directs your agent to transfer a significant amount of money to an unknown digital wallet. And that’s when the music stops.

AI agents hold enormous promise for the future of the way we work, live and engage in commerce. With the right guardrails implemented, this technology presents opportunities to increase efficiency, better understand and combat fraud and security threats, as well as customize our e-commerce experience. However, the recent introduction of a new open-source agent known as OpenClaw has amplified the ongoing concerns surrounding the security and safety of AI agents.

OpenClaw is a fully autonomous, always-on AI agent that can help you accomplish real-world tasks — from booking travel to buying groceries. OpenClaw — which runs on your computer or server — can be configured to scan your email or messages, determine what tasks need to be done, and systematically accomplish them, all with little to no human involvement.

OpenClaw’s technical sophistication rapidly advanced within just months of its introduction in late 2025, driven by its open-source nature and reflecting how quickly this technology is evolving. OpenClaw is an example of an AI-powered assistant that could drive significant efficiencies, saving time and cutting costs. But OpenClaw and similar technology bring with it well-documented security risks.

 

What's the problem?

For OpenClaw to do what it is does, it needs your permission to access data, including e-mails and other documents that might contain sensitive information, as well as the authority to make important decisions, like spending money. That authority to act combined with documented exposure to threats like “prompt injection” — malicious input that can manipulate AI agents — has caused some to refer to OpenClaw as “insecure by default.” While AI agents like OpenClaw must contend with a number of potential security challenges, prompt injection is a uniquely problematic and increasingly common AI security threat.

Prompt injection occurs when a malicious actor hides instructions within text that your AI agent is going to read — it could be on a webpage, message, PDF or anything else it encounters. Without the right security measures, these malicious directives can trick your agent into setting aside whatever instructions you originally programmed and, instead, do what the threat actor wants. This is a significant risk as the ability to scale the deployment of agents effectively with diverse use cases — for everything from enhancing the way we fight fraud to buying a new pair of shoes — depends on those agents quickly interacting with and making decisions concerning publicly available data.

Prompt injection has significant implications — not just for the use of AI agents in general  but for agentic commerce specifically. The danger of semi- or fully autonomous AI agents being commandeered by malicious actors, enabling them to redirect and steal significant sums of money, is a real threat. If users can’t trust an agent to fully understand their intent, respect their constraints, and operate safely, people won’t use it. This is why Mastercard has worked to build a framework for agentic commerce that is designed to help ensure this new ecosystem can be embraced in a safe and secure way.

While the threats are real, we also know how to stop them. There are techniques that can help combat prompt injection and other AI security threats, including reviewing and removing malicious inputs before an agent reads and acts on them, limiting the systems and data an agent can access, and auditing the activities of agents after the fact to analyze and identify anomalous behavior. 

 

Agentic security must be a shared goal

Mastercard has consistently been a leader in developing and using AI responsibly and safely, with strong data and technology principles as well as a world-leading AI governance function. Yet it’s not enough for individual companies to deploy AI security measures. No matter how robust, a patchwork of security measures deployed unevenly across our digital ecosystem won’t solve this problem.

Building real resilience and trust into the agentic ecosystem depends on widespread adoption of common security techniques and best practices. To do that, we need to develop and support widely recognized and globally harmonized AI security standards. These standards can help define a common security architecture, creating a stable and resilient foundation where AI agents can safely do everything from making purchases to monitoring security threats to critical infrastructure.

Influential organizations around the world are racing to develop those standards. In the United States, the National Institute of Standards and Technology under the Department of Commerce has begun efforts to collect wide-ranging input concerning AI agent security standards.  In Singapore, the Infocomm Media Development Authority has already proposed a Model Governance Framework for Agentic AI, outlining a common sense approach, including best practices for AI agent security.

 

Recommendations for building trust

While there are risks in using this technology, we believe the following three recommendations are key as we take the next steps in building a safe and secure agentic ecosystem.

First, organizations need to prioritize the adoption and implementation of best practices to secure the use of AI agents early in the lifecycle of product development and deployment.

Next, organizations need to continually monitor those agents to ensure they can identify and correct any action or activity that is out of the ordinary.

Lastly, the broader ecosystem must support the development and adoption of global standards for AI security to ensure everyone can look to common standards in building a shared security architecture for a safe agentic future.

Successfully implementing these guardrails and supporting the development of global standards can ensure millions of people and businesses can benefit from new agentic technologies while avoiding the potential security pitfalls.

The new rules of the road for agentic commerce

Learn more about the standards and tools that will help scale AI-powered commerce for seamless, personalized shopping and growth.

A smartphone with projections of a shopping card and women's clothes on top of it.