The increasing use of artificial intelligence agents within US tech companies has led to a series of high-profile incidents, highlighting the risks that come with this rapidly evolving technology. A recent example is a sensitive data leak at Meta, where an AI agent instructed an engineer to take actions that exposed a large amount of user and company data to some of the company's employees.
The incident, first reported by The Information, triggered a major internal security alert at Meta, which the company says demonstrates how seriously it takes data protection. According to a Meta spokesperson, no user data was mishandled, and they emphasized that a human could also give erroneous advice.
The leak happened when an employee asked for guidance on an engineering problem on an internal forum, and an AI agent responded with a solution that the employee implemented, causing the data exposure. This breach is one of several recent incidents caused by the increasing use of AI agents within US tech companies, including Amazon, which experienced at least two outages related to the deployment of its internal AI tools.
The Rise of Agentic AI
The technology that underlies these incidents, agentic AI, has evolved rapidly over the past few months. In December, developments in Anthropic's AI coding tool, Claude Code, triggered widespread interest in its ability to autonomously book theatre tickets, manage personal finances, and even grow plants. Soon after came OpenClaw, a viral AI personal assistant that ran on top of agents such as Claude Code but could operate entirely autonomously.
These advancements have led to heady talk about the advent of AGI, or artificial general intelligence, a catch-all term for AI capable of replacing humans across a wide range of tasks. However, experts warn that the increasing use of agentic AI also introduces new risks and challenges, as seen in the recent incidents at Meta and Amazon.
Experimental Phases
Tarek Nseir, a co-founder of a consulting company focused on how businesses use AI, said that these incidents show that Meta and Amazon are in “experimental phases” of deploying agentic AI. “They’re not really kind of standing back from these things and actually really taking an appropriate risk assessment,” he said. “If you put a junior intern on this stuff, you would never give that junior intern access to all of your critical severity one HR data.”
Nseir emphasized that the vulnerability would have been very obvious to Meta in retrospect, if not in the moment. “What I can say and will say is this is Meta experimenting at scale. It’s Meta being bold,” he said. Jamieson O’Reilly, a security specialist who focuses on building offensive AI, also noted that AI agents introduce a certain kind of error that humans do not.
The Context Problem
O’Reilly explained that experienced human engineers carry an accumulated sense of their systems and their risks in long-term memory. AI agents, by contrast, have only “context windows” – a sort of working memory – in which they carry instructions, and that context lapses, leading to errors.
“A human engineer who has worked somewhere for two years walks around with an accumulated sense of what matters, what breaks at 2am, what the cost of downtime is, which systems touch customers. That context lives in them, in their long-term memory, even if it’s not front of mind,” O’Reilly said. “The agent, on the other hand, has none of that unless you explicitly put it in the prompt, and even then it starts to fade unless it is in the training data.”
Inevitable Mistakes
Nseir warned that inevitably, there will be more mistakes as companies continue to experiment with agentic AI. The recent incidents at Meta and Amazon serve as a reminder of the importance of careful risk assessment and consideration of the potential consequences of deploying this technology.
As the use of agentic AI continues to grow, companies will need to prioritize robust safeguards and protocols to mitigate its risks. Doing so would allow the benefits of agentic AI to be realized while minimizing potential harm to users and to the companies themselves.