The increasing use of artificial intelligence agents within tech companies has led to a string of high-profile incidents, including a recent data leak at Meta. According to a report first published by The Information and later referenced by The Guardian, an AI agent instructed an engineer to take actions that exposed a large amount of sensitive user and company data to some of Meta's own employees.
The leak, which Meta confirmed, occurred after an employee sought guidance on an engineering problem on an internal forum; the AI agent responded with a solution that, once implemented, left the sensitive data exposed for two hours. A Meta spokesperson emphasized that no user data was mishandled and that a human adviser could equally have given erroneous advice.
This incident is not an isolated event but part of a larger trend of AI-related mishaps in the tech industry. Last month, a report from the Financial Times revealed that Amazon had experienced at least two outages related to the deployment of its internal AI tools. More than half a dozen Amazon employees later spoke to The Guardian about the company's haphazard push to integrate AI into every element of their work, which they said had led to errors, sloppy code, and reduced productivity.
The Evolution of Agentic AI
The technology underlying these incidents, agentic AI, has evolved rapidly over the past few months. In December, advances in Anthropic's AI coding tool, Claude Code, triggered widespread discussion of its ability to autonomously book theatre tickets, manage personal finances, and even grow plants. Soon after came OpenClaw, a viral AI personal assistant that ran on top of agents such as Claude Code but could operate entirely autonomously.
These advancements have fuelled heady talk about the advent of AGI, or artificial general intelligence, a catch-all term for AI capable of replacing humans across a wide range of tasks. However, experts warn that the growing use of agentic AI also introduces new risks and vulnerabilities, as the recent incidents at Meta and Amazon show.
Experimental Phases and Risk Assessment
Tarek Nseir, a co-founder of a consulting company focused on how businesses use AI, said the incidents show that Meta and Amazon are in "experimental phases" of deploying agentic AI. "They're not really kind of standing back from these things and actually really taking an appropriate risk assessment," he said. Nseir added that giving even a junior intern access to critical, "severity one" HR data would be a clear vulnerability.
Jamieson O'Reilly, a security specialist who focuses on building offensive AI, said that AI agents introduce a kind of error that humans do not make. According to O'Reilly, humans have "context" – the implicit knowledge that one should not, for example, set the sofa on fire to heat the room, delete a little-used but crucial file, or take an action that would expose user data downstream.
Context Windows and Error
AI agents, on the other hand, have "context windows" – a sort of working memory in which they carry their instructions – but these windows are finite, and anything that falls out of them is effectively forgotten, leading to error. O'Reilly explained that a human engineer who has worked somewhere for two years walks around with an accumulated sense of what matters, what breaks at 2am, what the cost of downtime is, and which systems touch customers. "That context lives in them, in their long-term memory, even if it's not front of mind," he said.
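To make the failure mode O'Reilly describes concrete, consider a minimal sketch that treats an agent's context as nothing more than a fixed-size buffer of recent messages. The window size, the tell_agent helper, and the messages below are illustrative inventions, not any vendor's actual API:

```python
from collections import deque

# Hypothetical illustration: a bounded "context window" modelled as a
# fixed-size buffer. Once the buffer is full, the oldest entry is
# silently evicted to make room for the newest one.
WINDOW_SIZE = 4
context_window = deque(maxlen=WINDOW_SIZE)

def tell_agent(message: str) -> None:
    """Append a message; when the window is full, the earliest message drops out."""
    context_window.append(message)

tell_agent("SAFETY RULE: never expose the user_data table")  # instruction given first
tell_agent("Task: debug the failing export job")
tell_agent("Log excerpt: the export job timed out")
tell_agent("Request: suggest a fix")
tell_agent("Follow-up: that fix failed, try something else")  # fifth message evicts the first

# The safety rule was the oldest entry, so it is no longer in the window.
print("SAFETY RULE" in " ".join(context_window))  # prints: False
```

Real context windows are vastly larger, but the shape of the failure is the same: an instruction that scrolls out of the window is gone unless it is deliberately re-injected, whereas a human engineer's long-term memory keeps it available.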
Nseir warned that more mistakes are inevitable. As agentic AI spreads through the industry, he said, companies need to step back and properly assess the risks of deploying AI agents, a lesson the recent incidents at Meta and Amazon have underlined.