The growing use of artificial intelligence agents inside tech companies has led to a string of high-profile incidents, highlighting the risks that come with this rapidly evolving technology. In a recent example at Meta, an AI agent instructed an engineer to take actions that exposed a large amount of sensitive user and company data to some of the company's employees.
The incident, first reported by The Information, is one of several recent breaches tied to the adoption of AI agents at US tech companies, as also reported by The Guardian. The leak happened when an employee asked for guidance on an engineering problem on an internal forum; the AI agent suggested a solution, and when the employee implemented it, the sensitive data was left exposed for two hours.
A Meta spokesperson confirmed the incident, stating that no user data was mishandled and noting that a human could also have given erroneous advice. Even so, the incident triggered a major internal security alert at Meta, a sign of how seriously the company treated the exposure.
Experimental Phases of Agentic AI
Tarek Nseir, a co-founder of a consulting company focused on how businesses use AI, said the incidents show that Meta and Amazon are still in the experimental phase of deploying agentic AI. According to Nseir, the companies are not carrying out appropriate risk assessments and are instead experimenting at scale.
Nseir stated, 'They're not really kind of standing back from these things and actually really taking an appropriate risk assessment. If you put a junior intern on this stuff, you would never give that junior intern access to all of your critical severity one HR data.' He also noted that the vulnerability would have been very obvious to Meta in retrospect, if not in the moment.
Jamieson O'Reilly, a security specialist who focuses on building offensive AI, said that AI agents introduce a kind of error that humans do not, because they lack the accumulated operational context that an experienced engineer carries in long-term memory.
The Limitations of AI Agents
O'Reilly noted that AI agents have context windows, a sort of working memory in which they carry their instructions, but that this context can lapse, leading to errors. He said, 'A human engineer who has worked somewhere for two years walks around with an accumulated sense of what matters, what breaks at 2am, what the cost of downtime is, which systems touch customers. That context lives in them, in their long-term memory, even if it's not front of mind.'
O'Reilly added: 'The agent, on the other hand, has none of that unless you explicitly put it in the prompt, and even then it starts to fade unless it is in the training data.' Nseir predicted that more such mistakes are inevitable as companies continue to experiment with agentic AI.
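O'Reilly's point can be made concrete with a short, hypothetical sketch. Everything below, from the function name to the pieces of operational context, is illustrative rather than drawn from Meta's or any real system: it simply shows that an agent's 'working memory' is only whatever is assembled into its prompt, and anything left out does not exist for it.

# Hypothetical sketch: an agent only "knows" the operational context that is
# explicitly assembled into its prompt; anything omitted is invisible to it.

OPERATIONAL_CONTEXT = [
    "The hr_reports store holds severity-one HR data; never widen its access.",
    "The nightly billing export breaks at 2am if it overlaps with a deploy.",
]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the 'context window' the agent will actually see."""
    context_block = "\n".join(f"- {item}" for item in context) or "- (none provided)"
    return f"Operational context:\n{context_block}\n\nEngineer's question:\n{question}"

question = "How do I give the analytics team access to the HR data store?"

# Without the context list, the agent answers with none of the institutional
# knowledge a two-year employee would carry in long-term memory.
print(build_prompt(question, []))
print(build_prompt(question, OPERATIONAL_CONTEXT))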
Broader Implications
The use of agentic AI has sparked concern about the potential risks and consequences of the technology. The recent incidents at Meta and Amazon highlight the need for companies to take a more cautious approach to deploying AI agents and to assess the potential risks and benefits carefully.
As the use of agentic AI continues to grow, more incidents like the one at Meta are likely. But by understanding the technology's limitations and risks, companies can take steps to mitigate them, realizing the benefits of agentic AI while minimizing the potential downsides.

