Anthropic Pushes Back Against Pentagon's AI Security Fears

James Carter | Discover Headlines

As the US military weighs the risks and benefits of using Anthropic's generative AI model Claude, the company is pushing back against accusations that it could sabotage its own tools in wartime. In a court filing on Friday, Thiyagu Ramasamy, Anthropic's head of public sector, wrote that the company has never had the ability to shut Claude down or alter its functionality once it is in the hands of the US military.

Anthropic's statement comes in response to concerns from the Trump administration about the potential for the company to tamper with its AI tools during military operations. The Pentagon has been sparring with Anthropic for months over how its technology can be used for national security, and the company has filed two lawsuits challenging the constitutionality of a ban on its software.

Inside the Platform

The Pentagon's concerns about Anthropic's AI model are centered on the idea that the company could disrupt active military operations by turning off access to Claude or pushing harmful updates. However, Ramasamy rejected this possibility, stating that Anthropic does not maintain a back door or remote 'kill switch' for its technology.

According to Ramasamy, Anthropic personnel cannot log into a Department of Defense system to modify or disable the models during an operation, and the company could push updates only with the approval of both the government and its cloud provider, Amazon Web Services. By Anthropic's account, then, the company retains little practical control over its AI model once it is deployed in military systems.

The Security Tradeoff

The Pentagon's decision to label Anthropic a supply-chain risk has significant implications for the company and the wider AI industry. The designation will prevent the Department of Defense from using Anthropic's software, including through contractors, over the coming months. Other federal agencies are also abandoning Claude, and customers have begun canceling deals with the company.

Anthropic executives maintain that the company does not want veto power over military tactical decisions, and are willing to guarantee as much in a contract. However, negotiations with the government have broken down, and the company is now seeking an emergency order to reverse the ban.

Regulatory Pressure Builds

The case highlights the regulatory challenges facing AI companies as they navigate the complex landscape of national security and military operations. The Pentagon's concerns about Anthropic's AI model are likely to be just the beginning, as more companies develop and deploy AI technologies for military use.

As the hearing in one of the cases approaches, the judge will need to weigh the competing interests of national security and the rights of AI companies to develop and deploy their technologies. The outcome will have significant implications for the future of AI in the military, and the balance of power between the government and the tech industry.
