The Anthropic Feud: Unpacking the Ethics of AI in Warfare

James Carter | Discover Headlines
The ongoing feud between Anthropic, an AI startup, and the US Department of Defense has brought to the forefront the ethical concerns surrounding the use of artificial intelligence in warfare. As reported by The Guardian, the dispute revolves around Anthropic's refusal to allow the federal government to use its Claude AI model for domestic mass surveillance or autonomous weapons systems.

Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the United States Air Force, shared her insights on the matter. According to Kreps, the challenge for the military is that these technologies are so useful that it cannot wait for a military-grade version to become available; at the same time, it is not surprising that cultural differences emerged between the AI company and the military.

One element of the feud is that Anthropic has branded itself as a safety-forward company, yet it still signed a deal with the military. Kreps noted that it is surprising that Anthropic would be surprised by where this ended up, given that the company had made the decision to corner the enterprise market and do business with organizations rather than individual users.

The Puzzle of Anthropic's Decision

Kreps pointed out that Anthropic's decision to do business with the Pentagon and with Palantir, a company that uses AI for questionable purposes, was surprising given the brand Anthropic had been trying to cultivate. Anthropic seems to have accepted a wide range of uses for its technology but drew a red line at domestic mass surveillance and lethal autonomous weapons.

According to Kreps, a couple of factors may have led to this situation, including relationships between people at Anthropic and the Trump administration that spiraled into mutual distrust. In addition, the situation in Venezuela and the politics surrounding ICE activities raised questions about what it means to use these technologies lawfully.

The Pentagon's Argument

The Pentagon's argument is that if there's a national defense issue, they shouldn't have to call up Dario Amodei to get approval. Kreps noted that this raises an actual question about the role of private tech companies in national security decision-making. The case of the San Bernardino killer's iPhone, where authorities demanded that Apple create a backdoor to grant them access, is a relevant example.

Kreps explained that the difference with Anthropic's AI is that once the software is handed over to the military, Anthropic's approval is no longer needed to use it as the military sees fit. The software can be repurposed and used in ways that may not have been part of the explicit agreement but can now be justified on national security grounds.

Longstanding Questions on AI Use in the Military

Kreps has been following these issues for a long time and noted that it's interesting to see how longstanding questions on AI use in the military are coming to a head. The challenge is how to ensure that there's actually a human in the loop, and that these systems are not being used in a fully autonomous way.

AI is already being used in warfare, and Kreps pointed out that it is extremely useful in a military setting. AI excels at pattern recognition and can identify targets it has been trained to recognize. However, people grow more uncomfortable when AI is used in settings where the targets are less concrete, such as counter-terrorism strikes.

Kreps concluded that being involved in a conflict only accelerates those timelines, and that the technology has grown more and more sophisticated. How to ensure that AI is used safely and responsibly is a pressing question, and one that requires careful consideration and planning.
