The ongoing feud between AI startup Anthropic and the US Department of Defense has brought to the forefront the complex ethical considerations surrounding the use of artificial intelligence in warfare. As reported by The Guardian, the dispute centers on Anthropic's refusal to allow the federal government to use its Claude AI model for domestic mass surveillance or autonomous weapons systems.
This controversy has sparked a broader debate about the role of private tech companies in national security decision-making. Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the United States Air Force, has been following these issues closely. According to Kreps, the challenge for the military is that these technologies are so useful that it cannot wait for military-grade versions to become available, but this haste can create cultural friction with safety-focused companies like Anthropic.
Anthropic's decision to sign a deal with the military, despite its safety-forward brand, surprised some observers. Kreps notes that the company's push to corner the enterprise market may have created a mismatch between its values and the purposes to which its technology is being put. The Pentagon's argument that it should not have to seek Anthropic's approval for every use of the technology raises questions about the balance between national security imperatives and a private company's interests.
The Risks of Autonomous Weapons
The use of AI in warfare is not a new phenomenon, but the technology's increasing sophistication has accelerated adoption timelines. Kreps points out that AI is extremely useful in a military setting, particularly for pattern recognition and signal processing. However, its use in counter-terrorism strikes, where targets are often individuals whose characteristics are unclear, raises significant concerns about misidentification and civilian casualties.
The case of the San Bernardino killer's iPhone, in which Apple refused to create a backdoor for the FBI, highlights how hardware and software differ in terms of control and repurposing. Once AI software is handed over to the military, the company that developed it may lose all leverage over how it is used, and the technology can be repurposed in ways that were never part of the explicit agreement.
Kreps emphasizes that the current dispute between Anthropic and the Pentagon is not just about the company's refusal to permit certain uses of its technology but also about the broader questions surrounding AI in warfare. That Anthropic loses leverage over its technology once it is in the hands of national security professionals raises concerns about accountability and transparency.
Existential Risks and Misappropriation
Kreps has followed questions of military AI use for years and sees the current fight as the culmination of longstanding tensions. The CEO of Anthropic has spoken about existential risks and the misappropriation of AI for bioterrorism, but Kreps argues that more mundane cases, such as the current dispute, pose the greater risk.
Anthropic has raised the challenge of ensuring that AI systems are not used in a fully autonomous capacity, and it is unclear what process the US military has in place to prevent this. As the technology grows more sophisticated and adoption timelines compress, disputes like this one are likely to recur.
The use of AI in warfare is a complex issue that raises significant ethical considerations, and as the technology continues to evolve, a nuanced discussion of its risks and benefits becomes essential. The current dispute between Anthropic and the Pentagon is a wake-up call for the tech industry, policymakers, and the broader public to engage in that conversation and ensure that the development and use of AI remain aligned with human values and interests.