The ongoing dispute between the US Department of Defense and AI startup Anthropic has brought to the forefront the complex issue of artificial intelligence in warfare. As reported by The Guardian, Anthropic's refusal to allow the federal government to use its Claude AI model for domestic mass surveillance or autonomous weapons systems has sparked a heated debate. The Pentagon has declared Anthropic a supply chain risk, while the company has vowed to challenge the designation in court.
Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the US Air Force, shared her insights on the matter. Kreps has worked extensively on the challenges surrounding dual-use technology, where consumer technologies are also put to classified or military purposes. She noted that technologies developed for military contexts differ vastly from those designed for consumer use, and that the military's need for rapid deployment often creates cultural friction with tech companies.
Anthropic's decision to partner with the Pentagon, despite its safety-forward brand, has raised eyebrows. Kreps pointed out that the company's effort to corner the enterprise market led it to do business with organizations such as Palantir, which has been criticized for putting AI to questionable uses. The puzzle, according to Kreps, is that Anthropic was willing to work with the Pentagon while drawing a red line at domestic mass surveillance and lethal autonomous weapons.
The Ethics of AI in Warfare
The feud between Anthropic and the US military highlights the ethical fault lines in the use of AI in warfare. The Pentagon's argument is that national defense should not hinge on the approval of private tech companies. However, this raises questions about the role such companies play in national security decision-making. Kreps noted that the distinction between hardware and software is crucial: once AI software is handed over to the military, it can be repurposed and used in ways that were never part of the explicit agreement.
Kreps drew parallels with the case of the San Bernardino killer's iPhone, where authorities demanded Apple create a backdoor to access the device. In contrast, Anthropic's AI software can be used in ways that may not be transparent, and the company would have no leverage to control its use once it is in the hands of national security professionals.
The use of AI in warfare is not a new phenomenon, but it has become increasingly sophisticated. Kreps noted that AI is extremely useful in military settings, particularly for pattern recognition and separating signal from noise. However, its use in counter-terrorism strikes raises concerns about the potential for misidentifying targets.
Autonomous Weapons and the Future of Warfare
The dispute between Anthropic and the US military has revived longstanding questions about AI use in warfare. Kreps noted that the challenge is ensuring there is a human in the loop when autonomous systems are deployed. The US claims it will not use AI in a fully autonomous capacity, but it is unclear what process exists to ensure this.
Kreps believes that the current feud is a harbinger of things to come, as the technology continues to advance and become more sophisticated. The involvement of private tech companies in national security decision-making raises complex questions about the ethics of AI in warfare. As the use of AI in warfare becomes more prevalent, it is essential to address these concerns and establish clear guidelines for its use.
Conclusion
The feud between Anthropic and the US military serves as a catalyst for a broader discussion about the ethics of AI in warfare. As AI technology continues to evolve, it is crucial to address the complex questions surrounding its use in military contexts. The dispute highlights the need for transparency, accountability, and clear guidelines for the use of AI in warfare, ensuring that the benefits of this technology are realized while minimizing its risks.