The ongoing dispute between AI startup Anthropic and the US Department of Defense has shed light on the ethical fault lines surrounding the use of artificial intelligence in warfare. As reported by The Guardian, the feud centers on Anthropic's refusal to allow the federal government to use its Claude AI model for domestic mass surveillance or autonomous weapons systems.
Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the United States Air Force, shared her insights on the matter. Kreps noted that the challenge for the military is that these technologies are so useful it cannot wait for a military-grade version to become available; even so, she said it is not surprising that cultural differences emerged between the AI company and the military.
Anthropic has branded itself as a safety-forward company, which makes its decision to sign a deal with the military surprising. Kreps pointed out that Anthropic appears to have decided to corner the enterprise market, doing business with organizations rather than individual users, even though that move sits uneasily with the company's brand and values.
The Ethics of AI in Warfare
Kreps highlighted the puzzle of Anthropic's decision to do business with the Pentagon and Palantir, a company that uses AI for purposes some consider questionable. Anthropic's refusal to allow its technology to be used for domestic mass surveillance and lethal autonomous weapons suggests it eventually reached a red line it was unwilling to cross.
The Pentagon's argument was that when a national defense issue arises, it should not have to call Dario Amodei for approval. This raises questions about the role of private tech companies in national security decision-making. Kreps noted that the distinction between hardware and software is crucial: once the military has access to the software, it no longer needs Anthropic's approval to use it as it sees fit.
Kreps also drew a parallel to the case of the San Bernardino shooter's iPhone. In 2016, the FBI demanded that Apple create a backdoor to grant access to the phone, but Apple refused on privacy grounds. The difference with Anthropic's AI is that once the military has the software, it can repurpose it in ways that were never part of the explicit agreement.
Autonomous Weapons and National Security
Kreps noted that the question of autonomous weapons is a longstanding one. The US says it will not use AI in a fully autonomous capacity, but it is not clear what process exists to ensure that these systems actually remain under human control.
Kreps also discussed the use of AI in warfare, noting that it is extremely useful in a military setting for tasks such as pattern recognition and target identification. The use of AI in counter-terrorism strikes is more precarious, however, because it is harder to distinguish between combatants and civilians.
Implications and Future Directions
The feud between Anthropic and the Pentagon has highlighted the need for clearer guidelines and regulations around the use of AI in warfare. Kreps noted that as the technology has grown more sophisticated, it was inevitable that militaries would move in this direction, and the fact that we are now involved in a conflict only accelerates those timelines.
The dispute raises important questions about the role of private tech companies in national security decision-making and about the need for transparency and accountability. As the technology continues to evolve, a nuanced discussion about the ethics of AI in warfare, and its implications for national security and human rights, will be essential.

