Official Blog of the AALS Section on Contracts

Anthropic’s Contracts Dispute with the Pentagon

Anthropic is the company behind Claude, the only artificial intelligence tool currently operating on classified military systems. As Julian E. Barnes and Sheera Frenkel reported in The New York Times last week, the Pentagon and Anthropic were in a dispute over whether Anthropic can impose ethical and legal limitations on the Pentagon’s use of the tool. The Pentagon, thinking two threats are better than one, simultaneously said it would terminate its agreements with Anthropic and claimed that it would force Anthropic to continue to allow the Pentagon to use its products on the Pentagon’s terms.


Years ago, after the Snowden revelations, Nancy Kim (below) and I wrote an article about the extent to which private entities were already engaging in data-mining and surveillance just as invasive as what Mr. Snowden revealed. Part of what motivated us to write the article was the disconnect between the outrage Americans expressed when they learned that our government was surveilling us and the utter lack of outrage when private companies surveil us. At least the government had national-security reasons for its activities. Private companies surveil us so that they can sell our information and send us targeted ads. Ick.

Nancy Kim

It is a strange reversal. Now, a private company with technological capabilities far beyond what was imaginable a decade ago is trying to protect us from a government that would use that technology to violate Fourth Amendment protections against unreasonable searches and to design autonomous weapons that could act independently of human oversight. Double ick.

In a fascinating (and very long) essay reflecting on the imminent arrival of what he calls “powerful AI,” Anthropic CEO Dario Amodei confronts the challenges of a world in which AI can do pretty much everything better than humans. Mr. Amodei has a lot of interesting things to say about the future of AI, but the heart of the conflict between the Pentagon and Anthropic may come down to this money quote:

I think it would be absurd to shrug and say, “Nothing to worry about here!” But, faced with rapid AI progress, that seems to be the view of many US policymakers, some of whom deny the existence of any AI risks, when they are not distracted entirely by the usual tired old hot-button issues. Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake.

As Cade Metz reports in The New York Times, the current U.S. government does not want to wake up. Negotiations between the Pentagon and Anthropic over a $200 million contract collapsed. Anthropic is out and OpenAI is in. At least our future has not been handed over to Grok.


OpenAI’s agreement with the Pentagon reportedly provides that its technology can be used for “any lawful purpose.” OpenAI proclaims that it has specific technical guardrails to ensure that its technology adheres to its safety principles. And you know that OpenAI’s Sam Altman is not going to go along with any ridiculous thing the U.S. government demands, because he refers to the Department of Defense as the “DoW.” I wonder if our new autonomous weapons will be able to find targets in the Gulf of Mexico when they will probably be programmed not to recognize the Gulf of Mexico.

OpenAI claims that its contract with the Pentagon provides for the same sorts of safeguards that Anthropic sought, so it is not clear what has changed. I don’t know whether it was a matter of the government not wanting to be on the receiving end of public lectures about ethical principles, a simple clash of egos, or both.

Rather comically, OpenAI was previously excluded from contracts with U.S. defense agencies because its technologies were not available on Amazon’s cloud computing services. One $50 billion partnership between Amazon and OpenAI later, the stumbling block seems to have been removed. Dozens of OpenAI employees signed a letter supporting Anthropic’s position. Let’s hope that those employees are able to stand their ground and put up a united front against dangerous abuse of powerful AI tools.

Dario Amodei’s essay has another interesting feature. While all three branches of the federal government are gripped by an anti-regulatory fervor, Mr. Amodei mildly suggests that the best safeguard against the myriad dangers associated with powerful AI might be government action. Duh. How I long for the time before the Reagan Revolution when Americans trusted their government. We need to start rebuilding that trust, and maybe it can start with industry leaders confessing that they cannot self-regulate.

One area where I would fault Mr. Amodei’s analysis is his argument that states are the main entities he foresees potentially misusing powerful AI to seize power. He adds:

It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves. AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. 

I would put AI companies in the top tier of the risk pyramid, and I see one company (or family of companies) in particular right at the top.


Rocketman, image by DALL-E

Somehow, we allowed ourselves to become dependent on one privately held company, SpaceX, for rocket technology, abandoning NASA, once a source of fierce national pride. Starlink, a division of that same company, controls the network of satellites that are crucial to our communications. One of Mr. Amodei’s areas of concern is “AI Propaganda.” The man who owns SpaceX and Starlink also owns xAI, which is capable of generating AI propaganda through its Grok AI tool, and he owns X (Twitter), which is an ideal vehicle for the dissemination of such propaganda. All the tools are there, as is the combination of character traits that Mr. Amodei is trying to prevent Claude from acquiring.

I learned a lot about Anthropic from Gideon Lewis-Kraus’s reporting in The New Yorker. If you are not a subscriber and cannot get access to that article, Mr. Lewis-Kraus did a fine interview with P.J. Vogt on the latter’s podcast, Search Engine.