OpenAI Claims Safety ‘Red Lines’ in Pentagon Deal—But Users Aren’t Buying It
In brief
- OpenAI signed an agreement with the Pentagon to deploy AI in classified environments.
- The firm said it imposed “red lines,” but the contract allows “all lawful purposes,” a standard that ultimately depends on the government’s own interpretation.
- The controversy sparked the QuitGPT movement and drove a surge in Claude downloads.
OpenAI said this weekend that it reached an agreement with the Pentagon to deploy advanced AI systems in classified environments, marking a significant expansion of the company’s work with the U.S. military.
The announcement came less than 24 hours after the Trump administration blacklisted Anthropic, designating the rival AI firm a “supply chain risk to national security” following a dispute over contract language related to surveillance and autonomous weapons.
President Donald Trump also directed federal agencies to immediately cease using Anthropic’s technology, with Treasury Secretary Scott Bessent writing Monday on X that the agency “is terminating all use of Anthropic products, including the use of its Claude platform, within our department.”
The timing of the AI announcements placed OpenAI’s deal under intense scrutiny. In a detailed blog post, the company outlined what it described as firm “red lines” and layered safeguards governing its Pentagon partnership.
The agreement, as presented by OpenAI, raises broader questions about how AI systems will be governed in national security settings, and how the company’s stated restrictions will be interpreted and enforced in practice.
When “lawful” isn’t enough
OpenAI’s blog post opens with three commitments framed as non-negotiable: no use of its technology for mass domestic surveillance, no independent direction of autonomous weapons systems, and no high-stakes automated decisions such as social credit scoring.
Then comes the actual contract language—which OpenAI notably calls “the relevant language,” not “the full agreement.”
“The Department of War may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols,” OpenAI said.
That is the exact phrase Anthropic said the government had demanded throughout negotiations, and the phrase Anthropic refused to accept. OpenAI signed it, yet argues its red lines remain fully intact.
However, “lawful” in national security contexts isn’t a fixed boundary—it lives inside a patchwork of statutes, executive orders, internal directives, and often classified legal interpretations. When a contract grants “all lawful purposes,” the practical limit becomes the government’s current legal envelope, not an independent standard set by the vendor.
A cluster of clauses
The weapons provision reads that the AI system “will not be used to independently direct autonomous weapons in any case where law, regulation, or department policy requires human control.”
The prohibition only applies where some other authority already requires human control—it borrows its teeth entirely from existing policy, specifically DoD Directive 3000.09. That directive requires autonomous systems to allow commanders to exercise “appropriate levels of human judgment over the use of force.”
And “appropriate” is as subjective as can be.

Human judgment is not human control. This distinction was not accidental. Defense scholars have noted that omitting “human-in-the-loop” language was deliberate, precisely to preserve operational flexibility.
OpenAI’s strongest counterargument is its cloud-only deployment architecture—fully autonomous lethal decision loops would require edge deployment on battlefield devices, which this contract doesn’t permit. That’s a real technical constraint.
But cloud-based AI can still perform target identification, pattern-of-life analysis, and mission planning. Those are kill-chain activities regardless of where the final trigger sits. The outcome for a target doesn’t differ based on which server the model runs on.
The surveillance clause follows a similar pattern. OpenAI’s stated red line: no mass domestic surveillance. The contract language: The system “shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities”—then lists the Fourth Amendment, FISA, and Executive Order 12333.

The word “unconstrained” implies a constrained version of mass surveillance would be permissible. And EO 12333 is the executive order the NSA has used to justify intercepting Americans’ communications when done outside U.S. borders.
This is where Anthropic’s concerns about the contract’s wording come into focus. Anthropic argued that current law hasn’t caught up with what AI makes possible: the government can legally purchase vast amounts of aggregated commercial data about Americans without a warrant, and has already done so.
OpenAI’s contract language, by anchoring its protections to existing legal frameworks, may not close the gap Anthropic was actually worried about.
Altman responds
On Saturday night, Altman held an AMA responding to thousands of questions about the deal. When asked what would cause OpenAI to walk away from a government partnership, he answered: “If we were asked to do something unconstitutional or illegal, we will walk away.”