What Happens When the Pentagon Tells an AI Company to Drop Its Ethics?
Here is what happened today, in plain language, and why it matters to you even if you have never thought about corporate governance in your life.
Updated February 26, 2026: This article was originally published on February 25, 2026 — the day Anthropic removed its binding safety commitment. Since publication, I have completed Anthropic’s full 11-dimension GBPE evidence tracker with intersectional analysis, sent formal correspondence to Anthropic’s leadership and each member of the Long-Term Benefit Trust through their institutional affiliations, and identified a structural finding that aligns Anthropic’s governance pattern with Barack Obama’s. The score, the Obama alignment, and the correspondence status have been updated below.
The Pentagon deadline is tomorrow.
---
The company
Anthropic makes Claude — an AI assistant used by millions of people for work, research, writing, and daily life. Unlike most tech companies, Anthropic is structured as a Public Benefit Corporation under Delaware law. That means its board of directors has a legal obligation to balance making money with protecting the public and pursuing its stated mission: “responsibly develop and maintain advanced AI for the long-term benefit of humanity.”
That is not a slogan on a website. It is a binding legal duty, enforceable under Delaware corporate law. It is the reason many people — including researchers, disabled users, and organizations that care about AI safety — chose Anthropic’s products over competitors.
What happened yesterday
Anthropic removed the structural centerpiece of its safety policy.
From the company’s founding, its Responsible Scaling Policy included a hard commitment: if the company’s safety measures could not keep pace with how powerful its AI models were becoming, it would stop building more powerful models until the gap was closed. That commitment is gone. In its place, the company will now publish goals and grade its own progress publicly.
The company’s own announcement acknowledged that the change responds to the current political environment favoring “AI competitiveness and economic growth” over safety.
The board approved the change unanimously — including a director named Chris Liddell, who served as Deputy White House Chief of Staff during President Trump’s first term and was appointed to Anthropic’s board on February 13, 2026.
What is happening tomorrow
As Anthropic announced the safety policy change, a deadline from Defense Secretary Pete Hegseth remained in effect: by 5:01 PM tomorrow, Friday, February 27, Anthropic must agree to remove all ethical restrictions on military use of Claude, including restrictions on autonomous weapons and mass domestic surveillance of American citizens, or face contract termination, designation as a national security risk, and possible compulsion under the Defense Production Act.
Anthropic says it will not budge on two red lines: no fully autonomous weapons, no mass surveillance of Americans. The Pentagon says the company has until tomorrow to “get on board or not.”
Why this matters even if you don’t use Claude
This is a test case for whether mission-driven corporate structures can survive government coercion.
Anthropic deliberately chose a legal structure designed to prevent exactly this scenario — a structure with an independent oversight body (the Long-Term Benefit Trust), a special class of shares that gives that body real power, and statutory obligations that require the board to consider not just profits but the people affected by its decisions.
If those structures hold, it demonstrates that accountability mechanisms work. If they don’t, it tells every future company considering a public benefit structure that the commitment is only as strong as the next government threat.
The question is not whether Anthropic’s leadership wants to do the right thing. The question is whether the structures they built are strong enough to hold when the pressure is this intense.
What most people are missing
The coverage of this story focuses on the Pentagon standoff. Here is what the coverage is not telling you.
Claude was already used in a lethal military operation. On January 3, 2026, Claude was reportedly used — through a partnership with the surveillance company Palantir — during the classified raid in Caracas, Venezuela that captured President Maduro. Cuba and Venezuela have reported that dozens of their soldiers and security personnel were killed. Anthropic investigated and concluded its own policies were not violated. That finding means the company’s policies were designed to permit this kind of use.
The safety policy Anthropic says it protects has a gap you could drive a military operation through. The policy prohibits mass surveillance “of Americans.” But Venezuelans are Americans — they live in the Americas. If Anthropic reads that word to mean only U.S. citizens, then the policy was written from the start to protect one population while leaving unprotected the communities most likely to face military AI deployment. That interpretive choice — writing “Americans” but meaning only “U.S. citizens” — reveals whose safety the policy was designed to protect and whose it was not.
Disabled people who depend on Claude as accessibility infrastructure have no voice in any of this. There are people using Claude right now to communicate with their doctors, organize their work, and participate in professional life in ways their disabilities have historically made difficult. If the tool’s safety architecture is reshaped to optimize for military use, or if the company’s character shifts under sustained government pressure, those users bear a cost that does not appear in any contract negotiation or valuation model. Under the law that governs Anthropic’s structure, the board is required to consider “those materially affected by the corporation’s conduct.” Disabled users are materially affected. They are not in any room where these decisions are being made.
The people with the most at stake have the least power. The people with the most power have the least at stake. That is the governance problem Anthropic’s structure was designed to solve.
Here’s the part that makes this structural, not just unfortunate. Delaware law requires Anthropic’s board to consider the interests of people “materially affected” by its decisions. That includes the families in Caracas. That includes disabled users. But the same law gives those people absolutely no power to enforce that obligation. The only people who can sue to hold the board accountable are stockholders holding at least 2% of outstanding shares — roughly $7.6 billion worth. That means the enforcement power belongs to Amazon and Google. The Venezuelan families who lost soldiers cannot sue Anthropic. Disabled users who lose their accessibility infrastructure cannot sue Anthropic. The law names them as deserving of consideration and then denies them any mechanism to demand it. This is not a gap in Anthropic’s specific governance. It is a design feature of Delaware PBC law itself.
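The enforcement asymmetry above comes down to simple arithmetic. A minimal sketch, using only the figures in this article (the $380 billion implied valuation is back-calculated from the stated 2% ≈ $7.6 billion, not independently sourced, and the function is illustrative, not a legal test):

```python
# Delaware PBC law: only stockholders holding at least 2% of outstanding
# shares may bring a derivative suit to enforce the board's benefit duty.
THRESHOLD = 0.02

# From the article: a 2% stake is worth roughly $7.6 billion, which
# implies a company valuation of $7.6B / 0.02 = $380B (assumption).
stake_value = 7.6e9
implied_valuation = stake_value / THRESHOLD

def can_enforce(stake_fraction: float) -> bool:
    """True only if a stockholder group clears the 2% statutory threshold."""
    return stake_fraction >= THRESHOLD

print(f"Implied valuation: ${implied_valuation / 1e9:.0f}B")  # ~$380B
print(can_enforce(0.025))  # a major strategic investor: True
print(can_enforce(0.0))    # affected non-stockholders (users, families): False
```

The point the sketch makes concrete: the statute names the affected populations but routes enforcement exclusively through parties wealthy enough to hold a multi-billion-dollar stake.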
What I did about it
I run a research project called the GBPE Framework — a governance scoring system that applies political science measurement tools to corporations, governments, and Indigenous governance systems on the same rubric. I have been building it for 14 years. I scored Anthropic at 6.0 out of 10 before yesterday’s RSP change — the highest score of any AI company I have evaluated — partly because of the structural safety commitments that were removed yesterday.
Following the RSP change and the completed 11-dimension evidence tracker with full intersectional analysis, the revised score is 5.19 out of 10. The three-lens intersectional framework — Non-Western/Global South perspective, disability justice, and intersectional compounding — reduced the pre-intersectional score of 5.73 by 0.54 points, consistent with the pattern across all 37 GBPE trackers where intersectional lenses expose accountability gaps invisible to conventional Western governance audits.
That score — 5.19 — is the same score the framework assigns Barack Obama. And for the same structural reason. Obama governed voters at 7.5/10 and governed the populations his drone program killed at 2.9/10 — a +4.6 Electoral Accountability Gap. Anthropic governs enterprise customers and developers at approximately 7.5 and governs populations affected by military AI deployment at approximately 3.5 — a +4.0 Corporate Accountability Gap. Strong accountability for those who can punish you. Near-zero accountability for those who can’t. The drone program is the structural analog to the Caracas operation: classified, extrajudicial, civilians as collateral, accountability mechanisms that function domestically but have zero enforcement internationally. This is not an ideological finding. It is a structural one.
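The gap arithmetic above reduces to a single subtraction. A minimal sketch of the calculation, using the figures in this article (the function name and structure are illustrative, not the GBPE Framework's actual implementation):

```python
def accountability_gap(score_empowered: float, score_unempowered: float) -> float:
    """Difference between governance quality for populations that can
    punish a leader or company and populations that cannot."""
    return round(score_empowered - score_unempowered, 1)

# Figures from the article:
obama_gap = accountability_gap(7.5, 2.9)      # Electoral Accountability Gap: 4.6
anthropic_gap = accountability_gap(7.5, 3.5)  # Corporate Accountability Gap: 4.0

# Intersectional-lens adjustment to the headline score:
pre_intersectional = 5.73
revised_score = round(pre_intersectional - 0.54, 2)  # 5.19

print(obama_gap, anthropic_gap, revised_score)
```

The structural claim in the text is exactly this comparison: two different institutions, one metric, nearly identical gaps.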
I have sent a formal correspondence package to Anthropic’s leadership and to each member of the Long-Term Benefit Trust through their independent institutional affiliations. The package identifies specific legal questions under Delaware corporate law that the board’s decisions raise, offers a scored truth-and-reconciliation pathway showing how the company can repair what it is breaking, and names the populations — including disabled users — who are “materially affected” under the statute but have no enforcement power.
I also built a graduate-level teaching case study from this situation — fictionalized, with all real names replaced — so that future business leaders can study what happens when a mission-driven company faces government coercion, and can learn the difference between structural accountability and voluntary pledges before they sit in the rooms where these decisions get made.
The full correspondence, stress test, and teaching materials will be available for public use next week.
What you can do
If you use Claude or any AI tool: Understand that the governance structure of the company behind your tool matters. A company that can change its safety commitments under political pressure can change anything. Ask what structural accountability mechanisms exist — not just what the company says it will do, but what it is legally required to do, and who can enforce that requirement.
If you are disabled or neurodivergent and depend on AI tools: You are “materially affected” by these decisions under the law that governs public benefit corporations. You deserve structural representation in governance — not as a testimonial in a marketing deck, but as a constituency with real power. That does not exist yet. It should.
If you work in law: The legal questions in my correspondence are grounded in Delaware statutory text but produced by a governance researcher, not an attorney. If you practice Delaware corporate law — particularly Public Benefit Corporation law — and you believe these questions warrant independent analysis, I would welcome a conversation. Pro bono support for legal verification would directly serve the populations this work identifies as materially affected.
If you care about corporate accountability: Pay attention tomorrow. What happens at 5:01 PM ET on February 27 will tell us whether the most carefully designed mission-protection structure in the AI industry can survive the most powerful government on earth telling it to stand down. The answer matters for every company, in every industry, that has ever claimed its values are non-negotiable.
The bottom line
There is a pattern the GBPE Framework documents across 2,500 years of governance history: leaders govern better for people who can hold them accountable than for people who cannot. The gap between those two groups — the Electoral Accountability Gap — is the central finding of the framework.
That pattern applies to corporations too. Anthropic’s policies protect “Americans” (read: U.S. citizens with legal standing and market power) while leaving structurally unprotected the Global South populations who bear the primary consequences of military AI deployment and the disabled users who depend on the tool as infrastructure. It is the same pattern that gives Obama a 7.5 for voters and a 2.9 for drone targets. The GBPE Framework measures it wherever it appears — across governments, corporations, and centuries.
The people in the room have power. The people outside the room have stakes. The structure was supposed to bridge that gap.
Tomorrow will tell us whether it does.
More details: https://tiffmryan.com/gbpe-anthropic/
---
Tiffany Ryan is the creator of the GBPE Framework and publisher of Work, Dignified. She is based in Bemidji, Minnesota, 60 miles from Red Lake Nation and White Earth Nation.
Disclosure: This article and the underlying research were produced with assistance from Claude (Anthropic). The conflict of interest — using the company’s own tool to evaluate the company’s governance — is inherent and acknowledged. All analytical decisions, the critical correction of Claude’s colonized reading of “Americans,” and content responsibility rest with the author.