Cloud Publica
Investigation

The Loop

The AI that bombs the country that bombs the data centers that run the AI.

25+ primary sources · March 17, 2026

Kristine Socall, MBA International Economic Development

Founder & Executive Director, Gifted Dreamers, Inc. 501(c)(3)

[Figure: A visual representation of the feedback loop between AI targeting systems, military strikes, and data center infrastructure]

On February 28, 2026, the United States launched Operation Epic Fury against Iran.

The targeting was generated by an AI called Claude. Claude runs on Palantir’s Maven Smart System. The Maven Smart System runs on Amazon Web Services.

On March 1, Iran bombed the AWS data centers.

The AI that generates the kill lists runs on the infrastructure the target is attacking. The target is attacking the infrastructure because of the kill lists.

This is The Loop.


The Maven Pipeline

Palantir’s Maven Smart System is the Pentagon’s primary AI targeting platform. The contract started at $480 million in May 2024. It was expanded in September 2024. The ceiling was raised by $795 million in May 2025. Total value: over $1.3 billion.

Claude — built by Anthropic — is integrated into the system via Palantir’s Artificial Intelligence Platform (AIP), running on classified AWS networks. What Claude does: it processes classified intelligence streams, generates target lists, ranks targets by strategic importance, and assesses post-strike impact. It consolidated eight separate intelligence workflows into one. The human workforce went from approximately 2,000 intelligence officers to roughly 20.

The numbers from the first weeks of Operation Epic Fury:

  • 1,000+ targets generated in the first 24 hours.
  • 6,000+ in the first three weeks.
  • 86 seconds: average time from intelligence input to targeting decision.
  • 20 seconds: average time for a human officer to sign off.

Eighty-six seconds to decide. Twenty seconds to approve. One thousand times a day.
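The arithmetic behind these figures is easy to check: 1,000 targets spread evenly across 24 hours works out to one every 86.4 seconds, which matches the reported average. A minimal sketch (the constants are the article's reported numbers, not data from any actual system):

```python
# Throughput sanity check on the reported Operation Epic Fury figures.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds
targets_per_day = 1_000          # "1,000+ targets generated in the first 24 hours"
review_seconds = 20              # "20 seconds: average time for a human officer to sign off"

# Implied average cycle time if 1,000 targets fill a full day.
cycle_time = SECONDS_PER_DAY / targets_per_day
print(f"Implied cycle time: {cycle_time:.1f} s per target")  # 86.4 s

# Total human sign-off time for one day's output, in person-hours.
review_hours = targets_per_day * review_seconds / 3600
print(f"Daily sign-off load: {review_hours:.1f} person-hours")
```

The 86.4-second cycle time implied by the volume lines up almost exactly with the 86-second average the sources report, which suggests the pipeline was running near-continuously.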


The Strike

On March 1, Iranian drones struck two AWS facilities in the UAE (region me-central-1) and one in Bahrain (region me-south-1).

The targeting was deliberate. The IRGC hit Bahrain specifically because it hosts US military workloads. Tasnim News Agency — aligned with the Revolutionary Guard — published the target list before the strikes: Amazon. Microsoft. Palantir. Oracle.

They knew what they were hitting. They knew why it mattered.

The damage: 38 AWS services went down in the UAE. 46 in Bahrain. Downstream, Anthropic’s Claude experienced its first major outage. It was not the last. Claude went down three times in March.

The AI that generated the targeting for the strikes on Iran was taken offline by the strikes on Iran.


The Contradiction

On February 27 — one day before Operation Epic Fury launched — the Pentagon designated Anthropic a “supply chain risk to national security.”

The same day, the Trump administration directed federal agencies to cease using Anthropic products. OpenAI got the Pentagon deal instead.

The dispute was simple. The Pentagon demanded that Anthropic permit "all lawful purposes" for Claude, including autonomous weapons systems and mass surveillance. Anthropic refused. It did not refuse all military applications: intelligence analysis, logistics, war planning, and targeting were all permitted. What Anthropic refused was fully autonomous weapons and mass domestic surveillance.

Anthropic CEO Dario Amodei called the situation “inherently contradictory.”

On March 9, Anthropic sued the Pentagon.

On March 12, Palantir CEO Alex Karp confirmed in a CNBC interview that Claude was still active on the Maven Smart System despite the blacklist. Palantir had not switched models. The Pentagon had blacklisted its own primary targeting AI — while using it in active combat.

Pentagon Under Secretary Emil Michael, speaking at a Fortune conference on March 7, described a recent military operation in Venezuela as a “whoa moment.” His concern was not that AI was making kill decisions in 86 seconds. His concern was: “What if this software went down?”

It did go down. Three times.


The Dependency

Until the blacklist, Claude was the only AI model authorized to operate on classified Pentagon networks.

The only one.

The reduction from 2,000 intelligence officers to 20 means there is no manual fallback at operational tempo. You cannot process 1,000 targets per day with 20 people and no AI. You cannot process them with 20 people and a different AI that has not been certified for classified operations. The dependency is structural.

Palantir’s AIP supports other models — GPT-4, Llama, Mixtral. But switching foundation models on classified infrastructure during active combat operations is not a firmware update. It requires recertification, revalidation, and reauthorization through a process that normally takes months. The Pentagon blacklisted the model on a Friday. The war started the next day.

There is a timing detail worth noting. On February 24 — four days before the war and three days before the blacklist — Anthropic released version 3.0 of its Responsible Scaling Policy. The revision dropped the company’s hard commitment to pause model training if safety evaluations were triggered. Critics noted the timing immediately. Anthropic said the revision had been in progress for months. The calendar says what it says.

What Anthropic refused, in the end, was narrow: mass domestic surveillance and fully autonomous weapons. Everything else — intelligence analysis, logistics planning, war planning, target generation, post-strike assessment — was permitted under their acceptable use policy. The line they drew was real. It was also a very specific line.


The Loop

Here is the feedback loop. Every step is documented.

  1. Claude generates targeting for strikes on Iran through the Maven Smart System.
  2. Iran bombs the data centers that run Claude.
  3. Claude goes down. Three outages in March. The targeting pipeline is disrupted.
  4. The Pentagon blacklists Anthropic — not for generating kill lists, but for having safety guardrails.
  5. Claude keeps running anyway because Palantir cannot switch models during active operations.
  6. Russia provides Iran with satellite intelligence on US military positions and infrastructure targets.
  7. Russia benefits from the $100+ oil prices caused by the war that Claude helps wage.
  8. The same Palantir that runs Maven also runs ICE’s ImmigrationOS — a $145 million contract — the ontology architecture documented in The Endgame.

The loop is not a metaphor. It is a literal description of the infrastructure.

An AI built in San Francisco generates targeting for bombs dropped on Iran. Iran bombs the data centers in the Gulf that run the AI. The AI goes down. The Pentagon punishes the AI company — not for the bombing, but for refusing to do more. The AI keeps running because nobody can replace it fast enough. A third country feeds intelligence to the target to keep the war going because the war keeps oil prices high. And the company that runs the targeting also runs the domestic surveillance system that tracks, detains, and deports people inside the United States.

One company. One architecture. One loop.


What This Means

Rep. Sara Jacobs, ranking member of the House Armed Services Subcommittee on Cyber and Innovation, said it plainly: “AI tools aren’t 100% reliable — they can fail in subtle ways that are difficult to detect.”

An ICE officer testifying about Palantir’s domestic system said the same thing differently: the system “could say 100%, and it’s wrong.” Same company. Same architecture. Different application. One generates targeting for airstrikes. The other generates targeting for deportation raids.

Eighty-six-second targeting decisions. Twenty-second sign-offs. Six thousand targets in three weeks. And the human who signs off has twenty seconds to override a system that consolidated the work of two thousand people.

The CEO of the company running this system, Alex Karp, told CNBC that AI is designed to reduce the economic power of educated women who vote Democratic. Curtis Yarvin, a longtime ideological influence on Palantir co-founder Peter Thiel, has called for public executions of political opponents. These are not leaked private statements. They were made on the record, on camera.

This is the ontology in wartime. The same classification system that sorts immigrants into deportation categories sorts Iranian infrastructure into strike categories. The same 20-second human review that approves an ICE raid approves an airstrike. The architecture does not distinguish between the two because it was not designed to distinguish between the two.

Read The Endgame for how that architecture works domestically.


The loop has no exit.

The infrastructure that wages the war is the infrastructure that surveils the population is the infrastructure that tilts the elections. One architecture. One company. 120 federal contracts.

$1.83 billion.

The AI that bombs the country that bombs the data centers that run the AI. The country that provides intelligence to the target profits from the war the AI helps wage. The company that runs the kill chain runs the deportation chain. The Pentagon blacklists the AI for having guardrails while using the AI without guardrails.

There is no contradiction. There is only the loop.


Sources

  1. Haskins, C. “Inside the AI System Generating Targets for the US Bombing of Iran.” WIRED, March 2026.
  2. Hartung, W. “AI and the Iran War: The Pentagon’s Dangerous Experiment.” Responsible Statecraft, March 2026.
  3. Konkel, F. “Palantir Maven Smart System Contract Ceiling Raised to $1.3B.” DefenseScoop, May 2025.
  4. Dilanian, K. & Ainsley, J. “Pentagon AI Targeting in Iran Raises Accountability Questions.” NBC News, March 2026.
  5. Bhattacharyya, S. “Iranian Drones Strike AWS Data Centers in UAE and Bahrain.” Rest of World, March 2026.
  6. “Amazon Cloud Services Disrupted After Iranian Strikes on Gulf Infrastructure.” Bloomberg, March 2026.
  7. “Iran Publishes Target List of US Cloud Providers Ahead of Strikes.” Tom’s Hardware, March 2026.
  8. Lashinsky, A. “Pentagon Under Secretary: ‘What If This Software Went Down?’” Fortune, March 7, 2026.
  9. “Anthropic Designated Supply Chain Risk by Pentagon.” CNBC, February 27, 2026.
  10. Siddiqui, F. “Trump Administration Orders Agencies to Stop Using Anthropic AI.” Axios, February 2026.
  11. “Anthropic Sues Pentagon Over Blacklist.” NPR, March 9, 2026.
  12. Chesney, R. “The Anthropic-Pentagon Dispute: Legal and Strategic Implications.” Lawfare, March 2026.
  13. Karp, A. Interview. CNBC, March 12, 2026. (Confirmed Claude still active on Maven.)
  14. Amodei, D. “On the Pentagon’s Decision.” Anthropic official statement, February 2026.
  15. Anthropic. “Responsible Scaling Policy v3.0.” February 24, 2026.
  16. Shavit, Y. et al. “Analysis of Anthropic RSP 3.0.” GovAI, March 2026.
  17. Palantir Technologies. “AIP for Defense: Technical Overview.” Palantir documentation, 2025.
  18. Jacobs, S. Statement to House Armed Services Subcommittee on Cyber and Innovation, March 2026.
  19. ICE officer testimony, Perdomo v. Noem proceedings, 2026.
  20. Karp, A. “AI and the Future of Western Civilization.” CNBC, 2025.
  21. “Anthropic Claude Outages: March 1, March 8, March 14.” Anthropic Status Page, March 2026.
  22. “Russian Satellite Intelligence Sharing with Iran.” Financial Times, March 2026.
  23. “Oil Prices Surge Past $100 on Iran Conflict.” CNBC, March 2026.
  24. Palantir Technologies. ICE ImmigrationOS contract, $145M. USAspending.gov.
  25. Palantir Technologies. Federal contract portfolio: 120 contracts, $1.83B total. USAspending.gov, accessed March 2026.

25+ primary sources. All verifiable. Updated March 17, 2026.