The AI honeymoon is officially over, and it’s messier than I expected.
TL;DR:
- OpenAI’s Pentagon partnership triggered a massive user revolt with 2.5 million people fleeing to Claude
- Anthropic got blacklisted by the Pentagon in apparent retaliation for refusing military AI contracts
- Oracle is laying off up to 30,000 employees to fund AI data centers, making this transition brutally real
The Great AI Defection
I’ve been watching tech backlashes for years, but nothing prepared me for the #QuitGPT movement. When OpenAI announced their Pentagon deal, something visceral happened. Users didn’t just complain on Twitter; they actually left. ChatGPT uninstalls surged 295% while Claude’s signups quadrupled overnight.
This wasn’t your typical outrage cycle that fades by Friday. People genuinely felt betrayed by a company they’d invited into their daily workflows. The irony? Many of these users are probably discovering that AI fiction writing tools work just as well without the moral baggage.
Anthropic’s Impossible Position
Here’s where things get weird. Anthropic, the supposed “safety first” alternative, suddenly found itself blacklisted with a “supply chain risk” designation, a label previously reserved for foreign adversaries, now slapped on a San Francisco startup.
The company claims it’s retaliation for refusing unrestricted military access. Whether you believe that depends on how cynical you’ve become about corporate virtue signaling. Either way, they’re suing the Pentagon twice over, which feels like watching your favorite indie band fight the record label.
The Human Cost Gets Real
Oracle just made the abstract concrete: they’re firing 20,000 to 30,000 people to fund AI infrastructure. Not restructuring or rightsizing, just straight-up trading humans for data centers.
This hits different when you think about creators trying to navigate AI image generation tools while watching illustrators lose work, or authors exploring publishing platforms as AI reshapes the entire industry.
What Actually Matters
LeCun walking away from Meta with a billion dollars to build “real” AI suggests even the architects think we’re on the wrong track. Maybe the chaos is necessary. Maybe watching users vote with their feet and companies choose sides will force some honesty about what we’re actually building.
The geopolitical angle isn’t theoretical anymore when domestic companies get foreign-adversary treatment. We’re not just picking tools; we’re picking sides in a war most of us didn’t know we were fighting.