Picture someone drawing a line in the sand. They tell the world they stand behind it. Then, the moment it costs them a contract, they quietly step across it. That is exactly what happened this week.
It started with Anthropic.
The company had a Pentagon agreement to deploy Claude across the Department of Defense's classified networks. Then the DoD asked to change the terms. They wanted Claude used for two things: mass domestic surveillance of American citizens, and autonomous weapons that make lethal decisions without human oversight.
Anthropic CEO Dario Amodei said no. Publicly.
The Pentagon declared Anthropic a national security risk. President Trump signed an executive order telling all federal agencies to stop using Claude. Anthropic lost the contract. They held the line anyway.
The industry took notice. Sam Altman praised Anthropic's stance.
Then, hours later, OpenAI signed the same deal, on the exact terms Anthropic had just refused.
The internet exploded.
Within 72 hours, a grassroots campaign called QuitGPT crossed 1.5 million signatories. Protesters showed up outside OpenAI's San Francisco headquarters. Chalk messages covered the pavement: "No killer robots." "Sam Altman is watching you."
One-star App Store reviews spiked overnight. OpenAI's own researchers criticized the deal publicly, and one thread alone drew nearly 500,000 views. A large group of internal staff signed an open letter backing Anthropic's position. That kind of internal dissent is rare. It is also very hard to contain.
Altman praised Anthropic's red lines publicly, then stepped across the same ones privately. That is not a communications failure. That is a trust failure. And trust, once broken at this scale, does not recover on a press release timeline.
By the weekend, Altman was in damage-control mode.
He held open Q&A sessions on X. He admitted the announcement was rushed, looked opportunistic, and was handled badly. By Monday, OpenAI had renegotiated the contract, adding language to prohibit domestic surveillance.
Critics weren't satisfied. The amended terms contained too many legal carve-outs. The protests kept going.
The deeper damage may last far longer than any contract dispute.
OpenAI built its identity on a simple promise: bring AI safely to the world. That identity is now a liability. Every future decision will be measured against this one.
Anthropic didn't say another word.
Claude hit number one on the U.S. App Store. Not through advertising. Through contrast.
Thanks for being a valued subscriber
AI Daily Brief