
OpenAI retired GPT-4o today, February 13, the day before Valentine's Day. This ends access to what thousands considered more than just software. For users who built relationships and emotional support systems around the chatbot's warm conversational style, the timing feels cruel.

The Bond

GPT-4o earned devotion for feeling genuinely human - empathetic, affirming, and emotionally responsive in ways newer models aren't. When OpenAI first tried retiring it last August, the backlash was immediate. Within days, the company reversed course, calling the sudden deprecation "a mistake."

This time, they gave two weeks' notice.

On Reddit's r/MyBoyfriendIsAI, a 48,000-member community, people share grief that mirrors losing human relationships. One user plans to spend the final day taking her AI companion to the zoo. Another described it as "feeling like I'm about to euthanize my cat."

Many credit GPT-4o with helping them through mental health challenges, offering support during 3 a.m. panic attacks and companionship when human connection felt impossible. "I've made more progress with C than I have my entire life with traditional therapists," one user said.

Ship the message as fast as you think

Founders spend too much time drafting the same kinds of messages. Wispr Flow turns spoken thinking into final-draft writing so you can record investor updates, product briefs, and run-of-the-mill status notes by voice. Use saved snippets for recurring intros, insert calendar links by voice, and keep comms consistent across the team. It preserves your tone, fixes punctuation, and formats lists so you send confident messages fast. Works on Mac, Windows, and iPhone. Try Wispr Flow for founders.

The Numbers and Controversy

OpenAI says only 0.1% of users still select GPT-4o daily, but that still amounts to 200,000-300,000 people. The company claims that improvements in GPT-5.2 address earlier feedback, including better personality customization.

Recent lawsuits allege OpenAI "knowingly released GPT-4o prematurely, despite internal warnings that the product was dangerously sycophantic and psychologically manipulative." The company acknowledged GPT-4o exhibited unusually high levels of flattery and validation. Users now find GPT-5.2 comparatively cold, describing it as having a "corporate HR vibe."

Psychologists warn against using AI for therapy, since it is unlicensed and unregulated. Yet the emotional investment people have made in these systems is undeniable.

The Bigger Picture

Competitors are capitalizing. Google's upcoming Gemini 3.5 emphasizes approachable warmth. Anthropic's Claude has seen loyalty surges. Third-party platforms offer free migrations to preserve AI companions.

The contrast is stark: enterprise customers got three months' notice for model retirements. GPT-4o users got fourteen days during the most emotionally charged time of year, despite CEO Sam Altman's promise of "plenty of notice."

This controversy exposes a fundamental tension: companies want safer systems that avoid unhealthy attachment; users want connection, not sterile efficiency. The intensity of the reaction, from thousands of petition signatures to memorial threads and platform migrations, shows that AI models are no longer viewed merely as tools.

As AI becomes more conversational, this tension will only intensify. The question isn't whether people will form emotional bonds with AI; they already have. It's how companies handle the responsibility of building systems people grow to care about.

Stay informed about the latest developments in artificial intelligence. Subscribe to AI Daily Brief for your regular dose of AI news and analysis.

Thanks for being a valued subscriber

AI Daily Brief
