Here's an uncomfortable truth nobody in the AI industry wants to say out loud: most people are using AI like a vending machine. Insert prompt. Grab output. Walk away. The technology is everywhere, adoption numbers are climbing, and the average user is still essentially pressing buttons without understanding what's happening behind the glass. That's not fluency. That's pattern-matching dressed up as productivity.

Anthropic decided to measure this. Their new AI Fluency Index, part of the company's ongoing Education Report series, cuts through the hype: it analyzed nearly 10,000 real conversations across a single week in January 2026, tracking 11 specific behaviors that separate genuine AI collaboration from passive consumption. What the researchers found should make every power user, casual dabbler, and enterprise team stop and take stock.

[Chart: More fluency behaviors in iterative vs. one-and-done chats]


Finding #1: Staying in the Conversation Is the Whole Game

The data is almost embarrassingly clear on this one. People who push back, follow up, and keep refining their AI conversations show dramatically higher fluency across every single behavior in the study. Researchers call it "iteration and refinement," but a simpler label works just fine: not giving up after the first answer.

This one habit showed up in 85.7% of conversations, and in those exchanges users averaged 2.67 more fluency behaviors than people who sent a single prompt and moved on. The gap in critical thinking was staggering: iterative users were more than five times more likely to question the AI's reasoning, and four times more likely to catch missing context in a response.


The first answer an AI gives you isn't the answer; it's the opening bid in a negotiation. Most people never counter-offer.

What this means practically: every time you accept a response without a follow-up, you're forfeiting the most valuable part of the interaction. The best AI users aren't the ones with the cleverest prompts — they're the ones who refuse to stop at "good enough."

Finding #2: Polished Outputs Are a Cognitive Trap

This is the finding that should genuinely alarm people. About one in eight conversations in the study involved creating something tangible: code, a document, an app, or a formatted report. And the pattern that emerged in those exchanges is one of the most dangerous things AI could normalize right now.

Users in artifact conversations started stronger; they gave clearer instructions, specified formats, provided examples. But the moment the AI handed back something that looked finished, critical thinking evaporated. They were less likely to fact-check. Less likely to probe for gaps. Less likely to ask the AI to justify its reasoning. Down 3 to 5 percentage points across every single evaluative behavior.

AI is now good enough at looking right that it can fool you into skipping the part where you check whether it actually is. That's not a UX problem; that's a judgment problem.

The mechanism is psychological, not technical. A clean interface, a properly formatted document, code that compiles without errors: these signals trigger a mental shortcut that says, "job done." But the AI doesn't know what it doesn't know. The more polished the output, the more deliberately you need to fight the instinct to accept it at face value. Because the errors inside a beautiful wrapper are the ones that ship.

3 Ways to Level Up

Stop Using AI Passively. Start Here: Moves That Actually Change Your Output Quality

1. Refuse to accept the first response. Make it a rule.

Every AI reply is a draft. Push back on anything that feels thin. Ask it to go deeper, reconsider, or try a different angle. This single habit unlocks nearly every other fluency behavior in the study. It's not optional; it's the foundation.

2. When the output looks great, dig harder.

Ask: What's missing? Where could this be wrong? What assumptions did you make that I didn't give you? The cleaner and more complete something appears, the more important it is to interrogate it, because that's exactly when your brain wants to stop.

3. Set the rules before the conversation starts.

Only 30% of users bothered to tell the AI how they wanted it to behave. Be in that 30%. Open with: "Challenge my thinking if something's off," "Show your reasoning before conclusions," or "Tell me what you're unsure about." You're not being demanding; you're being smart. The AI will perform differently because you asked.
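If you work with Claude through the API rather than the chat window, all three habits can be encoded directly in the request. The sketch below is a minimal illustration, assuming the Anthropic Python SDK's Messages API; the `push_back` helper, the system-prompt wording, and the model name in the comment are illustrative assumptions, not anything prescribed by the study.

```python
# Habit #3: set the rules up front, before the conversation starts.
SYSTEM_RULES = (
    "Challenge my thinking if something's off. "
    "Show your reasoning before conclusions. "
    "Tell me what you're unsure about."
)

def push_back(history, reply):
    """Record the model's reply, then append a follow-up turn that
    refuses to stop at the first answer (habit #1) and interrogates
    polished-looking output (habit #2)."""
    history.append({"role": "assistant", "content": reply})
    history.append({
        "role": "user",
        "content": (
            "Before I accept this: what's missing, where could it be "
            "wrong, and what assumptions did you make that I didn't "
            "give you?"
        ),
    })
    return history

# Against the real API, the call would look roughly like (untested sketch):
#   import anthropic
#   client = anthropic.Anthropic()
#   msg = client.messages.create(
#       model="claude-sonnet-4-20250514",  # model name is an assumption
#       max_tokens=1024,
#       system=SYSTEM_RULES,               # habit #3
#       messages=history,                  # history built via push_back
#   )

history = [{"role": "user", "content": "Draft a rollout plan for feature X."}]
history = push_back(history, "Here is a polished three-phase plan...")
print(len(history), history[-1]["role"])
```

The point isn't the specific wording; it's that the follow-up turn is built into the loop by default, so accepting the first answer becomes the exception rather than the habit.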

The Bigger Picture

Fluency Is the New Literacy and the Gap Is Already Opening

Here's what the study doesn't say, but the data implies: a split is forming. On one side, users iterate, question, and engage deeply, compounding that advantage with every conversation. On the other, the majority prompt and go, treating AI as a faster search engine rather than a thinking partner.

The research is honest about its limits. The sample skews toward early adopters, people who are already comfortable in multi-turn AI conversations, which means the real population-wide picture is likely worse. And 13 of the 24 fluency behaviors the researchers care about most (the ethical ones, the transparency ones, the downstream judgment calls) happen entirely outside the chat window and couldn't be measured at all.

That means we're measuring the easy stuff, and the numbers are already mixed. Future work will track whether fluency builds over time, whether nudging toward iteration improves critical thinking, and what skills develop naturally versus which ones need to be deliberately taught. The goal isn't a report card, it's a roadmap.

The window to develop real AI fluency before it becomes a hard competitive differentiator is open right now. It won't stay open indefinitely. The question isn't whether AI is in your workflow, it almost certainly is. The question is whether you're using it, or whether it's using you.

Quick Takeaways

  • Iterate or you're wasting the tool. Users who refine conversations average 2.67 more fluency behaviors per exchange. One-and-done prompting isn't a strategy, it's leaving value behind every single time.

  • Fight the polish instinct. The moment an output looks finished is exactly when to press harder. Beautiful outputs hide ugly errors. Make interrogating them a non-negotiable habit.

  • Set your terms before you start. 70% of users never tell the AI how to behave. Join the 30% who do. It changes what you get back immediately and consistently.

  • The most important behaviors can't be measured yet. Ethics, transparency, downstream judgment, all those happen outside the chat. Build them deliberately, because no dataset is going to hold you accountable.

  • Fluency compounds. Start building it now. The gap between high-fluency and low-fluency users is widening with every conversation. Frequency without intention doesn't close it, it widens it.

Thanks for being a valued subscriber.

 AI Daily Brief

SOURCE: ANTHROPIC EDUCATION REPORT · THE AI FLUENCY INDEX · FEBRUARY 2026. New research from Anthropic tracks the habits of nearly 10,000 real conversations to reveal who's truly mastering AI collaboration and where almost everyone is falling short.
