SECURITY ALERT
Imagine you put a lock on your most private email messages about business deals, legal matters, or sensitive company plans. Now imagine that your own AI helper ignored that lock and read everything anyway. That’s essentially what just happened to thousands of businesses using Microsoft’s Copilot AI tool.
What really happened
For nearly a month, a coding error in Microsoft 365 Copilot allowed the AI assistant to access and summarize emails that were clearly marked as confidential. These weren’t random emails; they were messages that companies had specifically flagged with security labels that were supposed to tell Copilot, “Do not touch this.” The AI touched them anyway.
The bug was first spotted by customers on January 21, but Microsoft didn’t officially acknowledge the problem until February 3. A fix only started rolling out on February 10, and as of today, some organizations still haven’t received the patch. That’s roughly four weeks where confidential emails sitting in people’s Sent Items and Drafts folders were fair game for the AI.
Why does this matter so much?
Think about what lives inside corporate email. Hospitals send messages about patients. Law firms discuss cases that are protected by attorney-client privilege. Banks share trading information and financial data. Companies negotiate mergers worth millions of dollars. When organizations label these emails as confidential and set up security rules to block AI from processing them, they’re not being paranoid. They’re following the law and protecting their people.
Microsoft has argued that the bug didn’t expose anyone’s emails to other people; Copilot only processed emails belonging to the person who was already signed in. But that defense misses the point. Companies set up these protections specifically to keep AI from reading and summarizing sensitive content at all. The entire purpose of confidentiality labels is to draw a hard line that automated tools aren’t allowed to cross.
KEY TAKEAWAY: When AI tools bypass the very security rules designed to control them, it raises a serious question: can businesses trust AI assistants with their most sensitive work?
The timing makes things even more uncomfortable for Microsoft. Just this week, the European Parliament banned built-in AI features on all staff devices. They fear that AI tools could leak confidential government communications. That decision looks less like an overreaction and more like common sense.
Microsoft says the root cause has been fixed and a “targeted code fix” is rolling out across affected systems, with full resolution expected by February 24. But the company still hasn’t disclosed how many customers were affected or how many confidential emails were processed by the AI during the exposure window.
The bottom line
This incident highlights one of the biggest tensions in the AI era. Companies are racing to add AI features to every product to boost productivity, but the security frameworks meant to keep that AI in check aren’t keeping up. When an AI assistant can silently override the digital locks that protect your most sensitive communications, it forces a hard conversation about whether the productivity gains are worth the risk.
If your organization uses Microsoft 365 Copilot, check your admin portal for service advisory CW1226324, verify that the patch has reached your systems, and consider pausing Copilot on sensitive workflows until full remediation is confirmed. This is one AHA moment the industry won’t forget anytime soon.
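For admins who'd rather script the check than click through the portal, Microsoft Graph's service communications API exposes service health issues at `GET /admin/serviceAnnouncement/issues`. The sketch below filters such a response for the CW1226324 advisory; note that the sample payload (including the advisory title and status values) is illustrative, not real data, and a production script would need authenticated Graph requests.

```python
import json

# Hypothetical sample shaped like a Microsoft Graph response from
# GET /admin/serviceAnnouncement/issues. The entries below are
# illustrative stand-ins, not actual advisory data.
sample_response = json.loads("""
{
  "value": [
    {"id": "CW1226324", "status": "serviceRestored",
     "title": "Illustrative: Copilot advisory"},
    {"id": "CW0001111", "status": "serviceDegradation",
     "title": "Illustrative: unrelated advisory"}
  ]
}
""")

def find_advisory(response: dict, advisory_id: str):
    """Return the advisory entry matching advisory_id, or None if absent."""
    for issue in response.get("value", []):
        if issue.get("id") == advisory_id:
            return issue
    return None

advisory = find_advisory(sample_response, "CW1226324")
if advisory is None:
    print("Advisory not listed; check the admin portal directly.")
else:
    # Surface the advisory's current status so you know whether
    # remediation has reached your tenant.
    print(f"{advisory['id']}: {advisory['status']}")
```

In practice you would fetch the JSON with an authenticated call (e.g. via the Microsoft Graph SDK) rather than a hard-coded sample, but the filtering logic is the same.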
Thanks for being a valued subscriber.
- The AI Daily Brief Team