I've been noticing various issues cropping up frequently, both on the web and in Claude Code, so I decided to look into how often this has been happening.
Here's the number of incidents per month according to Anthropic's own status page (https://status.claude.com/history) as of today:

February 2026: 10 incidents (we're only 4 days in)
January 2026: 26 incidents
December 2025: 21 incidents
At least 16 of these directly affected their most capable model, Claude Opus 4.5:

December: 3 incidents (Dec 21-23)
January: 9 incidents (Jan 7, 12, 13, 14, 20, 25-26, 28 x2)
February: 4 incidents (Feb 1, 2, 3, 4)
Ten more are related to the claude.ai platform itself. And that's not even counting how buggy it is day to day. I don't think I'm the only one who's had it generate a nearly complete response, only for something to go wrong and wipe the entire thing from the conversation. There's no way to recover it; just wasted tokens.
How is Anthropic not addressing this? They are one of the highest-valued AI companies out there, so clearly they have the resources and engineers to fix these issues. Why isn't reliability a priority?