A company with real goodwill — and a growing trust problem
Anthropic built a lot of goodwill with users and developers. Claude became known for thoughtful writing, strong coding support, long-context work and a product tone that felt less frantic than the rest of the AI market. For many people, Claude was the assistant they trusted when they wanted careful reasoning rather than hype.
That goodwill has been tested. Stricter usage rules, more visible refusals, shifting access limits and poor communication around policy decisions have frustrated users who feel they paid for a capable assistant but sometimes receive a lecture, a refusal or a vague explanation instead. The result is not that Claude suddenly became a bad product. It is that trust has become less automatic.
The user complaint: rules that feel unpredictable
The strongest criticism is not simply that Anthropic has rules. Every serious AI company has safety policies, abuse controls and limits. The complaint is that Claude can feel unpredictable: one day a task is fine, the next it is blocked or softened in a way that makes the product feel less useful.
Developers are especially sensitive to this. If an API is powering a customer workflow, inconsistent refusals are not just annoying; they are operational risk. Teams need to know whether a model will follow a reasonable instruction, produce a useful answer and fail gracefully when it cannot comply. When policy behaviour feels opaque, developers start looking for alternatives.
The Anthropic side: safety and brand trust matter
The other side is important. Anthropic has made safety central to its identity, and that is not just PR. Enterprises, public-sector buyers and regulated industries care deeply about misuse, harmful outputs, privacy and governance. A company that is willing to say no can be more attractive to risk-sensitive customers.
There is also a genuine argument that stronger guardrails protect the long-term health of the AI ecosystem. If AI assistants become too permissive, regulators and large customers may react harshly. Anthropic’s caution can be read as a strategic attempt to keep powerful models deployable in serious environments.
Where the PR has gone wrong
The problem is communication. Users are more forgiving when limits are explained clearly, consistently and practically. They are less forgiving when a product feels more restrictive without a simple explanation of what changed, why it changed and how paying customers should adapt.
Good PR in this area would not mean pretending every user complaint is correct. It would mean treating developers like partners: publish clearer policy examples, explain refusal categories, give better debugging tools, and separate hard safety boundaries from product-quality failures. Anthropic still has an excellent technical reputation, but reputation can decay quickly when customers feel talked down to.
API cost comparison: Anthropic versus OpenAI
At the API level, Anthropic and OpenAI are now surprisingly close at the high end. Claude Opus 4.7 is listed at $5 per million input tokens and $25 per million output tokens. OpenAI’s GPT-5.5 is listed at $5 per million input tokens and $30 per million output tokens. In simple token terms, the two are identical on input, while Opus is slightly cheaper on output.
The middle tier is where OpenAI can look especially attractive. Claude Sonnet 4.6 is listed at $3 input and $15 output per million tokens. OpenAI GPT-5.4 is listed at $2.50 input and $15 output, while GPT-5.4 mini is much cheaper at $0.75 input and $4.50 output. For everyday tasks, coding assistance and high-volume workflows, that lower-cost OpenAI tier can be compelling.
Anthropic does have strong options: Sonnet remains a very good balance of quality and speed, and Haiku 4.5 at $1 input and $5 output gives developers a lower-cost Claude route. But if a team is comparing flexibility, model range and cost ladders, OpenAI currently gives a very clear path from efficient mini models up to frontier GPT-5.5.
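To make that ladder concrete, here is a minimal sketch that turns the list prices above into an estimated monthly bill for a given traffic profile. The model keys and the workload figures (100k requests at 2,000 input and 500 output tokens each) are illustrative assumptions, not official API identifiers or real benchmarks; swap in your own numbers.

```python
# A minimal sketch comparing monthly API spend using the list prices quoted
# above. Model keys are shorthand for this article, not official API names,
# and the workload profile is hypothetical.

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "claude-opus-4.7":   (5.00, 25.00),
    "claude-sonnet-4.6": (3.00, 15.00),
    "claude-haiku-4.5":  (1.00,  5.00),
    "gpt-5.5":           (5.00, 30.00),
    "gpt-5.4":           (2.50, 15.00),
    "gpt-5.4-mini":      (0.75,  4.50),
}

def monthly_cost(model: str, requests: int, in_tok: int, out_tok: int) -> float:
    """Estimated monthly spend for a given request volume and token profile."""
    in_price, out_price = PRICES[model]
    return requests * (in_tok * in_price + out_tok * out_price) / 1_000_000

# Hypothetical workload: 100k requests/month, 2k input and 500 output tokens each.
for model in PRICES:
    print(f"{model:18s} ${monthly_cost(model, 100_000, 2_000, 500):>10,.2f}")
```

Under those assumptions the spread between the mini tier and the frontier tier is several-fold per month, which is exactly why the middle of the ladder matters so much for high-volume work.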
Subscription plans: $20, $100 and $200
For individual users, both companies compete around the familiar $20 tier. Claude Pro is listed at $20 per month when billed monthly, with an annual discount shown as $17 per month. ChatGPT Plus is commonly positioned at the same $20 monthly level. At this tier, the decision is less about raw price and more about personality, limits, tools and which model family suits the user’s work.
The $100 tier is more Anthropic-shaped. Claude Max starts from $100 per month, aimed at heavier users who want substantially more Claude access. OpenAI does not offer a directly equivalent mainstream $100 consumer plan; its pricing ladder jumps more sharply from Plus-style access to Pro-style access, while business and Codex usage are structured differently.
At the $200 level, OpenAI’s ChatGPT Pro is a strong value proposition for users who want the most capable OpenAI experience and heavier usage. Anthropic also offers higher-end Claude Max-style usage at this level, but the goodwill issue matters more here: the more a user pays, the less patience they have for unclear limits, unexpected refusals or vague PR.
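As a quick sanity check on those tiers, the sketch below simply annualises the listed prices. The plan labels are shorthand for the tiers discussed above, not official SKU names; OpenAI’s $20 Plus tier is omitted since it mirrors Claude Pro’s monthly figure.

```python
# A quick arithmetic check on the consumer tiers quoted above. Prices are
# the listed figures; plan labels are shorthand, not official SKU names.

plans = {  # plan: (monthly price, annually-billed monthly price or None)
    "Claude Pro":  (20, 17),
    "Claude Max":  (100, None),
    "ChatGPT Pro": (200, None),
}

for plan, (monthly, annual_monthly) in plans.items():
    line = f"{plan:12s} ${monthly * 12:>5,}/yr billed monthly"
    if annual_monthly is not None:
        saving = (monthly - annual_monthly) * 12
        pct = 100 * saving / (monthly * 12)
        line += f"; ${annual_monthly * 12:,}/yr billed annually (save ${saving}, {pct:.0f}%)"
    print(line)
```

The annual-billing discount on Claude Pro works out to $36 a year, or 15%: real, but small enough that the decision at this tier still comes down to limits and model fit rather than price.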
What Anthropic needs to repair
Anthropic does not need to abandon safety to win back goodwill. It needs to make safety feel professional rather than paternalistic. That means clearer rules, better developer controls, more transparent usage limits and a product experience that helps legitimate work as far as possible before refusing.
If Anthropic can do that, Claude’s strengths remain substantial: strong writing, long context, careful reasoning and excellent models in Sonnet and Opus. But if the company lets policy friction define the product, users will continue comparing it unfavourably with OpenAI, DeepSeek and other platforms that feel more permissive, predictable or cost-efficient.
Bottom line
The balanced view is this: Anthropic has not destroyed its product, but it has damaged some of the goodwill that made Claude feel special. The company’s safety-first approach is defensible, and in some enterprise contexts it is a selling point. The trouble is that strict usage rules and awkward PR have made too many users feel constrained rather than protected.
Trust is now the real battleground. Anthropic still has excellent models, but OpenAI currently looks stronger on product breadth, cost flexibility and day-to-day confidence. If Anthropic wants to regain the emotional advantage it once had, it needs to make Claude feel not only safe, but reliably useful.