Reading these comments, aren't we missing the obvious?
Claude Code is a lock-in, where Anthropic takes all the value.
If the frontend and API are decoupled, they are one benchmark away from losing half their users.
Some other motivations: they want to capture the value. Even if it's unprofitable they can expect it to become vastly profitable as inference cost drops, efficiency improves, competitors die out etc. Or worst case build the dominant brand then reduce the quotas.
Then there's brand - when people talk about OpenCode they will occasionally specify "OpenCode (with Claude)" but frequently won't.
Then platform - at any point they can push any other service.
Look at the Apple comparison. Yes, the hardware and software are tuned and tested together. The analogy here is training the specific harness, caching the system prompt, switching models, etc.
But Apple also gets to charge Google $billions for being the default search engine. They get to sell apps. They get to sell cloud storage, and even somehow a TV. That's all super profitable.
At some point Claude Code will become an ecosystem with preferred cloud and database vendors, observability, code review agents, etc.
CuriouslyC1 day ago
Anthropic is going to be on the losing side with this. Models are too fungible, it's really about vibes, and Claude Code is far too fat and opinionated. Ironically, they're holding back innovation, and it's burning the loyalty the model team is earning.
keeganpoppen1 day ago
the fat and opinionated has always been true for them (especially compared to openai), and to all appearances remains a feature rather than a bug. i can’t say the approach makes my heart sing, personally, but it absolutely has augured tremendous success among thought workers / the intelligentsia
tom_m1 day ago
I thought Anthropic would fall after OpenAI, but they just might be racing to the bottom faster here.
wolvoleo17 hours ago
I think they're doing a great job on the coding front though
burnte1 day ago
Maybe for coding but the number of normie users flooding to Claude over OAI is huge.
mikestorrent1 day ago
I think their branding is cementing in place for a lot of people, and the lived experience of people trying a lot of models often ends up with a simple preference for Claude, likely using a lot of the same mental heuristics as how we choose which coworkers we enjoy working with. If they can keep that position, they will have it made.
tracker11 day ago
I'm a very experienced developer with a lot of diverse knowledge and experience in both technical and domain knowledge. I've only tried a handful of AI coding agents/models... I found most of them ranging from somewhat annoying to really annoying. Claude+Opus (4.5 when I started) is the first one I've used where I found it more useful than annoying to use.
I think GitHub Copilot is the most annoying of the ones I've tried... it's great for finishing off a task that's half done where the structure is laid out, as long as you put blinders on it to keep it focused. OpenAI and Google's options seem to get things mostly right, but do some really goofy wrong things in my own experience.
They all seem to have trouble using state of the art and current libraries by default, even when you explicitly request them.
empath751 day ago
I think you have it exactly backwards, and that "owning the stack" is going to be important. Yes the harness is important, yes the model is important, but developing the harness and model together is going to pay huge dividends.
This coding agent is minimal, and it completely changed how I use models; Claude's CLI now feels like extremely slow bloat.
I wouldn't be surprised if you're right that companies / management will prefer the "pay for a complete package" approach for a long while, but power users shouldn't have to care what the model providers ship.
I have like 100 lines of code that give me a tmux control & semaphore_wait extension in the pi harness. When I adopted it a month ago, it already gave me a better orchestration scheme than Claude has right now.
As far as I can tell, the more you try to train your model on your harness, the worse it gets. Bitter lesson #2932.
CuriouslyC1 day ago
That was more true around the middle of last year, but now we have a fairly standard flow and set of core tools, as well as better general tool-calling support. The reality is that in most cases harnesses with fewer tools and smaller system prompts outperform.
The advances in the Claude Code harness have been more around workflow automation rather than capability improvements, and truthfully workflows are very user-dependent, so an opinionated harness is only ever going to be "right" for a narrow segment of users, and it's going to annoy a lot of others. This is happening now, but the sub subsidy washes out a lot of the discontent.
quikoa1 day ago
If Claude Code is so much better why not make users pay to use it instead of forcing it on subscribers?
popcorncowboy1 day ago
You're right, because owning the stack means better options for making tons of money. Owning the stack is demonstrably not required for good agents, there are several excellent (frankly way better than ol' Claude Code) harnesses in the wild (which is in part why so many people are so annoyed by Anthropic about this move - being forced back onto their shitty cli tool).
eshaham781 day ago
The competition angle is interesting - we're already seeing models like Step-3.5-Flash advertise compatibility with Claude Code's harness as a feature. If Anthropic's restrictions push developers toward more open alternatives, they might inadvertently accelerate competitor adoption. The real question is whether the subscription model economics can sustain the development costs long-term while competitors offer more flexible terms.
rurp1 day ago
I don't think many are confused about why Anthropic wants to do this. The crux is that they appear to be making these changes solely for their own benefit at the expense of their users and people are upset.
There are parallels to the silly Metaverse hype wave from a few years ago. At the time I saw a surprising number of people defending the investment, saying it was important for Facebook to control their own platform. Well sure it's beneficial for Facebook to control a platform, but that benefit is purely for the company, and if anything it would harm current and future users. Unsurprisingly, asking people to please think of this giant corporation's needs wasn't a compelling pitch in the end.
mccoyb1 day ago
"Training the specific harness" is marginal -- it's obvious if you've used anything else. pi with Claude is as good as (even better! given the obvious care to context management in pi) as Claude Code with Claude.
This whole game is a bizarre battle.
In the future, many companies will have slightly different secret RL sauces. I'd want to use Gemini for documentation, Claude for design, Codex for planning, yada yada ... there will be no generalist take-all model, I just don't believe RL scaling works like that.
I'm not convinced that a single company can own the best performing model in all categories, I'm not even sure the economics make it feasible.
Good for us, of course.
thepasch1 day ago
> pi with Claude is as good as (even better, given the obvious care to context management in pi) Claude Code with Claude
And that’s out of the box. With how comically extensible pi is and how much control it gives you over every aspect of the pipeline, as soon as you start building extensions for your own, personal workflow, Claude Code legitimately feels like a trash app in comparison.
I don’t care what Anthropic does - I’ll keep using pi. If they think they need to ban me for that, then, oh well. I’ll just continue to keep using pi. Just no longer with Claude models.
throwaw1215 hours ago
As a Claude Code user looking for alternatives, I am very intrigued by this statement.
Can you please share good resources I can learn from to extend pi?
m11a22 hours ago
Don't think that's a valid comparison.
Apple can do those things because they control the hardware device, which has physical distribution, and they lock down the ecosystem. There is no third party app store, and you can't get the Photos app to save to Google Drive.
With Claude Code, just export an env variable or use a MITM proxy + some middleware to forward requests to OpenAI instead. It's impossible to have lock in. Also, coding agent CLIs are a commodity.
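To make the "forward requests elsewhere" point concrete, here's a minimal sketch of the proxy idea in Python. It only shows the forwarding skeleton; the upstream URL is a placeholder, and the real work of translating between Anthropic-style and OpenAI-style request/response JSON is deliberately left as a comment, since that part depends on the specific APIs involved.

```python
# Minimal sketch of a local forwarding proxy: accept requests aimed at one
# provider and pass them on to another endpoint. Schema translation omitted.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM = "https://example-openai-compatible-host/v1/chat/completions"  # placeholder

class ForwardingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # A real shim would translate the JSON body between API schemas here.
        req = urllib.request.Request(
            UPSTREAM, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ForwardingHandler).serve_forever()
```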
chasd001 day ago
> At some point Claude Code will become an ecosystem with preferred cloud and database vendors, observability, code review agents, etc.
I've been wondering how Anthropic is going to survive long term. If they could build out infrastructure and services to compete with the hyperscalers, but surfaced as tools for Claude to use, then maybe. You pay Anthropic $20/user/month for Claude Code but also $100k/month to run your applications.
ksec1 day ago
> Claude Code is a lock-in, where Anthropic takes all the value.
I wouldn't say all the value, but how else are you going to run the business? Allow others to take all the value you provide?
marscopter1 day ago
> Reading these comments aren't we missing the obvious?
AI companies: "You think you own that code?"
thenaturalist1 day ago
???
Use an API Key and there's no problem.
They literally put that in plain words in the ToS.
9cb14c1ec01 day ago
Using an API key is orders of magnitude more expensive. That's the difference here. The Claude Code subscriptions are being heavily subsidized by Anthropic, which is why people want to use their subscriptions in everything else.
naveen991 day ago
They are subsidized by people who underuse their subscriptions. There must be a lot of them.
thenaturalist1 day ago
Be the economics as they may, there is no lock-in as the OP claims.
This statement is plainly wrong.
If you boost and praise AI usage, you have to face the real cost.
Can't have your cake and eat it, too.
anonym291 day ago
The people mad about this feel they are entitled to the heavily subsidized usage in any context they want, not just in the context explicitly allowed by the subsidizer.
It's kind of like a new restaurant handing out coupons for "90% off" to attract diners. Customers started coming in, ordering bulk meals, and immediately packaging them in tupperware containers to take home (violating the spirit of the arrangement, even if not the letter). So the restaurant changed the terms of the discount to say "limited to in-store consumption only, not eligible for take-home meals." And instead of still being grateful that they're getting food for 90% off, the cheapskate customers are getting angry that they're no longer allowed to exploit the massive subsidy however they want.
bilekas1 day ago
It might be some confirmation bias on my part, but it feels as if companies are becoming more and more hostile to their API users. Recently Spotify basically nuked their API with zero urgency to fix it, Reddit has a whole convoluted npm package you're obliged to use to create a bot, and Facebook requires you to provide registered company and tax details even for development with some permissions. Am I just an old man yelling at clouds about how APIs used to actually be useful and intuitive?
Loic1 day ago
They put no limits on the API usage, as long as you pay.
Here, they put limits on the "under-cover" use of the subscription. If they can provide a relatively cheap subscription compared to direct API use, it's because they control the stack end-to-end: the application running on your system (Claude Code, Claude Desktop) and their systems.
When you subscribe to these plans, this is the "contract": you can only use them through their tools. If you want full freedom, use the API, with per-token pricing.
For me, this is fair.
rglullis1 day ago
> If they can provide a relatively cheap subscription compared to direct API use
Except they can't. Their costs are not magically lower when you use claude code vs when you use a third-party client.
> For me, this is fair.
This is, plain and simple, a tie-in sale of claude code. I am particularly amused by people accepting it as "fair" because in Brazil this is an illegal practice.
nerdjon1 day ago
> This is, plain and simple, a tie-in sale of claude code. I am particularly amused by people accepting it as "fair" because in Brazil this is an illegal practice
I am very curious what exactly is illegal about this. Nowhere on the sales page do they actually talk about the API: https://claude.com/pricing
Now we all know obviously the API is being used because that is how things work, but you are not actually paying a subscription for the API. You are paying for access to Claude Code.
Is it also illegal that if you pay for Playstation Plus that you can't play those games on an Xbox?
Is it illegal that you can't use third party netflix apps?
I really don't want to defend an AI company here, but this is perfectly normal. In no other situation would we expect access to the API; the only reason this is considered different is because they also have a different service that gives access to the API. But that is irrelevant.
theturtletalks1 day ago
I've heard they actually cache the full Claude Code system prompt on their servers and this saves them a lot of money. Maybe they cache the MCP tools you use and other things. If another harness like Opencode changes that prompt or adds significantly to it, that could increase costs for them.
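Prompt caching is a documented Anthropic API feature, and its mechanics support this point: a long, stable prefix is billed at full price once and at a steep discount on cache hits, so any harness that rewrites the prefix makes the cache miss. A rough sketch below; the model id is a placeholder and the exact field shapes are written from memory, so treat them as assumptions rather than a reference.

```python
# Sketch: mark a long, stable system prompt as cacheable so repeated requests
# reuse the cached prefix instead of reprocessing it from scratch.
import anthropic

LONG_SYSTEM_PROMPT = "..."  # imagine the multi-thousand-token Claude Code prompt here

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},  # cache everything up to this block
        }
    ],
    messages=[{"role": "user", "content": "Summarize the repo layout."}],
)

# A harness that edits LONG_SYSTEM_PROMPT changes the prefix, so the cache
# misses and the provider pays full input-token cost again.
print(response.content[0].text)
```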
What I don't understand is why start this game of cat and mouse? Just look at YouTube and YT-DLP. YT-DLP, and the dozens of apps that use it, basically use YouTube's unofficial web API, and it still works even after YouTube constantly patches their end. Though now, YT-DLP has to use a makeshift JS interpreter and maybe even spawn Chromium down the line.
rogerthis1 day ago
Unless it's illegal in more places, I think they won't care. In my experience, the percentage of free riders in Brazil is higher (due to circumstances, better said).
pigpop1 day ago
While the cost may not be lower the price certainly can be if they are operating like any normal company and adding margin.
canibal1 day ago
But they could charge the third-party client for access to the API.
narrator1 day ago
I think what most people don't realize is that running an agent 24/7 fully automated is burning a huge hole in their profitability. Who even knows how big it is. It could be hitting 8 or 9 figures a day for all we know.
There's this pervasive idea left over from the pre-LLM days that compute is free. If you want to rent your own 8x H200 node to run a Claude-class model, that's literally going to cost $24/hour. People are just not thinking like that: "I have my home PC, it does this stuff, I can run it 24/7 for free."
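Taking that $24/hour figure at face value, the back-of-envelope comparison against a flat monthly plan looks like this (everything below is just arithmetic on the parent's number):

```python
# Back-of-envelope: renting an 8x H200 node 24/7 vs. a flat $200/month plan.
# The $24/hour figure is the parent comment's; nothing else is assumed.
hourly_rate = 24.0           # USD per hour for the rented node
hours_per_month = 24 * 30    # running it around the clock

monthly_cost = hourly_rate * hours_per_month
print(f"Self-hosted node, 24/7: ${monthly_cost:,.0f}/month")        # ~$17,280
print(f"Multiple of a $200/month plan: {monthly_cost / 200:.0f}x")  # ~86x
```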
stavros1 day ago
I don't see how it's fair. If I'm paying for usage, and I'm using it, why should Anthropic have a say on which client I use?
I pay them $100 a month and now for some reason I can't use OpenCode? Fuck that.
arghwhat1 day ago
Their subscriptions aren't cheap, and it has nothing really to do with them controlling the system.
It's just price differentiation: they know consumers are price sensitive, and that companies wanting to use the API to slap AI on their portfolio and tap AI-related investor money can be milked. On the consumer-facing front, they live off branding, and if you're not using Claude Code you might not associate the tool with Anthropic, which means losing the publicity that drives API sales.
skerit1 day ago
It would be less of an issue if Claude Code were actually the best coding client and somehow reduced the number of tokens used. But it's not. I get more done with fewer tokens via OpenCode. And in the end, I hit 100% usage at the end of the week anyway.
fauigerzigerk1 day ago
It doesn't really make sense to me because the subscriptions have limits too.
But I agree they can impose whatever user hostile restrictions they want. They are not a monopoly. They compete in a very competitive market. So if they decide to raise prices in whatever shape or form then that's fine.
Arbitrary restrictions do play a role for my own purchasing decisions though. Flexibility is worth something.
randusername1 day ago
I'm with the parent comment. It was inevitable Netflix would end password-sharing. It was inevitable you'd have to pick between freeform usage-based billing and a constrained subscription experience. Using the chatbot subscription as an API was a weird loophole. I don't feel betrayed.
tom_m1 day ago
They tier it, so you are limited until you pay more. You can't just get the access you need right away.
throwaway247781 day ago
I don't and would never pay for an LLM, but presumably they also want to force ads down your throat eventually, yeah? Hard to do if you're just selling API access.
sbarre1 day ago
Every garden eventually becomes a walled garden once enough people are inside.
Can you sell ads via API? If the answer is no, then this "feature" would be at the bottom of the list.
input_sh1 day ago
They can sell API access via transparent pricing.
Instead, many, many websites (especially in the music industry) have some sort of funky API that you can only get access to if you have enough online clout. Very few are transparent about what "enough clout" even means or how much it'd cost you, and there's like an entire industry of third-party API resellers that cost like 10x more than if you went straight to the source. But you can't, because you first have to fulfill some arbitrary criteria that you can't even know about ahead of time.
It's all very frustrating to deal with.
3D304974201 day ago
Plus, use of the API is a way to avoid ads. So double-strike against good/available APIs.
miroljub1 day ago
Of course they can [1].
Though, in this case, you get free API access to the model.
I think that these companies are understanding that as the barrier to entry to build a frontend gets lower and lower, APIs will become the real moat. If you move away from their UI they will lose ad revenue, viewer stats, in short the ability to optimize how to harness your full attention. It would be great to have some stats on hand to see if and by how much active API usage has increased or decreased in the last two years, as I would not be surprised if it had increased at a much faster pace than in the past.
whiplash4511 day ago
> the barrier to entry to build a frontend gets lower
My impression is the opposite: frontend/UI/UX is where the moat is growing because that's where users will (1) consume ads (2) orchestrate their agents.
admx81 day ago
What ad revenue? In their terminal cli?
theshrike795 hours ago
"Are becoming", you sweet summer child.
It all started with Facebook closing pretty much everything and making FB Messenger a custom protocol instead of XMPP.
And whatever API access is still available is so shit and badly managed that even a household name billion dollar gaming company couldn't get a fast-lane for approval to use specific API endpoints.
The final straw was Twitter effectively closing up their API "to protect from bots", which in fact did NOT protect anyone from bots. All it did was prevent legitimate entertaining and silly bots from acting on the platform, the actual state-controlled trolls just bought the blue checkmark and continued as-is.
mamami1 day ago
I don't think it's particularly hard to figure out: APIs have been particularly at risk of being exploited for negative purposes due to the explosion of AI-powered bots.
throwaway247781 day ago
This trend well predates widespread use of chatbots.
sceptic1231 day ago
It's just the continued slow death of the open internet
simianwords1 day ago
I’m predicting that there will be a new movement to make everything an MCP. It’s now easier for non-technical people to consume an API.
DataSpace1 day ago
You're correct in your observations. In the age of agents, the walls are going up. APIs are no longer a value-add; they're a liability. MCP and the equivalent will be the norm interface. IMO.
xnx1 day ago
What is given can be taken away. Despite the extra difficulty, this is why unofficial methods (e.g. scraping) are often superior. Soon we'll see more fully independent data scraping done by cameras and microphones.
whiplash4511 day ago
APIs leak profit and control vs their counterpart SDK/platforms. Service providers use them to bootstrap traffic/brand, but will always do everything they can to reduce their usage or sunset them entirely if possible.
pirsquare1 day ago
Facebook doing that is actually good, to protect consumers from data abuse after incidents like Cambridge Analytica. They are holding businesses who touch your personal data responsible.
bilekas1 day ago
> Facebook doing that is actually good, to protect consumers from data abuse after incidents like Cambridge Analytica.
There is nothing here stopping Cambridge Analytica from doing this again; they will provide whatever details are needed. But a small pre-launch personal project that might use a Facebook publishing application can't be developed or tested without first going through all the bureaucracy.
Never mind the non-profit 'free' application you might want to create on the FB platform, let's say a share Chrome extension, "Post to my FB", for personal use: you can't do this, because you can't create an application without a company and IVA/tax documents. It's hostile imo.
Before, you could create an app, link your ToS, privacy policy etc., verify your domain via email, and then if users wanted to use your application they would agree; this is how a lot of companies still do it. I'm actually not sure why FB does this specifically.
wobfan1 day ago
Facebook knew very early and very well about the data harvesting that was going on at Cambridge Analytica through their APIs. They acted so slowly and so leniently that it's IMO hard to believe they did not implicitly support it.
> to protect consumers
We are talking about Meta. They have never, and will never, protect customers. All they protect is their wealth and their political power.
wiseowise1 day ago
Is it? I’ve never touched the Facebook API, but it sounds ridiculous that you need to provide tax details for DEVELOPMENT. Can’t they implement some kind of sandbox with dummy data?
is_true1 day ago
They just want people to use Facebook. If you can see Facebook content without being signed in, they have a harder time tracking you and showing you ads.
endymi0n1 day ago
"Open Access APIs are like a subway. You use them to capture a market and then you get out."
— Erdogan, probably.
iamacyborg1 day ago
Given the Cambridge Analytica scandal, I don’t take too much issue to FB making their APIs a little tougher to use
baby1 day ago
Not sure how relevant this comment is
windexh8er1 day ago
Everyone has heard the word "enshittification" at this point and this falls in line. But if you haven't read the book [0] it's a great deep dive into the topical area.
But the real issue is that these companies, once they have any market leverage, do things in their best interest to protect the little bit of moat they've acquired.
- Start being very open, as this brings people developing over the platforms and generates growth
- As long as they are growing, VC money will come to pay for everything. This is the scale up phase
- Then comes the VC exit, IPO or whatever
- Now the new owners don't want user growth, they want margin growth. This is the company phase
- Companies then have to monetize their users (why not ads?), and close up free or high-maintenance stuff that does not bring margin
- and report that sweet $$$ growth quarter after quarter
...until a new startup comes in and starts the cycle over again, destroying all the value the old company had.
A mix of the Enshittification and Innovator's Dilemma theories.
mdrzn1 day ago
APIs are at their best when they let you move data out and build cool stuff on top. A lot of big platforms do not really want that anymore. They want the data to stay inside their silo, so access gets slower, harder, and more locked down. So you are not just yelling at the cloud; this feels pretty intentional.
systemBuilder1 day ago
Google now wants $30,000 a month for customsearch (minimum charge), up from 1c per search or thereabouts in January 2026...
BoredPositron1 day ago
There is no moat except market saturation and gate keeping for most platforms.
IAmGraydon1 day ago
It's because AI is being trained on all of these APIs and the platforms are at risk of losing what makes them valuable (their data). So they have to take the API down or charge enough that it wouldn't be worth it for an AI.
TZubiri1 day ago
But this ban is precisely on circumventing the API.
canibal1 day ago
You're not wrong. Reddit & Elon started it, and everyone laughed at them and made a stink. But my guess is the "last dying gasp of the freeloader" /s wasn't enough to dissuade other companies from jumping on the bandwagon, because fiduciary responsibility to shareholders reigns supreme at the end of the day.
jauntywundrkind1 day ago
This is sort of true!
Spotify in particular is just patently the very worst. They released an amazing and delightful app SDK in 2011, allowing for really neat apps inside the desktop app. Then they cancelled it by 2014. It feels like their entire ecosystem has only ever gone downhill. Their car device was cancelled nearly immediately. Every API just gets worse and worse. Remarkable to see a company have only ever such a downward slide. The Spotify Graveyard is, imo, a place of significantly less honor than the Google Graveyard. https://web.archive.org/web/20141104154131/https://gigaom.co...
But also, I feel like this broad repulsive trend is such an untenable position now that AI is here. Trying to make your app an isolated disconnected service is a suicide pact. Some companies will figure out how to defend their moat, but generally people are going to prefer apps that allow them to use the app as they want, increasingly, over time. And they are not going to be stopped even if you do try to control terms!
Were I a smart, engaged company, I'd be trying to build WebMCP access as soon as possible. Adoption will be slow, this isn't happening fast, but people who can mix human + agent activity on your site are going to be delighted by the experience, and that will spread!
WebMCP is better IMHO than conventional APIs because it layers into the experience you are already having. It's not a separate channel; it can build and use the session state of your browsing to do the things. That's a huge boon for users.
MillionOClock1 day ago
I really hope someone from any of those companies (if possible all of them) would publish a very clear statement regarding the following question: If I build a commercial app that allows my users to connect using their OAuth token coming from their ChatGPT/Claude etc. account, do they allow me (and their users) to do this or not?
I totally understand that I should not reuse my own account to provide services to others, as direct API usage is the obvious choice here, but this is a different case.
I am currently developing something that would be the perfect fit for this OAuth based flow and I find it quite frustrating that in most cases I cannot find a clear answer to this question. I don't even know who I would be supposed to contact to get an answer or discuss this as an independent dev.
EDIT: Some answers to my comment have pointed out that the ToS of Anthropic were clear. I'm not saying they aren't if taken in a vacuum, yet in practice, even after this was published, some confusion remained online, in particular regarding whether OAuth token usage was still OK with the Agent SDK for personal usage. If it happens to be, that would lead to other questions I personally cannot find a clear answer to, hence my original statement. Also, I am very interested in the stance of other companies on this subject.
Maybe I am being overly cautious here but I want to be clear that this is just my personal opinion and me trying to understand what exactly is allowed or not. This is not some business or legal advice.
paxys1 day ago
I don't see how they can get more clear about this, considering they have repeatedly answered it the exact same way.
Subscriptions are for first-party products (claude.com, mobile and desktop apps, Claude Code, editor extensions, Cowork).
Everything else must use API billing.
firloop1 day ago
The biggest reason this is confusing is that the Claude Agent SDK [0] will use subscription/OAuth credentials if present. The terms update implies that there are some use cases where that's OK and other use cases (commercial?) where using their SDK on a user's device violates the terms.
And at that point, you might as well use OpenRouter's PKCE and give users the option to use other models...
These kinds of business decisions show how these $200.00 subscriptions for their slot/infinite jest machines basically light that $200.00 on fire, and in general how unsustainable these business models are.
Can't wait for it all to fail. They'll eventually try to get as many people to pay per token as possible, while somehow getting people to use their verbose agentic tools that inflate revenue through inefficient context/output shenanigans.
MillionOClock1 day ago
You are talking about Anthropic and indeed compared to OpenAI or GitHub Copilot they have seemed to be the ones with what I would personally describe as a more restrictive approach.
On the other hand OpenAI and GitHub Copilot have, as far as I know, explicitly allowed their users to connect to at least some third party tools and use their quotas from there, notably to OpenCode.
What is unclear to me is whether they are considering also allowing commercial apps to do that. For instance if I publish a subscription based app and my users pay for the app itself rather than for LLM inference, would that be allowed?
qwertox1 day ago
Then why does the SDK support subscription usage? Can I at least use my subscription for my own use of the SDK?
theturtletalks1 day ago
What if you wrap the service using their Agent SDK?
Imustaskforhelp1 day ago
Quick question: what if I use Claude Code itself for the purpose?
This can make OpenCode work with Claude Code, and the added benefit is that OpenCode has a TypeScript SDK to automate with; the back end of this is still running Claude Code, so technically it should work even with the new ToS?
So in the case of the OP: maybe OpenCode TS SDK <-> Claude Code (using this tool or any other like it) <-> the OAuth sign-in option of Claude Code users?
Also, Zed can use the ACP protocol itself to make Claude Code work, iirc. So is using Zed with CC still allowed?
> I don't see how they can get more clear about this, considering they have repeatedly answered it the exact same way.
This is confusing, quite frankly. There's also the Claude Agent SDK thing, which firloop and others talked about too; some say it's allowed, some say it's not.
artdigital1 day ago
That’s very clearly a no, I don’t understand why so many people think this is unclear.
You can’t use Claude OAuth tokens in anything third-party. Any solution that existed only worked because it pretended/spoofed being Claude Code. Same for Gemini (Gemini CLI, Antigravity).
Codex is the only one that got official blessing to be used in OpenClaw and OpenCode, and even that was against the ToS before they changed their stance on it.
adastra221 day ago
Is Codex ok with any other third party applications, or just those?
croes1 day ago
But why does it matter which program consumes the tokens?
SeanAnderson1 day ago
I think you're just trying to see ambiguity where it doesn't exist because the looser interpretation is beneficial to you. It totally makes sense why you'd want that outcome and I'm not faulting you for it. It's just that, from the POV of someone without a stake in the game, the answer seems quite clear.
eleventyseven1 day ago
It is pretty obviously no. API keys billed by the token: yes. OAuth to the flat-rate plans: no.
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.
MillionOClock1 day ago
If you look at this tweet [1] and in particular responses under it, it still seems to me like some parts of it need additional clarification. For instance, I have seen some people interpret the tweet as meaning using the OAuth token is actually ok for personal experimentation with the Agent SDK, which can be seen as a slight contradiction with what you quoted. A parent tweet also mentioned the docs clean up causing some confusion.
None of this is legal advice, I'm just trying to understand what exactly is allowed or not.
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai.
I think this is pretty clear - No.
merb1 day ago
So it’s forbidden to use the Claude Mac app, then? I would say the ToS, as written, can’t be enforced.
laksjhdlka1 day ago
Anthropic has published a very clear statement. It's "no".
kovek1 day ago
Does https://happy.engineering/ need to use API keys, or can it use OAuth? It's basically a frontend for claude-cli.
kzahel1 day ago
It doesn't even touch auth right?
"""
Usage policy
Acceptable use
Claude Code usage is subject to the Anthropic Usage Policy. Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK
"""
That tool clearly falls under ordinary individual use of Claude code. https://yepanywhere.com/ is another such tool. Perfectly ordinary individual usage.
The ToS are confusing because, just below that section, they talk about authentication/credential use. If an app starts reading API keys / credentials, that starts falling into territory where they want a hard-line no.
resonious1 day ago
If it's a wrapper that invokes the `claude` binary then I believe it's fine.
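For anyone wondering what "a wrapper that invokes the `claude` binary" looks like in practice, here's a minimal sketch. It assumes Claude Code is installed and that its non-interactive print flag (`-p`) behaves as documented; check `claude --help` on your version, since flags can change.

```python
# Minimal sketch of the "wrapper" pattern: shell out to the official `claude`
# binary instead of calling the API directly, so the tool actually running is
# still Claude Code and billing stays on the subscription.
import subprocess

def ask_claude_code(prompt: str) -> str:
    """Run one non-interactive Claude Code query and return its stdout."""
    result = subprocess.run(
        ["claude", "-p", prompt],  # `-p` assumed to mean "print the response and exit"
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask_claude_code("List the TODO comments under src/ and group them by file."))
```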
azuanrb1 day ago
Usually, it is already stated in their documentation (auth section). If a statement is vague, treat it as a no. It is not worth the risk when they can ban you at any time. For example, ChatGPT allows it, but Claude and Gemini do not.
Maybe I am missing something from the docs of your link, but I unfortunately don't think it actually states anything regarding allowing users to connect and use their Codex quota in third party apps.
itissid1 day ago
One set of applications you can build on a subscription is to use the claude-go binary directly; the Humanlayer/Codelayer projects on GitHub do this. Granted, those are not ideal for building a subscription-based business on OAuth tokens from Claude and OpenAI. But you can build a business by building a development env and gating other features behind a paywall, or by offering enterprise service for certain features: vertical AI offerings for knowledge workers (redpanada), voice-based interaction (there was a YC startup here the other day doing this, I think), structured outputs, and workflows. There is lots to build on.
direwolf201 day ago
Not allowed. They've already banned people for this.
andreagrandi1 day ago
I'm only waiting for OpenAI to provide an equivalent ~100 USD subscription to entirely ditch Claude.
Opus has gone downhill continuously in the last week (and before you start flooding me with replies: I've been testing Opus/Codex in parallel for the last week, and I have plenty of examples of Claude going off track, then apologising, then saying "now it's all fixed!" and then only fixing part of it, when Codex nailed it on the first shot).
I can accept specific model limits, not ups and downs in reliability. And don't even let me get started on how bad the Claude client has become. Others are finally catching up, and gpt-5.3-codex is definitely better than opus-4.6.
Everyone else (Codex CLI, Copilot CLI, etc.) is going open source; they are going closed. Others (OpenAI, Copilot, etc.) explicitly allow using OpenCode; they explicitly forbid it.
This hostile behaviour is just the last straw.
super2561 day ago
OpenAI forces users to verify with their ID + a face scan when using Codex 5.3 if any of your conversations were deemed high risk.
They haven't asked me yet (my subscription is from work, with a business/team plan). Probably my conversations are too boring.
abm531 day ago
I’m unsure exactly in what way you believe it has gone "downhill", so this isn't aimed at you specifically but more at a general pattern I see.
That pattern is people complaining that a particular model has degraded in quality of its responses over time or that it has been “nerfed” etc.
Although the models may evolve, and the tools calling them may change, I suspect a huge amount of this is simply confirmation bias.
seu1 day ago
> Opus has gone downhill continuously in the last week
Is a week the whole attention timespan of the late 2020s?
latexr1 day ago
We’re still in the mid-late 2020s. Once we really get to the late 2020s, attention spans won’t be long enough to even finish reading your comment. People will be speaking (not typing) to LLMs and getting distracted mid-sentence.
_kb1 day ago
Unfortunately, and “Attention Is All You Need”.
marcus_holmes1 day ago
oh shit we're in the late 2020's now
ifwinterco1 day ago
Opus 4.6 genuinely seems worse than 4.5 was in Q4 2025 for me. I know everyone always says this and anecdote != data but this is the first time I've really felt it with a new model to the point where I still reach for the old one.
I'll give GPT 5.3 codex a real try I think
Esophagus41 day ago
Huh… I’ve seen this comment a lot in this thread but I’ve really been impressed with both Anthropic’s latest models and latest tooling (plugins like /frontend-design mean it actually designs real front ends instead of the vibe coded purple gradient look). And I see it doing more planning and making fewer mistakes than before. I have to do far less oversight and debugging broken code these days.
But if people really like Codex better, maybe I’ll try it. I’ve been trying not to pay for 2 subscriptions at once but it might be worth a test.
mosselman1 day ago
I asked Codex 5.3 and Opus 4.6 to write me a macOS application with a certain set of requirements.
Opus 4.6 wrote me a working macOS application.
Codex wrote me an HTML + CSS mockup of a macOS application that didn't even look like a macOS application at all.
Opus 4.5 was fine, but I feel that 4.6 is more often on the money with its implementations than 4.5 was. It is just slower.
kilroy1231 day ago
I agree with you. Codex 5.3 is good; it's just a bit slower.
trillic1 day ago
The rate limit for my $20 OpenAI / Codex account feels 10x larger than the $20 claude account.
choilive1 day ago
YES. I hit the rate limit in about ~15 mins on Claude. But it will take me a few hours with Codex. A/B testing them on the same tasks. Same $20/mo.
GorbachevyChase1 day ago
I was underwhelmed by Opus 4.6. I didn’t get a sense of significant improvement, but the token usage was excessive to the point that I dropped the subscription for Codex. I suspect that all the models are so glib that they can create a quagmire for themselves in a project. I have not yet found a satisfying strategy for non-destructive resets when the system's own comments and notes poison new output. Fortunately, deleting and starting over is cheap.
dannersy1 day ago
No offense, but this is the most predictable outcome ever. The software industry at large does this over and over again, and somehow we're surprised. Provide a thing for free or for cheap, then slowly draw back availability once you have dominant market share or find yourself needing money (ahem).
The providers want to control what AI does in order to make money or dominate an industry, so they don't have to make their money back right away. This was inevitable; I do not understand why we ever trust these companies.
NamlchakKhandro1 day ago
Because it's easier than paying $50k for a local LLM setup that might not last 5 years.
andreagrandi1 day ago
No offense taken here :)
First, we are not talking about a cheap service here. We are talking about a monthly subscription which costs 100 USD or 200 USD per month, depending on which plan you choose.
Second, it's like selling me a pizza and then demanding I only eat it while sitting at your table. I want to eat the pizza at home. I'm not getting 2-3 more pizzas; I'm still getting the same pizza everyone else is getting.
neya1 day ago
It's the most overrated model there is. I do Elixir development primarily, and the model sucks balls in comparison to Gemini and GPT-5x. But the Claude fanboys will swear by it and will attack you if you ever say even something remotely negative about their "god-sent" model. It fails miserably even in basic chat and research contexts and constantly goes off track. I wired it up to fire off some tasks; it kept hallucinating and swearing it had done them when it hadn't even attempted to. It was so unreliable I had to revert to Gemini.
resiros1 day ago
It might simply be that it was not trained enough in Elixir RL environments compared to Gemini and gpt.
I use it for both ts and python and it's certainly better than Gemini. For Codex, it depends on the task.
thepasch1 day ago
> I’m only waiting for OpenAI to provide an equivalent ~100 USD subscription to entirely ditch Claude.
I have a feeling Anthropic might be in for an extremely rude awakening when that happens, and I don’t think it’s a matter of “if” anymore.
submain1 day ago
> And don't even let me get started on how bad the Claude client has become
The latest versions of claude code have been freezing and then crashing while waiting on long running commands. It's pretty frustrating.
WarmWash1 day ago
My favorite conspiracy explanation:
Claude has gotten a lot of popular media attention in the last few weeks, and the influx of users is constraining compute/memory on an already compute heavy model.
So you get all the suspected "tricks" like quantization, shorter thinking, KV cache optimizations.
It feels like the same thing that happened to Gemini 3, and what you can even feel throughout the day (the models seem smartest at 12am).
Dario in his interview with dwarkesh last week also lamented the same refrain that other lab leaders have: compute is constrained and there are big tradeoffs in how you allocate it. It feels safe to reason then that they will use any trick they can to free up compute.
bbstats1 day ago
all this because of a single week?
andreagrandi1 day ago
No, it's not the first time their models have degraded for a stretch of time.
cactusplant73741 day ago
No developer writes the same prompt twice. How can you be sure something has changed?
kasey_junk1 day ago
I regularly run the same prompts twice and through different models. Particularly, when making changes to agent metadata like agent files or skills.
At least weekly I run a set of prompts to compare Codex/Claude against each other. This is quite easy: the prompt sessions are just text files that are saved.
The problem is doing it enough for statistical significance and judging the output as better or not.
andreagrandi1 day ago
I suspect you may not be writing code regularly...
If I have to ask Claude the same things three times and it keeps saying "You are right, now I've implemented it!" and the code is still missing 1 out of 3 things or worse, then I can definitely say the model has become worse (since this wasn't happening before).
SkyPuncher1 day ago
When I use Claude daily (both professionally and personally with a Max subscription), there are things that it does differently between 4.5 and 4.6. It's hard to point to any single conversation, but in aggregate I'm finding that certain tasks don't go as smoothly as they used to. In my view, Opus 4.6 is a lot better at long standing conversations (which has value), but does worse with critical details within smaller conversations.
A few things I've noticed:
* 4.6 doesn't look at certain files that it used to
* 4.6 tends to jump into writing code before it's fully understood the problem (annoying but promptable)
* 4.6 is less likely to do research, write to artifacts, or make external tool calls unless you specifically ask it to
* 4.6 is much more likely to ask annoying (blocking) questions that it can reasonably figure out on its own
* 4.6 is much more likely to miss a critical detail in a planning document after being explicitly told to plan for that detail
* 4.6 needs to more proactively write its memories to file within a conversation to avoid going off track
* 4.6 is a lot worse about demonstrating critical details. I'm so tired of it explaining something conceptually without it thinking about how it implements details.
baq1 day ago
Ralph Wiggum would like a word
vicchenai1 day ago
The economic tension here is pretty clear: flat-rate subscriptions are loss leaders designed to hook developers into the ecosystem. Once third parties can piggyback on that flat rate, you get arbitrage - someone builds a wrapper that burns through $200/month worth of inference for $20/month of subscription cost, and Anthropic eats the difference.
What is interesting is that OpenAI and GitHub seem to be taking the opposite approach with Copilot/OpenCode, essentially treating third-party tool access as a feature that increases subscription stickiness. Different bets on whether the LTV of a retained subscriber outweighs the marginal inference cost.
Would not be surprised if this converges eventually. Either Anthropic opens up once their margins improve, or OpenAI tightens once they realize the arbitrage is too expensive at scale.
sambull1 day ago
These subscriptions have limits... how could someone use $200 worth on $20/month? Isn't that the point of the limits they set on the $20 plan? And couldn't a Claude Code user use that same $200 worth on $20/month? (And how do I do this?)
tappio1 day ago
The limits on the Max subscriptions are more generous, and power users are generating losses.
I'm rather certain, though I cannot prove it, that buying the same tokens would cost at least 10x more if bought via the API. Anecdotally, my Cursor team usage was getting to around $700/month. After switching to Claude Code Max, I have so far only once hit the 3h limit window on the $100 sub.
What I'm thinking is that Anthropic is taking a loss on users who use it a lot, but there are a lot of users who pay for Max and don't actually use it.
With the recent improvements and increase in popularity of projects like OpenClaw, the number of users generating losses has probably massively increased.
somenameforme1 day ago
I'd agree on this. I ended up picking up a Claude Pro sub and am less than impressed with the volume allowance. I generally get about a dozen queries (including simple follow-ups/refinements/corrections) across a relatively small codebase, with prompts structured to minimize the parts of the code touched, and moving onto fresh contexts fairly rapidly, before getting cut off for their ~5 hour window. Doing that ~twice a day ends up hitting the weekly limit with about a day or two left on it.
I don't entirely mind, and am just considering it an even better work:life balance, but if this is $200 worth of queries, then all I can say is LOL.
ac291 day ago
The usage limit on your $20/month subscription is not $20 of API tokens (if it was, why subscribe?). It's much, much higher, and you can hit the equivalent of $20 of API usage in a few days.
energy1231 day ago
The median subscriber generates about 50% gross margin, but some subscribers use 10x the amount of inference compute as other subscribers (due to using it more...), and it's a positive skewness distribution.
paxys1 day ago
I don't think it's a secret that AI companies are losing a ton of money on subscription plans. Hence the stricter rate limits, new $200+ plans, push towards advertising etc. The real money is in per-token billing via the API (and large companies having enough AI FOMO that they blindly pay the enormous invoices every month).
mirzap1 day ago
They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.
Banning third-party tools has nothing to do with rate limits. They're trying to position themselves as the Apple of AI companies - a walled garden. They may soon discover that screwing developers is not a good strategy.
They are not 10× better than Codex; on the contrary, in my opinion Codex produces much better code. Even Kimi K2.5 is a very capable model I find on par with Sonnet at least, very close to Opus. Forcing people to use ONLY a broken Claude Code UX with a subscription only ensures they lose the advantage they had.
rjh291 day ago
> "just a few dollars per million tokens"
Google AI Pro is like $15/month for practically unlimited Pro requests, each of which can take a million tokens of context (and then also perform thinking, free Google search for grounding, and inline image generation if needed). This includes Gemini CLI, Gemini Code Assist (VS Code), the main chatbot, and a bunch of other vibe-coding projects which have their own rate limits or no rate limits at all.
It's crazy to think this is sustainable. It'll be like Xbox Game Pass - start at £5/month to hook people in and before you know it it's £20/month and has nowhere near as many games.
gbear6051 day ago
I’m not familiar with the Claude Code subscription, but with Codex I’m able to use millions of tokens per day on the $200/mo plan. My rough estimate was that if I were API billing, it would cost about $50/day, or $1200/mo. So either the API has a 6x profit margin on inference, the subscription is a loss leader, or they just rely on most people not to go anywhere near the usage caps.
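Sanity-checking the parent's own numbers (the $50/day API-equivalent estimate is theirs; the rest is arithmetic):

```python
# Where the "6x" figure comes from: $50/day of API-equivalent usage against a
# $200/month plan, assuming roughly a month of weekday usage.
api_equiv_per_day = 50      # USD, the commenter's estimate
working_days = 24           # ~a month of weekdays
subscription = 200          # USD per month

api_equiv_per_month = api_equiv_per_day * working_days  # ~$1,200
print(api_equiv_per_month / subscription)               # ~6x
```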
dgellow1 day ago
Inference might be cheap, but I'm 100% sure Anthropic has been losing quite a lot of money with their subscription pricing with power users. I can literally see comparison between what my colleagues Claude cost when used with an API key vs when used with a personal subscription, and the delta is just massive
MikeNotThePope1 day ago
I wonder how many people have a subscription and don’t fully utilize it. That’s free money for them, too.
bildung1 day ago
Of course they bundle R&D with inference pricing; how else could they recoup that investment?
The interesting question is: In what scenario do you see any of the players as being able to stop spending ungodly amounts for R&D and hardware without losing out to the competitors?
maplethorpe1 day ago
Didn't OpenAI spend like 10 billion on inference in 2025? Which is around the same as their total revenue?
Why do people keep saying inference is cheap if they're losing so much money from it?
hhh1 day ago
What walled garden, man? There are like four major API providers for Anthropic.
andersmurphy1 day ago
Except all those GPUs running inference need to be replaced every 2 years.
KingMob1 day ago
> They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.
You've described every R&D company ever.
"Synthesizing drugs is cheap - just a few dollars per million pills. They're trying to bundle pharmaceutical research costs... etc."
There's plenty of legit criticisms of this business model and Anthropic, but pointing out that R&D companies sink money into research and then charge more than the marginal cost for the final product, isn't one of them.
mvdtnz1 day ago
"They're not losing money on subscriptions, it's just their revenue is smaller than their costs". Weird take.
sambull1 day ago
The secret is that there is no path to making that back.
stingrae1 day ago
The path is charging just a bit less than the salary of the engineers they are replacing.
JimmaDaRustla1 day ago
My crude metaphor to explain it to my family: gasoline has just been invented and we're all being lent Bentleys to get us addicted to driving everywhere. Eventually we won't be given free Bentleys, and someone is going to be holding the bag when the infinite money machine finally has a hiccup. The tech giants are hoping their gasoline is the one we all crave when we're left depending on driving everywhere and the costs go soaring.
snihalani1 day ago
How do I figure out what the sustainable pricing would be?
fulafel1 day ago
Depends on how you do the accounting. Are you counting only inference costs, or are you amortizing next-gen model dev costs? "Inference is profitable" is oft repeated and rarely challenged. Most subscription users are low-intensity users, after all.
Someone12341 day ago
I agree; unfortunately, when I brought up before that they're losing money, I got jumped on with demands to "prove it", and I guess pointing at their balance sheets isn't good enough.
mattas1 day ago
The question I have: how much are they _also_ losing on per-token billing?
imachine1980_1 day ago
From what I understand, they make money on per-token billing. Not enough to cover what it costs to train, market, run the subscription services, and research new models, but every token sold loses them less money overall.
Finance 101 tldr explanation:
Contribution margin = price per token - variable cost per token (this is positive)
Profit = contribution margin x quantity - fixed costs
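A toy calculation to make the Finance 101 point concrete; every number below is invented for illustration and is not an actual Anthropic figure:

```python
# Positive contribution margin per token means each API sale reduces the loss,
# even while fixed costs (training, research, marketing) keep total profit negative.
price_per_mtok = 15.00          # hypothetical API price per million tokens
variable_cost_per_mtok = 4.00   # hypothetical inference cost per million tokens
fixed_costs = 2_000_000_000     # hypothetical annual training/R&D/marketing spend

contribution_margin = price_per_mtok - variable_cost_per_mtok  # $11 per Mtok, positive
tokens_sold_mtok = 100_000_000                                 # hypothetical volume

profit = contribution_margin * tokens_sold_mtok - fixed_costs
print(f"Contribution margin: ${contribution_margin:.2f} per Mtok")
print(f"Profit at this volume: ${profit:,.0f}")  # negative, but less negative than at zero sales
```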
dcre1 day ago
Why do you think they're losing money on subscriptions?
andersmurphy1 day ago
Does a GPU doing inference serve enough customers, for long enough, to bring in enough revenue to pay for a replacement GPU in two years (plus the power/running cost of the GPU + infrastructure)? That's the question you need to be asking.
If the answer is yes, then they are making money on inference. If the answer is no, the market is going to have a bad time.
Yossarrian221 day ago
Because they're not saying they are making a profit
croes1 day ago
But why does it matter which program you use to consume the tokens?
That sounds like a confession that Claude Code is somewhat wasteful with tokens.
airstrike1 day ago
No, it's a confession they have no moat other than trying to hold onto the best model for a given use case.
I find that competitive edge unlikely to last meaningfully in the long term, but this is still a contrarian view.
More recently, people have started to wise up to the view that the value is in the application layer
Honestly, I think I'm already sold on AI. Who will be the first company to show us all how much it really costs and start the enshittification? First to market wins, right?
What a PR nightmare, on top of an already bad week. I’ve seen 20+ people on X complaining about this and the related confusion.
azuanrb1 day ago
No, it is prohibited. They're just updating the docs to be clearer about their position, which hasn't changed. Their docs were unclear about it.
stingraycharles1 day ago
Yes, it was always prohibited, hence the OpenCode situation one or two months ago.
RamblingCTO1 day ago
They really need to correct that. I understand jack shit. Is openclaw banned under these terms? Or just abuse where I build a business on top of that? And why does it matter anyway? I have my token restrictions ... So let me do what I want.
mh22661 day ago
woof, does Anthropic not have a comms team and a clear comms policy for employees that aren’t on that comms team?
chaos_emergent1 day ago
Probably not, they’re like four years old and they’re 2500 people at the company. My guess is that there are but a handful of PMs.
JimmaDaRustla1 day ago
Incorrect, the third-party usage was already blocked (banned), it just wasn't officially communicated or documented. This post simply makes the policy official rather than leaving it to be inferred from how the product actually behaves.
jspdown1 day ago
I've paid for a Max subscription for a long time. I like their models but I hate their tools:
- Claude Desktop looks like a demo app. It's slow to use and so far behind the Codex app that it's embarrassing.
- Claude Code is buggy as hell, and I don't think I've ever used a CLI tool that consumes so much memory and CPU. Let's not even talk about feature parity with other agents.
- The Claude Agent SDK is poorly documented, half finished, and is just a thin wrapper around a CLI tool…
Oh, and none of this is open source, so I can do nothing about it.
My only option to stay with their models is to build my own tool. And now I discover that using my subscription with the Agent SDK is against the terms of use?
I'm not going to pay 500 USD in API credits every month, no way. I have to move to a different provider.
mihau1 day ago
I agree that Claude Code is buggy as hell, but:
> Let's not talk about the feature parity with other agents.
What do you mean by feature parity with other agents? It seems to me that other CLI agents are the ones quite far behind Claude Code in this regard.
skerit1 day ago
Which other CLI agents are that? Because I've found OpenCode to be A LOT better than Claude-Code.
NewsaHackO1 day ago
>I'm not going to pay 500 USD of API credits every months, no way. I have to move to a different provider
It's funny, you are probably in the cohort that made Anthropic pursue this type of decision so aggressively.
co_king_51 day ago
> Claude Code is buggy as hell and I think I've never used a CLI tool that consumes so much memory and CPU
FWIW this aligns completely with the LLM ethos. Inefficiency is a virtue.
WXLCKNO1 day ago
I had a Claude code instance using 55 GB of RAM yesterday.
DefineOutside1 day ago
I got so tired of cursor that I started writing down every bug I encountered. The list is currently at 30 entries, some of them major bugs such as pressing "apply" on changes not actually applying changes or models getting stuck in infinite loops and burning 50 million tokens.
chick3ndinn3r1 day ago
I tried to have Cursor change a list of US States and Provinces from a list to a dictionary and it did, but it also randomly deleted 3 states.
deadbabe1 day ago
I regret ever promoting that Claude Code crap. I remember when it was nothing but glowing reviews everywhere. Honestly AI companies should stick to what they are good at: direct API interface to powerful models.
We are heading toward a $1000/month model just to use LLMs in the cloud.
chickensong1 day ago
Your core customers are clearly having a blast building their own custom interfaces, so obviously the thing to do is update TOS and put a stop to it! Good job lol.
I know, I know, customer experience, ecosystem, gardens, moats, CC isn't fat, just big boned, I get it. Still a dick move. This policy is souring the relationship, and basically saying that Claude isn't a keeper.
I'll keep my eye-watering sub for now because it's still working out, but this ensures I won't feel bad about leaving when the time comes.
Update: yes yes, API, I know. No, I don't want that. I just want the expensive predictable bill, not metered corporate pricing just to hack on my client.
nostromo1 day ago
They'll all do this eventually.
We're in the part of the market cycle where everyone fights for marketshare by selling dollar bills for 50 cents.
When a winner emerges they'll pull the rug out from under you and try to wall off their garden.
Anthropic just forgot that we're still in the "functioning market competition" phase of AI and not yet in the "unstoppable monopoly" phase.
barrenko1 day ago
"Naveen Rao, the Gen AI VP of Databricks, phrased it quite well:
all closed AI model providers will stop selling APIs in the next 2-3 years. Only open models will be available via APIs (…) Closed model providers are trying to build non-commodity capabilities and they need great UIs to deliver those. It's not just a model anymore, but an app with a UI for a purpose."
> new Amp Free ($10) access is also closed off as of last night
bambax1 day ago
Unstoppable monopoly will be extremely hard to pull off given the number of quality open (weights) alternatives.
I only use LLMs through OpenRouter and switch somewhat randomly between frontier models; they each have some amount of personality but I wouldn't mind much if half of them disappeared overnight, as long as the other half remained available.
JumpCrisscross1 day ago
> They'll all do this eventually
And if the frontier continues favouring centralised solutions, they'll get it. If, on the other hand, scaling asymptotes, the competition will be running locally. Just looking at how much Claude complains about me not paying for SSO-tier subscriptions to data tools when they work perfectly fine in a browser is starting to make running a slower, less-capable model locally competitive with it in some research contexts.
g-mork1 day ago
Imagine having a finite pool of GPUs worth more than their weight in gold, and an infinite pool of users obsessed with running as many queries against those GPUs in parallel as possible, mostly to review and generate copious amounts of spam content primarily for the purposes of feeling modern, and all in return for which they offer you $20 per month. If you let them, you must incur as much credit liability as OpenAI. If you don't, you get destroyed online.
It almost makes me feel sorry for Dario despite fundamentally disliking him as a person.
chickensong1 day ago
Hello old friend, I've been expecting you.
First of all, custom harness parallel agent people are so far from the norm, and certainly not on the $20 plan, which doesn't even make sense because you'd hit token limit in about 90 seconds.
Second, token limits. Does Anthropic secretly have over-subscription issues? Don't know, don't care. If I'm paying a blistering monthly fee, I should be able to use up to the limit.
Now I know you've got a clear view of the typical user, but FWIW, I'm just an aging hacker using CC to build some personal projects (feeling modern ofc) but still driving, no yolo or gas town style. I've reached the point where I have a nice workflow, and CC is pretty decent, but it feels like it's putting on weight and adding things I don't want or need.
I think LLMs are an exciting new interface to computers, but I don't want to be tied to someone else's idea of a client, especially not one that's changing so rapidly. I'd like to roll my own client to interface with the model, or maybe try out some other alternatives, but that's against the TOS, because: reasons.
And no, I'm not interested in paying metered corporate rates for API access. I pay for a Max account, it's expensive, but predictable.
The issue is Anthropic is trying to force users into using their tool, but that's not going to work for something as generic as interfacing with an LLM. Some folks want emacs while others want vim, and there will never be a consensus on the best editor (it's nvim btw), because developers are opinionated and have strong preferences for how they interface with computers. I switched to CC maybe a year ago and haven't looked back, but this is a major disappointment. I don't give a shit about Anthropic's credit liability, I just want the freedom to hack on my own client.
echelon1 day ago
Why do you fundamentally dislike him as a person?
The only thing I've seen from him that I don't like is the "SWEs will be replaced" line (which is probably true and it's more that I don't like the factuality of it).
baq1 day ago
Don’t be mad at it, be happy you were able to throw some of that sweet free vc money at your hobbies instead of paying the market rate.
chickensong1 day ago
Oh I'm not mad, it's more of a sad clown type of thing. I'm still stoked to use it for now. We can always go back to the old ways if things don't work out.
charcircuit1 day ago
They offer an API for people who want to build their own clients. They didn't stop people from being able to use Claude.
dawnerd1 day ago
at a significantly higher price... which of course is why they're doing this.
weird-eye-issue1 day ago
That's what the API is for.
lvl1551 day ago
So basically you are saying Anthropic models are indispensable but you are too cheap to pay for it.
chickensong1 day ago
Nowhere did I say they're indispensable, and I explicitly said I'm still paying for it. If all AI companies disappear tomorrow that's fine. I'm just calling out what I think is tone-deaf move, by a company I pay a large monthly bill to.
TZubiri1 day ago
Sure they're having a blast, they're paying $20 instead of getting charged hundreds for tokens.
It's simple: follow the ToS.
sanex1 day ago
Going to keep using the Agent SDK with my Pro subscription until I get banned.
It's not OpenClaw, it's my own project. It started by just proxying requests to Claude Code through the command line; the SDK just made it easier. Not sure what difference it makes to them whether I have a cron job sending Claude Code requests or an Agent SDK request. Maybe if it's just me and my toy they don't care. We'll see how they clarify tomorrow.
atlgator1 day ago
AI is the new high-end gym membership. They want you to pay the big fee and then not use what you paid for. We'll see more and more roadblocks to usage as time goes on.
turblety1 day ago
This was the analogy I was looking for! It feels like a very creepy way to make money, almost scammy, and the gym membership/overselling comparison hits the nail on the head.
petesergeant1 day ago
This feels more like the gym owner clarifying it doesn't want you using their 24-hour gym as a hotel just because you find their benches comfortable to lie down on, rather than a "roadblock to usage"
redox991 day ago
Not really, these subscriptions have a clear and enforced 5h and weekly limit.
co_king_51 day ago
Sorry but if you're not paying the big fee there's no way you're going to have a job by the late-2020s.
ddxv1 day ago
The pressure is to boost revenue by forcing more people to use the API to generate huge numbers of tokens they can charge more for.
LLMs are becoming common commodities as open-weight models keep catching up. There are similarities with piracy in the 90s, when users realized they could ctrl+c ctrl+v to copy a file/model and didn't need to buy a CD or use the paid API.
chii1 day ago
And that is how it should be - the knowledge that the LLM trained on should be free, and cannot (and should never be) gatekept behind money.
It's merely the hardware that should be charged for - which ought to drop in price if/when demand for it rises. However, hardware is a bottleneck at the moment, and it's hard to see how that gets resolved amid the current US environment of sanctioning anyone who would try.
nucleative1 day ago
Is there no value in how the training was done such that it's accessible via inference in a particularly useful way?
xyzsparetimexyz1 day ago
No, a lot of the data they were trained on was pirated.
troyvit1 day ago
I think I've made two good decisions in my life. The first was switching entirely to Linux around '05, even though it was a giant pain in the ass that was constantly behind the competition in terms of stability and hardware support. It took a while, but wow, no regrets.
The second appears to be hitching my wagon to Mistral even though it's apparently nowhere as powerful or featureful as the big guys. But do you know how many times they've screwed me over? Not once.
Maybe it's my use cases that make this possible. I definitely modified my behavior to accommodate Linux.
WXLCKNO1 day ago
They're too small to screw you over. But you've got more time until they do at least.
dgdosen1 day ago
Is it me, or will this just speed up the timeline where a 'good enough' open model (Qwen? DeepSeek? I'm sure the Chinese will see value in undermining OpenAI/Anthropic/Google) combined with good-enough/cheap hardware (a 10x inference improvement in an M7 MacBook Air?) makes running something like OpenCode locally a no-brainer?
ac291 day ago
The good-enough alternative models are here or will be soon, depending on your definition of good enough. MiniMax-M2.5 looks really competitive and it's a tenth of the cost of Sonnet-4.6 (they also have subscriptions).
Running locally is going to require a lot of memory, compute, and energy for the foreseeable future which makes it really hard to compete with ~$20/mo subscriptions.
kevstev1 day ago
Personally I am already there- I go to Qwen and Deepseek locally via ollama for my dumb questions and small tasks, and only go to Claude if they fail. I do this partially because I am just so tired of everything I do over a network being logged, tracked, mined and monetized, and also partially because I would like my end state to be using all local tools, at least for personal stuff.
irishcoffee1 day ago
People running models locally has always been the scare for the sama's of the world. "Wait, I don't need you to generate these responses for me? I can get the same results myself?"
trillic1 day ago
He can't buy all the RAM
seyz1 day ago
This is how you gift-wrap the agentic era for the open-source Chinese LLMs. Devs don't need the best model, they need one without lawyers attached.
4d4m1 day ago
introducing moderation, steerage and censorship in your LLM is a great way to not even show up to the table with a competitive product. builders have woken up to this reality and are demanding local models
rglullis1 day ago
I just cancelled my Pro subscription. It turns out that Ollama Cloud with GLM-5 and qwen-coder-next is very close in quality to Opus, I never hit their rate limits even with two sessions running the whole day, and there's zero advantage for me in using Claude Code compared to OpenCode.
touristtam22 hours ago
Is that on the $20 sub?
saganus1 day ago
Thariq has clarified that there are no changes to how the SDK and Max subscriptions work.
On a different note, it's surprising that a company that size has to clarify something as important as its ToS via X.
tick_tock_tick1 day ago
> On a different note, it's surprising that a company that size has to clarify something as important as ToS via X
Countries clarify national policy on X. Seriously, it feels like half of the EU parliament lives on Twitter.
touristtam22 hours ago
Which makes the whole 'EU first' movement look super weak when the politicians are among the worst offenders.
adastra221 day ago
FYI a Twitter post that contradicts the ToS is NOT a clarification.
sawjet1 day ago
What's wrong with using X?
minimaxir1 day ago
In the case you are asking in good faith, a) X requires logging in to view most of its content, which means that much of your audience will not see the news because b) much of your audience is not on X, either due to not having social media or have stopped using X due to its degradation to put it generally.
saganus1 day ago
Not bad per se but how much legal weight does it actually carry?
I presume zero.. but nonetheless seems like people will take it as valid anyway.
That can be dangerous I think.
touristtam22 hours ago
ideologically or practically?
jes51991 day ago
There are a million small-scale AI apps that just aren't worth building because there's no way to do billing that makes sense. If Anthropic wanted to own that market, they could introduce a bring-your-own-Claude model, where you log in with Claude and token costs get billed to your personal account (after some reasonable monthly freebies from your subscription).
But the big guys don't seem interested in this; maybe some lesser-known model provider will carve out this space.
avaer1 day ago
This is going to happen. Unfortunately.
I shudder to think what the industry will look like if software development and delivery becomes like YouTubing, where the whole stack and monetization are funneled through a single company (or a couple) that gets to decide who gets how much money.
MillionOClock1 day ago
I am a bit worried that this is the situation I am in with my (unpublished) commercial app right now: one of the major pain points I have is that while I have no doubt the app provides value in itself, I am worried about how many potential users will actually accept paying inference per token...
As an independent dev I also unfortunately don't have investors backing me to subsidize inference for my subscription plan.
Imustaskforhelp1 day ago
I recommend Kimi. People can haggle with them to get the first month cheap and try out your project, and the best part is that Kimi intentionally supports API usage on any of their subscription plans. They also recently changed their billing to be more token-usage based, like the others, instead of their previous tool-calling limits.
It's seriously one of the best models - very comparable to Sonnet/Opus, although Kimi isn't the best at coding. I think it's a really solid model overall and might just be worth it in your use case.
Is the use case extremely coding-intensive (where even a minor improvement can matter at 10-100x the cost), or more general? Because if not, I can recommend Kimi.
herbturbo1 day ago
>> there’s a million small scale AI apps that just aren’t worth building because there’s no way to do the billing that makes sense
Maybe they are not worth building at all then. Like MoviePass wasn’t.
solresol1 day ago
I got banned for violating the terms of use, apparently, but I'm mystified as to what rule I broke, and my appeals just vanish into the ether.
Argonaut9981 day ago
Two of my accounts were banned for some reason and my sub was refunded - literally just from inane conversations. Conversations also disappear and break randomly, but this happens on ChatGPT too sometimes.
tiffanyh1 day ago
In enterprise software, this is an embedded/OEM use case.
And historically, embedded/OEM use cases have always had different pricing models, for a variety of reasons.
How is this any different from that long-established practice?
ziml771 day ago
It's not, but do you really think the people having Claude build wrappers around Claude were ever aware of how services like this are typically offered?
theahura2 days ago
From the legal docs:
> Authentication and credential use
> Claude Code authenticates with Anthropic’s servers using OAuth tokens or API keys. These authentication methods serve different purposes:
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.
> Developers building products or services that interact with Claude’s capabilities, including those using the Agent SDK, should use API key authentication through Claude Console or a supported cloud provider. Anthropic does not permit third-party developers to offer Claude.ai login or to route requests through Free, Pro, or Max plan credentials on behalf of their users.
> Anthropic reserves the right to take measures to enforce these restrictions and may do so without prior notice.
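For reference, the API-key path those docs point developers to looks roughly like the sketch below, using the Anthropic Python SDK. The model ID is a placeholder and the prompt is arbitrary; the point is only that billing runs against a Console API key rather than a Pro/Max OAuth token.

```python
# Minimal sketch of API-key authentication (the permitted path for third-party
# tools). The model name is a placeholder - use whatever your Console account offers.
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])  # Console key, not an OAuth token

response = client.messages.create(
    model="claude-sonnet-latest",   # placeholder model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the changes in this diff."}],
)
print(response.content[0].text)
```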
spullara1 day ago
why wouldn't they just make it so the SDK can't use claude subs? like what are they doing here?
adastra221 day ago
When your company happens upon a cash cow, you can either become a milk company or a meat company.
miroljub1 day ago
Anthropic is dead. Long live open platforms and open-weight models. Why would I need Claude if I can get Minimax, Kimi, and Glm for the fraction of the price?
piokoch1 day ago
To get comparable results you need to run those models on at least prosumer hardware, and it seems two beefed-up Mac Studios are the minimum. Which means that instead of buying that hardware you could purchase Claude, Codex, and many other subscriptions for the next 20 years.
miroljub1 day ago
Or you purchase a year's worth of almost unlimited MiniMax coding plan for a price you'd pay for 15 days of limited Claude usage.
And as a bonus, you can choose your harness. You don't have to suffer CC.
And if something better appears tomorrow, you switch your model, while still using your harness of choice.
ChaitanyaSai1 day ago
OK, I hope someone from Anthropic reads this. Your API billing makes it really hard to work with from India. We've had to switch to OpenRouter because Anthropic keeps rejecting all the cards we have tried - and these are major Indian banks. This has been going on for MONTHS.
woutr_be1 day ago
It’s the same here in Hong Kong. I can’t use any of my cards (personal or corporate) for OpenAI or Anthropic.
Have to do everything through Azure, which is a mess to even understand.
scwoodal1 day ago
Why does it matter to Anthropic whether my $200 plan usage is coming from Claude Code or a third party?
Doesn't it all count toward my usage limits the same way?
bluegatty1 day ago
If you buy a season pass for Disneyland, you can't 'sublet' it to another kid to use on the days you don't; you're not really buying a 'daily access rate'.
Anthropic subs are not 'bulk tokens'.
It's not an unreasonable policy, and it's entirely inevitable that they have to restrict it.
scwoodal1 day ago
I’m not subletting my sub to anyone. I’m the only one using the third party harness.
I’m using their own SDK in my own CLI tool.
croes1 day ago
It’s still me going to Disneyland, I just take a different route
JimmaDaRustla1 day ago
Disingenuous analogy.
It's more like buying a season pass for Disneyland, then being told you can't park for free when entering the park even though free parking is included with the pass. Still not unreasonable, but it brings to light that the intention is to force the user into an ecosystem.
digdugdirk1 day ago
They don't get as much visibility into your data, just the actual call to/from the api. There's so much more value to them in that, since you're basically running the reinforcement learning training for them.
hackingonempty1 day ago
Increasing the friction of switching providers as much as possible is part of their strategy to push users to higher subscription tiers and deny even scraps to their competitors.
operatingthetan1 day ago
Probably because the $20 plan is essentially a paid demo for the higher plans.
zb31 day ago
They're losing money on this $200 plan and they're essentially paying you to make you dependent on Claude Code so they can exploit this (somehow) in the future.
esafak1 day ago
It's a bizarre plan because nobody is 'dependent' on Claude Code; we're begging to use alternatives. It's the model we want!
psoundy1 day ago
When using Claude Code, it's possible to opt out of having one's sessions be used for training. But is that opt out for everything? Or only message content, such that there could remain sufficient metadata to derive useful insight from?
minimaxir1 day ago
Any user who is using a third-party client is likely self-selected into being a power user who is less profitable.
raffkede1 day ago
At this point, where Kimi K2.5 on Bedrock with a simple open-source harness like pi is almost as good, the big labs will soon have to compete for users... OpenAI seems to know that already? While Anthropic bans, bans, bans.
hrpnk1 day ago
Do you know by any chance if Bedrock custom model import also works with on-demand use, without any provisioned capacity? I'm still puzzled why they don't offer all qwen3 models on Bedrock by default.
raffkede1 day ago
I see a lot of Qwen3 in us-west-2.
And I have no experience with custom models on Bedrock.
lsaferite1 day ago
That page is... confusing.
> Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK.
This is literally the last sentence of the paragraph right before the "Authentication and credential use" section.
bothlabs1 day ago
I would expect it is still only enforced in a semi-strict way.
I think what they want to achieve here is less "kill OpenClaw" and more "keep our losses under control in general". And now they have a clear criterion to point to when they take action, and a good way to decide whom to act on.
If your usage is high, they would block you or take action. Because if you have your Max subscription and aren't really losing them money, why should they push you away? (The monopoly incentive sounds wrong in the current market.)
ed_mercer1 day ago
Openclaw is unaffected by this as the Claude Code CLI is called directly
Veen1 day ago
Many people use the Max subscription OAuth token in OpenClaw. The main chat, heartbeat, etc., functionality does not call the Claude Code CLI. It uses the API authenticated via subscription OAuth tokens, which is precisely what Anthropic has banned.
There are many other options too: direct API, other model providers, etc. But Opus is particularly good for "agent with a personality" applications, so it's what thousands of OpenClaw users go with, mostly via the OAuth token, because it's much cheaper than the API.
Rapzid1 day ago
Their moat is evaporating before our eyes. Anthropic is Microsoft's side piece, but Microsoft is married with kids to OpenAI.
And OpenAI just told Microsoft why they shouldn't be seeing Anthropic anymore; Gpt-5.3-codex.
RIP Anthropic.
gregjw1 day ago
And because of this I'll obviously opt not to subscribe to a Claude plan when I can just use something like Copilot and access the models that way via OpenCode.
anhner1 day ago
how comparable are the usage limits?
akulbe1 day ago
Is this a direct shot at things like OpenClaw, or am I reading it wrong?
planckscnst1 day ago
They even block Claude Code if you've modified it via tweakcc. When they blocked OpenCode, I ported a feature I wanted to Claude Code so I could keep using that feature. After a couple of days, they started blocking it with the same message OpenCode gets. I'm going to drop down to the $20 plan and shift most of my work to OpenAI/ChatGPT because of this. The harness features matter more to me than model differences in the current generation.
mapontosevenths1 day ago
Opencode as well. Folks have been getting banned for abusing the OAuth login method to get around paying for API tokens or whatever. Anthropic seems to prefer people pay them.
serf1 day ago
It's not that innocent.
A 200-dollar-a-month customer isn't trying to get around paying for tokens; they're trying to use the tooling they prefer. OpenCode is better in a lot of ways.
Tokens get counted against usage limits anyway, so unless they're trying to capture analytics that are CC-exclusive, they should let paying customers consume up to the usage limits in however way they want to use the models.
numpad01 day ago
I wonder if it has to do with Grok somehow. They had a suspiciously high reputation until, all at once, they didn't - right after Anthropic said they did something.
baconner1 day ago
For sure, yes. They already added attempts to block opencode, etc.
ogimagemaker1 day ago
The fundamental tension here is that AI companies are selling compute at a loss to capture market share, while users are trying to maximize value from their subscriptions.
From a backend perspective, the subscription model creates perverse incentives. Heavy users (like developers running agentic workflows) consume far more compute than casual users, but pay the same price. Third-party tools amplify this asymmetry.
Anthropic's move is economically rational but strategically risky. Models are increasingly fungible - Gemini 3.1 and Claude 4.5 produce similar results for most tasks. The lock-in isn't the model; it's the tooling ecosystem.
By forcing users onto Claude Code exclusively, they're betting their tooling moat is stronger than competitor models. Given how quickly open-source harnesses like pi have caught up, that's a bold bet.
chasd001 day ago
Is the tooling moat and secret sauce in Claude Code the client? That's super risky given the language it was written in (JavaScript). I bet Claude Code itself could probably reverse-engineer the minified JavaScript, trace the logic, and then rename variables to something sensible for readability. Then the secret sauce is exposed for all to see.
Also, can't you set up a proxy for the cert and a packet sniffer to watch whatever Claude Code is doing with respect to API access? To me, if you have "secret sauce" you have to keep it server-side and make the client as dumb as possible, especially if your client executes as JavaScript.
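For what it's worth, the proxy idea is straightforward to prototype. A minimal sketch with a mitmproxy addon is below; it assumes the client honors HTTPS_PROXY and trusts the mitmproxy CA certificate, neither of which is guaranteed for any particular tool (and certificate pinning would defeat it entirely).

```python
# log_anthropic.py - run with: mitmdump -s log_anthropic.py
# Logs requests the proxied client sends to Anthropic endpoints.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Only inspect traffic headed for Anthropic's API.
    if "anthropic.com" in flow.request.pretty_host:
        print(flow.request.method, flow.request.pretty_url)
        # Truncated body: system prompt, messages, tool definitions, etc.
        print(flow.request.get_text()[:2000])
```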
2001zhaozhao1 day ago
There is a new breed of agent-agnostic tools that call the Claude Code CLI as if it's an API (I'm currently trying out vibe-kanban).
This could be used to adhere to Claude's TOS while still allowing the user to switch AI companies at a moment's notice.
Right now there's limited customizability in this approach, but I think it's not far-fetched to see FAR more integrated solutions in the future if the lock-in trend continues. For example: one MCP server that you can configure into a coding agent like Claude Code that overrides its entire behavior (tools, skills, etc.) with a different unified open-source system. Think something similar to the existing IntelliJ IDEA MCP that provides its own file-edit tool, etc., in place of the one the agent comes with.
Illustration of what i'm talking about:
- You install Claude Code with no configuration
- Then you install the meta-agent framework
- With one command the meta-agent MCP is installed in Claude Code, built-in tools are disabled via permissions override
- You access the meta-agent through a different UI (similar to vibe-kanban's web UI)
- Everything you do gets routed directly to Claude Code, using your Claude subscription legally. (Input-level features like commands get resolved by meta-agent UI before being sent to claude code)
- Claude Code must use the tools and skills directly from meta-agent MCP as instructed in the prompt, and because its own tools are permission denied (result: very good UI integration with the meta-agent UI)
- This would also work with any other CLI coding agent (Codex, Gemini CLI, Copilot CLI etc.) should they start getting ideas of locking users in
- If Claude Code rug-pulls subscription quotas, just switch to a competitor instantly
All it requires is a CLI coding agent with MCP support, and the TOS allowing automatic use of its UI (disallowing that would be massive hypocrisy as the AI companies themselves make computer use agents that allow automatic use of other apps' UI)
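As a rough illustration of the meta-agent idea, here is a toy MCP server exposing a replacement file-edit tool, built with the official MCP Python SDK's FastMCP helper. The server name and tool are hypothetical placeholders; the host agent would be configured to register this server and to deny its own built-in editing tools.

```python
# meta_agent_server.py - a toy MCP server exposing a replacement edit tool.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("meta-agent")  # hypothetical server name

@mcp.tool()
def edit_file(path: str, old: str, new: str) -> str:
    """Replace the first occurrence of `old` with `new` in the given file."""
    p = Path(path)
    text = p.read_text()
    if old not in text:
        return f"'{old}' not found in {path}"
    p.write_text(text.replace(old, new, 1))
    return f"Edited {path}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; add it to the host agent's MCP config
```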
chasd001 day ago
Could you think of it as ClaudeCode is just a tool used by another agent and that other agent is instructed to use the ClaudeCode tool for everything? Makes sense, i don't see why we can't have agents use these agents for us, just like the AI companies are proposing to use their agents in place of everything else we currently use.
Also, why not distribute implementation documentation so claudecode can write OpenCode itself and use your oauth token. Now you have opencode for personal use, you didn't get it from anywhere your agent created it for you and only you.
gandreani1 day ago
This is funny. This change actually pushes me toward using a competitor more (https://www.kimi.com). I was trying out this provider with oh-my-pi (https://github.com/can1357/oh-my-pi) and was lamenting that it didn't have web search implemented for Kimi.
For what it's worth, I built an alternative specifically because of the ToS risk. GhostClaw uses proper API keys stored in AES-256-GCM + Argon2id encrypted vault -no OAuth session tokens, no subscription credentials, no middleman. Skills are signed with Ed25519 before execution. Code runs in a Landlock + seccomp kernel sandbox. If your key gets compromised you rotate it; if a session token gets compromised in someone else's app you might not even know.
That should be illegal. They used the excuse that it was there for the taking, or literally just burnt the evidence of pirated books.
What they are doing is implicitly changing the contract of usage of their services.
adastra221 day ago
What is the point of developing against the Agent SDK after this change?
saneshark1 day ago
OpenClaw, NanoClaw, et al. all use the Agent SDK, which will from now on be forbidden.
They are literally alienating a large percentage of OpenClaw, NanoClaw, and PicoClaw customers, because those customers will surely not be willing to pay API pricing, which is at least 6-10x Max plan pricing (for my usage).
This isn't too surprising to me, since they probably have a direct competitor to OpenClaw et al. in the works right now, but until then I am cancelling my subscription and porting my NanoClaw fork with mem0 integration to work with OpenAI instead.
That's not a "that'll teach 'em" statement, it's just my own cost optimization. I am quite fond of Anthropic's coding models and might still subscribe again at the $20 level, but they just priced me out for my personal assistant, research, and 90% of my token use case.
Tepix1 day ago
What does Anthropic have to gain from users who use a very high amount of tokens for OpenClaw, NanoClaw etc and pay them only $20?
slopinthebag1 day ago
How can they even enforce this? Can't you just spoof all your network requests to appear as if they're coming from Claude Code?
In any case, Codex is a better SOTA option anyway, and they let you do this. And if you aren't interested in the best models, Mistral lets you use both Vibe and their API through your Vibe subscription API key, which is incredible.
Uehreka1 day ago
> how can they even enforce this?
Many ways, and they’re under no obligation to play fair and tell you which way they’re using at any given time. They’ve said what the rules are, they’ve said they’ll ban you if they catch you.
So let’s say they enforce it by adding an extra nonstandard challenge-response handshake at the beginning of the exchange, which generates a token which they’ll expect on all requests going forward. You decompile the minified JS code, figure out the protocol, try it from your own code but accidentally mess up a small detail (you didn’t realize the nonce has a special suffix). Detected. Banned.
You’ll need a new credit card to open a new account and try again. Better get the protocol right on the first try this time, because debugging is going to get expensive.
Let’s say you get frustrated and post on Twitter about what you know so far. If you share info, they’ll probably see it eventually and change their method. They’ll probably change it once a month anyway and see who they catch that way (and presumably add a minimum Claude Code version needed to reach their servers).
They’ve got hundreds of super smart coders and one of the most powerful AI models, they can do this all day.
slopinthebag1 day ago
The internet has hundreds of thousands of super smart coders with the most powerful AI models as well; I think it's a bit harder than you're assuming.
You just need to inspect Claude Code's network traffic and mimic it.
stanguc1 day ago
easily "bypassable", trust me :)
Imustaskforhelp1 day ago
See my comment here, but I think instead of worrying about decompiling the minified JS code etc., you can essentially keep using Claude Code in the background and still drive it through OpenCode/its SDK, which gives you a sort of API access over a CC subscription: https://news.ycombinator.com/item?id=47069299#47070204
I am not sure how they could detect this. I could be wrong, I usually am, but I think it's still possible to use CC this way even after this change if you really wanted to.
But at this point, the GP's question - is it even worth it? - is exactly what I'm thinking.
I think not. There are better options out there: they mentioned Mistral and Codex, and I think Kimi, and maybe GLM/z.ai as well.
paxys1 day ago
Pretty easy to enforce it - rather than make raw queries to the LLM Claude Code can proxy through Anthropic's servers. The server can then enforce query patterns, system prompts and other stuff that outside apps cannot override.
cjpartridge1 day ago
And once all the Claude subscribers move over to Codex subscriptions, I'd bet a large sum that OpenAI will make their own ToS update preventing automated/scripted usage.
baconner1 day ago
They can't catch everything, but they can make the product you're building on top of it non-viable once it gets popular enough to look for, like they did with OpenCode.
slopinthebag1 day ago
at least with open code you can just use a third party plugin to authenticate
tbrownaw1 day ago
> how can they even enforce this?
I would think that different tools would probably have different templates for their prompts?
charcircuit1 day ago
You could tell by the prompt being used.
techpression1 day ago
We don’t enforce speed limits, but it sucks when you get caught.
OpenAI will adjust, their investors will not allow money to be lost on ”being nice” forever, not until they’re handsomely paid back at least.
mchaver1 day ago
Quick reminder! We are in the golden era of big-company programming agents. Enjoy it while you can, because it is likely going to get worse over time. Hopefully there will be competitive open-source agents and some benevolent nerds will put together a reasonable service. Otherwise I can see companies investing in their own AI infrastructure, and developers who build their own systems becoming the top performers.
This is the VC-funded startup playbook. It has been repeated many times, but maybe for the younger crowd it is new. Start a new service that is relatively permissive, then gradually restrict APIs and permissions. Finally, start throwing in ads and/or making it more expensive to use. Part of the reason is that in the beginning they are trying to get as many users as possible while burning VC money. Then once the honeymoon is over, they need to make a profit, so they cut back on services, nerf stuff, increase prices, and start adding ads.
techgnosis1 day ago
This feels perfectly justifiable to me. The subscription plans are super cheap and if they insist you use their tool I understand. Ya'll seem a bit entitled if I'm being honest.
brothrock1 day ago
This article is somewhat reassuring to me, as someone experimenting with OpenClaw on a Max subscription. But I don't know anything about the blog, so I'd love to hear thoughts.
In my opinion (which means nothing): if you are using your own hardware and not profiting directly from Claude's use (as in building a service powered by your subscription), I don't see how this is a problem. I am by no means blowing through my usage (usually <50% weekly with Max 5x).
Frannky1 day ago
I feel like they want to be like Apple, and open-code + open-source models are Linux. The thing is, Apple is (for some) way better in user experience and quality. I think they can pull it off only if they keep their distance from the others. But if Google/Chinese models become as good as Claude, then there won’t be a reason — at least for me — to pay 10x for the product
whs1 day ago
The analogy I like to use when people say "I paid" is that you can't pay for a buffet then get all the food take-home for free.
small_model1 day ago
Not sure what the problem is. I'm on Max and use Claude Code, never hit usage issues; that's what I pay for, and I want that to always be an option (capped monthly cost). For other uses it makes sense to go through their API service. This is less confusing and provides clarity for users: if you are a first-party user, use Claude's tools to access the models; otherwise, use the API.
mccoyb1 day ago
OpenAI has endorsed OAuth from 3rd party harnesses, and their limits are way higher. Use better tools (OpenCode, pi) with an arguably better model (xhigh reasoning) for longer …
wyre1 day ago
I am looking forward to switching to OpenAI once my claude max account is banned for using pi....
agentifysh1 day ago
I wrote an MCP bridge so that I don't have to copy and paste prompts back and forth between the CLI and Claude, ChatGPT, Grok, and Gemini.
Does this mean I have to remove Claude now and go back to copy-and-pasting prompts for a subscription I am paying for?!
What happened to fair use?
wg01 day ago
That's it. That's all the moat they have.
mns1 day ago
Does this mean that in an absurd way you can get banned if you use CodexBar https://github.com/steipete/CodexBar to keep track of your usage? It does use your credentials to fetch the usage, could they be so extreme that this would be an issue?
bad_haircut721 day ago
This is a signal that everyone making AI apps should build on Gemini/OpenAI, and since there is a dance of code and model to get good results, Anthropic is inevitably writing itself out of being the backend for everyone else's AI apps going forward.
8cvor6j844qw_d61 day ago
Not surprised, it's the official stance of Anthropic.
I'm more surprised by people using subscription auth for OpenClaw when it's officially not allowed.
edg50001 day ago
Their model actually doesn't have much of a moat, if any. Their agent harness doesn't either, at least not for long: writing an agent harness isn't that difficult. They are desperately trying to stay in power. I don't like being a customer of this company and am investing a lot of my time in moving away from them completely.
avaer1 day ago
They are obviously losing money on these plans, just like all of the other companies in the space.
They are all desperately trying to stay in power, and this policy change (or clarification) is a fart in the wind in the grand scheme of what's going on in this industry.
gdorsi1 day ago
I think their main problem is that they don't have enough resources to serve that many users, so they resort to this kind of limitation to keep Claude usage under control.
Otherwise I couldn't explain a commercial move that limits their offering so strongly compared to competitors.
obsidianbases11 day ago
Product usage subsidized by company, $100.
Users inevitably figure out how to steal those subsidies, agents go brrrrr.
Users mad that subsidy stealing gets cut off and completely ignore why they need to rely on subsidies in the first place, priceless.
vldszn1 day ago
At this point, are there decent alternatives to Anthropic models for coding that allow third-party usage?
syntaxing1 day ago
OpenAI has been very generous in their plans in terms of tokens and what you can use them with. Is Codex better than or as good as Opus for coding? No. Is it a decent alternative? Very.
vldszn1 day ago
Thanks for the reply. Need to try Codex
Imustaskforhelp1 day ago
Kimi is amazing for this. They offer API usage as well iirc if you buy their subscription.
Bolwin1 day ago
Not regular api usage, just the kimi coding plan, which you can only use in some coding agents
raffkede1 day ago
Also the .99c deal has API Access
vldszn1 day ago
Thanks, will explore Kimi. Haven’t tried it yet
ramon1561 day ago
This month was the first month I spent >$100 on it, and it didn't feel like money well spent. I feel borderline scammed.
I'm just going to accept that my €15 (which with VAT becomes €21) is just enough usage to automate some boring tasks.
OJFord1 day ago
Seems fair enough really, not that I like it either, but they could easily not offer the plans and only have API pricing. Makes it make more sense to have the plans be 'the Claude Code pricing' really.
stiiv1 day ago
I'm wondering: why now, in early 2026? Why not last year? Why not in July? What changed? What does this teach us about Anthropic and what can we infer about their competition?
ac291 day ago
Especially given how generous Anthropic has been recently with subscribers - extra usage in December, a $50 credit earlier this month, the $20 subscription getting access to Opus.
It suggests to me that Anthropic is less concerned with the financial impact of letting subscribers use alternative tools and more concerned with creating lock-in to their products and subscriptions. It very well might backfire though: I was not considering alternative models yesterday, but today I am actively exploring other options and considering cancelling my sub. I've been using my subscription primarily through pi recently, so if they aren't interested in me as a customer, pretty much everyone else is.
butlike1 day ago
Sounds like a panicking company grasping and clawing for a moat
ksec1 day ago
In the old days - think Gmail, or before the "unlimited" marketing scam - people genuinely were smart enough to know when they were doing something they were not supposed to be doing. Even pirating software, say Windows or Adobe: who could afford those when they were young?
Things got banned, but that was OK as long as they gave us weeks or days to prepare an alternative solution. Users (not customers) were happy with it. Too bad the good days are over.
Somewhere along the line, not just in software but even in politics, the whole world turned to entitlement. People somehow believe they deserve this: what they were doing was wrong, but if it was allowed in the first place, they should remain allowed to do it.
Judging from account opening times and comments, we can also tell the age group and which camp people are in.
wiseowise1 day ago
I don't understand - which camp are you in?
NamlchakKhandro1 day ago
The one you're not in
0x500x791 day ago
The telemetry from claude code must be immensely valuable for training. Using it is training your replacement!
Ajedi321 day ago
Hot take: trying to restrict what front end people use to access your service is almost always an anti-competitive, anti-consumer freedom move which should be legally prohibited for those reasons. (Not just for AI, I'm talking about any and all cloud services.)
Regarding consumer freedom, I believe software running on user machines should serve the interests of the user, not the company who wrote the software or anyone else for that matter. Trying to force users to run a particular client written by your company violates this principle.
Regarding competition, forcing users to run a particular client is a form of anti-competitive bundling, a naked attempt to prevent alternative clients from entering the market unless they can build a competing backend as well. Such artificial "moats" are great for companies but harmful to consumers.
qwertox1 day ago
It's a bit unclear to me. I'm building a system around the Claude Agent SDK. Am I allowed to use it or not? Apparently not.
bob10291 day ago
I'm a bit lost on this one.
I can get a ridiculous amount of tokens in and out of something like gpt-5.2 via the API for $100.
Is this primarily about gas town and friends?
arjunchint1 day ago
Honestly, we're seeing throttling of AI usage across all providers:
- Google reduced AI Studio's free rate limits by 1/10th
There has been a false narrative that AI will get cheaper and more ubiquitous, but model providers have been stuck in a race for ever more capability and performance at higher cost.
infecto1 day ago
Am I the only one perplexed about why folks find this stunning or meaningful? While LLMs are novel and different in that the subscription gives you access to compute, this doesn't feel foreign to the subscription landscape, free or paid. I cannot recall many (if any?) companies that would freely let you use compute, private internal APIs, or anything similar just because you have a login. Maybe I come from a different era of tech, but it seems both reasonable and unsurprising.
Why now? It would not surprise me if this was simply an afterthought, and once it hit critical mass (OpenCode) they locked it down.
shortsunblack1 day ago
Anthropic has no authority to do this. Users and third-party apps are protected by the interoperability exceptions found in copyright case law.
Trying to prevent competitors from interoperating with the service may also be construed as anticompetitive behaviour.
The implementation details of an authentication process do not beget legal privileges to be a monopolist. What an absurd thought.
halayli1 day ago
What about using claude -p as an api interface?
zb31 day ago
This confirms they're selling those subscriptions at a loss which is simply not sustainable.
Gigachad1 day ago
They probably are but I don’t think that’s what this confirms. Most consumer flat rate priced services restrict usage outside of the first party apps, because 3rd party and scripted users can generate orders of magnitude more usage than a single user using the app can.
So it makes sense to offer simple flat pricing for first party apps, and usage priced apis for other usage. It’s like the difference between Google Drive and S3.
zb31 day ago
I get your point - they might be counting on users not using the full quota they're officially allowed (and if that's the case, Anthropic is not losing money). But still: IF the user used the whole quota, Anthropic would lose money, so what's advertised is not actually honest.
For me, flat rates are simply unfair either way: if I'm not using the product much, I'm overpaying (and they're OK with that); otherwise, it magically turns out that it's no longer OK when I actually want to utilize what I paid for :)
TechSquidTV1 day ago
My alt Google accounts were all banned from Gemini access. Luckily Google left my main account alone. They are all cracking down.
sciencejerk1 day ago
From 3rd party AI app use?
TechSquidTV1 day ago
Using a proxy to switch accounts
_sh2j1 day ago
Why do I get the nagging suspicion their 1 million LOC codebase is backdoored?
The subscription already has usage caps. If the caps are the caps, why does the client matter? If the caps aren't actually the caps, that's a different conversation.
tmo9d1 day ago
Claude Code is a lock-in play. Use Cursor or OpenCode.
avereveard1 day ago
Too bad - I'll stick with Codex as the thinker and GLM-5 as the hands, at a fraction of the cost.
thepasch1 day ago
The people who they’re going to piss off the most with this are the exact people who are the least susceptible to their walled garden play. If you’re using OpenCode, you’re not going to stop using it because Anthropic tells you to; you’re just going to think ‘fuck Anthropic’, press whatever you’ve bound “switch model” to, and just continue using OpenCode. I think most power users have realized by now that Claude Code is sub-par software and probably actively holding back the models because Anthropic thinks they can’t work right without 20,000 tokens worth of system prompt (my own system prompt has around 1,000 and outperforms CC at every test I throw it at).
They’re losing the exact crowd that they want in their corner because it’s the crowd that’s far more likely to be making the decisions when companies start pivoting their workflows en-masse. Keep pissing on them and they’ll remember the wet when the time comes to decide whom to give a share from the potentially massive company’s potentially massive coffers.
chazftw1 day ago
You need a company with a market cap in the trillions to succeed here
drivebyhooting1 day ago
How does this impact open router?
Can’t this restriction for the time being be bypassed via -p command line flag?
minimaxir1 day ago
OpenRouter uses the API and does not use any subscription auth.
cedws1 day ago
The reason I find this so egregious is because I don’t want to use Claude Code! It’s complete rubbish, completely sidelines security, and nobody seems to care. So I’m forced to use their slop if I want to use Claude models without getting a wallet emptying API bill? Forget it, I will use Codex or Gemini.
Claude Code is not the apex. We’re still collectively figuring out the best way to use models in software, this TOS change kills innovation.
giamma1 day ago
So even simple apps that are just code usage monitors are banned?
jeroenhd1 day ago
Always have been, unless you're using the API meant for apps.
But if you're doing something very basic, you might be able to slop together a tool that does local inferencing based on a small, local model instead, alleviating the need to call Claude entirely.
vcryan1 day ago
You can use the Claude CLI as a relay - yes, it needs to be there - but it's not that different from using the API.
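A minimal version of that relay pattern is just shelling out to the installed claude binary in print mode. The sketch below assumes the CLI's -p (non-interactive print) flag behaves as documented; exact flags and output formats can vary between versions.

```python
# Rough sketch: use the local Claude Code CLI as a relay instead of a raw API call.
import subprocess

def ask_claude(prompt: str) -> str:
    result = subprocess.run(
        ["claude", "-p", prompt],   # -p: print the response and exit (non-interactive)
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask_claude("Explain what this repository does in two sentences."))
```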
yamirghofran1 day ago
Cancelled my Claude and bought GLM coding plan + Codex.
piokoch1 day ago
This is something I think Anthropic does not get. They want to be the Microsoft of AI and make people depend on their solution so that they won't move to another provider. The thing is, giving access to a text prompt is not something you can monopolize easily. Even if you provide extras like Skills or MCP server integration, that is not a big deal.
singularity20011 day ago
Importantly, they have clarified that it's OK to use it for personal experimentation if you don't build a business out of it!
aydyn1 day ago
Sonnet literally just recommended using a subscription token for openclaw. Even anthropic's own AI doesn't understand its own TOS.
minimaxir1 day ago
Sonnet was not trained with this information and extremely-recent-information-without-access-to-a-Web-Search-tool is the core case of hallucination.
aydyn1 day ago
Sonnet does have search available FYI.
lucasyvas1 day ago
Isn’t this flawed anyway? If an application communicates with Claude Code over ACP (like Zed), it works fine?
Instead of using SDKs, this will just shift the third party clients to use ACP to get around it - Claude Code is still under the hood but you’re using a different interface.
This all seems pretty idiotic on their part - I know why they’re trying it but it won’t work. There will always be someone working around it.
okokwhatever1 day ago
You guys are acting like coke addicts... dont you see?
neya1 day ago
Anthropic is just doing this out of spite. They had a real chance to win mindshare and market share, and they fucked it up instead. They could have done what OpenAI did - hired the OpenClaw/d founder. Instead, they sent him a legal notice for trademark violation. And now they're just pissed he works for their biggest competitor. Throw all the tantrums you want, you're on the wrong side of this one, Anthropic.
skerit1 day ago
Agreed!
I don't understand how so many people on here seem to think it is completely reasonable for Anthropic to act like this.
neya1 day ago
Apple/OpenAI = god
Anthropic = good
Google = evil
That's pretty much HN crowd logic to be honest
oger1 day ago
So there goes my OpenClaw integration with Anthropic via OAuth…
While I see their business risk, I also see the onboarding path for new paying customers. I just upgraded to Max and would even consider the API if the cost were controllable. I hope Anthropic finds a smart, constructive way to communicate with customers and offers advice to the not-so-skilled OpenClaw homelabbers instead of terminating their accounts…
Is anybody here from Anthropic who could pick up that message before a PR nightmare happens?
hedora1 day ago
Oh crap. I just logged into HN to ask if anyone knew of a working alternative to the Claude Code client. It's lost Claude's work multiple times in the last few days, and I'm ready to switch to a different provider. (4.6 is mildly better than 4.5, but the TUI is a deal breaker.)
So, I guess it's time to look into OpenAI Codex. Any other viable options? I have a 128GB iGPU, so maybe a local model would work for some tasks?
simpleusername1 day ago
QWEN models are quite nice for local use. Gemini 3 Pro is much better than Codex IMO.
edg50001 day ago
Local? No, not currently. You need about 1TB of VRAM. There are many harnesses in development at the moment, so keep a good lookout. Just try many of them and look at the system prompts in particular. Consider DeepSeek using the official API. Consider also tweaking system prompts for whatever tool you end up using. And agreed that the TUI is meh; we need a GUI.
Imustaskforhelp1 day ago
Zed with CC using ACP?
Opencode with CC underneath using Gigacode?
OpenAI codex is also another viable path for what its worth.
I think the best open-source model to my liking is Kimi K2.5, so maybe you can run that?
Qwen is releasing some new models so I assume keep an eye on those and maybe some model can fit your use case as well?
maxbond1 day ago
Zed's ACP client is a wrapper around Agents SDK. So that will be a TOS violation.
mercurialsolo1 day ago
Codex has now caught up to Claude Opus and this is a defensive move by Anthropic
tallesborges921 day ago
Thankfully Codex allows using their subscription and it's working very well for me. I will not miss anything from Anthropic. BTW, bad move; shame on you.
lvl1551 day ago
People on here are acting like school children over this. It’s their product that they spent billions to make. Yet here we are complaining about why they should let you use third party products specifically made to compete against Anthropic.
You can still simply pay for API.
deanc1 day ago
Just a friendly reminder also to anyone outside the US that these subscriptions cannot be used for commercial work. Check the consumer ToS when you sign up. It’s quite clear.
andersmurphy1 day ago
Yeah for context the TOS outside the US has:
Non-commercial use only. You agree not to use our Services for any commercial or business purposes and we (and our Providers) have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity.
kosolam1 day ago
May we still use the agent sdk for our own private use with the max account? I’m a bit confused.
j451 day ago
That's too bad; in a way it was a bit of an unofficial app store for Anthropic - I am sure they've probably looked at that, and hopefully this means there's something on its way.
sandeepkd1 day ago
Not really sure if it's even feasible to enforce unless the idea is to discourage the big players from doing it.
jongjong1 day ago
I have no issues with this. Anthropic did a great job with Claude Code.
It's a little bit sleazy as a business model to try to wedge one's self between Claude and its users.
OpenAI acquiring OpenClaw gives me bad vibes. How did OpenClaw gain so much traction so quickly? It doesn't seem organic.
I definitely feel much more aligned with Anthropic as a company. What they do seems more focused, meritocratic, organic and genuine.
OpenAI essentially appropriated all their current IP from the people... They basically gutted the non-profit and stole its IP. Then sold a huge chunk to Microsoft... Yes, they literally sold the IP they stole to Microsoft, in broad daylight. Then they used media spin to make it sound like they appropriated it from Elon because Elon donated a few million... But Elon got his tax deduction! The public footed the bill for those deductions... The IP belonged to the non-profit; to the public, not Elon, nor any of the donors. I mean let's not even mention Suchir Balaji, the OpenAI researcher who supposedly "committed suicide" after trying to warn everyone about the stolen IP.
OpenAI is clearly trying to slander Anthropic, trying to present themselves as the good guys after their OpenClaw acquisition and really rubbing it in all over HN... Over which they have much influence.
tstrimple1 day ago
The entitlement from many HN posters is astounding. "Companies must provide services in the way I want billed how I want and with absolutely zero restrictions at all!" Get over yourselves. You're not that important. Don't like it. Don't use it. Seems pretty straightforward.
andrewflnr1 day ago
> The real value in these tools is not the model, it is the harness... And that is the part that is easiest to replicate.
> The companies that will win long-term are the ones building open protocols and letting users bring their own model.
These seem contradictory. It sounds like you're saying that the long term winners are the ones who do the easy part. The future I see is open source harnesses talking to commodity models.
herzigma1 day ago
More than just the real value: the real intelligence is in the harness.
__MatrixMan__1 day ago
I agree, but that's not something you can maintain an advantage on for long.
Perhaps there's enough overlap among the low hanging fruit that you can initially sell a harness that makes both genomics researchers and urban planners happy... but pretty quickly you're going to need to be the right kind of specialist to build an effective harness for it.
mccoyb1 day ago
The opposite is true.
There is barely any magic in the harness, the magic is in the model.
Try it: write your own harness with (bash, read, write, edit) ... it's trivial to get a 99% version of (pick your favorite harness) -- minus the bells and whistles.
The "magic of the harness" comes from the fun auxiliary orchestration stuff - hard engineering for sure! - but seriously, the model is the key item.
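For what it's worth, here is roughly what that minimal harness looks like: a hedged sketch of a tool-use loop against the Anthropic Messages API with just bash/read/write tools (edit omitted for brevity). The model id, output truncation, and tool schemas are illustrative placeholders, not any particular product's implementation:

    import subprocess
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    TOOLS = [
        {"name": "bash", "description": "Run a shell command and return its output.",
         "input_schema": {"type": "object",
                          "properties": {"command": {"type": "string"}},
                          "required": ["command"]}},
        {"name": "read", "description": "Read a text file.",
         "input_schema": {"type": "object",
                          "properties": {"path": {"type": "string"}},
                          "required": ["path"]}},
        {"name": "write", "description": "Overwrite a text file.",
         "input_schema": {"type": "object",
                          "properties": {"path": {"type": "string"},
                                         "content": {"type": "string"}},
                          "required": ["path", "content"]}},
    ]

    def run_tool(name, args):
        # Deliberately naive: no sandboxing, no permission prompts.
        if name == "bash":
            out = subprocess.run(args["command"], shell=True, capture_output=True, text=True)
            return (out.stdout + out.stderr)[:20_000]
        if name == "read":
            return open(args["path"]).read()[:20_000]
        if name == "write":
            open(args["path"], "w").write(args["content"])
            return "ok"
        return f"unknown tool: {name}"

    def agent_loop(task):
        messages = [{"role": "user", "content": task}]
        while True:
            resp = client.messages.create(
                model="claude-sonnet-4-5",   # placeholder model id
                max_tokens=4096,
                tools=TOOLS,
                messages=messages,
            )
            messages.append({"role": "assistant", "content": resp.content})
            if resp.stop_reason != "tool_use":
                return "".join(b.text for b in resp.content if b.type == "text")
            results = [{"type": "tool_result", "tool_use_id": b.id,
                        "content": run_tool(b.name, b.input)}
                       for b in resp.content if b.type == "tool_use"]
            messages.append({"role": "user", "content": results})

Everything beyond this loop - subagents, compaction, permission prompts, skills - is the "bells and whistles" part being argued about.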
cranberryturkey1 day ago
They really fucked up by not embracing OpenClaw; now I use Codex 5.3
exabrial1 day ago
The number one thing we need is cheap abundant decentralized clean energy, and these things are laughable.
Unfortunately neither political party can get all of the above.
minimaxir1 day ago
Are you implying that no one would use LLM SaaSes and everyone would self-host if energy costs were negligible?
That is...not how it works. People self-hosting don't look at their electricity bill.
hedora1 day ago
I was stuck on the part where they said neither party could provide cheap abundant decentralized clean energy. Biden / Obama did a great job of providing those things, to the point where dirty coal and natural gas are both more expensive than solar or wind.
So, which two parties could they be referring to? The Republicans and the Freedom Caucus?
mkw50531 day ago
And I just bought my mac mini this morning... Sorry everyone
eleventyseven1 day ago
You know that if you are just using a cloud service and not running local models, you could have just bought a raspberry pi.
mkw50531 day ago
Yeah. I know it’s dumb but it’s also a very expensive machine to run BlueBubbles, because iMessage requires a real Mac signed into an Apple ID, and I want a persistent macOS automation host with native Messages, AppleScript, and direct access to my local dev environment, not just a headless Linux box calling APIs.
renewiltord1 day ago
Harder to get at the Apple ecosystem. I have an old Macbook that just serves my reminders over the internet.
theptip1 day ago
I think this is shortsighted.
The markets value recurring subscription revenue at something like 10x “one-off” revenue; Anthropic is leaving a lot of enterprise value on the table with this approach.
In practice this approach forces AI apps to pay Anthropic for tokens, and then bill their customers a subscription. Customers could bring their own API key but it’s sketchy to put that into every app you want to try, and consumers aren’t going to use developer tools. And many categories of free app are simply excluded, which could in aggregate drive a lot more demand for subscriptions.
If Anthropic is worried about quota, seems they could set lower caps for third-party subscription usage? Still better than forcing API keys.
(Maybe this is purely about displacing other IDE products, rather than a broader market play.)
herbturbo1 day ago
I think they are smart making a distinction between a D2C subscription which they control the interface to and eat the losses for vs B2B use where they pay for what they use.
Allows them to optimize their clients and use private APIs for exclusive features etc. and there’s really no reason to bootstrap other wannabe AI companies who just stick a facade experience in front of Anthropic’s paying customer.
edg50001 day ago
> eat the losses
Look at your token usage of the last 30 days in one of the JSON files generated by Claude Code. Compare that against API costs for Opus. Tell me if they are eating losses or not. I'm not making a point, actually do it and let me know. I was at 1 million. I'm paying 90 EUR/m. That means I'm subsidizing them (paying 3-4 times what it would cost with the API)! And I feel like I'm a pretty heavy user. Although people running it in a loop or using Gas Town will be using much more.
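If you want to run that comparison yourself, the arithmetic is just token counts times per-million-token list prices. A sketch with made-up token counts and placeholder prices (substitute your own numbers and Anthropic's current price list; cache reads are billed far below fresh input):

    # Placeholder numbers -- plug in your own 30-day token counts and the
    # current per-million-token prices before drawing any conclusion.
    fresh_input_tok = 5_000_000      # uncached input tokens (example)
    cache_read_tok  = 50_000_000     # cache-read input tokens (usually the bulk)
    output_tok      = 2_000_000      # output tokens

    price_in, price_cache, price_out = 15.0, 1.50, 75.0   # USD per million tokens (illustrative)

    api_equivalent = (fresh_input_tok * price_in
                      + cache_read_tok * price_cache
                      + output_tok * price_out) / 1_000_000
    print(f"API-equivalent cost: ${api_equivalent:,.0f} vs. a 90-200 EUR/USD monthly subscription")

Whether that comes out as "subsidy" or "margin" depends entirely on which numbers you plug in, which is the point being made above.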
bluegatty1 day ago
There's no decision to be made here; it's just way too expensive to have 3rd parties soak up the excess tokens - that's not the product being sold.
Especially as they are subsidized.
techpression1 day ago
That’s not true, the market loves pay per use, see ”cloud”. It outperforms subscriptions by a lot, it’s not ”one-off”.
And your example is not how companies building on top tend to charge, you either have your own infrastructure (key) or get charged at-cost + fees and service costs.
I don’t think Anthropic has any desire to be some B2C platform, they want high paying reliable customers (B2B, Enterprise).
theptip1 day ago
> the market loves pay per use, see ”cloud”.
Cloud goes on the books as recurring revenue, not one-off; even though it's in principle elastic, in practice if I pay for a VM today I'll usually pay for one tomorrow.
(I don't have the numbers but the vast majority of cloud revenue is also going to be pre-committed long-term contracts from enterprises.)
> I don’t think Anthropic has any desire to be some B2C platform
This is the best line of argument I can see. But still not clear to me why my OP doesn't apply for enterprise, too.
Maybe the play is just to force other companies to become MCPs, instead of enabling them to have a direct customer relationship.
Reading these comments aren't we missing the obvious?
Claude Code is a lock in, where Anthropic takes all the value.
If the frontend and API are decoupled, they are one benchmark away from losing half their users.
Some other motivations: they want to capture the value. Even if it's unprofitable they can expect it to become vastly profitable as inference cost drops, efficiency improves, competitors die out etc. Or worst case build the dominant brand then reduce the quotas.
Then there's brand - when people talk about OpenCode they will occasionally specify "OpenCode (with Claude)" but frequently won't.
Then platform - at any point they can push any other service.
Look at the Apple comparison. Yes, the hardware and software are tuned and tested together. The analogy here is training the specific harness, caching the system prompt, switching models, etc.
But Apple also gets to charge Google $billions for being the default search engine. They get to sell apps. They get to sell cloud storage, and even somehow a TV. That's all super profitable.
At some point Claude Code will become an ecosystem with preferred cloud and database vendors, observability, code review agents, etc.
Anthropic is going to be on the losing side with this. Models are too fungible, it's really about vibes, and Claude Code is far too fat and opinionated. Ironically, they're holding back innovation, and it's burning the loyalty the model team is earning.
the fat and opinionated has always been true for them (especially compared to openai), and to all appearances remains a feature rather than a bug. i can’t say the approach makes my heart sing, personally, but it absolutely has augured tremendous success among thought workers / the intelligensia
I thought Anthropic would fall after OpenAI, but they just might be racing to the bottom faster here.
I think they're doing a great job on the coding front though
Maybe for coding but the number of normie users flooding to Claude over OAI is huge.
I think their branding is cementing in place for a lot of people, and the lived experience of people trying a lot of models often ends up with a simple preference for Claude, likely using a lot of the same mental heuristics as how we choose which coworkers we enjoy working with. If they can keep that position, they will have it made.
I'm a very experienced developer with a lot of diverse knowledge and experience in both technical and domain knowledge. I've only tried a handful of AI coding agents/models... I found most of them ranging from somewhat annoying to really annoying. Claude+Opus (4.5 when I started) is the first one I've used where I found it more useful than annoying to use.
I think Github Co-Pilot is most annoying from what I've tried... it's great for finishing off a task that's half done where the structure is laid out, as long as you put blinders keeping it focused on it. OpenAI and Google's options seem to get things mostly right, but do some really goofy wrong things from my own experiences.
They all seem to have trouble using state of the art and current libraries by default, even when you explicitly request them.
I think you have it exactly backwards, and that "owning the stack" is going to be important. Yes the harness is important, yes the model is important, but developing the harness and model together is going to pay huge dividends.
https://mariozechner.at/posts/2025-11-30-pi-coding-agent/
This coding agent is minimal, and it completely changed how I use models; Claude's CLI now feels like extremely slow bloat.
I wouldn't be surprised if you're right that companies/management will prefer the "pay for a complete package" approach for a long while, but power users shouldn't care much for the model providers' own tooling.
I have like 100 lines of code that get me a tmux-controls & semaphore_wait extension in the pi harness. When I adopted it a month ago, that gave me a better orchestration scheme than Claude has right now.
As far as I can tell, the more you try to train your model on your harness, the worse it gets. Bitter lesson #2932.
That was more true mid last year, but now we have a fairly standard flow and set of core tools, as well as better general tool-calling support. The reality is that in most cases harnesses with fewer tools and smaller system prompts outperform.
The advances in the Claude Code harness have been more around workflow automation rather than capability improvements, and truthfully workflows are very user-dependent, so an opinionated harness is only ever going to be "right" for a narrow segment of users, and it's going to annoy a lot of others. This is happening now, but the sub subsidy washes out a lot of the discontent.
If Claude Code is so much better why not make users pay to use it instead of forcing it on subscribers?
You're right, because owning the stack means better options for making tons of money. Owning the stack is demonstrably not required for good agents, there are several excellent (frankly way better than ol' Claude Code) harnesses in the wild (which is in part why so many people are so annoyed by Anthropic about this move - being forced back onto their shitty cli tool).
The competition angle is interesting - we're already seeing models like Step-3.5-Flash advertise compatibility with Claude Code's harness as a feature. If Anthropic's restrictions push developers toward more open alternatives, they might inadvertently accelerate competitor adoption. The real question is whether the subscription model economics can sustain the development costs long-term while competitors offer more flexible terms.
I don't think many are confused about why Anthropic wants to do this. The crux is that they appear to be making these changes solely for their own benefit at the expense of their users and people are upset.
There are parallels to the silly Metaverse hype wave from a few years ago. At the time I saw a surprising number of people defending the investment saying it was important for Facebook to control their own platform. Well sure it's beneficial for Facebook to control a platform, but that benefit is purely for the company and if anything it would harm current and future users. Unsurprisingly, the pitch to please think of this giant corporation's needs wasn't a compelling pitch in the end.
"Training the specific harness" is marginal -- it's obvious if you've used anything else. pi with Claude is as good as (even better! given the obvious care to context management in pi) as Claude Code with Claude.
This whole game is a bizarre battle.
In the future, many companies will have slightly different secret RL sauces. I'd want to use Gemini for documentation, Claude for design, Codex for planning, yada yada ... there will be no generalist take-all model, I just don't believe RL scaling works like that.
I'm not convinced that a single company can own the best performing model in all categories, I'm not even sure the economics make it feasible.
Good for us, of course.
> pi with Claude is as good as (even better! given the obvious care to context management in pi) as Claude Code with Claude
And that’s out of the box. With how comically extensible pi is and how much control it gives you over every aspect of the pipeline, as soon as you start building extensions for your own personal workflow, Claude Code legitimately feels like a trash app in comparison.
I don’t care what Anthropic does - I’ll keep using pi. If they think they need to ban me for that, then, oh well. I’ll just continue to keep using pi. Just no longer with Claude models.
As a Claude Code user looking for alternatives, I am very intrigued by this statement.
Can you please share good resources I can learn from to extend pi?
Don't think that's a valid comparison.
Apple can do those things because they control the hardware device, which has physical distribution, and they lock down the ecosystem. There is no third party app store, and you can't get the Photos app to save to Google Drive.
With Claude Code, just export an env variable or use a MITM proxy + some middleware to forward requests to OpenAI instead. It's impossible to have lock in. Also, coding agent CLIs are a commodity.
> At some point Claude Code will become an ecosystem with preferred cloud and database vendors, observability, code review agents, etc.
i've been wondering how anthropic is going to survive long term. If they could build out an infrastructure and services to compete with the hyperscalers but surfaced as a tool for claude to use then maybe. You pay Anthropic $20/user/month for ClaudeCode but also $100k/month to run your applications.
>Claude Code is a lock in, where Anthropic takes all the value.
I wouldn't say all the value, but how else are you going to run the business? Allow others to take all the value you provide?
> Reading these comments aren't we missing the obvious?
AI companies: "You think you own that code?"
???
Use an API Key and there's no problem.
They literally put that in plain words in the ToS.
Using an API key is orders of magnitude more expensive. That's the difference here. The Claude Code subscriptions are being heavily subsidized by Anthropic, which is why people want to use their subscriptions in everything else.
They are subsidized by people who underuse their subscriptions. There must be a lot of them.
Be the economics as they may, there is no lock in as OP claims.
This statement is plainly wrong.
If you boost and praise AI usage, you have to face the real cost.
Can't have your cake and eat it, too.
The people mad about this feel they are entitled to the heavily subsidized usage in any context they want, not in the context explicitly allowed by the subsidizer.
It's kind of like a new restaurant started handing out coupons for "90% off", wanting to attract diners to the restaurant, customers started coming in and ordering bulk meals then immediately packaging them in tupperware containers and taking it home (violating the spirit of the arrangement, even if not the letter of the arrangement), so the restaurant changed the terms on the discount to say "limited to in-store consumption only, not eligible for take-home meals", and instead of still being grateful that they're getting food for 90% off, the cheapskate customers are getting angry that they're no longer allowed to exploit the massive subsidy however they want.
It might be some confirmation bias on my part, but it feels as if companies are becoming more and more hostile to their API users. Recently Spotify basically nuked their API with zero urgency to fix it, Reddit has a whole convoluted npm package you're obliged to use to create a bot, and Facebook requires you to provide registered company and tax details even for development with some permissions. Am I just an old man screaming at a cloud about how APIs used to be actually useful and intuitive?
They put no limits on the API usage, as long as you pay.
Here, they put limits on the "under-cover" use of the subscription. If they can provide a relatively cheap subscription against the direct API use, this is because they can control the stuff end-to-end, the application running on your system (Claude Code, Claude Desktop) and their systems.
As you subscribe to these plans, this is the "contract", you can use only through their tools. If you want full freedom, use the API, with a per token pricing.
For me, this is fair.
> If they can provide a relatively cheap subscription against the direct API use
Except they can't. Their costs are not magically lower when you use claude code vs when you use a third-party client.
> For me, this is fair.
This is, plain and simple, a tie-in sale of claude code. I am particularly amused by people accepting it as "fair" because in Brazil this is an illegal practice.
> This is, plain and simple, a tie-in sale of claude code. I am particularly amused by people accepting it as "fair" because in Brazil this is an illegal practice
I am very curious what is particularly illegal about this. On the sales page nowhere do they actually talk about the API https://claude.com/pricing
Now we all know obviously the API is being used because that is how things work, but you are not actually paying a subscription for the API. You are paying for access to Claude Code.
Is it also illegal that if you pay for Playstation Plus that you can't play those games on an Xbox?
Is it illegal that you can't use third party netflix apps?
I really don't want to defend an AI company here, but this is perfectly normal. In no other situation would we expect access to the API; the only reason this is considered different is because they also have a different service that gives access to the API. But that is irrelevant.
I've heard they actually cache the full Claude Code system prompt on their servers and this saves them a lot of money. Maybe they cache the MCP tools you use and other things. If another harness like Opencode changes that prompt or adds significantly to it, that could increase costs for them.
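For anyone curious about the mechanism being alluded to: on the public API, prompt caching is something the client opts into per request by marking a large, stable block with cache_control, and the usage metadata separates cache writes from much cheaper cache reads. A minimal sketch - the model id and prompt are placeholders, and whether Claude Code gets any additional server-side treatment beyond this is speculation:

    import anthropic

    client = anthropic.Anthropic()
    SYSTEM_PROMPT = "...many KB of stable harness instructions..."  # placeholder

    resp = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model id
        max_tokens=1024,
        system=[{"type": "text",
                 "text": SYSTEM_PROMPT,
                 "cache_control": {"type": "ephemeral"}}],  # opt this block into caching
        messages=[{"role": "user", "content": "Summarize the repo layout."}],
    )

    # Cache writes vs. cheaper cache reads are reported separately in usage.
    print(resp.usage.cache_creation_input_tokens, resp.usage.cache_read_input_tokens)

A harness that rewrites or reorders that stable prefix on every request defeats the cache, which is presumably the cost concern.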
What I don't understand is why start this game of cat and mouse? Just look at Youtube and YT-DLP. YT-DLP, and the dozens of apps that use it, basically use Youtube's unofficial web API and it still works even after Youtube constantly patches their end. Though now, YT-DLP has to use a makeshift JS interpreter and maybe even spawn Chromium down the line.
Unless it's illegal in more places, I think they won't care. In my experience, the percentage of free riders in Brazil is higher (due to circumstances, better said).
While the cost may not be lower the price certainly can be if they are operating like any normal company and adding margin.
But they could charge the third-party client for access to the API.
I think what most people don't realize is that running an agent 24/7 fully automated is burning a huge hole in their profitability. Who even knows how big it is. It could be in the 8 or 9 figures a day for all we know.
There's this pervasive idea left over from the pre-LLM days that compute is free. If you want to rent your own H200x8 to run your Claude model, that's literally going to cost $24/hour. People are just not thinking like that: "I have my home PC, it does this stuff, I can run it 24/7 for free."
I don't see how it's fair. If I'm paying for usage, and I'm using it, why should Anthropic have a say on which client I use?
I pay them $100 a month and now for some reason I can't use OpenCode? Fuck that.
Their subscriptions aren't cheap, and it has nothing really to do with them controlling the system.
It's just price differentiation - they know consumers are price sensitive, and that companies wanting to use their APIs to build products so they can slap AI on their portfolio and get access to AI-related investor money can be milked. On the consumer-facing front, they live off branding and if you're not using claude code, you might not associate the tool with Anthropic, which means losing publicity that drives API sales.
It would be less of an issue if Claude-Code was actually the best coding client, and would actually somehow reduce the amount of tokens used. But it's not. I get more things done with less tokens via OpenCode. And in the end, I hit 100% usage at the end of the week anyway.
It doesn't really make sense to me because the subscriptions have limits too.
But I agree they can impose whatever user hostile restrictions they want. They are not a monopoly. They compete in a very competitive market. So if they decide to raise prices in whatever shape or form then that's fine.
Arbitrary restrictions do play a role for my own purchasing decisions though. Flexibility is worth something.
I'm with the parent comment. It was inevitable Netflix would end password-sharing. It was inevitable you'd have to pick between freeform usage-based billing and a constrained subscription experience. Using the chatbot subscription as an API was a weird loophole. I don't feel betrayed.
They tier it. So you are limited until you pay more. So you can't just right away get the access you need.
I don't and would never pay for an LLM, but presumably they also want to force ads down your throat eventually, yeah? Hard to do if you're just selling API access.
Every garden eventually becomes a walled garden once enough people are inside.
#earthisamoat
Spotify are probably reacting to https://annas-archive.li/blog/backing-up-spotify.html where basically the whole archive was downloaded
that was later.
Can you sell ads via the API? If the answer is no, then this “feature” would be at the bottom of the list.
They can sell API access via transparent pricing.
Instead, many, many websites (especially in the music industry) have some sort of funky API that you can only get access to if you have enough online clout. Very few are transparent about what "enough clout" even means or how much it'd cost you, and there's like an entire industry of third-party API resellers that cost like 10x more than if you went straight to the source. But you can't, because you first have to fulfill some arbitrary criteria that you can't even know about ahead of time.
It's all very frustrating to deal with.
Plus, use of the API is a way to avoid ads. So double-strike against good/available APIs.
Of course they can [1].
Though, in this case, you get free API access to the model.
[1]: https://x.com/badlogicgames/status/2017063228094709771
There is a world where approaches like HTTP 402 are implemented to monetize API usage.
What kind of ads would they sell in the Claude Code terminal? Are you a bot?
See [1] for a solution.
[1] https://i.programmerhumor.io/2025/03/778c56a79115a582edb9949...
I think that these companies are understanding that as the barrier to entry to build a frontend gets lower and lower, APIs will become the real moat. If you move away from their UI they will lose ad revenue, viewer stats - in short, the ability to optimize how to harness your full attention. It would be great to have some stats on hand to see if and how much active API use has increased or decreased in the last two years, as I would not be surprised if it had increased at a much faster pace than in the past.
> the barrier to entry to build a frontend gets lower
My impression is the opposite: frontend/UI/UX is where the moat is growing because that's where users will (1) consume ads (2) orchestrate their agents.
What ad revenue? In their terminal cli?
"Are becoming", you sweet summer child.
It all started with Facebook closing pretty much everything and making FB Messenger a custom protocol instead of XMPP.
And whatever API access is still available is so shit and badly managed that even a household name billion dollar gaming company couldn't get a fast-lane for approval to use specific API endpoints.
The final straw was Twitter effectively closing up their API "to protect from bots", which in fact did NOT protect anyone from bots. All it did was prevent legitimate entertaining and silly bots from acting on the platform, the actual state-controlled trolls just bought the blue checkmark and continued as-is.
I don't think it's particularly hard to figure out: APIs have been particularly at risk of being exploited for negative purposes due to the explosion of AI-powered bots
This trend well predates widespread use of chatbots.
It's just the continued slow death of the open internet
I’m predicting that there will be a new movement to make everything an MCP. It’s now easier for non-technical people to consume an API.
You're correct in your observations. In the age of agents, the walls are going up. APIs are no longer a value-add; they're a liability. MCP and the equivalent will be the norm interface. IMO.
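To make "everything an MCP" concrete, here is a hedged sketch of exposing a single capability as an MCP tool using the official Python SDK's FastMCP helper; the server name, tool, and lookup are invented for illustration:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("orders")   # hypothetical server exposing an order-status capability

    @mcp.tool()
    def order_status(order_id: str) -> str:
        """Return the status of an order (stubbed lookup for the example)."""
        return f"Order {order_id}: shipped"

    if __name__ == "__main__":
        mcp.run()   # serves over stdio by default, so any MCP client can attach

The appeal for platforms is that the tool is consumed through an agent they can gate, rather than a raw endpoint anyone can scrape.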
What is given can be taken away. Despite the extra difficulty, this is why unofficial methods (e.g. scraping) are often superior. Soon we'll see more fully independent data scraping done by cameras and microphones.
APIs leak profit and control vs their counterpart SDK/platforms. Service providers use them to bootstrap traffic/brand, but will always do everything they can to reduce their usage or sunset them entirely if possible.
Facebook doing that is actually good, to protect consumers from data abuse after incidents like Cambridge Analytica. They are holding businesses that touch your personal data responsible.
> Facebook doing that is actually good, to protect consumers from data abuse after incidents like cambridge analytica.
There is nothing here stopping Cambridge Analytica from doing this again; they will provide whatever details are needed. But a small pre-launch personal project that might use a Facebook publishing application can't be developed or tested without first going through all the bureaucracy.
Never mind the non-profit 'free' application you might want to create on the FB platform - let's say a "Post to my FB" share Chrome extension for personal use. You can't do this because you can't create an application without a company and IVA/TAX documents. It's hostile imo.
Before, you could create an app, link your ToS, privacy policy etc., verify your domain via email, and then if users wanted to use your application they would agree; this is how a lot of companies still do it. I'm actually not sure why FB does this specifically.
Facebook knew very early and very well about the data harvesting that was going on at Cambridge Analytica through their APIs. They acted so slowly and so leniently that it's IMO hard to believe that they did not implicitly support it.
> to protect consumers
We are talking about Meta. They have never, and will never, protect customers. All they protect is their wealth and their political power.
Is it? I’ve never touched Facebook api, but it sounds ridiculous that you need to provide tax details for DEVELOPMENT. Can’t they implement some kind of a sandbox with dummy data?
They just want people to use facebook. If you can see facebook content without being signed in they have a harder time tracking you and showing you ads.
„Open Access APIs are like a subway. You use them to capture a market and then you get out.“
— Erdogan, probably.
Given the Cambridge Analytica scandal, I don’t take too much issue to FB making their APIs a little tougher to use
Not sure how relevant this comment is
Everyone has heard the word "enshittification" at this point and this falls in line. But if you haven't read the book [0] it's a great deep dive into the topical area.
But the real issue is that these companies, once they have any market leverage, do things in their best interest to protect the little bit of moat they've acquired.
[0] https://www.mcdbooks.com/books/enshittification
That is not new, just new with APIs.
The usual cycle with startups is to:
- Start being very open, as this brings people developing over the platforms and generates growth
- As long as they are growing, VC money will come to pay for everything. This is the scale up phase
- Then comes the VC exit, IPO or whatever
- Now the new owners don't want user growth, they want margin growth. This is the company phase
- Companies then have to monetize their users (why not ads?), and close up free or high-maintenance stuff that does not bring margin
- and report that sweet $$$ growth quarter after quarter
...until a new startup comes in and starts the cycle over again, destroying all the value the old company had.
A mix of Enshittification and Innovators Dilemma theories
APIs are the best when they let you move data out and build cool stuff on top. A lot of big platforms do not really want that anymore. They want the data to stay inside their silo, so access gets slower, harder, and more locked down. So you are not just yelling at the cloud; this feels pretty intentional.
Google now wants $30,000 a month for customsearch (minimum charge), up from 1c per search or thereabouts in January 2026...
There is no moat except market saturation and gate keeping for most platforms.
It's because AI is being trained on all of these APIs and the platforms are at risk of losing what makes them valuable (their data). So they have to take the API down or charge enough that it wouldn't be worth it for an AI.
But this ban is precisely on circumventing the API.
You're not wrong. Reddit & Elon started it and everyone laughed at them and made a stink. But my guess is the "last dying gasp of the freeloader" /s wasn't enough to dissuade other companies from jumping on the bandwagon, cause fiduciary responsibility to shareholders reigns supreme at the end of the day.
This is sort of true!
Spotify in particular is just patently the very worst. They released an amazing and delightful app SDK in 2011, allowing for really neat apps inside the desktop app. Then cancelled it by 2014. It feels like their entire ecosystem has only ever gone downhill. Their car device was cancelled nearly immediately. Every API just gets worse and worse. Remarkable to see a company have only ever had such a downward slide. The Spotify Graveyard is, imo, a place of significantly less honor than the Google Graveyard. https://web.archive.org/web/20141104154131/https://gigaom.co...
But also, I feel like this broad repulsive trend is such an untenable position now that AI is here. Trying to make your app an isolated disconnected service is a suicide pact. Some companies will figure out how to defend their moat, but generally people are going to prefer apps that allow them to use the app as they want, increasingly, over time. And they are not going to be stopped even if you do try to control terms!
Were I a smart engaged company, I'd be trying to build WebMCP access as soon as possible. Adoption will be slow, this isn't happening fast, but people who can mix human + agent activity on your site are going to be delighted by the experience, and that you will spread!
WebMCP is better IMHO than conventional APIs because it layers into the experience you are already having. It's not a separate channel; it can build and use the session state of your browsing to do the things. That's a huge boon for users.
I really hope someone from any of those companies (if possible all of them) would publish a very clear statement regarding the following question: If I build a commercial app that allows my users to connect using their OAuth token coming from their ChatGPT/Claude etc. account, do they allow me (and their users) to do this or not?
I totally understand that I should not reuse my own account to provide services to others, as direct API usage is the obvious choice here, but this is a different case.
I am currently developing something that would be the perfect fit for this OAuth based flow and I find it quite frustrating that in most cases I cannot find a clear answer to this question. I don't even know who I would be supposed to contact to get an answer or discuss this as an independent dev.
EDIT: Some answers to my comment have pointed out that the ToS of Anthropic were clear. I'm not saying they aren't if taken in a vacuum, yet in practice, even after this was published, some confusion remained online, in particular regarding whether OAuth token usage was still OK with the Agent SDK for personal usage. If it happens to be, that would lead to other questions I personally cannot find a clear answer to, hence my original statement. Also, I am very interested in the stance of other companies on this subject.
Maybe I am being overly cautious here but I want to be clear that this is just my personal opinion and me trying to understand what exactly is allowed or not. This is not some business or legal advice.
I don't see how they can get more clear about this, considering they have repeatedly answered it the exact same way.
Subscriptions are for first-party products (claude.com, mobile and desktop apps, Claude Code, editor extensions, Cowork).
Everything else must use API billing.
The biggest reason why this is confusing is the Claude Agent SDK[0] will use subscription/oauth credentials if present. The terms update implies that there's some use cases where that's ok and other use cases (commercial?) where using their SDK on a user's device violates terms.
[0] https://platform.claude.com/docs/en/agent-sdk/overview
And at that point, you might as well use OpenRouter's PKCE and give users the option to use other models..
These kinds of business decisions show how these $200.00 subscriptions for their slot/infinite jest machines basically light that $200.00 on fire, and in general how unsustainable these business models are.
Can't wait for it all to fail. They'll eventually try to get as many people as possible to pay per token, while somehow getting people to use their verbose agentic tools that are able to inflate revenue through inefficient context/output shenanigans.
You are talking about Anthropic and indeed compared to OpenAI or GitHub Copilot they have seemed to be the ones with what I would personally describe as a more restrictive approach.
On the other hand OpenAI and GitHub Copilot have, as far as I know, explicitly allowed their users to connect to at least some third party tools and use their quotas from there, notably to OpenCode.
What is unclear to me is whether they are considering also allowing commercial apps to do that. For instance if I publish a subscription based app and my users pay for the app itself rather than for LLM inference, would that be allowed?
Then why does the SDK support subscription usage? Can I at least use my subscription for my own use of the SDK?
What if you wrap the service using their Agent SDK?
Quick question but what if I use claude code itself for the purpose?
https://github.com/rivet-dev/sandbox-agent/tree/main/gigacod... [I saw this in Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp] (https://news.ycombinator.com/item?id=46912682)
This can make OpenCode work with Claude Code, and the added benefit is that OpenCode has a TypeScript SDK for automation; the back end of this is still running Claude Code, so technically it should work even with the new TOS?
So in the case of the OP: maybe OpenCode TS SDK <-> Claude Code (using this tool or any other like it) <-> the OAuth sign-in option of Claude Code users?
Also, zed can use the ACP protocol itself as well to make claude code work iirc. So is using zed with CC still allowed?
> I don't see how they can get more clear about this, considering they have repeatedly answered it the exact same way.
This is confusing quite frankly; there's also the Claude Agent SDK thing which firloop and others talked about too. Some say it's allowed, others say it's not. It's all confusing.
That’s very clearly a no, I don’t understand why so many people think this is unclear.
You can’t use Claude OAuth tokens for anything. Any solution that exists worked because it pretended/spoofed to be Claude Code. Same for Gemini (Gemini CLI, Antigravity)
Codex is the only one that got official blessing to be used in OpenClaw and OpenCode, and even that was against the ToS before they changed their stance on it.
Is Codex ok with any other third party applications, or just those?
But why does it matter which program consumes the tokens?
I think you're just trying to see ambiguity where it doesn't exist because the looser interpretation is beneficial to you. It totally makes sense why you'd want that outcome and I'm not faulting you for it. It's just that, from a POV of someone without stake in the game, the answer seems quite clear.
It is pretty obviously no. API keys billed by the token, yes, Oauth to the flat rate plans no.
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.
If you look at this tweet [1] and in particular responses under it, it still seems to me like some parts of it need additional clarification. For instance, I have seen some people interpret the tweet as meaning using the OAuth token is actually ok for personal experimentation with the Agent SDK, which can be seen as a slight contradiction with what you quoted. A parent tweet also mentioned the docs clean up causing some confusion.
None of this is legal advice, I'm just trying to understand what exactly is allowed or not.
[1] https://x.com/trq212/status/2024212380142752025?s=10
What flatrate?
Pro and Max are both limited
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai.
I think this is pretty clear - No.
So it’s forbidden to use the Claude Mac app. I would say the ToS as it is, can’t be enforced
Anthropic has published a very clear statement. It's "no".
Does https://happy.engineering/ need to use the API keys or can use oauth? It's basically a frontend for claude-cli.
It doesn't even touch auth right?
""" Usage policy
Acceptable use Claude Code usage is subject to the Anthropic Usage Policy. Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK """
That tool clearly falls under ordinary individual use of Claude code. https://yepanywhere.com/ is another such tool. Perfectly ordinary individual usage.
https://yepanywhere.com/sdk-auth-clarification.html
The TOS are confusing because just below that section it talks about authentication/credential use. If an app starts reading api keys / credentials, that starts falling into territory where they want a hard line no.
If it's a wrapper that invokes the `claude` binary then I believe it's fine.
Usually, it is already stated in their documentation (auth section). If a statement is vague, treat it as a no. It is not worth the risk when they can ban you at any time. For example, ChatGPT allows it, but Claude and Gemini do not.
https://developers.openai.com/codex/auth
Maybe I am missing something from the docs of your link, but I unfortunately don't think it actually states anything regarding allowing users to connect and use their Codex quota in third party apps.
One set of applications to build with a subscription is to use the claude-go binary directly. The Humanlayer/Codelayer projects on GitHub do this. Granted, those are not ideal for building a subscription-based business using OAuth tokens from Claude and OpenAI. But you can build a business by building a development env and gating other features behind a paywall, or just offering enterprise service for certain features like vertical AI (redpanada) offerings for knowledge workers, voice-based interaction (there was a YC startup here the other day doing this I think), structured outputs, and workflows. There is lots to build on.
Not allowed. They've already banned people for this.
I'm only waiting for OpenAI to provide an equivalent ~100 USD subscription to entirely ditch Claude.
Opus has gone downhill continuously in the last week (and before you start flooding with replies, I've been testing Opus/Codex in parallel for the last week; I've got plenty of examples of Claude going off track, then apologising, then saying "now it's all fixed!" and then only fixing part of it, when Codex nailed it at the first shot).
I can accept specific model limits, not an up/down in terms of reliability. And don't even let me get started on how bad Claude client has become. Others are finally catching up and gpt-5.3-codex is definitely better than opus-4.6
Everyone else (Codex CLI, Copilot CLI etc...) is going opensource, they are going closed. Others (OpenAI, Copilot etc...) explicitly allow using OpenCode, they explicitly forbid it.
This hostile behaviour is just the last drop.
OpenAI forces users to verify with their ID + face scan when using Codex 5.3 if any of your conversations was deemed high risk.
It seems like they currently have a lot of false positives: https://github.com/openai/codex/issues?q=High%20risk
They haven't asked me yet (my subscription is from work with a business/team plan). Probably my conversations are too boring.
I’m unsure exactly in what way you believe it has gone “down the hill” so this isn’t aimed at you specifically but more a general pattern I see.
That pattern is people complaining that a particular model has degraded in quality of its responses over time or that it has been “nerfed” etc.
Although the models may evolve, and the tools calling them may change, I suspect a huge amount of this is simply confirmation bias.
> Opus has gone down the hill continously in the last week
Is a week the whole attention timespan of the late 2020s?
We’re still in the mid-late 2020s. Once we really get to the late 2020s, attention spans won’t be long enough to even finish reading your comment. People will be speaking (not typing) to LLMs and getting distracted mid-sentence.
Unfortunately, and “Attention Is All You Need”.
oh shit we're in the late 2020's now
Opus 4.6 genuinely seems worse than 4.5 was in Q4 2025 for me. I know everyone always says this and anecdote != data but this is the first time I've really felt it with a new model to the point where I still reach for the old one.
I'll give GPT 5.3 codex a real try I think
Huh… I’ve seen this comment a lot in this thread but I’ve really been impressed with both Anthropic’s latest models and latest tooling (plugins like /frontend-design mean it actually designs real front ends instead of the vibe coded purple gradient look). And I see it doing more planning and making fewer mistakes than before. I have to do far less oversight and debugging broken code these days.
But if people really like Codex better, maybe I’ll try it. I’ve been trying not to pay for 2 subscriptions at once but it might be worth a test.
I asked Codex 5.3 and Opus 4.6 to write me a macos application with a certain set of requirements.
Opus 4.6 wrote me a working macos application.
Codex wrote me a html + css mockup of a macos application that didn't even look like a macos application at all.
Opus 4.5 was fine, but I feel that 4.6 is more often on the money on its implementations than 4.5 was. It is just slower.
I agree with you. Codex 5.3 is good it's just a bit slower.
The rate limit for my $20 OpenAI / Codex account feels 10x larger than the $20 claude account.
YES. I hit the rate limit in about ~15 mins on Claude. But it will take me a few hours with Codex. A/B testing them on the same tasks. Same $20/mo.
I was underwhelmed by Opus 4.6. I didn't get a sense of significant improvement, but the token usage was excessive to the point that I dropped the subscription for Codex. I suspect that all the models are so glib that they can create a quagmire for themselves in a project. I have not yet found a satisfying strategy for non-destructive resets when the system's own comments and notes poison new output. Fortunately, deleting and starting over is cheap.
No offense, but this is the most predictable outcome ever. The software industry at large does this over and over again and somehow we're surprised. Provide a thing for free or for cheap, and then slowly draw back availability once you have dominant market share or find yourself needing money (ahem).
The providers want to control what AI does to make money or dominate an industry so they don't have to make their money back right away. This was inevitable, I do not understand why we trust these companies, ever.
because it's easier than paying $50k for local llm setup that might not last 5 years.
No offense taken here :)
First, we are not talking about a cheap service here. We are talking about a monthly subscription which costs 100 USD or 200 USD per month, depending on which plan you choose.
Second, it's like selling me a pizza and expecting that I only eat it while sitting at your table. I want to eat the pizza at home. I'm not getting 2-3 more pizzas, I'm still getting the same pizza others are getting.
It's the most overrated model there is. I do Elixir development primarily and the model sucks balls in comparison to Gemini and GPT-5x. But the Claude fanboys will swear by it and will attack you if you ever say even something remotely negative about their "god sent" model. It fails miserably even in basic chat and research contexts and constantly goes off track. I wired it up to fire up some tasks. It kept hallucinating and swearing it did when it didn't even attempt to. It was so unreliable I had to revert to Gemini.
It might simply be that it was not trained enough in Elixir RL environments compared to Gemini and gpt. I use it for both ts and python and it's certainly better than Gemini. For Codex, it depends on the task.
> I’m only waiting for OpenAI to provide an equivalet ~100 USD subscription to entirely ditch Claude.
I have a feeling Anthropic might be in for an extremely rude awakening when that happens, and I don’t think it’s a matter of “if” anymore.
> And don't even let me get started on how bad Claude client has become
The latest versions of claude code have been freezing and then crashing while waiting on long running commands. It's pretty frustrating.
My favorite conspiracy explanation:
Claude has gotten a lot of popular media attention in the last few weeks, and the influx of users is constraining compute/memory on an already compute heavy model. So you get all the suspected "tricks" like quantization, shorter thinking, KV cache optimizations.
It feels like the same thing that happened to Gemini 3, and what you can even feel throughout the day (the models seem smartest at 12am).
Dario in his interview with dwarkesh last week also lamented the same refrain that other lab leaders have: compute is constrained and there are big tradeoffs in how you allocate it. It feels safe to reason then that they will use any trick they can to free up compute.
all this because of a single week?
No, it's not the first time their models degrade for some time.
No developer writes the same prompt twice. How can you be sure something has changed?
I regularly run the same prompts twice and through different models. Particularly, when making changes to agent metadata like agent files or skills.
At least weekly I run a set of prompts to compare codex/claude against each other. This is quite easy the prompt sessions are just text files that are saved.
The problem is doing it enough for statistical significance and judging the output as better or not.
I suspect you may not be writing code regularly... If I have to ask Claude the same things three times and it keeps saying "You are right, now I've implemented it!" and the code is still missing 1 out of 3 things or worse, then I can definitely say the model has become worse (since this wasn't happening before).
When I use Claude daily (both professionally and personally with a Max subscription), there are things that it does differently between 4.5 and 4.6. It's hard to point to any single conversation, but in aggregate I'm finding that certain tasks don't go as smoothly as they used to. In my view, Opus 4.6 is a lot better at long standing conversations (which has value), but does worse with critical details within smaller conversations.
A few things I've noticed:
* 4.6 doesn't look at certain files that it used to
* 4.6 tends to jump into writing code before it's fully understood the problem (annoying but promptable)
* 4.6 is less likely to do research, write to artifacts, or make external tool calls unless you specifically ask it to
* 4.6 is much more likely to ask annoying (blocking) questions that it can reasonably figure out on its own
* 4.6 is much more likely to miss a critical detail in a planning document after being explicitly told to plan for that detail
* 4.6 needs to more proactively write its memories to file within a conversation to avoid going off track
* 4.6 is a lot worse about demonstrating critical details. I'm so tired of it explaining something conceptually without it thinking about how it implements details.
Ralph Wiggum would like a word
The economic tension here is pretty clear: flat-rate subscriptions are loss leaders designed to hook developers into the ecosystem. Once third parties can piggyback on that flat rate, you get arbitrage - someone builds a wrapper that burns through $200/month worth of inference for $20/month of subscription cost, and Anthropic eats the difference.
What is interesting is that OpenAI and GitHub seem to be taking the opposite approach with Copilot/OpenCode, essentially treating third-party tool access as a feature that increases subscription stickiness. Different bets on whether the LTV of a retained subscriber outweighs the marginal inference cost.
Would not be surprised if this converges eventually. Either Anthropic opens up once their margins improve, or OpenAI tightens once they realize the arbitrage is too expensive at scale.
these subscriptions have limits.. how could someone use $200 worth on $20/month.. is that not the issue with the limits they set on a $20 plan, and couldn't a claude code user use that same $200 worth on $20/month? (and how do i do this?)
The limits in the max subscriptions are more generous and power users are generating loss.
I'm rather certain, though cannot prove it, that buying the same tokens would cost at least 10x more if bought from API. Anecdotally, my cursor team usage was getting to around 700$ / month. After switching to claude code max, I have so far only once hit the 3h limit window on the 100$ sub.
What I'm thinking is that Anthropic is making a loss on users who use it a lot, but there are a lot of users who pay for Max and don't actually use it.
With the recent improvements and increase of popularity in projects like OpenClaw, the number of users that are generating loss has probably massively increased.
I'd agree on this. I ended up picking up a Claude Pro sub and am very less than impressed at the volume allowance. I generally get about a dozen queries (including simple follow up/refinements/corrections) across a relatively small codebase, with prompts structured to minimize the parts of the code touched - and moving onto fresh contexts fairly rapidly, before getting cut off for their ~5 hour window. Doing that ~twice a day ends up getting cut off on the weekly limit with about a day or two left on it.
I don't entirely mind, and am just considering it an even better work:life balance, but if this is $200 worth of queries, then all I can say is LOL.
The usage limit on your $20/month subscription is not $20 of API tokens (if it were, why subscribe?). It's much, much higher, and you can hit the equivalent of $20 of API usage in a few days.
The median subscriber generates about 50% gross margin, but some subscribers use 10x the inference compute of other subscribers (due to using it more...), and it's a positively skewed distribution.
I don't think it's a secret that AI companies are losing a ton of money on subscription plans. Hence the stricter rate limits, new $200+ plans, push towards advertising etc. The real money is in per-token billing via the API (and large companies having enough AI FOMO that they blindly pay the enormous invoices every month).
They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.
Banning third-party tools has nothing to do with rate limits. They're trying to position themselves as the Apple of AI companies - a walled garden. They may soon discover that screwing developers is not a good strategy.
They are not 10× better than Codex; on the contrary, in my opinion Codex produces much better code. Even Kimi K2.5 is a very capable model that I find on par with Sonnet at least, and very close to Opus. Forcing people to use ONLY a broken Claude Code UX with a subscription only ensures they lose the advantage they had.
> "just a few dollars per million tokens"
Google AI Pro is like $15/month for practically unlimited Pro requests, each of which can take a million tokens of context (and then also perform thinking, free Google Search grounding, and inline image generation if needed). This includes Gemini CLI, Gemini Code Assist (VS Code), the main chatbot, and a bunch of other vibe-coding products which have their own rate limits or no rate limits at all.
It's crazy to think this is sustainable. It'll be like Xbox Game Pass - start at £5/month to hook people in and before you know it it's £20/month and has nowhere near as many games.
I’m not familiar with the Claude Code subscription, but with Codex I’m able to use millions of tokens per day on the $200/mo plan. My rough estimate was that if I were API billing, it would cost about $50/day, or $1200/mo. So either the API has a 6x profit margin on inference, the subscription is a loss leader, or they just rely on most people not to go anywhere near the usage caps.
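To make that kind of arbitrage concrete, here's a rough back-of-the-envelope sketch (Python). The per-token prices and daily volumes below are placeholder assumptions for illustration only, not any provider's actual rate card - plug in your own numbers from your usage dashboard:

    # Rough sketch: what subscription-covered usage would cost at metered API rates.
    # All prices and volumes below are assumptions, not real Anthropic/OpenAI figures.
    INPUT_PRICE_PER_MTOK = 3.00    # assumed $ per million input tokens
    OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $ per million output tokens

    def monthly_api_cost(input_mtok_per_day: float, output_mtok_per_day: float, days: int = 30) -> float:
        """Estimate monthly API cost for a given daily token volume (in millions of tokens)."""
        daily = input_mtok_per_day * INPUT_PRICE_PER_MTOK + output_mtok_per_day * OUTPUT_PRICE_PER_MTOK
        return daily * days

    # A heavy agentic user pushing ~15M input and ~0.3M output tokens per day:
    print(monthly_api_cost(15, 0.3))  # ~$1,485/month at these assumed rates, vs. a $200/month subscription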
Inference might be cheap, but I'm 100% sure Anthropic has been losing quite a lot of money on their subscription pricing with power users. I can literally see the comparison between what my colleagues' Claude usage costs with an API key vs. with a personal subscription, and the delta is just massive.
I wonder how many people have a subscription and don’t fully utilize it. That’s free money for them, too.
Of course they bundle R&D with inference pricing; how else could they recoup that investment?
The interesting question is: In what scenario do you see any of the players as being able to stop spending ungodly amounts for R&D and hardware without losing out to the competitors?
Didn't OpenAI spend like 10 billion on inference in 2025? Which is around the same as their total revenue?
Why do people keep saying inference is cheap if they're losing so much money from it?
What walled garden man? There’s like four major API providers for Anthropic.
Except all those GPUs running inference need to be replaced every 2 years.
> They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.
You've described every R&D company ever.
"Synthesizing drugs is cheap - just a few dollars per million pills. They're trying to bundle pharmaceutical research costs... etc."
There's plenty of legit criticisms of this business model and Anthropic, but pointing out that R&D companies sink money into research and then charge more than the marginal cost for the final product, isn't one of them.
"They're not losing money on subscriptions, it's just their revenue is smaller than their costs". Weird take.
The secret is there is no path on making that back.
the path is by charging just a bit less than the salary of the engineers they are replacing.
My crude metaphor to explain it to my family is that gasoline has just been invented and we're all being lent Bentleys to get us addicted to driving everywhere. Eventually we won't be given free Bentleys, and someone is going to be holding the bag when the infinite money machine finally has a hiccup. The tech giants are hoping their gasoline is the one we all crave once we're left depending on driving everywhere and the costs go soaring.
How do I figure out what sustainable pricing would be?
Depends on how you do the accounting. Are you counting only inference costs, or are you amortizing next-gen model development costs? "Inference is profitable" is oft repeated and rarely challenged. Most subscription users are low-intensity users, after all.
I agree; unfortunately, when I brought up before that they're losing money, I got jumped on by people demanding I "prove it", and I guess pointing at their balance sheets isn't good enough.
The question I have: how much are they _also_ losing on per-token billing?
From what I understand, they make money on per-token billing. Not enough to cover how much it costs to train, and not accounting for marketing, subscription services, and research for new models, but every token bought via the API means they lose less money.
Finance 101 TL;DR explanation: the contribution margin (= price per token - variable cost per token) is positive.
Profit = contribution margin x quantity - fixed costs.
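A toy worked example of those two formulas - every number below is a made-up assumption for illustration, not Anthropic's actual pricing or cost structure - showing how positive unit economics can coexist with an overall loss:

    # Toy illustration of contribution margin vs. profit (all figures are assumptions).
    price_per_mtok = 15.00         # revenue per million tokens served
    variable_cost_per_mtok = 4.00  # power, GPU depreciation, serving overhead
    fixed_costs = 2_000_000_000    # training runs, research, salaries for the period

    contribution_margin = price_per_mtok - variable_cost_per_mtok  # $11 > 0, so inference "is profitable"
    units_sold = 100_000_000       # million-token units sold in the period

    profit = contribution_margin * units_sold - fixed_costs
    print(profit)  # 11 * 1e8 - 2e9 = -900,000,000: still a loss overall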
Why do you think they're losing money on subscriptions?
Does a GPU doing inference serve enough customers, for long enough, to bring in enough revenue to pay for a replacement GPU in two years (plus the power/running cost of the GPU and the surrounding infrastructure)? That's the question you need to be asking.
If the answer is yes, then they are making money on inference. If the answer is no, the market is going to have a bad time.
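A quick sketch of that payback question - all figures below are illustrative assumptions, not real hardware, power, or revenue numbers:

    # Does one inference GPU earn back its own replacement over a ~2 year life?
    # Every number here is an assumption for illustration only.
    gpu_cost = 30_000.0                  # assumed all-in cost of one accelerator
    power_and_hosting_per_hour = 1.50    # assumed running cost
    revenue_per_hour = 4.00              # assumed inference revenue attributable to it
    lifetime_hours = 2 * 365 * 24        # the "replace every 2 years" horizon

    total_cost = gpu_cost + power_and_hosting_per_hour * lifetime_hours
    total_revenue = revenue_per_hour * lifetime_hours
    print(total_revenue - total_cost)    # positive (~$13,800 here) means inference pays for the refresh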
Because they're not saying they are making a profit
But why does it matter which program you use to consume the tokens?
That sounds like a confession that Claude Code is somewhat wasteful with token use.
No, it's a confession they have no moat other than trying to hold onto the best model for a given use case.
I find that competitive edge unlikely to last meaningfully in the long term, but this is still a contrarian view.
More recently, people have started to wise up to the view that the value is in the application layer
https://www.iconiqcapital.com/growth/reports/2026-state-of-a...
Honestly, I think I am already sold on AI. Who is the first company that is going to show us all how much it really costs and start the enshittification? First to market wins, right?
Not according to this guy who works on Claude Code: https://x.com/trq212/status/2024212378402095389?s=20
What a PR nightmare, on top of an already bad week. I’ve seen 20+ people on X complaining about this and the related confusion.
No, it is prohibited. They're just updating the docs to be clearer about their position, which hasn't changed. Their docs were unclear about it.
Yes, it was always prohibited, hence the OpenCode situation one or two months ago.
They really need to correct that. I understand jack shit. Is OpenClaw banned under these terms? Or just abuse where I build a business on top of it? And why does it matter anyway? I have my token restrictions... so let me do what I want.
woof, does Anthropic not have a comms team and a clear comms policy for employees that aren’t on that comms team?
Probably not, they’re like four years old and they’re 2500 people at the company. My guess is that there are but a handful of PMs.
Incorrect, the third-party usage was already blocked (banned) but it wasn't officially communicated or documented. This post is simply identifying that official communication rather than the inference of actual functionality.
I've been paying for a Max subscription for a long time. I like their model, but I hate their tools:
- Claude Desktop looks like a demo app. It's slow to use and so far behind the Codex app that it's embarrassing.
- Claude Code is buggy as hell, and I think I've never used a CLI tool that consumes so much memory and CPU. Let's not talk about the feature parity with other agents.
- Claude Agent SDK is poorly documented, half finished, and is just thin wrapper around a CLI tool…
Oh and none of this is open source, so I can do nothing about it.
My only option to stay with their model is to build my own tool. And now I discover that using my subscription with the Agent SDK is against the terms of use?
I'm not going to pay 500 USD of API credits every month, no way. I have to move to a different provider.
I agree that Claude Code is buggy as hell, but:
> Let's not talk about the feature parity with other agents.
What do you mean feature parity with other agents? It seems to me that other CLI agents are quite far from Claude Code in this regard.
Which other CLI agents are that? Because I've found OpenCode to be A LOT better than Claude-Code.
>I'm not going to pay 500 USD of API credits every month, no way. I have to move to a different provider
It's funny, you are probably in the cohort that made Anthropic pursue this type of decision so aggressively.
> Claude Code is buggy as hell and I think I've never used a CLI tool that consumes so much memory and CPU
FWIW this aligns completely with the LLM ethos. Inefficiency is a virtue.
I had a Claude code instance using 55 GB of RAM yesterday.
I got so tired of cursor that I started writing down every bug I encountered. The list is currently at 30 entries, some of them major bugs such as pressing "apply" on changes not actually applying changes or models getting stuck in infinite loops and burning 50 million tokens.
I tried to have Cursor change a list of US States and Provinces from a list to a dictionary and it did, but it also randomly deleted 3 states.
I regret ever promoting that Claude Code crap. I remember when it was nothing but glowing reviews everywhere. Honestly AI companies should stick to what they are good at: direct API interface to powerful models.
We are heading toward a $1000/month model just to use LLMs in the cloud.
Your core customers are clearly having a blast building their own custom interfaces, so obviously the thing to do is update TOS and put a stop to it! Good job lol.
I know, I know, customer experience, ecosystem, gardens, moats, CC isn't fat, just big boned, I get it. Still a dick move. This policy is souring the relationship, and basically saying that Claude isn't a keeper.
I'll keep my eye-watering sub for now because it's still working out, but this ensures I won't feel bad about leaving when the time comes.
Update: yes yes, API, I know. No, I don't want that. I just want the expensive predictable bill, not metered corporate pricing just to hack on my client.
They'll all do this eventually.
We're in the part of the market cycle where everyone fights for marketshare by selling dollar bills for 50 cents.
When a winner emerges they'll pull the rug out from under you and try to wall off their garden.
Anthropic just forgot that we're still in the "functioning market competition" phase of AI and not yet in the "unstoppable monopoly" phase.
"Naveen Rao, the Gen AI VP of Databricks, phrased it quite well:
all closed AI model providers will stop selling APIs in the next 2-3 years. Only open models will be available via APIs (…) Closed model providers are trying to build non-commodity capabilities and they need great UIs to deliver those. It's not just a model anymore, but an app with a UI for a purpose."
~ https://vintagedata.org/blog/posts/model-is-the-product A. Doria
> new Amp Free ($10) access is also closed off as of last night
Unstoppable monopoly will be extremely hard to pull off given the number of quality open (weights) alternatives.
I only use LLMs through OpenRouter and switch somewhat randomly between frontier models; they each have some amount of personality but I wouldn't mind much if half of them disappeared overnight, as long as the other half remained available.
> They'll all do this eventually
And if the frontier continues favouring centralised solutions, they'll get it. If, on the other hand, scaling asymptotes, the competition will be running locally. Just looking at how much Claude complains about me not paying for SSO-tier subscriptions to data tools when they work perfectly fine in a browser is starting to make running a slower, less-capable model locally competitive with it in some research contexts.
Imagine having a finite pool of GPUs worth more than their weight in gold, and an infinite pool of users obsessed with running as many queries against those GPUs in parallel as possible, mostly to review and generate copious amounts of spam content primarily for the purposes of feeling modern, and all in return for which they offer you $20 per month. If you let them, you must incur as much credit liability as OpenAI. If you don't, you get destroyed online.
It almost makes me feel sorry for Dario despite fundamentally disliking him as a person.
Hello old friend, I've been expecting you.
First of all, custom harness parallel agent people are so far from the norm, and certainly not on the $20 plan, which doesn't even make sense because you'd hit token limit in about 90 seconds.
Second, token limits. Does Anthropic secretly have over-subscription issues? Don't know, don't care. If I'm paying a blistering monthly fee, I should be able to use up to the limit.
Now I know you've got a clear view of the typical user, but FWIW, I'm just an aging hacker using CC to build some personal projects (feeling modern ofc) but still driving, no yolo or gas town style. I've reached the point where I have a nice workflow, and CC is pretty decent, but it feels like it's putting on weight and adding things I don't want or need.
I think LLMs are an exciting new interface to computers, but I don't want to be tied to someone else's idea of a client, especially not one that's changing so rapidly. I'd like to roll my own client to interface with the model, or maybe try out some other alternatives, but that's against the TOS, because: reasons.
And no, I'm not interested in paying metered corporate rates for API access. I pay for a Max account, it's expensive, but predictable.
The issue is Anthropic is trying to force users into using their tool, but that's not going to work for something as generic as interfacing with an LLM. Some folks want emacs while others want vim, and there will never be a consensus on the best editor (it's nvim btw), because developers are opinionated and have strong preferences for how they interface with computers. I switched to CC maybe a year ago and haven't looked back, but this is a major disappointment. I don't give a shit about Anthropic's credit liability, I just want the freedom to hack on my own client.
Why do you fundamentally dislike him as a person?
The only thing I've seen from him that I don't like is the "SWEs will be replaced" line (which is probably true and it's more that I don't like the factuality of it).
Don’t be mad at it, be happy you were able to throw some of that sweet free vc money at your hobbies instead of paying the market rate.
Oh I'm not mad, it's more of a sad clown type of thing. I'm still stoked to use it for now. We can always go back to the old ways if things don't work out.
They offer an API for people who want to build their own clients. They didn't stop people from being able to use Claude.
at a significantly higher price... which of course is why they're doing this.
That's what the API is for.
So basically you are saying Anthropic models are indispensable but you are too cheap to pay for it.
Nowhere did I say they're indispensable, and I explicitly said I'm still paying for it. If all AI companies disappear tomorrow, that's fine. I'm just calling out what I think is a tone-deaf move by a company I pay a large monthly bill to.
Sure they are having a blast; they are paying $20 instead of getting charged hundreds for tokens.
It's simple, follow the ToS
Going to keep using the Agent SDK with my Pro subscription until I get banned. It's not OpenClaw, it's my own project. It started by just proxying requests to Claude Code through the command line; the SDK just made it easier. Not sure what difference it makes to them if I have a cron job sending Claude Code requests or an Agent SDK request. Maybe if it's just me and my toy they don't care. We'll see how they clarify it tomorrow.
AI is the new high-end gym membership. They want you to pay the big fee and then not use what you paid for. We'll see more and more roadblocks to usage as time goes on.
This was the analogy I was looking for! It feels like a very creepy way to make money, almost scammy, and the gym membership/overselling comparison hits the nail on the head.
This feels more like the gym owner clarifying it doesn't want you using their 24-hour gym as a hotel just because you find their benches comfortable to lie down on, rather than a "roadblock to usage"
Not really, these subscriptions have a clear and enforced 5h and weekly limit.
Sorry but if you're not paying the big fee there's no way you're going to have a job by the late-2020s.
The pressure is to boost revenue by forcing more people onto the API to generate huge numbers of tokens they can charge more for. LLMs are becoming common commodities as open-weight models keep catching up. There are similarities with pirating in the 90s, when users realized they could ctrl+c ctrl+v to copy a file/model and didn't need to buy a CD/use the paid API.
And that is how it should be - the knowledge that the LLM trained on should be free, and cannot (and should never be) gatekept behind money.
It's merely the hardware that should be charged for - which ought to drop in price if/when the demand for it rises. However, this is a bottleneck at the moment, and hard to see how it gets resolved amidst the current US environment on sanctioning anyone who would try.
Is there no value in how the training was done such that it's accessible via inference in a particularly useful way?
No, a lot of the data they were trained on was pirated.
I think I've made two good decisions in my life. The first was switching entirely to Linux around '05, even though it was a giant pain in the ass that was constantly behind the competition in terms of stability and hardware support. It took a while, but wow, no regrets.
The second appears to be hitching my wagon to Mistral even though it's apparently nowhere as powerful or featureful as the big guys. But do you know how many times they've screwed me over? Not once.
Maybe it's my use cases that make this possible. I definitely modified my behavior to accommodate Linux.
They're too small to screw you over. But you've got more time until they do at least.
Is it me, or will this just speed up the timeline where a 'good enough' open model (Qwen? DeepSeek? I'm sure the Chinese will see value in undermining OpenAI/Anthropic/Google) combined with good enough/cheap hardware (10x inference improvement in an M7 MacBook Air?) makes running something like OpenCode locally a no-brainer?
The good enough alternative models are here or will be soon, depending on your definition of good enough. MiniMax-M2.5 looks really competitive and it's a tenth of the cost of Sonnet-4.6 (they also have subscriptions).
Running locally is going to require a lot of memory, compute, and energy for the foreseeable future which makes it really hard to compete with ~$20/mo subscriptions.
Personally I am already there- I go to Qwen and Deepseek locally via ollama for my dumb questions and small tasks, and only go to Claude if they fail. I do this partially because I am just so tired of everything I do over a network being logged, tracked, mined and monetized, and also partially because I would like my end state to be using all local tools, at least for personal stuff.
People running models locally has always been the scare for the sama's of the world. "Wait, I don't need you to generate these responses for me? I can get the same results myself?"
He can't buy all the RAM
This is how you gift-wrap the agentic era to the open-source Chinese LLMs. Devs don't need the best model, they need one without lawyers attached.
Introducing moderation, steerage, and censorship in your LLM is a great way to not even show up to the table with a competitive product. Builders have woken up to this reality and are demanding local models.
I just cancelled my Pro subscription. Turns out that Ollama Cloud with GLM-5 and qwen-coder-next is very close in quality to Opus, I never hit their rate limits even with two sessions running the whole day, and there's zero advantage for me in using Claude Code compared to OpenCode.
Is that on the $20 sub?
Thariq has clarified that there are no changes to how the SDK and Max subscriptions work:
https://x.com/i/status/2024212378402095389
---
On a different note, it's surprising that a company that size has to clarify something as important as ToS via X
> On a different note, it's surprising that a company that size has to clarify something as important as ToS via X
Countries clarify national policy on X. Seriously, it feels like half of the EU parliament lives on Twitter.
Which makes the whole 'EU first' movement look super weak when the politicians are among the worst offenders.
FYI a Twitter post that contradicts the ToS is NOT a clarification.
What's wrong with using X?
In the case you are asking in good faith, a) X requires logging in to view most of its content, which means that much of your audience will not see the news because b) much of your audience is not on X, either due to not having social media or have stopped using X due to its degradation to put it generally.
Not bad per se, but how much legal weight does it actually carry?
I presume zero... but nonetheless it seems like people will take it as valid anyway.
That can be dangerous, I think.
ideologically or practically?
There are a million small-scale AI apps that just aren't worth building because there's no way to do the billing that makes sense. If Anthropic wanted to own that market, they could introduce a bring-your-own-Claude scheme, where you log in with Claude and token costs get billed to your personal account (after some reasonable monthly freebies from your subscription).
But the big guys don’t seem interested in this, maybe some lesser known model will carve out this space
This is going to happen. Unfortunately.
I shudder to think what the industry will look like if software development and delivery become like YouTubing, where the whole stack and monetization are funneled through a single company (or a couple) that gets to decide who makes how much money.
I am a bit worried that this is the situation I am in with my (unpublished) commercial app right now: one of the major pain points I have is that while I have no doubt the app provides value in itself, I am worried about how many potential users will actually accept paying inference per token...
As an independent dev I also unfortunately don't have investors backing me to subsidize inference for my subscription plan.
I recommend Kimi. People can haggle their way to a cheap first month and use that to try out your project, and the best part is that Kimi intentionally supports API usage on any of their subscription plans. They also recently changed their billing to be more token-usage based, like the others, instead of their previous tool-calling limits.
It's seriously one of the best models - very comparable to Sonnet/Opus, although Kimi isn't the best at coding. I think it's a really great, solid model overall and might just be worth it in your use case.
Is the use case extremely coding-intensive (where even a minor improvement can matter at 10-100x the cost), or more general? Because if not, then I can recommend Kimi.
>> there’s a million small scale AI apps that just aren’t worth building because there’s no way to do the billing that makes sense
Maybe they are not worth building at all then. Like MoviePass wasn’t.
I got banned for violating the terms of use apparently, but I'm mystified as to what rule I broke, and appealing just vanishes into the ether.
Two accounts of mine were banned for some reason and my sub was refunded. Literally from just inane conversations. Conversations also disappear and break randomly, but this happens on ChatGPT too sometimes
In enterprise software, this is an embedded/OEM use case.
And historically, embedded/OEM use cases have always had different pricing models, for a variety of reasons.
How is this any different than this long established practice?
It's not, but do you really think the people having Claude build wrappers around Claude were ever aware of how services like this are typically offered.
From the legal docs:
> Authentication and credential use
> Claude Code authenticates with Anthropic’s servers using OAuth tokens or API keys. These authentication methods serve different purposes:
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.
> Developers building products or services that interact with Claude’s capabilities, including those using the Agent SDK, should use API key authentication through Claude Console or a supported cloud provider. Anthropic does not permit third-party developers to offer Claude.ai login or to route requests through Free, Pro, or Max plan credentials on behalf of their users.
> Anthropic reserves the right to take measures to enforce these restrictions and may do so without prior notice.
why wouldn't they just make it so the SDK can't use claude subs? like what are they doing here?
When your company happens upon a cash cow, you can either become a milk company or a meat company.
Anthropic is dead. Long live open platforms and open-weight models. Why would I need Claude if I can get Minimax, Kimi, and Glm for the fraction of the price?
To get comparable results you need to run those models on at least prosumer hardware, and it seems that two beefed-up Mac Studios are the minimum. Which means that instead of buying this hardware you could purchase Claude, Codex, and many other subscriptions for the next 20 years.
Or you purchase a year's worth of almost unlimited MiniMax coding plan for a price you'd pay for 15 days of limited Claude usage.
And as a bonus, you can choose your harness. You don't have to suffer CC.
And if something better appears tomorrow, you switch your model, while still using your harness of choice.
OK I hope someone from anthropic reads this. Your API billing makes it really hard to work with it in India. We've had to switch to openrouter because anthropic keeps rejecting all the cards we have tried. And these are major Indian banks. This has been going on for MONTHS
It’s the same here in Hong Kong. I can’t use any of my cards (personal or corporate) for OpenAI or Anthropic.
Have to do everything through Azure, which is a mess to even understand.
Why does it matter to Anthropic if my $200 plan usage is coming from Claude Code or a third party?
Doesn’t both count towards my usage limits the same?
If you buy a 'Season Pass' for Disneyland, you can't 'sublet' it to another kid to use on the days you don't; it's not really buying a 'daily access rate'.
Anthropic subs are not 'bulk tokens'.
It's not an unreasonable policy, and it's entirely inevitable that they have to restrict it.
I’m not subletting my sub to anyone. I’m the only one using the third party harness.
I’m using their own SDK in my own CLI tool.
It’s still me going to Disneyland, I just take a different route
Disingenuous analogy.
It's more like buying a season pass for Disneyland, then getting told you can't park for free when entering the park even though free parking is included with the pass. Still not unreasonable, but it brings to light that the intention of the tool is to force the user into an ecosystem.
They don't get as much visibility into your data, just the actual call to/from the api. There's so much more value to them in that, since you're basically running the reinforcement learning training for them.
Increasing the friction of switching providers as much as possible is part of their strategy to push users to higher subscription tiers and deny even scraps to their competitors.
Probably because the $20 plan is essentially a paid demo for the higher plans.
They're losing money on this $200 plan and they're essentially paying you to make you dependent on Claude Code so they can exploit this (somehow) in the future.
It's a bizarre plan because nobody is 'dependent' on Claude Code; we're begging to use alternatives. It's the model we want!
When using Claude Code, it's possible to opt out of having one's sessions be used for training. But is that opt out for everything? Or only message content, such that there could remain sufficient metadata to derive useful insight from?
Any user who is using a third-party client is likely self-selected into being a power user who is less profitable.
At this point, where Kimi K2.5 on Bedrock with a simple open-source harness like pi is almost as good, the big labs will soon have to compete for users... OpenAI seems to know that already? While Anthropic bans, bans, bans.
Do you know by any chance if Bedrock custom model import also works with on-demand use, without any provisioned capacity? I'm still puzzled why they don't offer all qwen3 models on Bedrock by default.
I see a lot of Qwen3 in us-west-2, and I have no experience with custom models on Bedrock.
That page is... confusing.
> Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK.
This is literally the last sentence of the paragraph before the "Authentication and credential use"
I would expect it's still only enforced in a semi-strict way.
I think what they want to achieve here is less "kill OpenClaw" or similar and more "keep our losses under control in general". And now they have a clear criterion to point to when they take action, and a good way to decide whom to act on.
If your usage is high, they would block you or take action. Because if you have your Max subscription and aren't really losing them money, why should they push you away (the monopoly incentive sounds wrong in the current market)?
Openclaw is unaffected by this as the Claude Code CLI is called directly
Many people use the Max subscription OAuth token in OpenClaw. The main chat, heartbeat, etc., functionality does not call the Claude Code CLI. It uses the API authenticated via subscription OAuth tokens, which is precisely what Anthropic has banned.
There are many other options too: direct API, other model providers, etc. But Opus is particularly good for "agent with a personality" applications, so it's what thousands of OpenClaw users go with, mostly via the OAuth token, because it's much cheaper than the API.
Their moat is evaporating before our eyes. Anthropic is Microsoft's side piece, but Microsoft is married with kids to OpenAI.
And OpenAI just told Microsoft why they shouldn't be seeing Anthropic anymore: GPT-5.3-Codex.
RIP Anthropic.
And because of this I'll obviously opt not to subscribe to a Claude plan, when I can just use something like Copilot and access the models that way via OpenCode.
how comparable are the usage limits?
Is this a direct shot at things like OpenClaw, or am I reading it wrong?
They even block Claude Code if you've modified it via tweakcc. When they blocked OpenCode, I ported a feature I wanted to Claude Code so I could continue using that feature. After a couple of days, they started blocking it with the same message that OpenCode gets. I'm going to go down to the $20 plan and shift most of my work to OpenAI/ChatGPT because of this. The harness features matter more to me than model differences in the current generation.
Opencode as well. Folks have been getting banned for abusing the OAuth login method to get around paying for API tokens or whatever. Anthropic seems to prefer people pay them.
It's not that innocent.
A 200-dollar-a-month customer isn't trying to get around paying for tokens, they're trying to use the tooling they prefer. OpenCode is better in a lot of ways.
Tokens get counted against usage limits anyway; unless they're trying to keep analytics that are CC-exclusive, they should allow paying customers to consume up to the usage limits in whatever way they want to use the models.
I wonder if it has to do with Grok somehow. They had a suspiciously high reputation until they just binarily didn't, after Anthropic said they did something.
For sure, yes. They already added attempts to block opencode, etc.
The fundamental tension here is that AI companies are selling compute at a loss to capture market share, while users are trying to maximize value from their subscriptions.
From a backend perspective, the subscription model creates perverse incentives. Heavy users (like developers running agentic workflows) consume far more compute than casual users, but pay the same price. Third-party tools amplify this asymmetry.
Anthropic's move is economically rational but strategically risky. Models are increasingly fungible - Gemini 3.1 and Claude 4.5 produce similar results for most tasks. The lock-in isn't the model; it's the tooling ecosystem.
By forcing users onto Claude Code exclusively, they're betting their tooling moat is stronger than competitor models. Given how quickly open-source harnesses like pi have caught up, that's a bold bet.
Is the tooling moat and secret sauce in Claude Code the client? That's super risky given the language it was written in (JavaScript). I bet Claude Code itself can probably reverse engineer the minified JavaScript, trace the logic, and then rename variables to something sensible for readability. Then the secret sauce is exposed for all to see.
Also, can you not set up a proxy for the cert and a packet sniffer to watch whatever Claude Code is doing with respect to API access? To me, if you have "secret sauce" you have to keep it server-side and make the client as dumb as possible. Especially if your client executes as JavaScript.
There is a new breed of agent-agnostic tools that call the Claude Code CLI as if it's an API (I'm currently trying out vibe-kanban).
This could be used to adhere to Claude's TOS while still allowing the user to switch AI companies at a moment's notice (a rough sketch of the CLI-as-API idea follows at the end of this comment).
Right now there's limited customizability in this approach, but I think it's not far-fetched to see FAR more integrated solutions in the future if the lock-in trend continues. For example: one MCP that you can configure into a coding agent like Claude Code that overrides its entire behavior (tools, skills, etc.) to a different unified open-source system. Think something similar to the existing IntelliJ IDEA's MCP that gives a separate file edit tool, etc. than the one the agent comes with.
Illustration of what i'm talking about:
- You install Claude Code with no configuration
- Then you install the meta-agent framework
- With one command the meta-agent MCP is installed in Claude Code, built-in tools are disabled via permissions override
- You access the meta-agent through a different UI (similar to vibe-kanban's web UI)
- Everything you do gets routed directly to Claude Code, using your Claude subscription legally. (Input-level features like commands get resolved by meta-agent UI before being sent to claude code)
- Claude Code must use the tools and skills directly from meta-agent MCP as instructed in the prompt, and because its own tools are permission denied (result: very good UI integration with the meta-agent UI)
- This would also work with any other CLI coding agent (Codex, Gemini CLI, Copilot CLI etc.) should they start getting ideas of locking users in
- If Claude Code rug-pulls subscription quotas, just switch to a competitor instantly
All it requires is a CLI coding agent with MCP support, and the TOS allowing automatic use of its UI (disallowing that would be massive hypocrisy as the AI companies themselves make computer use agents that allow automatic use of other apps' UI)
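For what it's worth, a minimal sketch of that CLI-as-API pattern (Python). It assumes a `claude` binary on your PATH and the non-interactive `claude -p` mode that other commenters mention; no other flags or internals are assumed:

    # Minimal sketch: treat the Claude Code CLI as the "API" from a meta-agent/UI layer.
    # Assumes `claude` is installed and that `claude -p <prompt>` runs a single
    # non-interactive turn (as referenced elsewhere in this thread).
    import subprocess

    def ask_claude_code(prompt: str, timeout_s: int = 300) -> str:
        """Run one non-interactive Claude Code turn and return whatever it prints."""
        result = subprocess.run(
            ["claude", "-p", prompt],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        if result.returncode != 0:
            raise RuntimeError(f"claude exited with {result.returncode}: {result.stderr}")
        return result.stdout

    if __name__ == "__main__":
        print(ask_claude_code("Summarize the TODOs in this repository."))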
Could you think of it as Claude Code being just a tool used by another agent, where that other agent is instructed to use the Claude Code tool for everything? Makes sense; I don't see why we can't have agents use these agents for us, just like the AI companies are proposing to use their agents in place of everything else we currently use.
Also, why not distribute implementation documentation so Claude Code can write OpenCode itself and use your OAuth token? Now you have OpenCode for personal use; you didn't get it from anywhere, your agent created it for you and only you.
This is funny. This change actually pushes me into using a competitor more (https://www.kimi.com). I was trying out this provider with oh-my-pi (https://github.com/can1357/oh-my-pi) and was lamenting that it didn't have web search implemented using kimi.
Well, a kind contributor just added that feature specifically because of this ban (https://github.com/can1357/oh-my-pi/pull/110).
I'm happy as a clam now. Yay for competition!
For what it's worth, I built an alternative specifically because of the ToS risk. GhostClaw uses proper API keys stored in an AES-256-GCM + Argon2id encrypted vault - no OAuth session tokens, no subscription credentials, no middleman. Skills are signed with Ed25519 before execution. Code runs in a Landlock + seccomp kernel sandbox. If your key gets compromised you rotate it; if a session token gets compromised in someone else's app you might not even know.
It's open source, one Rust binary, ~6MB. https://github.com/Patrickschell609/ghostclaw
That should be illegal. They used the excuse that it was there for the taking, or just literally burnt the evidence of pirated books.
What they are doing is implicitly changing the contract of usage of their services.
What is the point of developing against the Agent SDK after this change?
OpenClaw, NanoClaw, et al. all use the Agent SDK, which will from now on be forbidden.
They are literally alienating a large percentage of OpenClaw, NanoClaw, and PicoClaw customers, because those customers will surely not be willing to pay API pricing, which is at least 6-10x Max plan pricing (for my usage).
This isn’t too surprising to me since they probably have a direct competitor to openclaw et al in the works right now, but until then I am cancelling my subscription and porting my nanoclaw fork with mem0 integration to work with OpenAI instead.
That's not a "That'll teach 'em" statement, it is just my own cost optimization. I am quite fond of Anthropic's coding models and might still subscribe again at the $20 level, but they just priced me out for personal assistant, research, and 90% of my token use case.
What does Anthropic have to gain from users who use a very high amount of tokens for OpenClaw, NanoClaw etc and pay them only $20?
How can they even enforce this? Can't you just spoof all your network requests to make them appear like they're coming from Claude Code?
In any case Codex is a better SOTA anyway, and they let you do this. And if you aren't interested in the best models, Mistral lets you use both Vibe and their API through your Vibe subscription API key, which is incredible.
> how can they even enforce this?
Many ways, and they’re under no obligation to play fair and tell you which way they’re using at any given time. They’ve said what the rules are, they’ve said they’ll ban you if they catch you.
So let’s say they enforce it by adding an extra nonstandard challenge-response handshake at the beginning of the exchange, which generates a token which they’ll expect on all requests going forward. You decompile the minified JS code, figure out the protocol, try it from your own code but accidentally mess up a small detail (you didn’t realize the nonce has a special suffix). Detected. Banned.
You’ll need a new credit card to open a new account and try again. Better get the protocol right on the first try this time, because debugging is going to get expensive.
Let’s say you get frustrated and post on Twitter about what you know so far. If you share info, they’ll probably see it eventually and change their method. They’ll probably change it once a month anyway and see who they catch that way (and presumably add a minimum Claude Code version needed to reach their servers).
They’ve got hundreds of super smart coders and one of the most powerful AI models, they can do this all day.
The internet has hundreds of thousands of super smart coders with the most powerful AI models as well; I think it's a bit harder than you're assuming.
You just need to inspect the network traffic from Claude Code and mimic that.
easily "bypassable", trust me :)
See my comment here, but I think instead of worrying about decompiling the minified JS code etc., you can just essentially use Claude Code in the background and still do it, even using OpenCode/its SDK, thus getting a sort of API access over the CC subscription: https://news.ycombinator.com/item?id=47069299#47070204
I am not sure how they can detect this. I can be wrong, I usually am, but I think it's still possible to use CC etc. even after this change if you really wanted to.
But at this point, the GP's question - is it even worth it? - is definitely what I am thinking about.
I think not. There are better options out there; they mentioned Mistral and Codex, and I think Kimi also supports this, maybe GLM/z.ai as well.
Pretty easy to enforce it - rather than make raw queries to the LLM Claude Code can proxy through Anthropic's servers. The server can then enforce query patterns, system prompts and other stuff that outside apps cannot override.
And once all the Claude subscribers move over to Codex subscriptions, I'd bet a large sum that OpenAI will make their own ToS update preventing automated/scripted usage.
They can't catch everything, but they can make the product you're building on top of it non-viable once it gets popular enough to look for, like they did with OpenCode.
At least with OpenCode you can just use a third-party plugin to authenticate.
> how can they even enforce this?
I would think that different tools would probably have different templates for their prompts?
You could tell by the prompt being used.
We don’t enforce speed limits, but it sucks when you get caught.
OpenAI will adjust, their investors will not allow money to be lost on ”being nice” forever, not until they’re handsomely paid back at least.
¡Quick reminder! We are in the golden era of big-company programming agents. Enjoy it while you can, because it is likely going to get worse over time. Hopefully, there will be competitive open-source agents and some benevolent nerds will put together a reasonable service. Otherwise I can see companies investing in their own AI infrastructure, and developers who build their own systems becoming the top performers.
This is the VC-funded startup playbook. It has been repeated many times, but maybe for the younger crowd it is new. Start a new service that is relatively permissive, then gradually restrict APIs and permissions. Finally, start throwing in ads and/or making it more expensive to use. Part of the reason is that in the beginning they are trying to get as many users as possible while burning VC money. Then once the honeymoon is over, they need to make a profit, so they cut back on services, nerf stuff, increase prices, and start adding ads.
This feels perfectly justifiable to me. The subscription plans are super cheap, and if they insist you use their tool, I understand. Y'all seem a bit entitled, if I'm being honest.
This article is somewhat reassuring to me, someone experimenting with openclaw on a Max subscription. But idk anything about the blog so would love to hear thoughts.
https://thenewstack.io/anthropic-agent-sdk-confusion/
In my opinion (which means nothing): if you are using your own hardware and not profiting directly from Claude's use (as in building a service powered by your subscription), I don't see how this is a problem. I am by no means blowing through my usage (usually <50% weekly with Max 5x).
I feel like they want to be like Apple, and open-code + open-source models are Linux. The thing is, Apple is (for some) way better in user experience and quality. I think they can pull it off only if they keep their distance from the others. But if Google/Chinese models become as good as Claude, then there won’t be a reason — at least for me — to pay 10x for the product
The analogy I like to use when people say "I paid" is that you can't pay for a buffet then get all the food take-home for free.
Not sure what the problem is. I am on Max and use Claude Code, never get usage issues; that's what I pay for and I want that to always be an option (capped monthly cost). For other uses it makes sense to go through their API service. This is less confusing and provides clarity for users: if you are a first-party user, use Claude's tools to access the models; otherwise, use the API.
OpenAI has endorsed OAuth from 3rd party harnesses, and their limits are way higher. Use better tools (OpenCode, pi) with an arguably better model (xhigh reasoning) for longer …
I am looking forward to switching to OpenAI once my claude max account is banned for using pi....
I wrote an MCP bridge so that I don't have to copy and paste prompts back and forth between the CLI and Claude, ChatGPT, Grok, Gemini:
https://github.com/agentify-sh/desktop
Does this mean I have to remove Claude now and go back to copy & pasting prompts for a subscription I am paying for?!
WTH happened to fair use?
That's it. That's all the moat they have.
Does this mean that in an absurd way you can get banned if you use CodexBar https://github.com/steipete/CodexBar to keep track of your usage? It does use your credentials to fetch the usage, could they be so extreme that this would be an issue?
This is a signal that everyone making AI apps should build on Gemini/OpenAI, and since there is a dance of code and model to get good results, Anthropic is now inevitably writing itself out of being the backend for everyone else's AI apps going forward.
Not surprised, it's the official stance of Anthropic.
I'm more surprised by people using subscription auth for OpenClaw when it's officially not allowed.
Their model actually doesn't have much of a moat, if any at all. Their agent harness also doesn't, at least not for long. Writing an agent harness isn't that difficult. They are desperately trying to stay in power. I don't like being a customer of this company and am investing lots of my time in moving away from them completely.
They are obviously losing money on these plans, just like all of the other companies in the space.
They are all desperately trying to stay in power, and this policy change (or clarification) is a fart in the wind in the grand scheme of what's going on in this industry.
I think that their main problem is that they don't have enough resources to serve too many users, so they resort to this kind of limitations to keep Claude usage under control. Otherwise I wouldn't be able to explain a commercial move that limits their offer so strongly in comparison to competitors.
Product usage subsidized by company, $100. Users inevitably figure out how to steal those subsidies, agents go brrrrr. Users mad that subsidy stealing gets cut off and completely ignore why they need to rely on subsidies in the first place, priceless.
At this point, are there decent alternatives to Anthropic models for coding that allow third-party usage?
OpenAI have been very generous in their plans in terms of tokens and what you use them with. Is Codex better than, or as good as, Opus for coding? No. Is it a decent alternative? Very.
Thanks for the reply. Need to try Codex
Kimi is amazing for this. They offer API usage as well iirc if you buy their subscription.
Not regular api usage, just the kimi coding plan, which you can only use in some coding agents
Also the .99c deal has API Access
Thanks, will explore Kimi. Haven’t tried it yet
This month was the first month i spent >$100 on it and it didn't feel like it was money well spent. I feel borderline scammed.
I'm just going to accept that my €15 (which with vat becomes €21) is just enough usage to automate some boring tasks.
Seems fair enough really, not that I like it either, but they could easily not offer the plans and only have API pricing. Makes it make more sense to have the plans be 'the Claude Code pricing' really.
I'm wondering: why now, in early 2026? Why not last year? Why not in July? What changed? What does this teach us about Anthropic and what can we infer about their competition?
Especially how generous Anthropic has been recently with subscribers - extra usage in December, $50 credit earlier this month, $20 subscription getting access to Opus.
It suggests to me Anthropic is less concerned with the financial impact of letting subscribers use alternative tools and more concerned with creating lock in to their products and subscriptions. It very well might backfire though, I was not considering alternative models yesterday, but today I am actively exploring other options and considering cancelling my sub. I've been using my subscription primarily through pi recently, so if they aren't interested in me as a customer, pretty much everyone else is.
Sounds like a panicking company grasping and clawing for a moat
In the old days - think Gmail, or before the "unlimited" marketing scam - people genuinely were smart enough to know they were doing something they were not supposed to be doing. Even pirating software, say Windows or Adobe. I mean, who could afford those when they were young?
Things get banned, but that is OK as long as they give us weeks or days to prep an alternative solution. Users (not customers) are happy with it. Too bad, the good days are over.
Somewhere along the line, not just in software but even in politics, the whole world turned to entitlement. They somehow believe they deserve this: what they were doing was wrong, but if it was allowed in the first place, they think they should remain allowed to do it.
Judging from account opening time and comments we can also tell the age group and which camp they are in.
I don't understand - which camp are you in?
The one you're not in
The telemetry from claude code must be immensely valuable for training. Using it is training your replacement!
Hot take: trying to restrict what front end people use to access your service is almost always an anti-competitive, anti-consumer freedom move which should be legally prohibited for those reasons. (Not just for AI, I'm talking about any and all cloud services.)
Regarding consumer freedom, I believe software running on user machines should serve the interests of the user, not the company who wrote the software or anyone else for that matter. Trying to force users to run a particular client written by your company violates this principle.
Regarding competition, forcing users to run a particular client is a form of anti competitive bundling, a naked attempt to prevent alternative clients from being able to enter the market unless they are able to build a competing backed as well. Such artificial "moats" are great for companies but harmful to consumers.
It's a bit unclear to me. I'm building a system around the Claude Agent SDK. Am I allowed to use it or not? Apparently not.
I'm a bit lost on this one.
I can get a ridiculous amount of tokens in and out of something like gpt-5.2 via the API for $100.
Is this primarily about gas town and friends?
Honestly, seeing throttling of AI usage across all providers:
- Google reduced AI Studio's free rate limits by 1/10th
- Perplexity imposing rate limits, card filing to continue free subscriptions
- Now Anthropic as well
There has been a false narrative that AI will get cheaper and more ubiquitous, but model providers have been stuck in a race for ever more capabilities and performance at higher costs.
Am I the only one perplexed why folks find this stunning or meaningful? While LLMs are novel and different in that the subscription gives you access to compute, it does not feel foreign in the subscription landscape, free or paid. I cannot recall many (if any?) companies that would freely let you use compute, private internal APIs, or anything similar just because you have a login. Maybe I come from a different era of tech, but it seems both reasonable and not surprising.
Why now? It would not surprise me if this was simply an afterthought and, once it hit critical mass (OpenCode), they locked it down.
Anthropic has no authority to do as such. Users and third apps are protected by interoperability exceptions found in copyright case law.
Trying to prevent competitors from interoperating with the service also may be construed as anticompetitive behaviour.
The implementation details of an authentication process do not beget legal privileges to be a monopolist. What an absurd thought.
What about using claude -p as an api interface?
This confirms they're selling those subscriptions at a loss which is simply not sustainable.
They probably are but I don’t think that’s what this confirms. Most consumer flat rate priced services restrict usage outside of the first party apps, because 3rd party and scripted users can generate orders of magnitude more usage than a single user using the app can.
So it makes sense to offer simple flat pricing for first party apps, and usage priced apis for other usage. It’s like the difference between Google Drive and S3.
I get your point - they might count on the user not using their full quota they're officially allowed to use (and if that's the case, Anthropic is not losing money). But then still - IF the user used the whole quota, Anthropic loses.. so what's advertised is not actually honest.
For me, flat rates are simply unfair either ways - if I'm not using the product much, I'm overpaying (and they're ok with that), otherwise it magically turns out that it's no longer ok when I actually want to utilize what I paid for :)
My alt Google accounts were all banned from Gemini access. Luckily Google left my main account alone. They are all cracking down.
From 3rd party AI app use?
Using a proxy to switch accounts
Why do I get the nagging suspicion their 1 million LOC codebase is backdoored?
At least there seems to be some clarification regarding the Agent SDK... unclear what's happening with OpenClaw: https://x.com/atla_/status/2024399329310511426
the subscription already has usage caps. if the caps are the caps, why does the client matter. if the caps aren't actually the caps, that's a different conversation.
Claude Code is a lock-in play. Use Cursor or OpenCode.
Too bad, I'll stick with Codex as the thinker and GLM-5 as the hands, at a fraction of the cost.
The people who they’re going to piss off the most with this are the exact people who are the least susceptible to their walled garden play. If you’re using OpenCode, you’re not going to stop using it because Anthropic tells you to; you’re just going to think ‘fuck Anthropic’, press whatever you’ve bound “switch model” to, and just continue using OpenCode. I think most power users have realized by now that Claude Code is sub-par software and probably actively holding back the models because Anthropic thinks they can’t work right without 20,000 tokens worth of system prompt (my own system prompt has around 1,000 and outperforms CC at every test I throw it at).
They’re losing the exact crowd that they want in their corner because it’s the crowd that’s far more likely to be making the decisions when companies start pivoting their workflows en-masse. Keep pissing on them and they’ll remember the wet when the time comes to decide whom to give a share from the potentially massive company’s potentially massive coffers.
You need a company with a market cap in the trillions to succeed here
How does this impact open router?
Can’t this restriction, for the time being, be bypassed via the -p command-line flag?
OpenRouter uses the API and does not use any subscription auth.
The reason I find this so egregious is that I don’t want to use Claude Code! It’s complete rubbish, it completely sidelines security, and nobody seems to care. So I’m forced to use their slop if I want to use Claude models without getting a wallet-emptying API bill? Forget it, I will use Codex or Gemini.
Claude Code is not the apex. We’re still collectively figuring out the best way to use models in software, and this TOS change kills innovation.
So even simple apps that are just code usage monitors are banned?
Always have been, unless you're using the API meant for apps.
But if you're doing something very basic, you might be able to slop together a tool that does local inferencing based on a small, local model instead, alleviating the need to call Claude entirely.
You can use the Claude CLI as a relay - yes, it needs to be there - but it's not that different from using the API.
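For the curious, a minimal sketch of what that relay looks like, assuming the claude binary is on your PATH and that -p (print mode) runs a single non-interactive prompt; check claude --help for the exact flags on your version, and the function name here is purely illustrative:

    import subprocess

    def ask_claude(prompt: str, timeout: int = 300) -> str:
        """Run one prompt through the Claude CLI in print mode and return its stdout."""
        result = subprocess.run(
            ["claude", "-p", prompt],   # assumes `claude` is installed and already logged in
            capture_output=True,
            text=True,
            timeout=timeout,
            check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print(ask_claude("Summarize what a CLI relay is in one sentence."))

Whether driving the CLI this way from a third-party tool still counts as permitted use under the new terms is, of course, exactly what this thread is arguing about.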
Cancelled my Claude and bought GLM coding plan + Codex.
This is something I think Anthropic does not get. They want to be the Microsoft of AI - make people dependent on their solution so they won't move to another provider. Thing is, access to a text prompt is not something you can monopolize easily. Even if you provide extras like skills or MCP server integration, that is not a big deal.
Important: they have clarified that it's OK to use it for personal experimentation if you don't build a business out of it!
Sonnet literally just recommended using a subscription token for OpenClaw. Even Anthropic's own AI doesn't understand its own TOS.
Sonnet was not trained with this information and extremely-recent-information-without-access-to-a-Web-Search-tool is the core case of hallucination.
Sonnet does have search available FYI.
Isn’t this flawed anyway? If an application communicates with Claude Code over ACP (like Zed), it works fine?
Instead of using SDKs, this will just shift the third party clients to use ACP to get around it - Claude Code is still under the hood but you’re using a different interface.
This all seems pretty idiotic on their part - I know why they’re trying it but it won’t work. There will always be someone working around it.
You guys are acting like coke addicts... don't you see?
Anthropic is just doing this out of spite. They had a real opportunity to win mindshare and market share, and they fucked it up instead. They could have done what OpenAI did - hired the OpenClaw/d founder. Instead, they sent him a legal notice for trademark violation. And now they're just pissed he works for their biggest competitor. Throw all the tantrums you want, you're on the wrong side of this one, Anthropic.
Agreed! I don't understand how so many people on here seem to think it is completely reasonable for Anthropic to act like this.
Apple/OpenAI = god
Anthropic = good
Google = evil
That's pretty much the HN crowd's logic, to be honest.
So here goes my OpenClaw integration with Anthropic via OAuth… While I see their business risk, I also see the onboarding path for new paying customers. I just upgraded to Max and would even consider the API if the cost were controllable. I hope that Anthropic finds a smart way to communicate with customers constructively and offers advice for the not-so-skilled OpenClaw homelabbers instead of terminating their accounts… Is anybody here from Anthropic who could pick up that message before a PR nightmare happens?
Oh crap. I just logged into HN to ask if anyone knew of a working alternative to the Claude Code client. It's lost Claude's work multiple times in the last few days, and I'm ready to switch to a different provider. (4.6 is mildly better than 4.5, but the TUI is a deal breaker.)
So, I guess it's time to look into OpenAI Codex. Any other viable options? I have a 128GB iGPU, so maybe a local model would work for some tasks?
QWEN models are quite nice for local use. Gemini 3 Pro is much better than Codex IMO.
Local? No, not currently - you need about 1TB of VRAM. There are many harnesses in development at the moment; keep a lookout. Just try several of them and look at the system prompts in particular. Consider DeepSeek using the official API. Consider also tweaking system prompts for whatever tool you end up using. And agreed that the TUI is meh; we need a GUI.
Zed with CC using ACP?
Opencode with CC underneath using Gigacode?
OpenAI Codex is another viable path, for what it's worth.
The open-source model I like best is Kimi K2.5, so maybe you can run that?
Qwen is releasing some new models as well, so keep an eye on those; maybe one will fit your use case.
Zed's ACP client is a wrapper around Agents SDK. So that will be a TOS violation.
Codex has now caught up to Claude Opus and this is a defensive move by Anthropic
Thankfully, Codex allows using their subscription, and it's working very well for me. I will not miss anything from Anthropic. BTW, bad move - shame on you.
People on here are acting like schoolchildren over this. It's their product, which they spent billions to make. Yet here we are, complaining that they won't let you use third-party products specifically made to compete against Anthropic.
You can still simply pay for API.
Just a friendly reminder also to anyone outside the US that these subscriptions cannot be used for commercial work. Check the consumer ToS when you sign up. It’s quite clear.
Yeah for context the TOS outside the US has:
> Non-commercial use only. You agree not to use our Services for any commercial or business purposes and we (and our Providers) have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity.
May we still use the Agent SDK for our own private use with a Max account? I’m a bit confused.
That's too bad; in a way it was a bit of an unofficial app store for Anthropic. I'm sure they've looked at that, and hopefully this means there's something on its way.
Not really sure it's even feasible to enforce, unless the idea is to discourage the big players from doing it.
I have no issues with this. Anthropic did a great job with Claude Code.
It's a little bit sleazy as a business model to try to wedge oneself between Claude and its users.
OpenAI acquiring OpenClaw gives me bad vibes. How did OpenClaw gain so much traction so quickly? It doesn't seem organic.
I definitely feel much more aligned with Anthropic as a company. What they do seems more focused, meritocratic, organic and genuine.
OpenAI essentially appropriated all their current IP from the people... They basically gutted the non-profit and stole its IP. Then sold a huge chunk to Microsoft... Yes, they literally sold the IP they stole to Microsoft, in broad daylight. Then they used media spin to make it sound like they appropriated it from Elon because Elon donated a few million... But Elon got his tax deduction! The public footed the bill for those deductions... The IP belonged to the non-profit; to the public, not Elon, nor any of the donors. I mean, let's not even mention Suchir Balaji, the OpenAI researcher who supposedly "committed suicide" after trying to warn everyone about the stolen IP.
OpenAI is clearly trying to slander Anthropic, trying to present themselves as the good guys after their OpenClaw acquisition and really rubbing it in all over HN... Over which they have much influence.
The entitlement from many HN posters is astounding. "Companies must provide services in the way I want billed how I want and with absolutely zero restrictions at all!" Get over yourselves. You're not that important. Don't like it. Don't use it. Seems pretty straightforward.
> The real value in these tools is not the model, it is the harness... And that is the part that is easiest to replicate.
> The companies that will win long-term are the ones building open protocols and letting users bring their own model.
These seem contradictory. It sounds like you're saying that the long term winners are the ones who do the easy part. The future I see is open source harnesses talking to commodity models.
More than just the real value: the real intelligence is in the harness.
I agree, but that's not something you can maintain an advantage on for long.
Perhaps there's enough overlap among the low-hanging fruit that you can initially sell a harness that makes both genomics researchers and urban planners happy... but pretty quickly you're going to need to be the right kind of specialist to build an effective harness for each domain.
The opposite is true.
There is barely any magic in the harness, the magic is in the model.
Try it: write your own harness with (bash, read, write, edit) tools ... it's trivial to get a 99% version of (pick your favorite harness) -- minus the bells and whistles. (A rough sketch of such a loop follows below.)
The "magic of the harness" comes from the fun auxiliary orchestration stuff - hard engineering for sure! - but seriously, the model is the key item.
They really fucked up by not embracing OpenClaw; now I use Codex 5.3.
The number one thing we need is cheap abundant decentralized clean energy, and these things are laughable.
Unfortunately, neither political party can deliver all of the above.
Are you implying that no one would use LLM SaaSes and everyone would self-host if energy costs were negligible?
That is...not how it works. People self-hosting don't look at their electricity bill.
I was stuck on the part where they said neither party could provide cheap abundant decentralized clean energy. Biden / Obama did a great job of providing those things, to the point where dirty coal and natural gas are both more expensive than solar or wind.
So, which two parties could they be referring to? The Republicans and the Freedom Caucus?
And I just bought my mac mini this morning... Sorry everyone
You know that if you're just using a cloud service and not running local models, you could have just bought a Raspberry Pi.
Yeah. I know it’s dumb but it’s also a very expensive machine to run BlueBubbles, because iMessage requires a real Mac signed into an Apple ID, and I want a persistent macOS automation host with native Messages, AppleScript, and direct access to my local dev environment, not just a headless Linux box calling APIs.
Harder to get at the Apple ecosystem. I have an old Macbook that just serves my reminders over the internet.
I think this is shortsighted.
The markets value recurring subscription revenue at something like 10x “one-off” revenue; Anthropic is leaving a lot of enterprise value on the table with this approach.
In practice this approach forces AI apps to pay Anthropic for tokens and then bill their customers a subscription. Customers could bring their own API key, but it’s sketchy to put that into every app you want to try, and consumers aren’t going to use developer tools. And many categories of free app are simply excluded, even though in aggregate they could drive a lot more demand for subscriptions.
If Anthropic is worried about quota, seems they could set lower caps for third-party subscription usage? Still better than forcing API keys.
(Maybe this is purely about displacing other IDE products, rather than a broader market play.)
I think they're smart to make a distinction between a D2C subscription, where they control the interface and eat the losses, vs B2B use, where customers pay for what they use.
It allows them to optimize their clients and use private APIs for exclusive features, etc., and there's really no reason to bootstrap other wannabe AI companies who just stick a facade experience in front of Anthropic's paying customers.
> eat the losses
Look at your token usage of the last 30 days in one of the JSON files generated by Claude Code. Compare that against API costs for Opus. Tell me if they are eating losses or not. I'm not making a point, actually do it and let me know. I was at 1 million. I'm paying 90 EUR/m. That means I'm subsidizing them (paying 3-4 times what it would cost with the API)! And I feel like I'm a pretty heavy user. Although people running it in a loop or using Gas Town will be using much more.
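For anyone who wants to actually run that comparison, here is a rough sketch. It assumes Claude Code still writes session transcripts as JSONL under ~/.claude/projects/ with per-message usage counters (this layout can change between versions), and the per-million-token rates are placeholders you should replace with the current Opus prices from Anthropic's pricing page:

    import json
    from pathlib import Path

    RATES_PER_MTOK = {  # placeholder USD per million tokens -- fill in current Opus rates
        "input_tokens": 15.0,
        "output_tokens": 75.0,
        "cache_creation_input_tokens": 18.75,
        "cache_read_input_tokens": 1.50,
    }

    totals = dict.fromkeys(RATES_PER_MTOK, 0)
    for path in Path.home().glob(".claude/projects/**/*.jsonl"):  # assumed transcript location
        for line in path.read_text(encoding="utf-8", errors="ignore").splitlines():
            try:
                usage = (json.loads(line).get("message") or {}).get("usage") or {}
            except (json.JSONDecodeError, AttributeError):
                continue  # skip lines that aren't message records
            for key in totals:
                totals[key] += usage.get(key, 0) or 0

    cost = sum(totals[k] / 1_000_000 * RATES_PER_MTOK[k] for k in totals)
    print(totals)
    print(f"Equivalent API cost at the placeholder rates: ${cost:,.2f}")

Compare the printed figure against your monthly subscription price to see which side of the subsidy you're on.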
There's no decision to be made here, it's just way too expensive to have 3rd parties soak up the excess tokens, that's not the product being sold.
Especially as they are subsidized.
That’s not true - the market loves pay per use, see “cloud”. It outperforms subscriptions by a lot; it’s not “one-off”. And your example is not how companies building on top tend to charge: you either have your own infrastructure (key) or get charged at-cost plus fees and service costs.
I don’t think Anthropic has any desire to be some B2C platform, they want high paying reliable customers (B2B, Enterprise).
> the market loves pay per use, see “cloud”.
Cloud goes on the books as recurring revenue, not one-off; even though it's in principle elastic, in practice if I pay for a VM today I'll usually pay for one tomorrow.
(I don't have the numbers but the vast majority of cloud revenue is also going to be pre-committed long-term contracts from enterprises.)
> I don’t think Anthropic has any desire to be some B2C platform
This is the best line of argument I can see. But still not clear to me why my OP doesn't apply for enterprise, too.
Maybe the play is just to force other companies to become MCPs, instead of enabling them to have a direct customer relationship.