- I write detailed specs. Multifile with example code. In markdown.
Then hand over to Claude Sonnet.
With hard requirements listed, I found that the generated code missed requirements, contained duplicate code, and even did unnecessary data wrangling (mapping objects into new objects of narrower types that were never needed), along with tests that faked results or worked around failures just to pass.
So it turns out I'm not writing code; I'm reading lots of code.
What I knew first-hand even before Gen AI is that writing code is the easy part. Reading code, understanding it, and building a mental model is far more labour-intensive.
Therefore I need more time and effort with Gen AI than I needed before, because I have to read a lot of code, understand it, and ensure it adheres to my mental model.
Hence Gen AI, at the price point Anthropic offers, is a net negative for me. I'm not vibe coding; I'm building real software that real humans depend on, and my users deserve better attention and focus from me. I'll be cancelling my subscription shortly.
- I feel like I'm using Claude Opus pretty effectively and I'm honestly not running up against limits in my mid-tier subscriptions. My workflow is more "copilot" than "autopilot", in that I craft prompts for contained tasks and review nearly everything, so it's pretty light compared to people doing vibe coding.
The market-leading technology is pretty close to "good enough" for how I'm using it. I look forward to the day when LLM-assisted coding is commoditized. I could really go for an open source model based on properly licensed code.
by janwillemb
13 subcomments
- This is what worries me. People become dependent on these GenAI products, which are proprietary, not transparent, and need a subscription. People build on it like it is a solid foundation. But all of a sudden the owner just pulls the foundation out from under your building.
by wood_spirit
2 subcomments
- So many coworkers and I have been struggling with a big cognitive decline in Claude over the last two months. 4.5 was useful and 4.6 was great. I had my own little benchmark: 4.5 could just about keep track of a two-way pointer merge loop, whereas 4.6 managed a 3-way and the 1M-context model managed k-way. This ability to track braids directly helped it understand real production code, make changes, and be useful.
But then, two months ago, 4.6 started getting forgetful and making very dumb decisions. Everyone started comparing notes and realising it wasn't "just them". And 4.7 isn't much better, and for the last few weeks we keep having to battle the automatic effort-level downgrade. So much friction as you think "that was dumb" and have to go check the settings again, only to see there has been some silent downgrade.
We all miss the early days of 4.6, which just shows you can have a good, useful model. LLMs can be really powerful, but in delivering them to the mass market Anthropic throttles and downgrades them into something not useful.
My thinking is that soon deepseek reaches the more-than-good-enough 4.6+ level and everyone can get off the Claude pay-more-for-less trajectory. We don’t need much more than we’ve already had a glimpse of and now know is possible. We just need it in our control and provisioned not metered so we can depend upon it.
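For readers unfamiliar with the benchmark described above: a k-way pointer merge walks k sorted sequences, always advancing the pointer whose head element is smallest. A standard heap-based reference implementation, sketched here as generic textbook code (not the commenter's actual benchmark):

```python
import heapq

def k_way_merge(lists):
    """Merge k sorted lists by repeatedly taking the smallest head element."""
    # Seed the heap with (value, list index, position) for each non-empty list.
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        # Advance the pointer into list i, if anything remains.
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(k_way_merge([[1, 4, 9], [2, 5], [3, 8]]))  # [1, 2, 3, 4, 5, 8, 9]
```

The two-way case is the classic merge step of merge sort; tracking which of the k pointers to advance next is exactly the "braid" the commenter was testing.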
by wilbur_whateley
6 subcomments
- Claude with Sonnet medium effort just used 100% of my session limit, some extra dollars, thought for 53 minutes, and said:
API Error: Claude's response exceeded the 32000 output token maximum. To configure this behavior, set the CLAUDE_CODE_MAX_OUTPUT_TOKENS environment variable.
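For anyone hitting the same error: the message itself names the environment variable to set before launching Claude Code. A minimal sketch; the 64000 value is an illustrative choice, not a documented recommendation:

```shell
# Raise the output-token ceiling named in the error message,
# then start Claude Code in the same shell session.
export CLAUDE_CODE_MAX_OUTPUT_TOKENS=64000
echo "$CLAUDE_CODE_MAX_OUTPUT_TOKENS"
# claude   # launch from this shell so the variable is inherited
```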
by kashunstva
0 subcomment
- I’m sympathetic to the author’s complaints about Anthropic’s support, though I would go further. It doesn’t exist.
For reasons that continue to elude me, almost exactly one year ago Anthropic cancelled my Claude Pro plan. To appeal, you must fill out a Google Docs form. And wait. In my case, I've waited about one year. Once I managed to email with a human, but they quickly plugged that hole with a chatbot that sends you back to their never-to-be-reviewed form. No route to escalate.
A year gives one a long time to think about things. Maybe it was because I was on a VPN temporarily. Otherwise, no clue. I’m a hobbyist embedded developer. That’s it.
So no, Anthropic support isn’t just poor; it’s nonexistent.
- My max20 sub has been sitting mostly unused since April; Codex with 5.4 (and now 5.5), even with fast mode (= double token costs), is night and day. Opus produces convincing failures: it either forgets half the important details or silently goes "pragmatic" (read: technical-debt bandaids or worse) and claims success even as everything crashes and burns after the changes. Point out the errors and it makes even more messes. Opus works really well for one-shotting greenfield scopes, but for iterating on them later or doing complex integrations it's just unusable, even harmfully bad.
GPT 5.4+ takes its time, unprompted considers edge cases that in fact are correct, saves me subsequent error-hunting turns, and finally delivers. Plus no "this doesn't look like malware" or "actually wait" thinking loops for minutes over a one-liner script change.
- Yesterday was a realization point for me. I gave a simple extraction task to Claude Code with a local LLM, and it "whirred" and "purred" for 10 minutes. Then I submitted the same data and prompt directly to the model via the llama_cpp chat UI, and the model single-shotted it in under a minute. So obviously something is wrong with the coding agent or the way it talks to the LLM.
Now I'm looking for an extremely simple open-source coding agent. Nanocoder doesn't seem to install on my Mac and it brings node-modules bloat, so no. Opencode seems not quite open-source. For now, I'm doing the coding agent's work myself and using the llama_cpp web UI. Chugging along fine.
by drunken_thor
8 subcomments
- AI services have only a minor incentive to reduce token usage. They want high token usage; it makes you pay more. They are going to continually test where the limit is: what is the maximum token usage before you get angry? All AI companies will continue to trade places on token use and cost as costs increase. We are frogs in tepid water, pretending it's a bath and that we aren't about to be boiled.
- I see a lot of people struggling to work with agents. This post has a good example:
> “you can’t be serious — is this how you fix things? just WORKAROUNDS????”
If this is how you're interacting with your agents, I think you're in for a world of disappointment. An important part of working with agents is providing specific feedback, and beyond that, making sure this feedback is actually available in their context when relevant.
I will ask them why they made a decision and review alternatives with them. These learnings will aid both you and the agent in the future.
- I've noticed that the same Claude model will sometimes make logical errors and sometimes not; Claude's performance varies heavily over time. There's even a graph! https://marginlab.ai/trackers/claude-code/
I haven't seen anyone mention this publicly, but I've noticed that the same model will give wildly different results depending on the quantization. 4-bit is not the same as 8-bit and so on in compute requirements and output quality. https://newsletter.maartengrootendorst.com/p/a-visual-guide-...
I'm aware that frontier models don't work in the same way, but I've often wondered if there's a fidelity dial somewhere that's being used to change the amount of memory / resources each model takes during peak hours v. off hours. Does anyone know if that's the case?
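To make the quantization point above concrete: weight memory scales linearly with bits per weight, so a 4-bit deployment of the same model needs half the memory of 8-bit and a quarter of 16-bit. A back-of-the-envelope sketch (the 70B parameter count is an illustrative assumption; real deployments also need KV-cache and activation memory, so these are lower bounds):

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

n = 70e9  # hypothetical 70B-parameter model
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gb(n, bits):.0f} GB")
# 16-bit: 140 GB, 8-bit: 70 GB, 4-bit: 35 GB
```

This is why a provider under compute pressure could, in principle, serve more requests per GPU at lower precision, at some cost in output quality; whether any frontier provider actually does this is exactly the open question the comment raises.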
by lukaslalinsky
0 subcomment
- I feel like Opus 4.5 was the peak in Claude Code usefulness. It was smart, it was interactive, it was precise. In 4.6 and 4.7, it spends a long time thinking and I don't know what's happening, often hits a dead-end and just continues. For a while I was setting Opus 4.5 in Claude Code, but it got reset often. I just canceled my Max plan, don't know where to look for alternatives.
by petterroea
3 subcomments
- Looking at Anthropic's new products, I think they understand they don't really have a cutting edge beyond the brand.
I tried Kimi 2.6 and it's almost comparable to Opus. Anthropic dropped the ball. I hope this is a sign that we are moving towards a future where model usage is a commodity, with heavy competition on price/performance.
by ChicagoDave
4 subcomments
- I think there’s a clear split amongst GenAI developers.
One group consistently plays whack-a-mole with different models/tools and prompt engineering, and has shown a sine wave of success.
The other group, seemingly made up of architects and Domain-Driven Design adherents, has had a straight line of high productivity and clean generated code, regardless of model and tooling.
I have consistently advised all GenAI developers to align with that second group, but it’s clear many developers insist on the whack-a-mole mentality.
I have even wrapped my advice in https://devarch.ai/ which has codified how I extract a high level of quality code and an ability to manage a complex application.
Anthropic has done some goofy things recently, but they cleaned it up because we all reported issues immediately. I think it’s in their best interests to keep developers happy.
My two cents.
- I've been a fan since the launch of the first Sonnet model, and big props for standing up to the government, but you can sure lose that good faith fast when you piss off your paying customers with bad communication, shaky model quality, and lowered usage limits.
- Same, after being a long-time proponent too.
First was the CC adaptive thinking change, then 4.7. Even with `/effort max` and keeping under 20% of 1M context, the quality degradation is obvious.
I don't understand their strategy here.
by siliconc0w
1 subcomments
- Shameless self-plug, but I'm also worried about the silent quality regressions, so I started building a tool to track coding-agent performance over time: https://github.com/s1liconcow/repogauge
Here is a sample report that tries out the cheaper models plus the newest Kimi 2.6 model against the 5.4 'gold' test cases from the repo: https://repogauge.org/sample_report.
by binaryturtle
1 subcomments
- I have a simple rule: I won't pay for that stuff. First they steal all my work to feed into those models, and afterwards I'm supposed to pay for it? No way!
I use AI, but only what is free-of-charge, and if that doesn't cut it, I just do it like in the good old times, by using my own brain.
by mrinterweb
2 subcomments
- My recent frustration with Claude is that I feel like I'm waiting on responses more. I don't have historical latency data to compare with, but it seems to have been getting slower. I may be wrong; maybe it's just spending more time thinking than it used to. My guess is Anthropic is having capacity issues. I hope I'm wrong, because I don't want to switch.
- I've noticed most of the complaints are about the Pro plan. Anecdotally, I pay for the $200 Max plan and haven't noticed anything radically different re: tokens or thinking time (availability is still a crapshoot).
I am certainly not saying people should "spend more money"; it's more that the Claude Code access in the Pro plan seems like false advertising, since it's technically usable, but not really.
by taffydavid
0 subcomment
- I know this thread is likely full of similar anecdotes, but I also want to share.
My experience very suddenly and very clearly degraded over the last few days.
Today I was trying to build a simple chess game. Previous one-shots were HTML; this gave me a JSX file. I asked it to port it to HTML and it absolutely devoured my credits doing so; I had to abort and do it manually. The resulting app didn't work, and it had decided that multiplayer could work by storing the game state only in local storage, without the clients communicating at all.
- They can't afford to care about individual customers because enterprise demand exploded and they're short on compute
by stan_kirdey
0 subcomment
- I also cancelled my subscription. The $20 Pro plan has become completely unusable for any real work. What is especially frustrating is that Claude Chat and Claude Code now share the exact same usage limits; it makes zero sense from a product standpoint when the workflows are so different. Even the $200 Max plan got heavily nerfed. What used to easily last me a full week (or more) of solid daily use now burns out in just a few days. Combined with the quality drop and unpredictable token consumption, it simply stopped being worth it.
- Doesn't "poor support" imply that there is some sort of support? Shouldn't it be "no support"?
- Max x20 user here. As long as Opus 4.6 is available and they fix Opus 4.7, I'll stay with Anthropic. Though I'd imagine that in 5 years we'll have Opus 4.6-equivalent performance available in an at-home consumer model.
by vintagedave
0 subcomment
- They won't even reset usage for me: https://news.ycombinator.com/item?id=47892445
And by crikey do I empathise with the poor support in this article. Nothing has soured me on Anthropic more than their attitude.
Great AI engineers. Questionable command-line engineers (but highly successful). Downright awful to their customers.
- For all the drama, it's pretty clear that OpenAI, Google, and Anthropic have all had to degrade some of their products because of a lack of supply.
There's really no immediate solution other than letting the price float or limiting users; as capacity is built out, this gets better.
by isjcjwjdkwjxk
0 subcomment
- Oh no, the unreliable product people pretend is the next coming of Jesus turned out to be thoroughly unreliable. Who coulda thunk it.
by PeterStuer
0 subcomment
- I'm on Max x5. No limit problems, but I am definitely feeling the decline. Early stopping and being hellbent on taking shortcuts are the main culprits, closely followed by over-optimistic (stale) caching (audit your hooks!).
It's all mostly mitigable with rigorous audits and steering, but man, it shouldn't have to be.
by aucisson_masque
0 subcomment
- The first time I ever used AI to code was a week ago; I went with Claude Pro because I didn't want to commit.
The $20 plan has incredible value, but the limits are just way too tight.
I'm glad Claude made me discover the strength of AI, but now it's time to poke around and see what is more customer-friendly. I found DeepSeek V4 to be extremely cheap and also just as good.
Plus I get the benefit of using it in VS Code instead of Claude's proprietary app.
I think that once people get over the hype and social pressure, Anthropic will lose quite a lot of customers.
by torstenvl
1 subcomments
- I feel like almost everyone using AI for support systems is utterly failing at the same incredibly obvious place.
The first job of any support system—both in terms of importance and chronologically—is triage. This is not a research issue and it's not an interaction issue. It's at root a classification problem and should be trained and implemented as such.
There are three broad categories of interaction: cranks, grandmas, and wtfs.
Cranks are the people opening a support chat to tell you they have vital missing information about the Kennedy Assassination, or that they want your help suing the government over their exposure to Agent Orange when they were stationed at Minot. "Unfortunately I can't help with that. We are a website that sells wholesale frozen lemonade. Good luck!"
Grandma questions are the people who can't navigate your website. (This isn't meant to be derogatory, just vivid; I have grandma questions often enough myself.) They need to be pointed toward some resource: a help page, a kb article, a settings page, whatever. These are good tasks for a human or LLM agent with a script or guideline and excellent knowledge/training on the support knowledge base.
WTFs are everything else. Every weird undocumented behavior, every emergent circumstance, every invalid state, etc. These are your best customers and they should be escalated to a real human, preferably a smart one, as soon as realistically possible. They're your best customers because (a) they are investing time into fixing something that actually went wrong; (b) they will walk you through it in greater detail than a bug report, live, and help you figure it out; and (c) they are invested, which means you have an opportunity for real loyalty and word-of-mouth gains.
What most AI systems (whether LLMs or scripts) do wrong is that they treat WTFs like they're grandmas. They're spending significant money on building these systems just to destroy the value they get from the most intelligent and passionate people in their customer base doing in-depth production QC/QA.
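The triage-as-classification idea above can be sketched as a tiny router that buckets each incoming message before any agent replies. The keyword heuristics, labels, and example messages below are illustrative stand-ins for a properly trained classifier, not a production design:

```python
# Hypothetical keyword hints standing in for a trained text classifier.
CRANK_HINTS = ("conspiracy", "suing the government", "vital missing information")
GRANDMA_HINTS = ("how do i", "where is", "can't find", "reset my password")

def triage(message: str) -> str:
    """Classify a support message as 'crank', 'grandma', or 'wtf'."""
    text = message.lower()
    if any(hint in text for hint in CRANK_HINTS):
        return "crank"    # polite canned decline
    if any(hint in text for hint in GRANDMA_HINTS):
        return "grandma"  # route to a KB article, help page, or scripted agent
    return "wtf"          # escalate to a real human as soon as possible

print(triage("Where is the invoice download button?"))           # grandma
print(triage("I have vital missing information about JFK"))      # crank
print(triage("Order state shows shipped AND refunded at once"))  # wtf
```

The key design choice matches the comment's argument: "wtf" is the default bucket, so anything the classifier cannot confidently hand to a script falls through to a human instead of being treated like a grandma question.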
by lawrence1
2 subcomments
- The timeline doesn't make any sense. How can you subscribe a couple of weeks ago when the problem started 3 weeks ago, and yet things also went well for the first few weeks? Was this written by GPT 5.5?
- Funny. I thought I was the only one. Then I found more people and now you wrote about that. Just this week I also wrote about Claude Opus 4.7 and how I came back to Codex after that: https://thoughts.jock.pl/p/opus-4-7-codex-comeback-2026
by arikrahman
0 subcomment
- I use Aider nowadays, and will probably cancel my GitHub multi-AI bundle subscription due to the new training policy. I find that using Aider with the new open models, and using Open Spec to negotiate requirements before the handoff, has helped me a lot.
- Curious. Not my experience whatsoever.
I tried Claude recently and it was able to one-shot fixes on 9/9 of the bugs I gave it on my large and older Unity C# project. Only 2/9 needed minor tweaks for personal style (functionally the same).
Maybe it helps that I separately have a CLI with very extensive unit tests. Or that I just signed up. Or that I use Claude late in the evenings (off hours). I also give it very targeted instructions and if it's taking longer than a couple minutes - I abort and try a different or more precise prompt. Maybe the backend recognizes that I use it sparingly and I get better service.
The author describes what sounds like very large tasks that I'd never hand off to an AI to run wild in 2026.
Anyway I thought I'd give a different perspective than this thread.
- Wait, weren't there posts in the not-too-distant past where everyone was singing the praises of Claude and wondering how OpenAI would catch up?
- This is interesting to me, because Claude has been a net-positive for me. I haven't faced token issues or declining quality. I generally work with Claude as an assistant -- I may have it do planning and have it "one shot" a thing, but it's usually a personal tool or a utility that I want it to write.
For actual code that goes out to production, I generally figure out how I would solve the problem myself (but will use Claude to bounce ideas and approaches -- or as a search engine) and then have Claude do the boring bits.
Recently I had to migrate a rules-engine into an FSM-based engine. I already had my plan and approach. I had Claude do the boring bits while I implemented the engine myself. I find that Claude does best when you give it small, focused, incremental tasks.
by easythrees
1 subcomments
- I have to say, this has been the opposite of my experience. If anything, I have moved over more work from ChatGPT to Claude.
- My experience is that Claude and others are good at writing methods and smaller units, because you can dictate what the code should do in fewer tokens and easily read the result. This closes the feedback loop for me.
I occasionally ask AI to write lots of code, such as a whole feature (medium shirt size or larger) or sometimes even bigger components of said feature, and I often just revert what it generated. It's not good, for all the reasons mentioned.
Other times I accept its output as a rough draft and then tell it how to refactor its code from mid to senior level.
I'm sure it will get better but this is my trust level with it. It saves me time within these confines.
Edit: it is a valuable code reviewer for me, especially as a solo stealth startup.
- I still haven’t seen any other models be as complete as Claude inside Claude Code. I bet Anthropic knows this and they turn the knobs and see people’s reactions… I have been planning with Qwen3.6 Max inside opencode, absolutely game changer.
Opus can then follow the plan in quite some detail, and like this I can make progress on my toy apps on the Pro plan at 20/mo.
For work, unlimited usage via Bedrock.
Yes, I'd like to get more usage out of my personal sub, but at 20/mo, no complaints.
by throwaway2027
0 subcomment
- Same. I think one of the issues is that Claude reached a threshold where I could just rely on it being good and had to fix things up manually less and less, while other models hadn't reached that point yet; with them I knew I'd have to fix things up or do a second pass or more. Other providers also move you to a worse model after you run out, which is key in setting expectations as well. Developers knew that was the trade-off.
I think people still hated the worse limits, but when you start, on purpose or inadvertently, to make the model dumber, that's when there's really no reason to keep using Claude.
- I’ve definitely encountered a drop in Claude quality.
Even on a simple prompt focused on two files: I told Claude to do a thing to file A and not change file B (we were using it as a reference).
Claude’s plan was to not touch file B.
The first thing it did was alter file B. An astonishingly simple task, and a total failure.
It was all of one prompt, a simple task, and it failed outright.
I also had it declare that some function did not have a default value, and then explain what the function does and how it defaults to a specific value…
Fundamentally absurd failures that have seriously impacted my level of trust with Claude.
by airbreather
0 subcomment
- I am sort of in the same place; it seems to have lost enough of the magic that I might be better off trying to do more with local LLMs on my 4090.
The thing is, running local LLMs gives some kind of reliability and fixed expectations, which saves a lot of time. Sure, Claude might be fantastic one day, but what do I do when the same workload churns out shit the next and I am halfway through updating and referencing a 500-document set?
Better the devil you know and all that.
- One of the biggest problems with Claude is that it tries to do things I don't even ask for. I really like to have full control over what I do. Sometimes I feel Claude has an urge to keep going with what it is hard-coded to do instead of waiting for my feedback. It looks like Claude considers everything to be one-shot. I may be wrong; this is my personal experience.
- Discussions about Claude always omit important context: which language/platform you're using it for. It is best trained on web languages and has the most up-to-date knowledge for those.
If you use it for Swift it is trained on whole landfill of code and that gives you strong bias towards pre-Swift 6 coding output. Imagine you would give Claude a requirements for a web app, and it implements it all in JQuery. That’s what happens with other platforms.
by chaosprint
1 subcomments
- I bought a Claude membership a few days ago. I asked it to fix a React issue: a very simple UI modification with almost no logic. It still failed to understand it, and after three attempts the 5-hour limit was reached. This was a disaster. I immediately had to buy a Codex membership and also tried Image2. I won't give Claude another chance.
by lawrence1
1 subcomments
- The timeline of the first few sentences doesn't add up. How can you subscribe 2 weeks ago when the problem started 3 weeks ago?
- That's bad for him, because he already had a cheap plan. Now he won't get it back that easily.
Pro is gone. OpenAI plans are more expensive. He can only buy a Kimi plan, which is at least better than Sonnet. But frontier-for-cheap is gone. Even Copilot business plans are getting very expensive soon, also switching to API usage only.
- After the fixes in Claude Code, Opus 4.6/4.7 have been performing well.
Before the fixes, they were complete trash and I was ready to cancel this month.
Now, I'm feeling like the AI wars are back -- GPT 5.5 and Opus 4.7 are both really good. I'm no longer feeling like we're using nerfed models (knock on wood)!
- The usage metering is just so incredibly inconsistent: sometimes 4 parallel Opus sessions for 3 hours straight on max effort use up only 70% of a session; other times 20 minutes / 3 prompts in one session completely max it out. (Max x20 plan)
Is this just a bug on Anthropic's side, or is the usage metering completely opaque and arbitrary?
- I feel like Anthropic is forcing their new model (Opus 4.7) to do much less guesswork when making architectural choices; instead it prefers to defer decisions back to the user. This is likely done to mine sessions for reinforcement-learning signals, which are then used to make their future models even smarter.
- It's bad, really bad.
The filesystem tool cannot edit XML files with <name></name> elements in them.
- I've been very happy using Codex in the VScode extension. Very high quality coding and generous token limits. I've been running Claude in the CLI over the last couple of months to compare and overall I prefer Codex, but would be happy with either.
by hybrid_study
0 subcomment
- Sometimes it feels like Anthropic uses token processing as a throttling tool, to their advantage.
- Support? You expected support? Live support?
Most of this is about the billing system, which is apparently broken.
- Signup for all major providers (pro plan) and round-robin between all of them. This is the only way to protect against not having access to all of these heavily subsidised subscriptions. See what happened to Copilot.
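A minimal sketch of what such a rotation could look like in client code; the provider names and the `send()` stub are placeholders, not real API clients:

```python
from itertools import cycle

# Rotate prompts across every provider you subscribe to, so no single
# vendor outage, limit change, or rug-pull blocks your workflow.
providers = cycle(["anthropic", "openai", "google"])

def send(provider: str, prompt: str) -> str:
    """Stand-in for dispatching a prompt to the named provider's API."""
    return f"[{provider}] {prompt}"

for prompt in ["fix the bug", "write the tests", "review the diff"]:
    print(send(next(providers), prompt))
```

A real setup would also need per-provider retry and fallback logic, since the point of the rotation is resilience, not just load spreading.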
- If someone wants to move off Claude what are the alternatives? More importantly can another system pick up from where Claude left off or is there some internal knowledge Claude keeps in their configuration that I need to extract before canceling?
- I can agree. ChatGPT 5.5 made this a no-brainer choice. Anthropic are idiots removing Claude Code from the Pro plan. They need to ask Claude if what they did was a natural intelligence bug! Greed kills companies, too!
- Seems like some of the token issues may be corrected now
https://www.anthropic.com/engineering/april-23-postmortem
by sreekanth850
0 subcomment
- The biggest issue I see is that models are not getting more efficient. This is nowhere near getting commoditised. There is a limit to how long you can burn money at subsidised cost.
by DeathArrow
1 subcomments
- I use Claude Code with GLM, Kimi and MiniMax models. :)
I was worried about Anthropic models quality varying and about Anthropic jacking up prices.
I don't think Claude Code is the best agent orchestrator and harness in existence, but it is the most widely supported by plugins and skills.
by giancarlostoro
3 subcomments
- I'm torn, because I use it in my spare time, so I've missed some of these issues. I don't use it 9 to 5, but I've built some amazing things. When the 1-million-token context dropped, that was peak Claude Code for me; it was also when I suspect their issues started. I've built some things I'd been drafting in my head for ages but never had time for, and I can review the code and refine it until it looks good.
I'm debating trying out Codex; from some people I hear it's "uncapped", from others I hear they reached limits in short spans of time.
There's also the really obnoxious "trust me bro" documentation update from OpenClaw where they claim Anthropic is allowing OpenClaw usage again, but no official statement?
Dear Anthropic:
I would love to build a custom harness that just uses my Claude Code subscription. I promise I won't leave it running 24/7, 365. Can you please tell me how I can do this? I don't want to rely on some obscure tweet; make official blog posts or documentation pages that reflect policies.
Can I get whitelisted for "sane use" of my Claude Code subscription? I would love this. I am not dropping $2400 in credits on something I do for fun in my free time.
- I've seen a post like this every week for the last 2 years. Are these models actually getting worse? Or do folks start noticing the cracks as they use them more and more?
- Like many others, I've had a negative feeling lately that Claude Code is not as good as before.
What I don't understand are these loud "voting with money" comments. What they are canceling is a very subsidized plan for something that delivers a lot of value.
There are only two providers that can offer this level of model at a very subsidized price: Anthropic and OpenAI. Both of them are bad in terms of reliability.
So I wonder what these people do after they "cancel" both of them. Do they see producing less output at the same hourly rate as everyone else on the market as a viable option?
- I used Opus via Copilot until December and then largely switched over to Claude Code. I'm not sure what the difference is but I haven't seen any of these issues in daily use.
by nickdothutton
1 subcomments
- Switched to local models after quality dropped off a cliff and token consumption seemed to double. Having some success with Qwen+Crush and have been more productive.
- Off topic: I do feel like this model switching content feels very circa 2010 "I'm quitting Facebook"
- I ran prompts that used up a ton of usage and got no return; it just showed an error.
I asked support: hey, I got nothing back; I tried prompting several times, used a ton of usage, and it gave no response. I'd just like the usage back. What I paid for, I never got.
Just a bot response: we don't do refunds, no exceptions. Even in the case where they don't serve you what your plan should give you.
- If all Claude does is automate mundane code, why not just make a "meta library" of said common mundane code snippets?
- I hope codex doesn’t decline the same way
I’m blown away by how good it is lately
- We are in the 'we need to IPO so screw our customers' phase of the cycle
- I don't get it. I use Claude Code every day, what I would consider pretty heavy usage...at least as heavy as I can use it while actually paying attention to what it's producing and guiding it effectively into producing good software. I literally never run into usage limits on the $100 plan, even when the bugs related to caching, etc. were happening that led to inflated token usage.
WTF are y'all doing that chews tokens so fast? I mean, sure, I could spin up Gas Town and Beads and produce infinite busy work for the agents, but that won't make useful software, because the models don't want anything. They don't know what to build without pretty constant guidance. Left to their own devices, they do busy work. The folks who "set and forget" on AI development are producing a whole lot of code to do nothing that needed doing. And, a lot of those folks are proud of their useless million lines of code.
I'm not trying to burn as many tokens as possible; I'm trying to build good software. If you're paying attention to what you're building, there are so many points where a human is in the loop that it's unusual to run up against token limits.
Anyway, I assume that at some point they have to make enough money to pay the bills. Everything has been subsidized by investors for quite some time, and while the cost per token is going down with efficiency gains in the models/harnesses and with newer compute hardware tuned for these workloads, I think we're all still enjoying subsidized compute at the moment. I don't think Anthropic is making much profit on their plans, especially with folks who somehow run right at the edge of their token limit 24/7. And, I would guess OpenAI is running an even lossier balance sheet (they've raised more money and their prices are lower).
I dunno. I hear a lot of complaining about Claude, but it's been pretty much fine for me throughout 4.5, 4.6 and 4.7. It got Good Enough at 4.5, and it's never been less than Good Enough since. And, when I've tried alternatives, they usually proved to be not quite Good Enough for some reason, sometimes non-technical reasons (I won't use OpenAI, anymore, because I don't trust OpenAI, and Gemini is just not as good at coding as Claude).
by captainregex
0 subcomment
- anyone remember the whole “delete uber” thing from 2017ish? good times
by dannypostma
0 subcomment
- When I saw the German screenshot it all made sense to me.
by datavirtue
0 subcomment
- I have enterprise plans for all AI services except Google. GitHub Copilot in VS Code is the best I have used so far. I hear a lot of complaints from people who are holding it wrong. In a single day I can have a beautiful greenfield app deployed. One dev. One day. Something that would have taken weeks with two teams bumping into each other. It's fully documented. Beautiful code. I read the reasoning prompts as it flows by to get an idea of what is going on. I work in phases and review the code and working product quickly after that. Minimal issues.
I'm an executive; the devs complaining are getting retrained or put on the chopping block.
My rockstars are now random contractor devs from Vietnam. The aloof FTE grey beards saying "I don't know, it doesn't work very good on X" are getting a talking-to or being sidelined/canned. So far most of my grey beards are adapting pretty well.
I'm not waiting on people to write code any more. No way in hell.
by bad_haircut72
0 subcomment
- Waiting 60s every time I send a message really kills the UX of Claude
by spaceman_2020
0 subcomment
- 4.7 is the breaking point for me
It's almost unusable
by postepowanieadm
0 subcomment
- Yeah, session limits are kinda show stoppers.
- I just cancelled my Max20 plan yesterday.
- Did the same with Google AI Ultra. They rug-pulled the subscribers. They changed the deal, we cancel. Simple.
by varispeed
1 subcomment
- It also seems to me they route prompts to cheaper, dumber models that present themselves as e.g. Opus 4.7. Perhaps that's what "adaptive reasoning" is, aka we'll route your request to something like Qwen that says it's Opus. Sometimes I get a good model, so I've found I'll ask a difficult question first, and if the answer is dumb, I terminate the session and start again, and only then go with the real prompt. But there's no guarantee the model won't be downgraded mid-session. I wish they just charged the real price and stopped these shenanigans. It wastes so much time.
- Same, it's a mess.
- This sounds just like all my neighbors complaining about their internet provider.
- Codex is becoming such a good product. I have the $100 pro lite. I still have Claude, but at $20; I rarely use it. Let's see if they give generous limits and, more importantly, a model that's better than 5.5. The mythos fear-mongering did not give me a good impression that they care about the average developer.
- Very similar experience, although I didn't use Claude for anything in production. I did try some tests on a few topics and questions about things that I know, and while it initially works very well, as soon as you dive deeper you get all sorts of extra nonsense that was never asked for and isn't useful, just workarounds upon workarounds and duct-tape solutions. Several times I would say "no, why are you introducing xyz, that will cause this and that" only to get some version of "thanks for pushing back, you are right bla bla".
We probably hit peak generative AI last year; now they probably use AI to improve the AI, so it's kinda garbage in, garbage out. Or maybe Anthropic is deprioritizing users while favoring enterprise or even government, where it provides better quality for higher contracts.
- Maybe this is an unpopular opinion, but I think choosing which companies to support during this period of pre-alignment is one way to vote which direction this all goes. I'm happy to accept a slightly worse coding agent if it means I don't get exterminated someday.
by johanneskanybal
0 subcomment
- It's not magic, but for me Claude is definitely the way to go. Not expecting magic; it's just another level of non-slop compared to the rest I've tried.
by drivebyhooting
0 subcomment
- Imagine vibe coding your core consumer application and associated backend…
Oh wait, I don’t have to imagine. That’s what Anthropic does. A nice preview for what is in store for those who chose to turn off their brains and turn on their AI agents.
by kissgyorgy
0 subcomment
- I cancelled the minute my subscription stopped working in Pi. Not going back to the slopfest that Claude Code is.
- My main problem with Claude Code right now is observability. I've been experimenting a lot with vibe coding, but nowadays I can't even tell what it's doing. It's still delivering me value, but my trust in the company is going down and I've already started looking for alternatives.
by josefritzishere
0 subcomment
- AI has a lot of future potential but at every level... it's still not very good. And certainly not good enough to validate the expense, let alone what the actual cost would be were it profitable.
- Anthropic is astroscaling. We're essentially buying into a loop where speed and iteration take precedence over stability and support. If you view them as an experimental lab undergoing rapid atmospheric friction rather than a company, the "unreliability" is just the cost of being at the frontier. This is not an endorsement of Anthropic, just imagining their craziness as a demonstration of how you "can" grow in a fraction of the time.
by shevy-java
0 subcomment
- Those AI-using software developers are beginning to show signs of addiction: from "yay, claude is awesome" to "damn, it sucks". It's like withdrawal symptoms now.
My approach is much easier: I'll stay the oldschool way, avoid AI, and come up with other solutions. I am definitely slower, but I reason that the quality FOR other humans will be better.
by ForOldHack
0 subcomment
- I have token issues three times a day, and I just upgraded to Pro... and now this... now I cancel. My workflow was Copilot to Gemini to Claude Code... and the bottleneck was always CC. Always. I am done. It should be pretty easy to replace CC.
AI used to be the punched-card replicator... it's all replaceable.
by moralestapia
0 subcomment
- The midwit curve of LLMs has OpenAI on both ends.
by docheinestages
0 subcomment
- Me too.
by estimator7292
0 subcomment
- I just noticed today that it doesn't warn about approaching limits and just blows straight into billing extra tokens.
I'm pretty sure it used to warn when you got close to your 5hr limit, but no, it happily billed extra usage. Granted only about $10 today, but over the span of like 45 minutes. Not super pleased.
by GrumpyGoblin
0 subcomment
- Cool
- We can't do it. We standardized. They got us.
by semiinfinitely
0 subcomment
- Absolute garbage support was the reason I canceled. Who would have thought that an AI company would have only bots as support agents?
by whalesalad
1 subcomment
- I've spent thousands of dollars on API tokens in the last few months. Out of my own pocket, as an indie contractor. I used the API specifically instead of Pro/Max/Plus/Silver/Gold/Platinum/Diamond to avoid all of the mess there regarding usage resets and potential hidden routing to worse models. It worked great for months, I got a ton of shit done, shipped a bunch of features. I really began to rely on the tech. I was not happy about the cost, but the value proposition was there.
Then within the last few months everything changed and went to shit. My trust was lost. Behavior became completely inconsistent.
During the height of Claude's mental retardation (now finally acknowledged by the creators) I had an incident where CC ran a query against a massive, unpartitioned BQ table that resulted in $5,000 in extra spend, because it scanned a table that should have been daily-partitioned 30 times, at 27 TB per scan. I recall going over and over the setup and exhaustively refining confidence. After I realized this blunder, I called it out in the same CC session: "jesus fucking christ, I flagged this issue earlier". It responded, "you did. you called out the string types and full table scans and I said 'let's do it later.' That was wrong. I should have prioritized it when you raised it". Now obviously this is MY fault. I fucked up here, because I am the operator, and the buck stops with me. But this incident really drove home that the Claude I had come to vibe with so well over the last N months was entirely gone.
We all knew it was making mistakes, becoming fully retarded. We all felt and flagged this. When Anthropic came out and said, "yeah ... you guys are using it wrong, it's a skill issue" I knew this honeymoon was over. Then recently, when they finally came out and ack'd more of the issues (while somehow still glossing over how badly they fucked up?), it was the final nail. I'm done spending $ on the Anthropic ecosystem. I signed up for OpenAI pro at $200/mo and will continue working on my own local inference in the meantime.
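The $5,000 figure in the anecdote above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming an on-demand rate of $6.25 per TB scanned (an assumed round figure for illustration, not a quoted rate; actual pricing varies by provider, region, and edition):

```python
# Rough cost model for repeatedly full-scanning a large table,
# versus scans that prune down to a single daily partition.
PRICE_PER_TB = 6.25  # assumed on-demand $/TB; not an official rate


def scan_cost(tb_per_scan: float, num_scans: int,
              price_per_tb: float = PRICE_PER_TB) -> float:
    """Estimated query cost: bytes scanned times the per-TB rate."""
    return tb_per_scan * num_scans * price_per_tb


# 30 full scans of an unpartitioned 27 TB table:
full = scan_cost(27, 30)            # 810 TB scanned

# The same 30 queries, each hitting only ~1/30th of the data
# (one day's partition) instead of the whole table:
pruned = scan_cost(27 / 30, 30)     # 27 TB scanned in total

print(f"unpartitioned: ${full:,.2f}")   # unpartitioned: $5,062.50
print(f"partitioned:   ${pruned:,.2f}")  # partitioned:   $168.75
```

Under those assumptions the full scans land right around the $5,000 the commenter reports, while partition pruning would have cut the bill by roughly 30x, which is why the missing daily partitioning was the whole blunder.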
- Welcome to the future. Anthropic is currently speed running it but this is what all LLM tools are going to look like in the next few years, once they turn the enshitification corner.
- The great de-skilling programme continues in Anthropic's casino. They want you completely dependent on gambling tokens at their slot machines, with extortionate prices, fees and limits.
Anthropic can't even scale its own infrastructure operations, because the capacity doesn't exist and they don't have the compute, even while they're losing tens of billions and can nerf models whenever they feel like it.
Once again, local models are the answer, and Anthropic keeps getting you addicted to its casino instead of letting you run your own, cheaper slot machine and save your money.
Every time you go to Anthropic's casino, the house always wins.
- Same here. A single prompt burnt all my tokens for the day in 3 minutes. What happened to Claude in the last 2 months? I was happy with what they were providing and was happy to pay whatever for it. Why did they mess with it? Why are they destroying the tool we all loved?
I hate enshittification and I hate seeing this happening to Claude Code right now.
- I would love to just say that if you are using Claude Code, you should not be on Pro. I feel like all the people complaining are complaining that an agent can't handle the work of a developer for $20/mo. Get on at least Max 5; it's a world of difference.