Many times I used to hit Claude's limits mid-task and switch to ChatGPT, thinking the no-limit thing would make up for it. But it's really annoying.
I ended up just being more deliberate about how I use Claude: longer, more complete prompts instead of back-and-forth, which naturally stretches the messages further. Not a perfect fix, but it changed the experience enough to stick with it.
I do head-to-head comparisons with this setup pretty regularly, and what I've found is that there is not much difference in outcomes between the two frontier labs at equivalent model settings. It's hard to get statistically significant results on my budget and eval ability, but my anecdotal feeling is that the variation in outcomes within a lab is about as large as the variation between them.
Given that setup, I use Codex much more than Claude because it's more reliable.
But I believe it’s easier to go from nothing to decent with Claude.
For other stuff I use Claude.
ChatGPT needs more prompting to get what you want, but it's nearly impossible to reach your limit.
- Claude Opus for general discussion, design, reviews, etc.
- Codex GPT-5.4 High for task breakdown and implementation.
I often feed their responses to each other (manual copy/paste) to validate and improve the design and/or implementation. The outcome has been better than using either one alone.
This workflow keeps Claude's usage in check (it doesn't eat as many tokens) and leverages Codex's generous usage limits. Although sometimes I run into Codex's weekly limit and need to purchase additional credits: 1000 credits for $40, which last another 4-5 days (these usually overlap with my weekly refresh, so not all the credits get used up).