My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else. There are no network effects, for sure, but people have hundreds or thousands of conversations on these apps that can't easily be moved elsewhere. It's understandable that it would be hard to get the majority of these free users to pay for anything, and hence advertising seems a good bet. You couldn't think of a more contextual way to plug in a paid product.
I think OpenAI has a better chance of winning on the consumer side than everyone else. Of course, whether that holds up against hundreds of billions of dollars in capex remains to be seen.
My hunch is that in five years we'll look back and see current OpenAI as something like a 1970s VAX system. Once PCs could do most of what a VAX could, nobody wanted one anymore. I have a hard time imagining that all the big players today will survive that shift. (And if that particular shift doesn't materialize, it's so early in the game that some other equally disruptive thing will.)
And so this goes back to my theory that OpenAI's strategy is basically to get itself into a position where the market cannot afford to let it implode. Basically, it wants to, or needs to, be too big to fail. And I think we're already seeing the politicization of this, if you will: a sort of rocket race between two superpowers, or at least large powers, on the AI front. I think that might be a viable strategy.
At least some of us on HN talk about limiting the data we give to Facebook, Google, Microsoft, etc. Isn't it just as important to limit what we share with non-privacy-preserving AIs?
Note: tech friends have asked me how I can use slightly weaker AI models and be happy about it. I still use Gemini Plus (and Anthropic via Antigravity) for technical work: everything I do as a software developer is open source and all of my writing (20+ books) is Open Content, so I don't care about privacy or being direct-marketed based on my tech work. To me it makes sense to use the best AI just for tech work and a private AI for everything else. Think about it: if a family member has a serious health problem, or something else private comes up, do you want to use open web searches and open AI chats, or do you want to use private web search and private AI access? Why not make privacy your default, except in special situations?
Anthropic's Claude has the best coding integrations; what would make sense is for them to focus on that segment.
Other AI companies don't have anything really compelling. Meta has a fully open-source model, but that's not particularly useful beyond helping them remain somewhat relevant, and it isn't market-leading.
1) The opportunities for vertical integration are huge. Anthropic originally said they didn't want to build IDEs, then realized the pivot to Claude Code was available to them. Likewise, once one of these companies can gobble up the legal, medical, and other verticals, why would they let companies like Harvey capture the margins?
2) OSS models are 6-12 months behind the frontier because of distillation. If the labs close their models, that gap will widen. And once vertical integration kicks off, the cost of being distilled goes up while the benefit of opening up generic APIs goes down.
I can imagine worlds where things don’t turn out this way, but I think folks are generally underrating the possibilities here.
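To make the distillation point above concrete: distillation trains a smaller (or open) student model to match a larger teacher's output distribution rather than just hard labels. A minimal numeric sketch of the classic temperature-softened objective, with made-up logits and an assumed temperature of 2.0 (nothing here comes from a real model):

```java
import java.util.Arrays;

public class DistillLoss {
    // Temperature-scaled softmax: a higher temperature flattens the
    // distribution, exposing more of the teacher's preferences over
    // non-top answers.
    static double[] softmax(double[] logits, double temperature) {
        double max = Arrays.stream(logits).max().orElse(0.0);
        double[] p = new double[logits.length];
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp((logits[i] - max) / temperature);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }

    // KL(teacher || student): the distillation loss penalizes the
    // student for diverging from the teacher's softened distribution.
    static double klDivergence(double[] teacher, double[] student) {
        double kl = 0.0;
        for (int i = 0; i < teacher.length; i++) {
            kl += teacher[i] * Math.log(teacher[i] / student[i]);
        }
        return kl;
    }

    public static void main(String[] args) {
        double T = 2.0; // softening temperature (illustrative choice)
        double[] teacherLogits = {3.0, 1.0, 0.2}; // made-up values
        double[] studentLogits = {2.5, 1.2, 0.1}; // made-up values
        double loss = klDivergence(softmax(teacherLogits, T),
                                   softmax(studentLogits, T));
        System.out.printf("distillation loss = %.4f%n", loss);
    }
}
```

A closed API that returns only final text, with no logits or token probabilities, forces would-be distillers to approximate this objective from sampled outputs, which is exactly why closing models raises the cost of distillation.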
From humanity's perspective, this doom scenario is actually optimistic: it says that the LLMs currently disrupting the platforms cannot themselves become the next platforms.
Maybe no one will have 'the ability to make people do something that they don't want to do' sort of power with this next stage in computing.
Sounds good to me.
As margins collapse, capex will collapse. Unfortunately, valuations have become so tied to AI hype that any reduction in capex will signal that maybe the hype has gotten ahead of itself, meaning valuations have gotten ahead of themselves. So capex keeps escalating.
None of this takes into account the hoarding effects at play in GPU acquisition. It's a really dangerous situation the industry is caught in.
(Aside, it's interesting how perceptions of these things have changed in one year: a whole article on OpenAI's future that makes no mention of AGI/ASI)
All of these can be moved away from immediately. Even with my GitHub repos, I use Antigravity, Claude Code, OpenCode, and I might try Codex. I use one of them as a primary more than the others, but they're as close to interchangeable as possible.
From what I can see, Anthropic's big bet is that they will solve computer use and be able to act as an autonomous agent. I'm not so sure how fast they will progress on that. OpenAI, on the other hand, I have no idea what they are planning; all I'm reading about is AI porn and ads.
Google seems to be lackluster at executing with Gemini, but they are in the best position to win this whole thing: they have so much data (their index of the web, YouTube, Maps) and so many ways to capitalize on the models. It's honestly shocking how bad they are at creating and monetizing AI products.
Today you have a phone in your pocket and apps on your home screen. Facebook is on your home screen; WhatsApp or X or Bluesky or whatever have a place on your home screen. Google is basically the Safari app on the iPhone. I don't know how many people have ChatGPT on their home screen. And soon there will be some AI on your home screen from Apple (served by Google or another big hitter), and that will be an incredible advantage.
That means OpenAI either needs to build up history with users very quickly and use that as stickiness before Apple nukes that distribution, or it needs to find a way to be another device that every living person has in their pocket.
Every attempt at doing that so far has been a comical failure and the way OpenAI are behaving makes me think their attempt will be no different.
I think this is clearly wrong. Users provide lots of data that is useful for making the models better, and that is already being leveraged today. Network effects seem likely in the future too. And they have several ways to build stickiness, including memory.
I would love to dunk on this or something, but the lesson is that it's all about distribution.
Sama is really good at that, and you also have to give him props for a lot of forward thinking, like the Orb, which now makes a lot of sense to me as non-Apple/Google proof of personhood.
They're already doing it, but wonder how far they'll take it.
I see the point Ben is making, even though there are a lot of nerdier innovations he's skipping over: credential management, APIs (.closest!), evergreen deployments, plugin ecosystems, privacy guards, etc.
One aspect that model execution and web browsers share is resource usage. A Raspberry Pi, for example, makes for a really great little desktop right up until you need to browse a heavy website. In model space there are a lot of really exciting new labs working on using milliwatts to do inference in the field, for the next generation of signal processing. Local execution of large models gets better every day.
The future is in efficiency.
For me, the choice is ChatGPT, not for Codex or other fancy tooling, just the chat. Not that Claude Code or Cowork is less important, and not that I prefer Codex over Claude Code.
I would argue ChatGPT is in the top 10 products of all time with regard to product-market fit.
This matters a lot to me, as I use AI as something of an ongoing project organizer, and not purely for specific prompts.
So at least for me, it would be a huge hassle to move to another platform, on par with moving from one note-taking app to another (e.g., Evernote to iA Writer).
I hear this, but every time I look the platforms have captured another use case that the startup ecosystem built (eg images, knowledge summarization, coding, music).
The sector is already littered with the corpses of innovators swallowed by the platforms' aggressive push to do it all.
Like, why do I STILL have to do taxes and accounting with external tools? Why doesn't OpenAI have their own tax filing service for the people?
OpenAI should just drop their API service and build everything themselves. It's exactly what they did with ChatGPT. Build thousands of things, not just a few.
What is the network effect of Google Search?
The White House has said it hasn't approved any sales, but it's not clear China is buying anyway, and it seems they are making good progress on their Huawei Ascend chips. If China is basically at parity on the full stack (silicon, framework, training, model) and starts open-weighting frontier models at $0.xx/M tokens, then yeah, moat issues all around, one would imagine? Not surprised to see Anthropic complaining like this: https://www.anthropic.com/news/detecting-and-preventing-dist... but I don't know how you go back from it at this point?
Google “How to send a GET request using Java”.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GetRequestExample {
        public static void main(String[] args) {
            // Define the URL
            String url = "https://api.example.com/data";

            // 1. Create an HttpClient instance
            HttpClient client = HttpClient.newHttpClient();

            // 2. Create an HttpRequest object for a GET request
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .GET() // Default method, but good to be explicit
                    .build();

            try {
                // 3. Send the request and receive the response synchronously
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

                // 4. Process the response
                System.out.println("Status code: " + response.statusCode());
                System.out.println("Response body: " + response.body());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Vs ChatGPT:
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GetRequestExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/data"))
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(
                    request,
                    HttpResponse.BodyHandlers.ofString()
            );
            System.out.println("Status: " + response.statusCode());
            System.out.println("Body: " + response.body());
        }
    }

ChatGPT's version is a bit clearer, but both are good.
It’s really Google’s race to lose, but we are talking about Google here. They’re very hit-or-miss outside of Search.
Personally, I only see Google (Gemini), X (Grok), and the Chinese models having a chance of still being alive in 1-2 years.
I really dislike this narrative where it's always China = bad, and US companies = good.
These labs all copy from each other. OpenAI and Anthropic have "distilled" each other's models too, and routinely poach key researchers from competitors. Not only that, there's evidence Sonnet 4.6 heavily distilled DeepSeek R1 as well; in fact, if you ask Sonnet 4.6 in Chinese who it is, it will tell you it's a DeepSeek model.
The Chinese labs are the only ones publishing papers on their models non-stop.
The whole AI race is built on blatant copyright infringement and on copying each other.
Give me an open source or non-American product that delivers the same quality, and I'll switch in an instant.
FWIW, this is how capitalism is supposed to work! Competition is driving AI forward at a fantastic pace!
First off, none of this open publishing of research: everything would have been trade secrets.
Next off, no interoperable JSON APIs; instead, binary APIs that are hard to integrate with and therefore sticky. Once you had spent 3 or 4 months getting your MCP server set up, no way would you ever try to change to a different vendor!
The number of investors was much smaller, so odds are you wouldn't have seen these crazy high salaries, and you wouldn't have people running off to different companies left and right. (I know, the .com boom, but the .com boom never saw $500k cash salaries...)
Imagine if Google hadn't published any papers about transformers or the attention paper had been an internal memo or heck just word2vec was only an internal library.
It has all been a net good for technological progress but not that good for the companies involved.
OpenAI has the best model, that is how they are going to compete.
Their chatbot business could be in trouble, but Gemini needs a LOT of work to make it better to use too.
Coding-wise, it has become very competitive. They need to sell better and sell aggressively.
That being said...
> The one place where OpenAI does have a clear lead today is in the user base: it has 800-900m users. The trouble is, those are only ‘weekly active’ users: the vast majority, even of people who already know what this is and know how to use it, have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use it a few times a week or less than to use it multiple times a day.
This quote really props up the whole argument, because the author goes on to say that OpenAI's users are not really engaged. But is "only" 5% of an 800-900M user base paying really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as he suggests.
Moving on to another section:
> If the next step is those new experiences, who does that, and why would it be OpenAI? The entire tech industry is trying to invent the second step of generative AI experiences - how can you plan for it to be you? How do you compete with this chart - with every entrepreneur in Silicon Valley?
Er, are any of these startups training foundation models? No? Then maybe that is how you compete? I suppose the author would say that the foundation model isn't doing much for OpenAI's engagement metrics (and therefore revenue), but I am not sure I agree there.
Still, a really good article. I think it crystallizes the anti-OpenAI argument, and it gives me a lot of interesting things to think about.
They'll have their guard down more often than the claudinistas and geminites, and be cheaper to somehow exploit.
I also think that more half-serious business ideas have initially been implemented against OpenAI's services, i.e., ones most likely to fail due to a lack of proficiency in making an organisation work, even if the core idea is sound and worth pursuing.
Anthropic is in favor with developers and tech people generally, while OpenAI / Gemini are more commonly used by regular folks. And Grok, well, you know…
We have yet to see who’s winning in the “creative space”, probably OpenAI.
As these positionings crystallize, each company is likely to double down on its users' communities, like Apple did when it specifically targeted creative/artsy people, instead of cranking out general models that aren't significantly better at anything.
Demo: https://chatjimmy.ai/
There is no way that number accurately reflects the number of actual human users of their service. I could believe they have 800-900m bot/fraud accounts in their databases, maybe, but not that many real users.