by eddythompson80
4 subcomments
- For me, 2023 was an entire year of weekly demos that, looking back, were basically "Look at this dank prompt I wrote" followed by thunderous applause from the audience (which was mostly, but not exclusively, upper management)
Hell man, I attended a session at an AWS event last year that was entirely the presenter opening Claude and writing random prompts to help with AWS stuff... Like thanks dude... That was a great use of an hour. I left 15 minutes in.
We have a team that's been working on an "Agent" for about 6 months now. Started as prompt engineering, then they were like "no we need to add more value" and developed a ton of tools and integrations and "connectors" and evals etc. The last couple of weeks were a "repivot" going full circle back to "Let's simplify all that with prompt engineering and give it a sandbox environment to run publicly documented CLIs. You know, like Claude Code"
The funny thing is I know where it's going next...
- A long time ago a mentor of mine said,
"In tech, an expert is often someone who knows one or two things more than everyone else. When things are new, sometimes that's all it takes."
It's no surprise it's just prompt engineering. Every new tech goes that way - mainly because innovation is often adding one or two things to the existing stack.
- > just prompt engineering
This dismisses a lot of actual hard work. The scaffolding required to get SOTA performance is non-trivial!
Eg how do you build representative evals and measure forward progress?
Also, tool calling, caching, etc. are beyond what folks normally call “prompt engineering”.
If you think it’s trivial though - go build a startup and raise a seed round, the money is easy to come by if you can show results.
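For illustration, a minimal eval harness can be this small. Everything below is hypothetical: the task list, the checker functions, and the stub model standing in for a real API call.

```python
# Minimal eval harness sketch: run a fixed task set against a model
# and track the pass rate over time. `call_model` is a stand-in for
# whatever provider API the startup actually wraps.

def call_model(prompt: str) -> str:
    # Placeholder: a real harness would hit the LLM API here.
    return "4"

# A "representative eval" is just (input, checker) pairs, ideally
# drawn from real user traffic rather than toy examples.
EVALS = [
    ("What is 2 + 2? Answer with a number only.", lambda out: out.strip() == "4"),
    ("What is 10 / 2? Answer with a number only.", lambda out: out.strip() == "5"),
]

def run_evals() -> float:
    passed = sum(1 for prompt, check in EVALS if check(call_model(prompt)))
    return passed / len(EVALS)

score = run_evals()
print(f"pass rate: {score:.0%}")  # compare this number across prompt/model versions
```

The point is the loop, not the contents: a fixed task set plus a scalar score is what lets you tell whether a prompt change was forward progress or a regression.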
- Why is this post published in November 2025 talking about GPT-4?
I'm suspicious of their methodology:
> Open DevTools (F12), go to the Network tab, and interact with their AI feature. If you see: api.openai.com, api.anthropic.com, api.cohere.ai You’re looking at a wrapper. They might have middleware, but the AI isn’t theirs.
But... everyone knows that you shouldn't make requests directly to those hosts from your web frontend because doing so exposes your API key in a way that can be stolen by attackers.
If you have "middleware" that's likely to solve that particular problem - but then how can you investigate by intercepting traffic?
Something doesn't smell right about this investigation.
It does later say:
> I found 12 companies that left API keys in their frontend code.
So that's 12 companies, but what about the rest?
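To spell out the pattern in question, here is a minimal sketch of such middleware: the browser calls the startup's own server, and only that server holds the provider key. The endpoint shape and model name are illustrative; the upstream URL is OpenAI's documented chat completions endpoint.

```python
# Sketch of the "middleware" hop: the browser calls /api/chat on the
# startup's own server, and only this server knows the provider key.
# This is why DevTools on such a site shows the startup's domain,
# not api.openai.com - the interesting request happens server-side.

import json
import os
import urllib.request

def build_upstream_request(user_message: str) -> urllib.request.Request:
    # Key lives in the server's environment, never in frontend JS.
    api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # never shipped to the browser
            "Content-Type": "application/json",
        },
    )

req = build_upstream_request("hello")
print(req.full_url)  # the hop a frontend-only Network-tab check can't see
```

Any site built this way is invisible to the article's DevTools methodology, which is exactly the objection being raised.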
- But ... what else should they be doing? What's the expectation here?
For example, in the 90's, a startup that offered a nice UI for a legacy console-based system would have been a great idea. What's wrong with that?
- Isn’t this true for most startups out there, even before AI? Some sort of bundle/wrapper around existing technology? I used to audit companies, and we used a particular system that cost tens of thousands of dollars per user per year while we charged customers up to a million to generate reports with it. The platform didn’t have anything proprietary other than the UX; under the hood it was a few common tools, some of them open source. We could have created our own product, but our margins were so huge it didn’t make sense to set up a software development unit, or even bother with outsourcing it.
by gnarlouse
1 subcomments
- This post touches on something I realized the week after ChatGPT dropped.
If an AI company has an AGI, what incentive do they actually have to sell it as a product, especially if it’s a 10x cost/productivity/reliability silicon engineer? Just undercut the competition by building their services from scratch.
- That is lower than I expected. There are just a handful of companies that create LLMs, and they are all more or less similar. So all the automation is in using them, which is prompt engineering if you see it that way.
The bigger question is that this is the same story as apps on mobile phones. Apple and Google could easily replicate your app if they wanted to, and they did. That danger is much higher with these AI startups. The LLMs are already there in terms of functionality; the model creators have all figured out that the value is in vertical integration, and all of them are pursuing it. In that sense, all these startups are just showing them what to build. Even Perplexity and Cursor are in danger.
by goranmoomin
1 subcomments
- It is beyond annoying that the article is totally generated by AI. I appreciate the author (hopefully) spending effort trying to figure out these AI systems, but the obviously LLM-generated, unedited content makes me not trust the article.
by analogpixel
5 subcomments
- Where is this guy sitting that he is able to collect all of this data? And why is he able to release it all in a blog post? (my company wouldn't allow me to collect and release customer data like this.)
by aurareturn
4 subcomments
- Prompt engineering isn't as simple as writing prompts in English. It's still engineering: data flow, when data is relevant, the systems the AI can access and search, the tools the AI can use, etc.
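A rough sketch of that plumbing, with a hypothetical tool registry and a stubbed-out model standing in for a real LLM:

```python
# The "engineering" around the prompt: deciding which tools the model
# may call and routing its structured requests to them. The tool set
# and the fake model below are illustrative only.

def search_docs(query: str) -> str:
    return f"top result for {query!r}"

def get_weather(city: str) -> str:
    return f"sunny in {city}"

TOOLS = {"search_docs": search_docs, "get_weather": get_weather}

def fake_model(prompt: str) -> dict:
    # Stand-in for an LLM that emits a tool call as structured output.
    return {"tool": "get_weather", "args": {"city": "Oslo"}}

def run_turn(prompt: str) -> str:
    decision = fake_model(prompt)
    tool = TOOLS[decision["tool"]]   # validate the model's choice against the registry
    return tool(**decision["args"])  # execute the tool, return the observation

print(run_turn("What's the weather in Oslo?"))
```

The prompt string is one line of this; the registry, validation, and dispatch loop are the part that is actual engineering.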
- I can believe that many startups are doing prompt engineering and agents, but in a sense this is like saying 90% of startups are using cloud providers, mainly AWS and Azure.
There is absolutely no point in reinventing the wheel to create a generic LLM and spending a fortune to run GPUs while there are providers offering this capability cheaply.
by muppetman
2 subcomments
- This makes no sense to me. I don't understand why a company, even if it is using GPT or Claude as its true backend, would leave API calls in JavaScript that anyone can find. Sure, maybe a couple would, but 73% of those tested?
Surely your browser is going to talk to their webserver, and yup sure it'll then go off and use Claude etc then return the answer to you, but surely they're not all going to just skin an easily-discoverable website over the big models?
I don't believe any of this. Why aren't we questioning how the author is apparently able to figure out that some sites are using Redis, etc.?
- 73% of AI startups are building their castle in someone else's kingdom.
by CuriouslyC
1 subcomments
- The thing that drives me nuts is that most "AI Applications" are just adding crappy chat to a web app. A true AI application should have AI driven workflows that automate boring or repetitive tasks without user intervention, and simplify the UI surface of the application.
by Workaccount2
1 subcomments
- I'm surprised by the number of people who are running head first into AI wrapper start-ups.
Either you have a smash-and-grab strategy or you are awful at risk analysis.
by zach_moore
0 subcomment
- The reason is that VCs need to show that their flagship investments have "traction", so they manufacture ecosystem interest by funding and encouraging ecosystem product usage. It's a small price to pay. If someone builds a wrapper that gets 100 business users, that token usage gets passed down to the foundation layer. Big scheme.
- My question with these is always "what happens when the model doesn't need prompting?". For example, there was a brief period where IDE integrations for coding agents were a huge value add - folks spent eons crafting clever prompts and integrations to get the context right for the model. Then... Claude, Gemini, Codex, and Grok got better. All indications are that engineers are pivoting to using foundation model vended coding toolchains and their wrappers.
This is rapidly becoming a more extreme version of the classic "what if google does that?" as the foundation model vendors don't necessarily need to target your business or even think about it to eat it.
- This is a kind of global app store all over again, where all these companies are clients of only a few true AI companies and try to distinguish themselves within the bounds of the underlying models and APIs, just like apps were trying to find niches within the bounds of the APIs and exposed hardware of the underlying iPhones. API version bugs are now model updates. And of course, all are at the mercy of their respective Leviathan.
- Flagged. Please don't post items on HN where we have to pay or hand over PII to read it. Thanks.
it's wild, I work with some Fortune 500 engineers who don't spend a lot of time prompting AI, and just a few quick prompts like "output your code in <code lang="whatever">...</code> tags" — a trick that most people in the prompting world are very familiar with, but that virtually no one outside the bubble knows about — can improve AI code generation outputs to almost 100%.
It doesn't have to be this way and it won't be this way forever, but this is the world we live in right now, and it's unclear how many years (or weeks) it'll be until we don't have to do this anymore
by leeroy0xffffff
0 subcomment
- Interesting article and plausible conclusions but the author needs to provide more details to back up their claims. The author has yet to release anything supporting their approach on their Github.
https://github.com/tejakusireddy
by piyushpr134
1 subcomments
- 98% of all websites are just database wrappers
- https://archive.ph/Zjs2J
by michaelgiba
0 subcomment
- 73% of startups are just writing computer programs
- 5% prompt engineering, 95% orchestration. And no, you cannot vibe code your way to cloning my apps. If you have paid subscriptions, why aren't you doing it then? Oh, because models degrade severely past 500 lines.
LLMs are the new AJAX. AJAX made pages dynamic; LLMs make pages interactive.
by thethimble
0 subcomment
- 100% of startups are just software engineering
by tracerbulletx
0 subcomment
- I don't care how you get to a system that does something useful.
- 73% of statistics are wrong
- That's actually lower than I would have thought.
by Ozzie_osman
0 subcomment
- And 73% of SaaS companies are just CRUD.
Honestly, it sounds about right: at the end of the day, most companies will always be an interesting UI and workflow around some commodity tech, but that's valuable. Not all of it may be defensible, but it's still valuable.
by furyofantares
0 subcomment
- Another slop article that could probably be good if the author was interested in writing it, but instead they dumped everything into an LLM and now I can't tell what's real and what's not and get no sense of what parts of the findings the author found important or interesting compared to what other parts.
I have to wonder, are people voting this up after reading the article fully, and I'm just wrong and this sort of info dump with LLM dressing is desirable? Or are people skimming it and upvoting? Or is it more of an excuse to talk about the topic in the title? What level of cynicism should I be on here, if any?
- Maybe one day i can ask my tech in natural language for the weather...could you imagine?
Wait...nvm.
- And 99% of software development is just feeding data into a compiler. But that sort of misses the point, doesn't it?
AI has created a new interface with a higher level abstraction that is easier to use. Of course everyone is going to use it (how many people still code assembler?).
The point is what people are doing with it is still clever (or at least has potential to be).
- So? 73% of SaaS startups are DB connectors & queries.
by g42gregory
0 subcomment
- Isn’t it a bit like saying, “X% of startups are just writing code”?
by alex_young
0 subcomment
- 73% of AI blog post statistics are bogus. Subscribe to learn more.
by NaomiLehman
0 subcomment
- it was never about the software.
- Flagged. AI written article with questionable sources behind a wall that requires handing over PII.
- It’s because the LLM is a commodity.
What differentiates a product is not the commodity layer it’s built on (databases, programming languages, open source libraries, OS apis, hosting, etc) but how it all gets glued together into something useful and accessible.
It would be a bad strategy for most startups to do anything other than prompt engineering in their AI implementations for the same reason it would be a bad idea for most startups to write low-level database code instead of SQL queries. You need to spend your innovation tokens wisely.
- Yep, I just use ChatGPT. I can write better prompts and data for my own use cases.
- Atlas himself doesn't carry as much as "engineering" does in that headline.
- That's like saying "73% of business is just meetings"
- One of the biggest problems frontier models will face going forward is how many tasks require expertise that cannot be achieved through Internet-scale pre-training.
Any reasonably informed person realizes that most AI start-ups looking to solve this are not trying to create their own pre-trained models from scratch (they will almost always lose to the hyperscale models).
A pragmatic person realizes that they're not fine-tuning/RL'ing existing models (that path has many technical dead ends).
So, a reasonably informed and pragmatic VC looks at the landscape, realizes they can't just put all their money into the hyperscale models (LPs don't want that), and looks for start-ups that take existing hyperscale models and expose them to data that wasn't in their pre-training set, hopefully in a way that's useful to some users somewhere.
To a certain extent, this study is like saying that Internet start-ups in the 90's relied on HTML and weren't building their own custom browsers.
I'm not saying that this current generation of start-ups will be as successful as Amazon and Google, but I just don't know what the counterfactual scenario is.
by drivingmenuts
0 subcomment
- When people are desperate to invest, they often don't care what someone actually can do but more about what they claim they can do. Getting investors these days is about how much bullshit you can shovel as opposed to how much real shit you shoveled before.
Thus has it always been. Thus will it always be.
by IncreasePosts
1 subcomments
- Prompt engineering and using an expensive general model to prove your market, then putting in the resources to develop a smaller (cheaper) specialized model, seems like a good idea?
by Der_Einzige
0 subcomment
- And out of that 73%, 99% of them don't even do the obvious step of trying to actually optimize/engineer their damn prompts!
https://github.com/zou-group/textgrad
and bonus, my rant about this circa 2023 in the context of Stable Diffusion models: https://gist.github.com/Hellisotherpeople/45c619ee22aac6865c...
by ReptileMan
0 subcomment
- The really impressive thing about AI startups is not that they sell wrappers around (whatever), but that they are not complete vaporware.
by hn_throwaway_99
1 subcomments
- I decided to flag this article because it has to be fake.
The author never explains how he is able to intercept these API calls to OpenAI, etc. I definitely believe tons of these companies are just wrappers, but they'd be doing the "wrapping" in their backend, with only a couple (dumb) companies doing the calls directly to OpenAI from the front end where they could be traced.
This article is BS. My guess is it was probably AI generated because it doesn't make any sense.
by RobertDeNiro
1 subcomments
- People talk about an AI bubble. I think this is the real bubble.
- Wait til you hear what GPT-5 is
by DetroitThrow
0 subcomment
- Why is slop with ridiculous or impossible claims at the top of HN?
by senordevnyc
0 subcomment
- This is an AI slop article that sounds completely fabricated. Half of what's being claimed here isn't even possible to discern. My guess is that some LLM is churning out these 100% fake articles to get subscribers and ad revenue on Medium. Flagged.
- [dead]
by strathmeyer
0 subcomment
- [dead]
by theshetty
3 subcomments
- Prompt is code.
- 100% of AI startups are just multiplying matrices
100% of tech startups are just database engineering
It's still early in the paradigm and most startups will fail but those that succeed will embed themselves in workflows.