The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
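Concretely, "exposing functionality via MCP" means publishing tool descriptors that any agent the user brings along can discover and call. A rough sketch of the shape (the tool name and fields below are illustrative, not Calcapp's actual API):

    // Hypothetical MCP tool descriptor (illustrative name and fields, not a real Calcapp API).
    // An agent the user brings along can discover this and call it like any other tool.
    const evaluateFormulaTool = {
      name: "evaluate_formula",
      description: "Evaluate a spreadsheet-style formula against named inputs",
      inputSchema: {
        type: "object",
        properties: {
          formula: { type: "string" },   // e.g. "PMT(rate / 12, months, -principal)"
          inputs:  { type: "object" },   // named values the formula refers to
        },
        required: ["formula"],
      },
    };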
I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.
I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.
There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic. Those things allow less technical people to understand what the agents are creating, and ask for help with more specific areas.
The difficulty in the past has been 1) the amount of work it takes to build good direct manipulation tools (the level of detail you need to get to is overwhelming for most teams attempting it), though LLMs themselves now make these a lot easier to build, and 2) what to do when users hit the inevitable gaps in your visual system. Now LLMs fill those gaps pretty spectacularly.
Does anyone actually believe this is the case? I use LLMs to ‘write’ code every day, but it’s not the case for me; my job is just as difficult and other duties expand to fill the space left by Claude. Am I just bad at using the tools? Or stupid? Probably both but c’est la vie.
Back then, a domain expert could fire up either Delphi or Visual Basic 6, and build a program that was useful for them. If they needed more performance, they would hire a professional programmer who used their work as a specification, and sanded off the rough edges.
These days, Lazarus is the open-source follow-on to Delphi. It'll work almost anywhere. I've run it on a Raspberry Pi Zero W! The only downside is the horrible documentation.
Microsoft went off the rails with their push towards .NET, sadly.
Low-code has become especially important now with LLMs for several reasons, particularly in terms of stability, maintainability, security, and scalability.
If the same feature can be implemented with less code, the stability of the software improves significantly. LLMs work much better with solid abstractions; they are not great at coding the whole thing from scratch.
More code per feature costs more in terms of token count, is more error-prone, takes more time to generate, is less scalable, more brittle, harder to maintain, harder to audit... These are major negatives to avoid when working with LLMs... So I don't understand how the author reached the conclusion they did.
Usually the point of a library or framework is to reduce the amount of code you need to write, giving you more functionality at the cost of some flexibility.
Even in the world of LLMs, this has value. When it adopts a framework or library, the agent can produce the same functionality with fewer output tokens.
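A toy illustration of the token point (the package and function below are hypothetical, standing in for whatever library the agent adopts):

    // With a library the agent adopts, one feature is a couple of lines:
    import { renderDataTable } from "some-ui-kit"; // hypothetical package

    renderDataTable({ source: "orders", sortable: true, paginated: true, pageSize: 25 });

    // ...versus hundreds of hand-rolled lines of sorting, pagination, and markup,
    // each of which costs output tokens and adds another place to be wrong.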
But maybe the author means, "We can no longer lock in customers on proprietary platforms". In which case, too bad!
However, the underlying principles haven’t changed.
- Engineering bandwidth is minimally available for internal tools.
- Enterprise controls/guardrails are important needs.
- Bringing data into your app is a must-have.
- Maintaining code vs. low-code apps: low code has been a lot easier.
In a conversation with a CTO at a VC fund, he predicted that within 4-6 quarters you'll see demand back to peak in the low-code segment!
In a customer conversation: the customer built one tool with Cursor and was very successful, but by the time he started adding features for 2.0 everything started breaking, and he wanted to move back to low-code.
As a low-code vendor, we just added an internal tool-building agent that writes React code under the hood and leverages the platform's other core capabilities, giving users the best of both worlds.
But surely interesting times ahead for the category. Let’s see if it survives or dies!
My personal take: it will survive and converge with agentic AI!
Think about the low-code platform as a place to host applications where many (not all) of the operational burdens of long-term maintenance are shifted to the platform, so that developers don't have to spend as much time doing things like library upgrades, switching to the new framework X because the old framework is deprecated, etc.
Low-Code and the Democratization of Programming: Rethinking Where Programming Is Headed
https://www.oreilly.com/radar/low-code-and-the-democratizati...
You could try to generate the business tools straight from the conventional toolsets, but the problem is that agents are still far too unreliable for that. However, just like humans, if you dumb down the space and give them a smaller, simpler set of primitives, they can do a lot better.
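For example, instead of a general-purpose language you might hand the agent a handful of typed primitives and nothing else (the names below are invented, purely to show the shape):

    // A deliberately tiny surface (invented names): the agent can compose these,
    // but it can't reach for the filesystem, raw SQL, or arbitrary HTTP.
    type Primitive =
      | { kind: "lookupRecord"; table: string; where: Record<string, string> }
      | { kind: "sendEmail"; to: string; template: string }
      | { kind: "appendRow"; table: string; values: Record<string, unknown> };

    // Whatever the agent produces is just data, so it can be validated, diffed,
    // and shown to a non-technical reviewer before anything runs.
    function validate(steps: Primitive[]): boolean {
      return steps.every((s) => ["lookupRecord", "sendEmail", "appendRow"].includes(s.kind));
    }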
Speaking as someone who spent 8 years building nocode tools, had two exits, and stepped out of the industry last year: I’m not bitter, and I’m not cheerleading either.
For apps, where “nocode” is basically an app template / API template builder, it was always 50% useful, 50% marketing to sell you extra services. You still need an advanced-builder mindset: people who think like engineers but don’t want to write code. That’s a weird combo, and it’s really hard to find consistently.
For business logic, it’s almost the opposite. Nocode can give you a clean, visual UX—a clear map of how the logic is connected instead of a spaghetti mess in code. That value sticks around wherever “explain how this works” matters. Not everywhere, but definitely enough places for a real market.
A twist on that could be a hybrid that explains how it was built and has some quick controls, rather than just a prompt you type into, e.g. a no-code agentic UI.
For our startup, the low-code vs. LLM shift started out hugely frustrating and scary, but also hopeful. After years of dev, we were getting ready to launch our low-code app product #2, and then bam, ChatGPT 3.5 happened and LLMs stopped sucking so much.
We had to look at the future of our corner of the world -- bringing our tricky GPU graph investigation tech beyond the data 1%'ers at top gov/bank/tech/cyber investigation teams to something most teams can do -- and made the painful and expensive call to kill the low-code product.
The good news is, as a verticalized startup, the market still needed something here for the same reason we originally built it. LLMs just meant the writing was on the wall that market expectations would grow, as would what's possible in general. We correctly guessed that would happen and started building louie.ai. Ex: while we had previously viewed our low-code platform as doubling as a way for teams to write down their investigation flows so they could eventually do ML-powered multi-turn automations on them... we never dreamed we'd be speed-running investigation capture-the-flag competitions. Likewise, we're now years ahead of schedule on shedding the shackles of Python-first notebooks & dashboards.
So yeah, for folks doing generic low-code productivity apps, it's not great. n8n and friends had to reinvent themselves as AI workflows, and there's still good reason to believe that as agent experiences improve, they'll get steamrolled anyways... but...
Verticalized low-code workflow tools get to do things that are hard for the Claude Codes of the world. Today the coding environments are built better than what most of the non-AI-native vertical teams have, but the patterns are congealing and commoditizing. It'll be interesting as the AI side continues to commoditize and the vertical teams get better at it, at which point the verticals get much more valuable again. (And indeed, we see OpenAI and friends hitting ceilings on generic applications and having to lean in to top verticals, at least in the b2b world.)
Things I built for internal use pretty quickly:
- Patient matcher
- Great-UX CRUD for some tables
- Fax tool
- Referral tool
- Interactive suite of tools to use with Ashby API
I don't think these nocode tools have much of a future. Even using the nocode tool's version of "AI" was just the AI trying to finagle the nocode tool's featureset into getting where I needed it to be, failing most of the time. Much easier to just have Claude Code build it all out for real.
Just someone give me MS Access for the web with an SSO module and let me drive it.
That'd cover 99% of LOB app needs and allow me to actually get shit done without tools that dissolve in my hands or require hordes of engineers to keep running, and without having to negotiate with a bullshit generator to puke out tens of thousands of lines of unmaintainable JavaScript crap.
We have achieved nothing in the last 25 years if we can't do that. Everyone who entered the industry since about 2005 appears to have no idea how damn easy it was to get stuff actually done back then.
Low-code and LLMs can coexist: low-code can be just another layer (or, if you prefer, a more abstract programming language) that LLMs can use. You have less freedom, but more predictability and robustness, which is perfectly fine for internal tools.
To me, AI changes the inflection points of build vs. buy a bit for app platforms, but not as much for the other two. Ultimately, AI becomes a huge consumer of the data coming from impromptu databases, and becomes useful when it connects to other platforms (I think this is why there is so much excitement around n8n, but also why Salesforce bought Informatica).
Maybe low-code as a category dies, but just because it is easier for LLMs to produce working code doesn't make me any more willing to set up a runtime, environment, or the other details of actually getting that code to run. I think there's still a big opportunity to make running the code nice and easy, and that opportunity gets bigger if the barriers to writing code come down.
https://www.reddit.com/r/salesforce/comments/1hxxdls/unpopul...
What do you think an LLM is if not no/low-code?
And all the other components, such as MCPs, skills, etc.: this is all low-code.
And they all get plugged into a coherent system like Claude Code, Copilot, etc., which is basically a low-code interface. Sure, it doesn't come with a workflow-style designer, but it does the same thing.
As far as the vibe-coded projects go, as someone who has personally made this mistake twice in my career and promised to never make it again, sooner or later the OP will realise that software is a liability with and without LLMs. It is a security, privacy, maintenance, and general business burden, and a risk that needs to be highlighted in every audit and at every step.
When you start adding up the bills, all of these internal vibe-coded tools will run 10-20x the cost of the original subscriptions, paid indirectly.
Let AI build apps using these building blocks instead of wasting tokens reinventing the wheel on how interactive tables should work, which chart library to use, and how the frontend talks to the backend securely.
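As a hedged sketch of what building blocks like that could look like, the agent's entire output for an internal page might be a small declarative spec, with the platform supplying the table widget, the chart library, and the secure data layer (the format below is invented):

    // Invented format: the agent's whole output for an internal dashboard
    // might be this object; the platform supplies the table widget, the chart
    // library, and the authenticated data layer behind "source".
    const page = {
      title: "Overdue invoices",
      blocks: [
        { type: "table", source: "invoices", filter: { status: "overdue" }, sortBy: "dueDate" },
        { type: "chart", source: "invoices", groupBy: "customer", metric: "amount" },
      ],
    } as const;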
LLMs will make creating low-code apps as easy as normal apps. But there's one constraint: how extensible is the low-code framework?
I work on a 'low code' platform (not really, but close), and we do a lot of EDI. This requires a bunch of very common patterns, so we basically have a mini-DSL for mapping X12 and EDIFACT into other objects.
You guessed it, we have a diagram flow control tool.
It works; yes, I could write it in JavaScript too... but most of the 'flow control bits' really live inside a small sandbox. Of course, we let you kick out to a sandbox and program if needed.
But for the most part, yeah, a good mini-DSL gets us 90% of the way there, and we don't reach for programming too often.
So, it's still useful to abstract some stuff.
Could AI write it by hand every time? Yes... but you'd still want all the bells and whistles.
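To make the mini-DSL idea concrete (this is a hypothetical sketch, not our actual DSL, and the X12 positions are only illustrative), a mapping like that often boils down to declaring where each target field comes from:

    // Hypothetical mapping spec: each target field names the X12 segment and
    // element position it comes from, plus an optional transform. Positions are
    // illustrative, not a faithful 850 mapping.
    const purchaseOrderMap = {
      orderNumber: { segment: "BEG", element: 3 },
      orderDate:   { segment: "BEG", element: 5, transform: "yyyymmdd->iso" },
      lines: {
        repeat: "PO1", // one output line per PO1 loop
        fields: {
          quantity:  { segment: "PO1", element: 2 },
          unitPrice: { segment: "PO1", element: 4 },
          sku:       { segment: "PO1", element: 7 },
        },
      },
    };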
A strong advantage a platform like Retool has in the non-developer market is that it owns a frictionless deployment channel. Your average non-developer isn't going to learn npm and bash, and then sign up for an account on AWS, when the alternative is pushing a button to deploy the creation the AI has built from your prompt.
In a way, low-code has been the worst of both worlds: complex, locked-in, not scalable, expensive, with small ecosystems of support for self-learning.
(Context: I worked at AppSheet, which was acquired by Google in 2020.)
Also, I see great value in not having to take care of the runtime itself. Sure, with Claude Code I can write a Python script that does what I want much more quickly and effectively, but there is also a bunch of work to get it to run, restart, log, alert, auth…
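For a sense of what that "bunch of work" looks like, here's a rough generic sketch of the wrapper you end up writing yourself otherwise (retries, logging, a hook for alerting); nothing here is any particular platform's API:

    // Generic sketch (not any particular platform's API): the interesting logic
    // is one function; the rest is the operational wrapper you otherwise own.
    async function runJob(task: () => Promise<void>, attempts = 3): Promise<void> {
      for (let i = 1; i <= attempts; i++) {
        try {
          console.log(`[job] attempt ${i} starting`);
          await task();
          console.log("[job] done");
          return;
        } catch (err) {
          console.error(`[job] attempt ${i} failed:`, err);
          if (i === attempts) throw err; // this is where alerting would hook in
        }
      }
    }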
Is this a commonly held assumption?
Fascinating but not surprising given some of the AI-for-software development changes of late.
But if I can get my AI to use an off-the-shelf open-source flow orchestrator rather than manually coding API calls, that is better.
Really?