A lot of posts about "vibe coding success stories" would have you believe that with the right mix of MCPs, some complex Claude Code orchestration flow that uses 20 agents in parallel, and a bunch of LLM-generated rules files, you can one-shot a game like this with the prompt "create a tower defense game where you rewind time. No security holes. No bugs."
But the prompts used for this project match my experience of what works best with AI-coding: a strong and thorough idea of what you want, broken up into hundreds of smaller problems, with specific architectural steers on the really critical pieces.
One thing I've noticed is many (most?) people in our cohort are very skeptical of AI coding (or simply aren't paying attention).
I recently developed a large-ish app (~34k SLOC) primarily using AI. My impression is that the leverage you get out of it scales steeply with the quality of your instructions, the structure of your interactions, and the amount of attention you pay to the outputs (e.g. for course-correction).
"Just like every other tool!"
The difference is the specific leverage is 10x any other "10x" tool I've encountered so far. So, just like every tool, only more so.
I think what most skeptics miss is that we shouldn't treat these as external things. If you attempt to wholly delegate some task with a poorly-specified description of the intended outcome, you're gonna have a bad time. There may be a day when these things can read our minds, but it's not today. What it CAN do is help you clarify your thinking, teach you new things, and blast through some of the drudgery. To get max leverage, we need to integrate them into our own cognitive loops.
I stopped coding a long time ago. Recently, after a few friends insisted I try out AI-assisted coding, I tinkered a bit. All I came up with was a bubble wrap popper and a silencer. :-)
The first commit[0] seems to have a lot of code, but no `PROMPTS.md` yet.
For example, `EnergySystem.ts` is already present on this first commit, but later appears in the `PROMPTS.md` in a way that suggests it was made from scratch by the AI.
Can you elaborate a bit more on this part of the repository history?
[0]: https://github.com/maciej-trebacz/tower-of-time-game/commit/...
In the old days, code reuse was an aspirational goal. We had collections of functions, libraries, etc., but the overhead of reusing specific lines of code, or patterns of lines of code, was too burdensome to be practical. Many tutorials have been published on how to create a tower defense game, meaning there are tons of sample code out there for this domain.
I would ask: given the amount of source material available, when we ask an LLM to generate code, is this really "AI" of any sort, or is it really a new kind of search?
It made me think that one of the things that it probably needs is a way to get a 'feel' for the game in motion. Perhaps a protocol for encoding visible game state into tokens is needed. With terrain, game entity positions, and any other properties visible to the player. I don't think a straight autoencoder over the whole thing would work but a game element autoencoder might as a list of tokens.
Then the game could provide an image for what the screen looks like plus tokens fed directly out of the engine to give the AI a notion of what is actually occurring. I'm not sure how much training a model would need to be able to use the tokens effectively. It's possible that the current embedding space can hold a representation of game state in a few tokens, then maybe only finetuning would be needed. You'd 'just' need a training set of game logs with measurements of how much fun people found them. There's probably some intriguing information there for whoever makes such a dataset. Identifying player preference clusters would open doors to making variants of existing games for different player types.
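To make the idea concrete, here is a minimal sketch of what such an encoding might look like, in TypeScript. Every type and field here is invented for illustration; the actual game's entities and state are surely different.

```typescript
// Hypothetical sketch: flattening visible game state into a compact token list
// that could be fed to a model alongside a screenshot of the frame.

interface VisibleEntity {
  kind: "tower" | "enemy" | "projectile";
  x: number;   // grid column
  y: number;   // grid row
  hp?: number; // remaining health, if applicable
}

interface FrameState {
  tick: number;
  energy: number;
  entities: VisibleEntity[];
}

// One short string per entity keeps the encoding cheap to tokenize:
// "<kind> @<x>,<y> hp=<hp>".
function encodeFrame(state: FrameState): string[] {
  const tokens = [`tick=${state.tick}`, `energy=${state.energy}`];
  for (const e of state.entities) {
    const hp = e.hp !== undefined ? ` hp=${e.hp}` : "";
    tokens.push(`${e.kind} @${e.x},${e.y}${hp}`);
  }
  return tokens;
}

// Example: a two-entity frame becomes a handful of short tokens.
console.log(encodeFrame({
  tick: 1200,
  energy: 35,
  entities: [
    { kind: "tower", x: 4, y: 7, hp: 100 },
    { kind: "enemy", x: 9, y: 7, hp: 12 },
  ],
}));
```

Whether a current model could use such tokens without finetuning is an open question, but the encoding itself is cheap to produce from the engine.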
Anyway what a world. It would have taken me weeks to create what an AI and myself are able to whip up in a few short, and fun, hours.
Giving a personality to Gemini is also a vital feature to me. I love the portability of the GEMINI.md file so I can bring that personality onto other devices and hand-tailor it to custom specifications.
I vibe coded a greenfield side project last weekend for the first time and I was not prepared for this. It wrote probably 5x more functions than it needed or used, and it absolutely did not trust the type definitions. It added runtime guards for so many random property accesses.
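As a hypothetical illustration of the pattern (not code from my project), this is the kind of guard it kept producing even when the type already guaranteed the property:

```typescript
// The type already guarantees these fields exist...
interface EnemyConfig {
  speed: number;
  health: number;
}

// ...but the generated code defends against them anyway.
function spawnEnemyGenerated(config: EnemyConfig) {
  const speed =
    config && typeof config.speed === "number" ? config.speed : 1;
  const health =
    config && typeof config.health === "number" ? config.health : 100;
  return { speed, health };
}

// What trusting the type definitions looks like.
function spawnEnemy(config: EnemyConfig) {
  return { speed: config.speed, health: config.health };
}
```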
I enjoyed watching it go from taking credit for writing new files and changes to slowly forgetting, after a few hours, that it was the one that wrote them ... repeatedly calling it "legacy" code and assuming the intent of the original author.
But yeah, it, Claude (no idea which one), likes to be verbose!
I especially found it funny when it would load the web app in the built-in browser to check its work, and then claim it had found the problem before the page even finished opening.
I noticed it's really obsessed with using Python tooling... in a typescript/node/npm project.
Overall it was fun and useful, but we've got a long way to go before PMs and non-engineers can write production-quality software from scratch via prompts.
> During this process I've learned a lot
Yes, but what exactly? I mean, I guess you don't have to touch the project once it's finished, so there is less value in familiarizing yourself with the source. The source is roughly 15,135 lines. That is quite a chunk, and it most likely would have taken more than 30 hours to write from the standpoint of someone who knows the basics of TypeScript and the Phaser library.
If you ever want to build this out in Unity, you should try https://www.coplay.dev/ for the AI copilot
Thanks for the game!
At the 20 minute mark, he decides to ask the AI a question. He wants it to figure out how to prevent a menu from showing when it shouldn't. It takes him 57 seconds to type/communicate this to the AI.
He then basically just sits there for over 60 seconds while the AI analyzes the relevant code and figures it out, slowly outputting progress along the way.
After a full two minutes into this "AI assistance" process, the AI finally tells him to just call a "canBuildAtCurrentPosition" method when a button is pressed, which is a method that already exists, to switch on whether the menu should be shown or not.
The AI also then tries to do something with running the game to test if that change works, even though in the context he provided he told it to never try to run it, so he has to forcefully stop the AI from continuing to spend more time running, and he has to edit a context file to be even more explicit about how the AI should not do that. He's frustrated, saying "how many times do I have to tell it to not do that".
So, his first use of AI in 20 minutes of coding, is an over two minute long process, for the AI to tell him to just call a method that already existed when a button is pressed. A single line change. A change which you could trivially do in < 5 seconds if you were just aware of what code existed in your project.
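For reference, the change being described amounts to roughly this. The method name `canBuildAtCurrentPosition` comes from the video; everything else here (the class, `onBuildButtonPressed`, `showBuildMenu`) is invented scaffolding to show the shape of the fix:

```typescript
class BuildSystem {
  canBuildAtCurrentPosition(): boolean {
    // ...existing logic already present in the project...
    return true;
  }

  private showBuildMenu(): void {
    // ...existing logic already present in the project...
  }

  onBuildButtonPressed(): void {
    // The suggested fix: gate the menu on the existing check.
    if (!this.canBuildAtCurrentPosition()) return;
    this.showBuildMenu();
  }
}
```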
About what I expected.