Show HN: PrePrompt – rewrites vague prompts before they reach the LLM
5 points by yashdeeptehlan
by potter098
0 subcomments
This feels useful if it improves first-acceptance rate rather than just making prompts longer. I'd be curious whether you track how often people keep the rewrite as-is vs editing it back down, especially once the learned stack hints start to accumulate.
by gin_dev
0 subcomments
Hi! The idea sounds interesting; this is a fairly standard process. But my question would be: normally I do clarifications in dialog mode, asking the AI to add the needed details and correcting it toward the proper direction. Sometimes the AI's prompt extension goes in the wrong direction, so I would avoid automatic use myself.
What might be most interesting for me is the automatic calculation of "vagueness", and in that case automatically switching to that dialog clarification mode.
by yashdeeptehlan
1 subcomment
Hey HN, I'm Yashdeep — I built PrePrompt because I kept writing vague prompts and spending 3-4 turns correcting the model. I wanted something that fixed that automatically without changing my workflow.
The classifier runs pure heuristics in <1ms — no API call. It only fires Haiku when a prompt scores above 38. Simple questions like "what is jwt" get a score of -5 and pass through untouched.
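A minimal sketch of what a pure-heuristic scorer like this could look like. All rules, weights, and regexes here are hypothetical illustrations; only the threshold of 38 and the no-API-call property come from the comment above, and PrePrompt's actual scoring will differ:

```python
import re

THRESHOLD = 38  # per the comment above: the rewrite only fires above this score

# Hypothetical heuristic rules: (compiled pattern, weight added when it matches)
RULES = [
    (re.compile(r"^(what is|define)\b", re.I), -20),                     # simple lookup questions
    (re.compile(r"\b(fix|improve|make)\b.*\b(it|this|that)\b", re.I), 25),  # vague referents
    (re.compile(r"\b(somehow|something like)\b", re.I), 15),             # hedging language
]


def vagueness_score(prompt: str) -> int:
    """Pure string heuristics, no network or model call, so it runs in well under 1 ms."""
    score = 0
    if len(prompt.split()) < 5:
        score -= 10  # very short prompts are usually simple questions
    for pattern, weight in RULES:
        if pattern.search(prompt):
            score += weight
    return score


def should_rewrite(prompt: str) -> bool:
    """Only prompts scoring above the threshold trigger the LLM rewrite step."""
    return vagueness_score(prompt) > THRESHOLD
```

Under these made-up weights, "what is jwt" scores well below zero and passes through untouched, while something like "please fix it somehow so it works better for me" clears the threshold and would be sent for rewriting.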
The most interesting part: it learns your stack over time. After 50 prompts it knows you use FastAPI, prefer typed code, etc. and injects that into every optimization without being told.
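One plausible shape for that learning step, sketched here as a simple keyword tally. The marker list, class name, and top-3 cutoff are all assumptions for illustration; only the "after 50 prompts" behavior comes from the comment above:

```python
from collections import Counter

# Hypothetical stack keywords to watch for in past prompts
STACK_MARKERS = ["fastapi", "django", "react", "typed", "pytest"]


class StackProfile:
    """Sketch of accumulating stack hints: count framework/style keywords
    across prompts and surface the most frequent ones once there is
    enough signal (50 prompts, per the comment above)."""

    def __init__(self, min_prompts: int = 50):
        self.counts = Counter()
        self.seen = 0
        self.min_prompts = min_prompts

    def observe(self, prompt: str) -> None:
        self.seen += 1
        lowered = prompt.lower()
        for marker in STACK_MARKERS:
            if marker in lowered:
                self.counts[marker] += 1

    def hints(self, top: int = 3) -> list[str]:
        if self.seen < self.min_prompts:
            return []  # not enough history yet to inject hints
        return [marker for marker, _ in self.counts.most_common(top)]
```

The rewriter could then prepend `hints()` to every optimization prompt, which is one way the "you use FastAPI, prefer typed code" context could be injected without the user stating it.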
Happy to answer questions about the MCP hook architecture or the classifier scoring.