Some questions:
[Tech]
1. How deep does the modification go? If I request a tweek to the YouTube homepage, do I need to re-specify or reload the tweek to have it persist across the entire site (deeply nested pages, iframes, etc.)?
2. What is your test and eval setup? How confident are you that the model is performing the requested change without being overly aggressive and eliminating important content?
3. What is your upkeep strategy? How will you ensure that your system continues to work as intended after site owners update their content in potentially adversarial ways? In my experience LLMs do a fairly poor job at website understanding when the original author is intentionally trying to mess with the model, or has overly complex CSS and JS.
4. Can I prompt changes that I want to see globally applied across all sites (or a category of sites)? For example, I may want a persistent toolbar for quick actions across all pages -- essentially becoming a generic extension builder.
[Privacy]
5. Where and how are results being cached? For example, if I apply tweeks to a banking website, what content is being scraped and sent to an LLM? When I reload a site, is content being pulled purely from a local cache on my machine?
[Business]
6. Is this (or will it be) open source? IMO a large component of empowering the user against enshittification is open source. As compute commoditizes it will likely be open source that is the best hope for protection against the overlords.
7. What is your revenue model? If your product essentially wrests control from site owners and reduces their optionality for revenue, your arbitrage is likely to be equal to or less than the sum of site owners' loss (a potentially massive amount, to be sure). It's unclear to me how you'd capture this value, though, if open source.
8. Interested in the cost and latency. If this essentially requires an LLM call for every website I visit, it will start to add up. Also curious whether my cost will scale with the inefficiency of the sites I visit (i.e. do my costs scale with the size of the site's content?).
Very cool.
Cheers
Like some others here, Firefox is my daily driver and would look forward to anything you could bring our way.
https://web.archive.org/web/20041207071752/http://www.cnn.co...
Make every new page I load look like this, or a slightly cleaned up or mobile-specific version.
Also it turns out LLMs are already very good at just generating Violentmonkey scripts for me with minimal prompting. They're also great for quickly generating full-blown minimal extensions with something like WXT when you run into userscript limitations. These are kind of the perfect projects for coding with LLMs given the relatively small context of even a modest extension, and certainly of a userscript.
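As an illustration of how small these projects are, here's a minimal sketch of the kind of Violentmonkey userscript an LLM might produce. The Shorts selectors are guesses and would need checking against YouTube's current markup:

```javascript
// ==UserScript==
// @name         Hide YouTube Shorts (sketch)
// @match        https://www.youtube.com/*
// @grant        none
// ==/UserScript==

// Pure helper: turn a list of selectors into one stylesheet of hide rules.
function buildHideCSS(selectors) {
  return selectors.map(s => `${s} { display: none !important; }`).join("\n");
}

// Illustrative guesses; YouTube's custom elements change names over time.
const SHORTS_SELECTORS = [
  "ytd-rich-shelf-renderer[is-shorts]",
  "ytd-reel-shelf-renderer",
];

if (typeof document !== "undefined") {
  // Inject the rules once; CSS keeps working as the SPA re-renders.
  const style = document.createElement("style");
  style.textContent = buildHideCSS(SHORTS_SELECTORS);
  document.documentElement.appendChild(style);
}
```

Keeping the selector-to-CSS step in a pure `buildHideCSS` helper also makes the script testable outside the browser.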
I am a bit surprised YC would fund this as I think building a large business on the idea will be extremely difficult.
One angle I was/am considering that I think could be interesting would be truly private and personal recommendation systems using LLMs that build up personal context on your likes/dislikes and that you fully control and could own and steer. Ideally local inference and basically an algo that has zero outside business interests.
Since I also have to use Chrome for an extension I'm developing, I pinned Tweeks and will likely reach for it every so often to actually test how well it does, but the demos definitely impressed me.
Out of curiosity, how much, if any, of this did you vibe code?
Something in a similar vein that I would love would be a single feed you have control over, powered by an extension. You can specify in plain english what the algorithm should exclude / include - that pulls from your fb/ig/gmail/tiktok feeds.
Sadly (?) the only reliable way to website unfuckery is and will remain crowdsourcing by a bunch of nerds (see: Easylist) for the foreseeable future. This product is the opposite of that, with everyone having their private collection of prompts/tweaks, which they will have to individually fix every two weeks.
Also I find the founder journey interesting. What made you decide to pivot from AI Recruiting to an extension generator? Saw this https://www.ycombinator.com/launches/MvC-nextbyte-ai-recruit...
I created a rule to remove thumbnails and shorts from YouTube, and after a few failed attempts, it succeeded! But there were massive tracts of empty space where the images were before. With polish and an accessible way to find and apply vetted addons so that you don't have to (fail at) making your own, I would consider using it.
My daily driver is Firefox, where I've set up custom uBlock Origin cosmetic rules to personalize YouTube by removing thumbnails, shorts, comments, images, grayscaling everything except the video, etc. My setup works great for me, but I can't easily share it with other people who would find it useful.
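For flavour, cosmetic rules of the kind described look roughly like this (these selectors are illustrative guesses, not the commenter's actual list, and will drift as YouTube's markup changes):

```
! Hide shorts shelves, comments, and thumbnails (selectors are guesses)
youtube.com##ytd-reel-shelf-renderer
youtube.com##ytd-comments
youtube.com##ytd-thumbnail
! Grayscale the app shell via uBlock's :style() operator;
! exempting the player itself needs a further rule
youtube.com##ytd-app:style(filter: grayscale(1))
```

Rules like these live in uBlock's "My filters" pane, which is also why sharing a setup means pasting raw filter text around.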
Okay, this is really, really cool and is exactly my niche; as you mentioned, it's kinda a combination of things like Stylus/uBlock Origin filters and custom filters/etc. This is really needed: for example, GitHub code preview is completely and utterly fucked, to put it lightly. Stray symbols showing, not being able to select code properly without weird visual glitches happening... it requires a bunch of scripts to fix (https://github.com/orgs/community/discussions/54962).
What's your funding plan? You mentioned paid plan, but what's the actual benefit for users that they would pay for this? (I totally would, FWIW).
Do you foresee companies who need to build special widgets for random websites they use adopting this as a kind of "extension-lite" alternative? Your product reminds me of Retool (https://retool.com/), but for website tweaking.
Very cool product, love the ability to do "extra things" which will fix a whole bunch of websites I use everyday that I CBF'd either making an extension to fix or battling the uBlock/stylus filters.
Discoverability will also be needed, kinda like Karabiner-Elements complex modification rules (https://ke-complex-modifications.pqrs.org/)
edit: no firefox support, sadpants.
observe(document.body, {childList: true, subtree: true}) as a Service

I didn't realise how rough the UX around the userScripts API was, but your onboarding page does a good job of walking the user through it.
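That one-liner really is the workhorse for keeping tweaks alive on SPA-style sites. A sketch of the pattern, assuming a hypothetical `.promo-banner` selector for whatever the tweak removes:

```javascript
// Pure helper: did any mutation actually add nodes? Re-running the tweak
// on every attribute change would be wasteful.
function shouldReapply(mutations) {
  return mutations.some(m => m.addedNodes && m.addedNodes.length > 0);
}

if (typeof document !== "undefined") {
  const applyTweak = () => {
    // ".promo-banner" is a hypothetical selector for this sketch.
    document.querySelectorAll(".promo-banner").forEach(el => el.remove());
  };
  applyTweak(); // once on load...
  new MutationObserver(muts => {
    if (shouldReapply(muts)) applyTweak(); // ...and again on each re-render
  }).observe(document.body, { childList: true, subtree: true });
}
```

Without the observer, a single client-side navigation would bring the removed elements straight back.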
If you edit the userscript metadata in a tweek then share it, the original metadata is still displayed on the site and when you install.
You can cheekily add existing user scripts (used to test the above):
Hacker News: https://www.tweeks.io/share/script/97ba1102db5a46f88f34454a
Twitter: https://www.tweeks.io/share/script/1eed748ffbe74fce93c677ed
Are there plans to add the ability to preview the source before installing? Absent any other indicators that a tweek is legit, I'm never clicking that Install button without being able to check it first.
* Tried the obvious one, removing all Shorts sections and buttons from Youtube.com, it one-shotted it without apparent issues. Great!
* Tried a second one on YouTube: increase the information density, shrinking the size of each video entry so more entries can fit on my screen. This one was a fail: it took 180+ seconds (vs. around 60 seconds for the previous query), then some thumbnails got smaller (not all), while the physical space and padding for each entry was still there (so no real density was gained from the change)
* I think it'd be useful to be able to check the exact prompt that was used, so I can re-read it. It might even be interesting if it was editable, so I might decide to rephrase something from it. Otherwise, a chat-like interface would be interesting too, so the extension asks me what it interprets from my words before working to produce it. Right now, it feels like a very slow iteration process where I write something, it doesn't get it, and I keep trying to refine it and waiting around 60 to 100 seconds between results.
* I'd also like to see the actual filters that are being generated and applied on each change. This is so I can learn from them and probably edit them manually for refinement, as it can be faster to change a little bit in there, than trying to convey the exact phrasing of what I want to the LLM.
* This brought to mind the obvious (to me at least!) idea of how helpful it would be to have a uBlock Origin rule creator with the same kind of LLM help. Filter rules are so esoteric and complicated for me (a C++ backend dev) that I always spend several hours of reading and DOM analysis as soon as I want to do anything that's not as simple as using the extension's element picker.
* A collection of curated changes would be useful. My first instinct was to check if there's a gallery of the most common changes that people request to some popular websites. I guess this can be analyzed and trends can be discovered.
* All in all, this looks amazing. It's a very useful and really game-changing use of LLMs! Changing website contents was out of reach for the general public before, so this extension could become their door to that.
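On the failed density request above: shrinking thumbnails alone can't help, because the grid reserves space per item. A hypothetical hand-written fix would have to target the layout itself; element names and the `--ytd-rich-grid-items-per-row` variable below are guesses at YouTube's markup and may not match the current site:

```css
/* Force more items per row and trim per-item spacing.
   Names are guesses; YouTube's markup changes often. */
ytd-rich-grid-renderer {
  --ytd-rich-grid-items-per-row: 6 !important;
}
ytd-rich-item-renderer {
  margin: 4px !important;
}
```

A miss like this is exactly why being able to inspect and hand-edit the generated filters matters: one variable tweak can do what repeated re-prompting failed to.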
This could be thought of as an LLM reading a webpage for you without interfering with its operation and writing a new one without the crap.
After all, if training models on pirated books you haven't paid for is fair use[1], then transforming enshittified websites should be too. :P
1. https://www.reuters.com/legal/litigation/meta-says-copying-b...
I am imagining something slightly different perhaps? In the same way Pi Hole has a kind of global list of (ad) URLs to block, I am looking for an extension where all these edits to deshittify a site are applied for me automatically when I visit a site.
That is, if someone has already stripped out banners, etc. (deshittified a site) and (somehow?) "submitted" the edits, I just want to pull those in when I visit the same site.
I understand 1) one person's deshittifying might go too far 2) there will be multiple ways to deshittify a site depending on who does it, and 3) sites change and a deshittify strategy that worked earlier could break.
I have no good answers for the above issues.
A bit disappointed that it doesn't work on Firefox. Since Google banned uBlock Origin, I would think much of your core audience is on FF.
This is awesome work - I can already imagine using this to hide features I don't want to see on websites at certain times.
Granted, that's not user-friendly, so I don't suggest it for the typical person. I do think, though, that the typical person would come to love the sort of web that I experience, so it's cool that there's a plugin now. Also the AI scraping (eg on LI) is interesting.
What makes its results deterministic? Is it a "pick from a menu of curated transformations"?
What is the risk level of it generating something harmful? (Eg. That directly or inadvertently leaks data I don't want to leak)
How human-friendly are the transformations if I want to review them myself before letting what could amount to an AI-powered botnet run in my logged-in useragent?
If anyone is up for writing a front-end framework where you create building blocks for LLMs and then you can use LLMs to reshuffle your website, send me an email!
Looking forward to trying it and seeing how far it takes the vision. Tweeks looks capable of very cool and useful things :)
Please add a way on your site for us to keep tabs on you (email list, Twitter, etc.).
I think I am not fully understanding the use case yet.
I want to know what plugins or scripts other Hacker News users use to block annoying segments. Besides uBlock Origin, I use kill-sticky[1] to hide sticky items like dialogs or headers (though sometimes it's wrong), SponsorBlock[2] to skip sponsor segments, and DeArrow[3] to change YouTube thumbnails and titles to be less clickbaity. And I use Firefox's Reader View sometimes too.
[1] https://addons.mozilla.org/en-US/firefox/addon/kill-sticky/
[2] https://addons.mozilla.org/en-US/firefox/addon/sponsorblock/
[3] https://addons.mozilla.org/en-US/firefox/addon/youtube-recom...
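kill-sticky's trick is simple enough to sketch. A minimal version (my reconstruction, not the add-on's actual code) just removes anything whose computed position pins it to the viewport:

```javascript
// Pure helper: which computed position values count as pinned chrome?
function isStickyPosition(pos) {
  return pos === "fixed" || pos === "sticky";
}

if (typeof document !== "undefined") {
  // Walk every element and remove the ones pinned to the viewport.
  for (const el of Array.from(document.querySelectorAll("*"))) {
    if (isStickyPosition(getComputedStyle(el).position)) el.remove();
  }
  // Modal overlays often leave scrolling locked on <html>; restore it.
  document.documentElement.style.overflow = "auto";
}
```

The "sometimes it's wrong" caveat above follows directly from this approach: anything legitimately using `position: fixed` (lightbox images, some menus) gets removed too.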
Edit: And I just found the new Kagi AI-slop detection here on Hacker News. I'll definitely try it!
I think it's better to use Tampermonkey/Greasemonkey. Rules are deterministic, you have full control, and you don't have to worry about monetization or malicious data collection in the future.
There have been multiple incidents in the past of extensions like these being sold off to sketchy third party companies which then use the popularity to insert malware into folks' machines.
I really recommend against this. The AI spin doesn't add much, since most sites have had rules that work for years; they don't change that often. Please don't build up this type of dependence on a company for regular browsing.
Power users already know about customizing the Web with Greasemonkey, and those who don't really don't know why they would want this. It's trying to be all things to all people: it's an everything extension. You need to make this work BETTER than the free tools. And this is before even thinking about the legal grey area of modifying websites and then sharing those modifications.
>YC
>look inside
>ai slop
lol called it from the first line
The capitalist internet is the issue. Capitalist internet is enshittification.
I would like to see this in action, because in my crap-blocking attempts it never really lasts past a few weeks/months, since sites can change labels and tags so often.
Especially if it's something related to revenue.
"Facebook does not have a specific policy against Greasemonkey like extensions by name, but it has banned users for creating or using scripts that interfere with Facebook's functionality, which can include those made with Greasemonkey. Such actions are against Facebook's terms of service, which prohibit anything that could disable, overburden, or impair the proper working or appearance of the site.
Interfering with site functionality: Scripts, including Greasemonkey scripts, that alter how Facebook's pages load or work can be seen as a violation of the terms of service, which can lead to account suspension or banning. Examples of banned scripts: A specific example is the ban of the creator of the FB Purity add-on, which was a Greasemonkey script used to customize Facebook, say The Next Web."
Telling me to install an extension without ever telling me what that extension actually does is the most rookie move ever!
You don't de-encode.
I know linguistics is descriptive not prescriptive, but it's truly amazing to me the lengths people will go to swear.