I pay for a lot of tools, but patterns like this leave me with a really bad impression.
> Traditional search engines were built for humans. They rank URLs, assuming someone will click through and navigate to a page. The search engine's job ends at the link. The system optimizes for keyword searches, click-through rates, and page layouts designed for browsing - done in milliseconds and as cheaply as possible.
> ... AI search has to solve a different problem: what tokens should go in an agent's context window to help it complete the task? We're not ranking URLs for humans to click; we're optimizing context and tokens for models to reason over.
I also want a search engine that ranks results by how useful they are to reason about, not by how well they can sell ads by provoking outrage or insecurity. And it would be better if unrelated information and fancy gimmicks were stripped from the page, the way Reader View does.

> The materials displayed or performed or available on or through our website, including, but not limited to, text, graphics, data, articles, photos, images, illustrations and so forth (all of the foregoing, the “Content”) are protected by copyright and/or other intellectual property laws. You promise to abide by all copyright notices, trademark rules, information, and restrictions contained in any Content you access through our website, and you won’t use, copy, reproduce, modify, translate, publish, broadcast, transmit, distribute, perform, upload, display, license, sell, commercialize or otherwise exploit for any purpose any Content not owned by you, (i) without the prior consent of the owner of that Content or (ii) in a way that violates someone else’s (including Parallel's) rights.
I agree there is a need for such APIs. Using Google or Bing isn't enough, and Exa and Brave haven't clearly solved this yet.
When an AI searches google.com for you, the ads never get shown to the user. Search engines like kagi.com are the future. You'll give the AI your Kagi API key and that'll be it. You won't even need cloud-based AI for that kind of thing! Tiny, local models trained for performing searches on behalf of the user will do it instead.
Soon your OS will regularly pull down AI model updates just like it pulls down software updates today. Everyday users will have dozens of models specialized for all sorts of tasks, like searching the Internet. They won't even know what the models are for or what they do, just like your average Linux user doesn't know what the `polkit` or `avahi-daemon` services do.
My hope: This will (eventually) put pressure on hardware manufacturers to include more VRAM in regular PCs/consumer GPUs.
I get that everyone wants to piggyback on the commonness of words, but it'd be a lot cooler if they _didn't_.
Obligatory: an information-dense format is valuable for humans too! But the entire Internet is propped up by ads, so it seems we can't have nice things.