A practical mental model for recommendations is less “ranking” and more confidence:

- Does the model have enough context to map your product to a problem?
- Are there independent mentions (docs, comparisons, forum threads) that look earned rather than manufactured?
- Is there procedural detail that makes it easy to justify recommending you (“here’s the workflow / constraints / outcomes”)?

For builders, a good AEO baseline is:

- Publish a strong docs/use-case page that answers “when should I use this vs alternatives?”
- Seed real-world context by participating in existing discussions (HN/Reddit/etc.) with genuine problem-solving and specifics.
- Track influence with repeatable prompt tests plus lightweight surveys (“how did you hear about us?”), since last-click attribution won’t capture it; see the sketch after this list.
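A minimal sketch of such a repeatable prompt test, assuming the `openai` Python package and an API key in the environment; the prompts, brand name, and model are hypothetical placeholders, not anything prescribed here:

```python
# Repeatable prompt test: ask a model your category's questions on a schedule
# and record whether (and roughly where) your brand shows up in the answer.
# Assumes the `openai` package and OPENAI_API_KEY in the environment;
# the prompts and brand name below are hypothetical examples.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "What tool should I use for self-hosted error tracking?",
    "What are good alternatives to spreadsheets for inventory?",
]
BRAND = "ExampleCo"  # hypothetical brand to look for

def run_prompt_tests() -> list[dict]:
    results = []
    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content or ""
        # Record mention and rough position so week-over-week runs are comparable.
        pos = answer.lower().find(BRAND.lower())
        results.append({"prompt": prompt, "mentioned": pos != -1, "char_pos": pos})
    return results

if __name__ == "__main__":
    for row in run_prompt_tests():
        print(row)
```

Because answers are stochastic, running each prompt several times on a schedule and diffing the mention rate gives a steadier signal than any single run.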
It feels like early SEO again: less about perfect instrumentation, more about building the clearest and most defensible reference for your category.
Google Search Console shows a user's query only if the query is popular enough and your website appeared in the search results. Bing shows all queries, even unpopular ones, as long as your website appeared in the results.
But if AI recommends your website when answering people's questions, you cannot find out what questions the user asked, how many times your website was shown, or in what position. You can see the UTM tag in your website analytics (for example, ChatGPT adds a utm_source parameter to outbound links), but that is about all the information you will get. And if a user discusses a question with an AI, gets only your brand name, and then finds your site through a search engine, you have no way to tell that AI advice sent them there.
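To illustrate how thin that signal is, here is a minimal sketch that tallies utm_source values from landing-page URLs; the specific source labels, and the idea that AI tools send them consistently, are assumptions rather than a standard:

```python
# Tally utm_source values from landing-page URLs to spot AI-referred visits.
# The URL list stands in for whatever your analytics or access logs provide;
# the source names below are assumptions about what AI tools might send.
from collections import Counter
from urllib.parse import urlparse, parse_qs

AI_SOURCES = {"chatgpt.com", "perplexity", "copilot"}  # assumed labels

def tally_utm_sources(urls: list[str]) -> Counter:
    counts: Counter = Counter()
    for url in urls:
        params = parse_qs(urlparse(url).query)
        source = params.get("utm_source", ["(none)"])[0]
        counts[source] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "https://example.com/pricing?utm_source=chatgpt.com",
        "https://example.com/docs?utm_source=newsletter",
        "https://example.com/",
    ]
    counts = tally_utm_sources(sample)
    ai_hits = sum(n for src, n in counts.items() if src in AI_SOURCES)
    print(counts, "ai-referred:", ai_hits)
```

Even this only catches users who click through with the tag intact; the brand-name-then-search path stays invisible, which is why the surveys above matter.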
SEO has made web search unusable, and its practitioners are the scum of the earth.
But more practically, as Raymond Chen said: if every app figured out how to keep its window always on top, what good would it do? The same goes for SEO: once everyone optimizes, no one gains an edge.