AI DISCOVERY
What AI-answer share of voice actually means for app discovery.
The phrase that keeps coming up in new-client intro calls is "we're losing to AI." What they mean, when you push on it, is that the product research their users used to do on Google is now happening inside ChatGPT, Perplexity or Gemini, and the usual SEO instruments do not touch it.
That framing is half true. The mechanic is different, yes. But "AI share of voice" gets sold to clients as a new metric with a new playbook, and most of what we see marketed as the playbook is premature or wrong.
What actually changes
Google surfaces ten links. An LLM surfaces a single paragraph, in which your product is either named or it is not. There is no page two, no long tail of patient scrollers. This part is real and it matters.
What is also real, and less discussed, is that the answer is not deterministic. Ask the same model the same question twice and you will often get different lists. Change the system prompt, the client, the user's memory, the time of day, and the answer shifts again. Anyone selling you an "LLM ranking report" with position numbers is selling you a false object. Mentions are the only signal that generalizes, and even mentions are noisy enough that you need month-over-month trend, not an absolute score.
How we measure it
We run a set of queries per client (usually 30 to 50, covering category, intent and comparison phrasings), twice each, across ChatGPT, Perplexity and Gemini, with no memory or saved context. We score a mention, not a position. We report the direction of travel versus last month and versus the two or three competitors the client actually fights on the ground. That is all. The graphs look less impressive than the dashboards vendors are now selling, but they survive being questioned.
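The scoring step is deliberately simple. A minimal sketch of what "score a mention, not a position" means in practice, with hypothetical brand names and a made-up noise floor (the 30-to-50 query set and the twice-each sampling from above are assumed to have produced the list of answer texts):

```python
import re

def mention_rate(answers, brand, aliases=()):
    """Fraction of sampled answers that name the brand at all.

    We score presence, not position: free-text LLM answers have no
    stable ranking worth extracting.
    """
    names = [brand, *aliases]
    hits = sum(
        1 for text in answers
        if any(re.search(rf"\b{re.escape(n)}\b", text, re.IGNORECASE) for n in names)
    )
    return hits / len(answers) if answers else 0.0

def trend(this_month, last_month, noise_floor=0.05):
    """Direction of travel only; the absolute score is too noisy to report.

    The 5-point noise floor is an illustrative assumption, not a
    calibrated figure.
    """
    delta = this_month - last_month
    if abs(delta) < noise_floor:
        return "flat"
    return "up" if delta > 0 else "down"
```

With 60 to 100 sampled answers per client per month, `mention_rate` is coarse by design; the only output that survives the non-determinism described earlier is the `trend` label.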
What moves the number, as far as we can tell
Models tend to cite sources they treat as canonical, which in practice is a rough proxy for: old domains, domains heavily represented in the training data, and independent community discussion. In our work the following correlate with mention rate going up:
Real discussion on Reddit, Hacker News and category-specific forums. These surface in LLM answers more often than most clients expect. You cannot buy this. You can sometimes earn it by being useful in those threads under your own name.
Comparison content published by third parties that have their own audience. Your own "X vs Y" page helps a little. Someone else's helps a lot more.
Wikipedia, where your category has a page. If you are not on it, that is a gap worth addressing through the normal editorial process, not a PR stunt.
Factual, structured product documentation. Pages written to be quoted, not to convert.
None of this is a growth hack. All of it is slow.
What we do not know
We do not have clean attribution from "LLM mentioned you" to "user installed the app." Nobody we trust does. The honest position is that LLM mentions are a leading indicator we think matters, not a measured channel with a CAC you can optimize against. If someone tells you otherwise, ask where the attribution comes from.
We also do not know how durable any of this is. The model providers are already experimenting with sponsored placements inside answers. If that becomes a real inventory, a lot of the organic work above will get compressed. We assume our playbook here has a half-life and we price client engagements accordingly.