
Why Brands Can’t Just Rely on SEO for Visibility

For more than two decades, the dominant mental model for finding information online has been search. Ask a question and a system retrieves documents. You look at links, rankings and sources.

As generative AI becomes the primary gateway for answers, many organizations are still applying this same logic. They assume large language models operate like faster, more conversational search engines – issuing queries, pulling relevant documents and summarizing what they find.

But this assumption is increasingly wrong.

In search, failure is visible. If a brand doesn’t appear, you can scroll, refine the query, or click to page two, three, four, and so on. You can see what ranked and what didn’t.

In generative AI, failure looks different. Ask a question and receive a fluent, confident answer – with no links, no rankings and no indication of what was considered and discarded. If a company, product or idea is missing, there is nothing to check. The answer simply moves on as if that absence were natural.

By default, large language models (LLMs) don’t behave like traditional search engines. When you ask them a question, they are not automatically sending a query out to the web or scanning a library of documents in real time. Instead, their first response comes from recall: generating an answer based on patterns, concepts, and associations the model has already internalized during training and reinforcement. Put simply, the answer comes from the LLM’s memory. 

This is not a database of facts, but a learned map of how concepts, ideas, entities and language fit together. The model draws on that map to decide what information it deems relevant, plausible or worth mentioning in response to a prompt.

Retrieval – looking something up on the web – is then layered on top of the recall process. Whether retrieval is used depends on the product, the prompt, the user’s settings and the model’s own confidence in what it already ‘knows.’ Increasingly, when the LLM uses retrieval, its outputs are woven seamlessly into a single fluent response, rather than surfaced as links or citations.

This shift – from retrieval-first systems to memory-first systems – fundamentally changes how visibility works in an AI-led world.

There are no rankings. No links. And no guarantee that a brand, concept, or company will appear at all.

Rethinking discovery in an AI-led world

Most people understand that search engines work by fetching information at query time. When you type a question, the system retrieves documents from a vast index, ranks them, and presents references for you to look through. Visibility is explicit and measurable: impressions, rankings, clicks.

However, LLMs work differently.

During training, they ingest enormous amounts of text and learn statistical representations of how language, concepts, entities, and relationships fit together. This training process forms an internal memory – not a database of facts.

When a user asks a question, the model does not automatically look things up. Instead, it generates an answer by drawing on this internal memory: what it considers plausible, relevant, and salient given the prompt.

Retrieval systems – such as web search, document lookup or tool use – can be layered on top. But they are not guaranteed to fire. They depend on the product, the user’s settings, the phrasing of the prompt, and the model’s own confidence in its internal knowledge. Even when retrieval is used, the internal memory decides which of the retrieved sources are actually cited in the response, rather than a traditional search ranking algorithm.

The result is a hybrid system in which memory dominates and retrieval serves as a patch for the memory's knowledge cutoff.

A new form of online invisibility

Treating AI as just another search surface leads to false assumptions. In search, if a brand is missing, it is usually because the page isn’t indexed, it ranks too low or it’s been outcompeted by better-optimized content. 

In generative AI, absence can happen for entirely different reasons. A brand may be weakly represented in the model’s internal memory, not strongly associated with the concepts the question activates, or simply not considered a ‘natural’ example to mention.

None of these failures produce an error message. None are visible in a dashboard. None are fixed by simply publishing more content.

This is creating a new form of invisibility: quiet disappearance from AI answers, without obvious warning signs.

Visibility is about being remembered

Early AI systems made retrieval visible. Answers came with links. Sources were clearly labeled. The boundary between what the model knows and what it looked up was easier to see. However, that is changing. As models improve, retrieval is increasingly the following:

  • triggered selectively
  • summarized rather than cited
  • merged seamlessly into responses

Ask an AI assistant a broad question like “How do companies approach AI visibility?” and you’ll get a confident, fully formed answer – categories, concepts, even examples – but no clear indication of what was recalled from the model’s internal knowledge and what, if anything, was retrieved live. To the user, it all appears as a single act of intelligence.

From a business perspective, this makes it far harder to tell why something was mentioned – or crucially, why it wasn’t.

The familiar tools of SEO – rankings, impressions, backlinks – struggle to describe what is happening here. Visibility is no longer about being retrieved. It is about being remembered.

Change the mental model

Importantly, model memory, like human memory, is not fixed.

It shifts as models are retrained, fine-tuned, and reinforced. Concepts gain or lose prominence. Associations strengthen or weaken. What felt obvious in one model version may vanish in the next. This is why brands sometimes see abrupt changes in how often – or whether – they are mentioned in AI answers, even when nothing about their own content strategy has changed.

From the outside, these shifts can look arbitrary. From inside the model, they reflect changes in internal representation. 

Understanding this dynamic requires a different mental model – one borrowed less from search marketing and more from systems thinking and data science.

In practice, this shifts the work upstream. Instead of optimizing pages and keywords, organizations need to ask how their brand exists inside the model’s internal picture of a category, what concepts it is linked to, which peers it is grouped alongside, and whether it is treated as a default answer or a peripheral one.

These questions sit beneath the interface – inside the learned interpretations – not on the surface of the internet.

A new visibility problem for businesses

If AI systems are memory-first, then the strategic question changes.

It is no longer just: “Can we be found?” It becomes: “How are we represented inside the model’s understanding of our category?”

That includes whether the model recognizes the brand at all, what it associates the brand with and whether it considers the brand a natural answer to category-level questions.

These aren’t surface-level optimization problems. They operate beneath the interface, inside the model’s learned structure.

For businesses, this means visibility risk is no longer confined to search results pages. It exists inside the systems increasingly used for recommendations, explanations, comparisons, and decision-making.

And because these systems speak with confidence, absence can be mistaken for irrelevance.

Search is no longer the whole story

None of this means search is disappearing. Retrieval still matters, and will continue to matter. But it does mean it is no longer the whole story.

AI-led discovery is becoming a layered system:

  • Memory provides the default answer space.
  • Retrieval intervenes selectively.
  • Presentation hides the distinction.
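The three layers above can be sketched as a toy pipeline. This is a hypothetical illustration, not any vendor's actual implementation: the function names, the hard-coded "memory," and the confidence threshold are all invented for the example.

```python
# Hypothetical sketch of a memory-first answer pipeline. The memory dict,
# confidence scores, and threshold are illustrative assumptions only.

def recall_answer(question: str) -> tuple[str, float]:
    """Stand-in for the model's internal memory: returns a draft answer
    and a confidence score learned during training."""
    memory = {
        "capital of france": ("Paris", 0.99),
        "latest ai funding round": ("(uncertain)", 0.30),
    }
    return memory.get(question.lower(), ("(unknown)", 0.0))

def retrieve(question: str) -> str:
    """Stand-in for live web retrieval; only fires when invoked."""
    return f"fresh documents about: {question}"

def answer(question: str, confidence_threshold: float = 0.6) -> str:
    # 1. Memory provides the default answer space.
    draft, confidence = recall_answer(question)
    # 2. Retrieval intervenes selectively, gated on the model's confidence.
    if confidence < confidence_threshold:
        evidence = retrieve(question)
        # 3. Presentation hides the distinction: retrieved material is
        #    folded into one fluent response, with no ranked links.
        return f"Based on {evidence}, here is an answer."
    return draft

print(answer("Capital of France"))        # answered purely from memory
print(answer("Latest AI funding round"))  # low confidence triggers retrieval
```

The key design point is the gate: retrieval is conditional on what memory already "knows," so anything absent from memory, and not confidently flagged as unknown, may never prompt a lookup at all.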

Organizations that continue to treat generative AI as “search, but conversational” risk optimizing for the wrong layer of the system. The companies that adapt fastest will be those that recognize the shift: from rankings to representations, from links to learned memory.

Because in an AI-first world, being searchable is no longer enough. You also have to be remembered.

Author

Francisco Vigo is the co-founder and CEO of geoSurge, a company that helps brands stay visible in AI-generated answers.
