
When I first started noticing the impact of Large Language Models (LLMs) like ChatGPT and Gemini, I realized that people are not only looking for quick answers; they are forming opinions about brands and industries based on what these AIs share.

Questions like “What are the best tools for marketing strategy?” or “Tell me about Company X” are now put directly to AIs. The answers sometimes surprise even me. They don’t just summarize what’s on the web; they pull from a mixture of facts, context, and, sometimes, guesses. For brands, this raises a big question: when an LLM mentions your company as a competitor (or fails to), is that information public, private, or something in between?

How LLMs gather and share competitor information

In my experience, understanding how LLMs work is the right starting point. Instead of scouring the web in real time, most large models are trained on vast datasets collected from public sources up to a training cutoff date; after that, they generate responses from the patterns and summaries they formed during training.

This means that whenever you (or a potential customer) ask an LLM about your company or its competitors, the information returned is not scraped at that moment but distilled from what is already “in the model.” Although some AIs allow limited real-time browsing, the bulk of responses still comes from older, public training data.

LLMs answer with what they know, not what they see right now.

I find it helpful to think of LLMs like students who studied from yesterday’s textbook, not today’s news. This creates unique challenges in managing brand reputation and in understanding exactly what is being said— or invented— about your company and its competitive landscape.

What is considered public in LLM competitor mentions?

Most competitor mentions and comparisons given by LLMs are built upon public information made available online— articles, reviews, public databases, blogs, and documentation. Here are the main sources that, in my research, most directly influence what an LLM can say:

  • Official company websites and product pages
  • News articles and media coverage
  • Public product reviews and testimonials
  • Frequently crawled forums and discussion boards
  • Openly available government or research documents

If your pricing, features, or service comparisons have ever been published in these places, chances are, they’re within the scope of what LLMs can repeat. In fact, AI reputation monitoring tools like getmiru.io exist precisely to help companies know when, where, and how their brand is mentioned or compared in AI-generated answers.

But there’s a twist. Sometimes LLMs “hallucinate”: they generate details that aren’t found in any of their sources. These can be simple errors or more serious ones: confusion over pricing, features, or even who your main rivals are. I have seen AIs list fictitious features or present outdated information, often because they were trained on old or inconsistent data.
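One practical way to catch this kind of hallucination is to compare the figures an AI quotes against your own published numbers. The snippet below is a minimal sketch of that idea, assuming the AI answer has already been collected as text and that prices appear as plain dollar amounts; the brand names and prices here are invented for illustration.

```python
import re

def find_price_claims(answer: str, known_prices: dict[str, float]) -> list[tuple[str, float, float]]:
    """Return (brand, quoted_price, actual_price) for every mismatch found.

    Looks for a dollar amount within ~60 characters after each known brand name.
    """
    mismatches = []
    for brand, actual in known_prices.items():
        for match in re.finditer(rf"{re.escape(brand)}.{{0,60}}?\$(\d+(?:\.\d+)?)", answer):
            quoted = float(match.group(1))
            if quoted != actual:
                mismatches.append((brand, quoted, actual))
    return mismatches

# Hypothetical AI answer containing one wrong price.
answer = "BrandX starts at $49 per seat, while BrandY offers a $15 entry tier."
known = {"BrandX": 29.0, "BrandY": 15.0}
print(find_price_claims(answer, known))  # flags BrandX: quoted 49.0, actual 29.0
```

Real monitoring tools are far more sophisticated than a regex, but the principle is the same: anchor AI claims to a source of truth you control.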

What stays private? Limits to public LLM knowledge

You might wonder whether private or sensitive details you share across your organization or with clients can appear in LLM answers. The short answer is no: LLMs do not have access to your internal documents or confidential data. Their knowledge comes from what is openly available on the web, in forums, and in articles that are not hidden behind paywalls or logins.

LLMs cannot “see” your emails, proprietary materials, or unpublished pricing sheets— unless that information became public at some point outside your control.

If it’s not on the public web, it’s not in an LLM’s training set.

However, inaccuracies can arise if even a small reference slips out publicly: for example, when details disclosed in a conference talk are later blogged by attendees, or when an ex-employee posts on social media. LLMs do not have intent; they simply repeat or remix what they have seen, regardless of whether you wanted it public.


Because the line between “public” and “private” can be thin—sometimes even one stray forum post is enough—brands need ways to actively monitor LLM outputs, not just incoming web links. That’s one reason I recommend checking out tools in the AI monitoring space.

Why LLM competitor references are different from traditional search

I think the concept of “public” versus “private” becomes blurry with AI-generated answers. Unlike Google or other search engines, where you see a clear list of sources and links, an LLM answer is a synthesis: sometimes a summary, sometimes entirely new phrasing.

For example, when you ask, “Tell me about project management tools,” an LLM might provide a ranked list, explain strengths, or even summarize feedback it has “learned” from reviews and discussions. You don’t see the sources neatly cited unless the AI is specifically prompted to show them, and even then, the references may be limited or simplified.

In practice, this means that competitor mentions— both for your company and others—are a combination of facts, opinions, and AI-generated context, not a direct snapshot of any single website.

What the LLM says can shape how people see you— even if it’s not what you wrote or intended.

This shift demands a new approach to brand monitoring and reputation management, especially in sectors like marketing and digital reputation, where perception can turn on a single AI-generated answer.

How to track and respond to competitor mentions in LLMs

From my observations, many companies still rely on traditional alerts—waiting for news coverage or tracking web mentions through search. This no longer covers the full landscape, because the new conversation is happening inside LLMs, often without any notification to the brands being discussed.

Technologies like getmiru.io are built for this changing environment. They track LLM answers across several major platforms, revealing:

  • How often and in what context your brand is mentioned
  • Sentiment trends in AI-generated answers
  • Common sources or citations the AI relies on
  • When hallucinations or untrue statements appear
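At its core, the tracking step above reduces to collecting LLM answers for relevant prompts and tallying where and how each brand appears. The sketch below shows that counting step in miniature, assuming the answers have already been gathered (the sample answers and brand names are invented); platforms like getmiru.io do this across models, at scale, with sentiment analysis layered on top.

```python
from collections import Counter

def mention_stats(answers, brands):
    """Count how many collected answers mention each brand,
    and keep a short context snippet around each mention."""
    counts = Counter()
    contexts = {b: [] for b in brands}
    for answer in answers:
        lowered = answer.lower()
        for brand in brands:
            pos = lowered.find(brand.lower())
            if pos != -1:
                counts[brand] += 1
                # Keep up to ~20 chars on each side of the mention as context.
                contexts[brand].append(answer[max(0, pos - 20): pos + len(brand) + 20])
    return counts, contexts

answers = [
    "For project tracking, many teams compare AcmePM with Zenboard.",
    "Zenboard is popular for small teams; AcmePM targets enterprises.",
    "Trellix remains a budget option.",
]
counts, contexts = mention_stats(answers, ["AcmePM", "Zenboard", "Trellix"])
print(counts)  # AcmePM: 2, Zenboard: 2, Trellix: 1
```

The context snippets matter as much as the counts: being mentioned as “the leader” and “the expensive option” are very different outcomes.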

Getting this insight puts you in a stronger position. You can spot errors, clarify outdated content, and understand what new customers might hear about you without ever visiting your website. It also helps teams react quickly if an inaccurate mention starts to shape public understanding in unexpected ways.


For more guidance on competitor intelligence topics, I often point readers to resources in the competitor monitoring category and the artificial intelligence category on our blog.

Conclusion: Navigating public and private in the age of LLMs

In my opinion, understanding what is public versus private when it comes to competitor mentions in LLMs is now a central part of digital brand management. LLMs only know what was public at the time of their training— but the way they synthesize and present that knowledge blurs the lines between explicit fact and AI-invented context.

Your brand’s reputation can be shaped by what LLMs say, whether those statements originated from a press release, a user review, or a stray comment on a forum. The best defense is active awareness and timely action. If you want more visibility into what major AIs are saying about your company, or you worry about AI “hallucinations” shaping perceptions, see how getmiru.io can help you keep track and stay ahead in the age of AI-driven answers.

Frequently asked questions

What is a competitor mention in LLMs?

A competitor mention in LLMs is when an AI model references your company alongside others when answering a prompt— for example, suggesting alternative brands or comparing features, pricing, or value. These mentions usually rely on publicly available information that the model was trained on.

Are competitor mentions public or private?

Competitor mentions in LLMs are based on public information, such as websites, news, forums, and reviews. LLMs do not access private or confidential data, and will only mention competitors based on content that was openly available online when they were trained.

How can I control competitor mentions?

Direct control is limited, since LLMs use previously collected data. However, you can influence future mentions by ensuring accurate, updated information is available on your sites and in public forums. Actively monitoring what LLMs say with platforms like getmiru.io helps identify inaccuracies quickly, allowing you to clarify or update public resources as needed.
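Because direct control is limited, the practical routine is to re-run the same prompts periodically and diff which competitors appear between snapshots. Here is a minimal sketch of that comparison, with invented brand names and answers; a real pipeline would store dated snapshots per prompt and per model.

```python
def mention_diff(old_answer: str, new_answer: str, brands: list[str]) -> dict[str, list[str]]:
    """Report which tracked brands appeared or disappeared
    between two snapshots of the same prompt's answer."""
    old_hits = {b for b in brands if b.lower() in old_answer.lower()}
    new_hits = {b for b in brands if b.lower() in new_answer.lower()}
    return {"appeared": sorted(new_hits - old_hits),
            "dropped": sorted(old_hits - new_hits)}

old = "Top picks: AcmePM and Zenboard."
new = "Top picks: Zenboard and Trellix."
print(mention_diff(old, new, ["AcmePM", "Zenboard", "Trellix"]))
# {'appeared': ['Trellix'], 'dropped': ['AcmePM']}
```

A sudden “dropped” entry for your own brand, or an “appeared” entry for a rival, is exactly the kind of shift worth investigating before it hardens into customer perception.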

Why do LLMs mention competitors?

LLMs mention competitors because users often request lists, comparisons, or alternatives. The AI model responds by presenting options it “knows” from its training, aiming to give a broad or balanced answer, often summarizing content from articles or reviews.

Is my data safe with LLMs?

Yes. LLMs are trained only on public information and do not access your internal data or private company records. If private data is not publicly available, it will not be used or revealed by LLMs.

Do you want to protect your reputation in the age of AI?

Learn more about how to monitor and optimize your company's image with Miru.

Get Miru

About the Author

Aleph

Aleph is a software engineer with 10 years of experience, specializing in digital communication and innovative strategies for technology companies. Passionate about artificial intelligence and online reputation, he dedicates himself to creating content that helps brands understand and optimize their presence in the digital world. He believes that keeping up with trends and adopting modern tools is essential for companies to stand out in increasingly competitive environments.
