No, llms.txt is not the ‘new meta keywords’

When Google’s John Mueller compared llms.txt to the old meta keywords tag, some corners of the SEO world read the comment as a dismissal of the concept – confirmation that llms.txt is overhyped, possibly even DOA.

To be fair, Mueller wasn’t claiming llms.txt works like meta keywords in a technical sense.

He pointed out that LLMs weren’t yet widely requesting it, so it just didn’t matter – which, at the time, wasn’t entirely unreasonable given the adoption trajectory.

But that was a month ago, and things can change a great deal in a few weeks.

His analogy also implied that llms.txt is gameable, but that’s no more true for it than for most of what SEOs already do.

The meta keywords tag made unverifiable claims – you could declare anything without proof, and people did. They abused it so much that Google eventually ignored it. 

llms.txt, on the other hand, curates a list of real URLs, and the content has to exist – and deliver – when the model gets there. It guides models to actual value, not self-declared relevance.

llms.txt is a proposed standard

Think of llms.txt as a treasure map for AI systems – one you draw yourself.

It is a simple, plain text (markdown) file placed at the root of your website that explicitly lists the specific URLs you want AI models to prioritize when accessing your site. 
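
Here’s a minimal sketch of what one can look like, following the format proposed at llmstxt.org – an H1 title, a short blockquote summary, and H2 sections listing links. The URLs and descriptions here are hypothetical:

```markdown
# Example Widgets

> Example Widgets sells modular widgets and publishes in-depth guides on widget selection and maintenance.

## Guides

- [Widget sizing guide](https://example.com/guides/sizing.md): How to choose the right widget dimensions
- [Maintenance schedule](https://example.com/guides/maintenance.md): Recommended service intervals by model

## Reference

- [Full product specs](https://example.com/specs.md): Dimensions, tolerances, and materials
```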

At inference time – the moment an LLM receives a question and retrieves content from the web to synthesize a response – the model may not immediately know where your best content lives. 

Without guidance, it might miss the perfect answer entirely. 

The llms.txt file acts as that guidance, letting you plant an X and say, “Start digging here.” 

It points to actual, high-quality content – no bluffing, no empty declarations – and ensures models find and use what you want them to.

Without it, models are forced to stumble through a site cluttered with bloated markup, popups, inconsistent navigation, and buried pages.

With llms.txt, you clearly guide them to what matters most, conserving their limited context windows and ensuring they extract the right information efficiently.

Standards take time

Yes, support is limited today. But standards evolve: robots.txt, schema, and sitemaps all took years to gain traction.

Perplexity already references structured summaries, and smaller tools are testing ingestion layers. Early adopters will be ready when it goes mainstream.

Even if you’re skeptical, it’s worth remembering that technologies we now consider foundational to good SEO started small and faced resistance. 

  • Robots.txt wasn’t respected at first. 
  • Schema.org took years to become widely adopted – and even now, it’s optional but valuable. 

Standards like these succeed because they solve real problems in a way people can use.

A better analogy

AMP optimized for a specific interface: the mobile web. 

llms.txt also optimizes for a specific interface: the LLM-driven answer layer. But unlike AMP, it doesn’t force you to duplicate your content in a different format.

It simply asks you to showcase your best work clearly and accessibly – and to make sure it’s actually there when the bot follows the map.

Interestingly, one of Mueller’s criticisms of llms.txt was that bots already download the full page anyway – so what’s the point of providing an alternate version? 

That complaint makes sense if you assume bots behave like Google or Bing – crawling and indexing everything. 

But that’s not how LLM-driven agents work. 

AMP, by the way, actually did require duplicate pages, and Google did look at both versions.

But llms.txt works differently because the LLM crawlers work differently: LLMs tend to drop into specific pieces of content – Mission Impossible-style – grab what they need, and leave. 

If they check llms.txt first, they can compare what’s listed as most important against the content they were about to fetch and decide whether another page is a better fit.

This context-first approach is exactly what llms.txt enables, which makes the critique ironically more applicable to AMP than to llms.txt.
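
To make that flow concrete, here’s a toy sketch in Python of the check-first behavior. This is hypothetical agent logic, not how any particular crawler is implemented:

```python
import re
import urllib.request

def fetch(url: str) -> str:
    """Download a URL as text (toy helper, no error handling)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def listed_urls(site: str) -> list[str]:
    """Pull the markdown link targets out of the site's llms.txt."""
    text = fetch(site.rstrip("/") + "/llms.txt")
    return re.findall(r"\[[^\]]*\]\((https?://[^)\s]+)\)", text)

def pick_page(site: str, candidate: str, topic: str) -> str:
    """Prefer a URL the site lists as important over the page the
    agent was about to fetch, if one looks like a better topical fit."""
    for url in listed_urls(site):
        if topic.lower() in url.lower():
            return url      # the map says: start digging here instead
    return candidate        # no better match; keep the original target

# Hypothetical usage: an agent answering a pricing question checks
# the map before committing its context window to a page.
# pick_page("https://example.com", "https://example.com/blog/some-post", "pricing")
```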

But even this analogy isn’t perfect. 

Where AMP was brittle, mandatory, and really only served a narrow group of publishers, llms.txt is optional, benefits everyone, and rides the AI wave. 

It doesn’t limit creativity or UX, nor does it create duplicate content headaches. Instead, it highlights what’s already good.

Can llms.txt be abused?

Nearly everything in SEO can be gamed. 

People abuse schema. People keyword-stuffed meta tags.

Some even tried to manipulate robots.txt. llms.txt isn’t more vulnerable – it’s just newer. 

Instead of trying to cheat, the best strategy is to curate clear, quotable, verifiable content and make it frictionless for models to find.

Even if someone tries to game it, the models will still arrive at the content and assess its quality. You can’t fake clarity, authority, or substance once the model is on the page.

What SEOs should actually do

Before dismissing llms.txt, remember: robots.txt isn’t “necessary” either, and neither is schema. 

llms.txt offers a fast, pragmatic shortcut: a clean, markdown-based map you control without redesigns or marketing fights.

You can even offer lightweight markdown alternates and point models to those, reducing server strain while improving clarity.

Many site owners now complain that LLMs are hammering their servers and consuming significant bandwidth. 

This is where llms.txt combined with markdown alternates becomes even more valuable. 

By creating clean, lightweight .md versions of your most important pages, specifying only those in your llms.txt, and explicitly denying LLM crawlers access to everything else, you can restrict agents to just those .md files – preserving bandwidth while still conveying your information clearly and efficiently.
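
As a sketch, the robots.txt side of that setup could look something like this. The bot names are real crawler tokens, but wildcard support varies by crawler, so treat the pattern rules as an assumption to verify rather than a recipe:

```
# Hypothetical robots.txt: steer AI crawlers to the map and the
# markdown alternates, and keep them off everything else.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
Allow: /llms.txt
Allow: /*.md$
Disallow: /
```

Under the longest-match rule most crawlers use, the Allow lines win for llms.txt and any URL ending in .md, while everything else stays off-limits to those agents.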

Let’s forget whether llms.txt is a ranking factor – that’s the wrong conversation. Instead, ask:

  • Is my best content structured for extraction?
  • Can an LLM quote this page without needing extra context?
  • Am I surfacing the content I want AI to find?

Use llms.txt to put that content on display. 

Think of it less like metadata and more like an AI-accessible storefront. 

The models are already coming. You can either let them fumble around… or hand them a map.

If you want to future-proof your visibility in AI-driven results, invest in clarity and structure now – and llms.txt is one way to do that without waiting for everyone else to catch up.

llms.txt as a spotlight

The comparison to the meta keywords tag undersells the moment we’re in. 

We’re no longer competing for rankings on a page of blue links. 

We’re competing for inclusion in answers, which requires structure, clarity, and intent.

That’s what llms.txt is for.

It’s not a wish list. It’s a spotlight. Use it to illuminate the work you’re proud to show the world.