5 practical SEO experiments with AI as a co-pilot

User intent is evolving, and so are our habits around technology.

With the rise of AI, the ways people search and find information are diversifying fast. 

Naturally, the way we think about SEO is shifting, too.

But this isn’t a pitch for AI. 

Instead, I want to explore how we can treat AI as a collaborator, not a replacement for human expertise, to make our workflows more efficient and adaptive in this increasingly complex landscape.

I see AI as a telescope, not the North Star. It helps us see farther and move faster, but we still need to navigate the path ourselves.

With that mindset, I’ll walk you through a series of practical, low-barrier SEO experiments where generative AI acts as a co-pilot. 

No armies, no endless budgets, no risky tests – just focused, useful ways to get results.

5 SEO experiments where AI acts as a co-pilot

SEO has always involved waiting, even for basic actions. 

Publish content, wait. 

Implement internal links, wait. 

Fix the page load issue, wait. 

Manually test a theory, and sometimes spend weeks watching results unfold.

What changes when you add AI? 

You still have to wait to see performance. 

But there’s a difference: now you can ask the right questions up front, frame the experiment to anticipate the outcome, and make better-informed decisions along the way. 

AI adds speed and scale.

That sounds faster, more proactive, and more granular to me. If it sounds good to you too, let’s go!

1. Validating ideas before wasting dev time

Time and budgets are limited. 

That’s why it’s wise to validate ideas aimed at improving user experience (UX) and SEO performance before sharing them with stakeholders or the development team. 

After all, no one wants to waste time on a change that may not yield the expected results.

To ensure we don’t overburden our devs, I decided to run an A/B test-like process with Claude 3.7 Sonnet, which has:

  • Deep reasoning.
  • Structured outputs.
  • Extended memory support. 

I wanted to compare the current navigation bar with the version I believed would perform better. 

This would help us determine which version would lead to better user engagement and conversion rates, all without prematurely involving the dev team.

I began by feeding Claude information about the current and proposed navigation bar designs, along with data on the website, products, and our web content. 

It assessed both versions and then outlined their strengths, weaknesses, and potential impact on user engagement and conversions.
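For reference, here’s roughly what that kind of comparison could look like as an API call. It’s a minimal sketch using the Anthropic Python SDK; the model alias, prompt wording, and navigation descriptions are placeholder assumptions, not the exact setup from the experiment.

```python
# Minimal sketch of a navigation bar comparison prompt via the Anthropic Python SDK.
# The model alias, prompt text, and nav descriptions below are placeholder assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

current_nav = "Logo | Features | Pricing | Blog | Login"                        # placeholder
proposed_nav = "Logo | Products (dropdown) | Pricing | Resources | Free trial"  # placeholder

prompt = f"""You are a UX and SEO consultant for a SaaS website.
Compare these two navigation bar designs. For each one, outline its strengths,
weaknesses, likely impact on user engagement and conversions, and suggest improvements.

Current design: {current_nav}
Proposed design: {proposed_nav}
Context: SaaS products for web designers; the main conversion goal is signups.
"""

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # assumed model alias
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```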

Disclosure: I work at Designmodo, the SaaS company referenced in this experiment.

Claude also gave some recommendations to improve the design even more, which helped me refine the idea before bringing it to the team. 

After implementing the new version of the navigation bar, we saw a significant increase in user engagement and conversion rates, confirming that the decision to invest time in the changes was the right one. 

2. Content optimization experiment

I’m sure we all have websites we’re confident in. They:

  • Check all the quality boxes.
  • Serve intent.
  • Have performed well in the past. 

But after a while, for some reason, they stop performing so well. 

Maybe user intent has shifted, competitors published better-formatted content like lists, tables, and comparisons, or an algorithm update caused it.

In situations like this, we usually audit the content and the SERP and revise the content based on its current ranking. 

For this experiment, I decided to let AI assist me in that process.

I used Gemini’s Deep Research to:

  • Review the top-ranking pages for a specific query.
  • Predict which content formats were most likely to succeed.
  • Identify factors affecting performance.
  • Determine patterns that are consistently successful. 
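Deep Research runs inside the Gemini app rather than through code, but the first step, checking which formats dominate the top results, can also be roughed out by hand if you want to sanity-check its findings. Here’s a hypothetical sketch that counts format signals (headings, tables, lists) on a handful of top-ranking URLs; the URL list is a placeholder you’d fill in from your own SERP check.

```python
# Hypothetical sketch: count content-format signals on top-ranking pages.
# The URLs are placeholders; paste in the current top results for your query.
import requests
from bs4 import BeautifulSoup

top_urls = [
    "https://example.com/competitor-article-1",
    "https://example.com/competitor-article-2",
]

for url in top_urls:
    html = requests.get(url, timeout=10, headers={"User-Agent": "format-audit"}).text
    soup = BeautifulSoup(html, "html.parser")
    stats = {
        "words": len(soup.get_text(" ", strip=True).split()),
        "h2_headings": len(soup.find_all("h2")),
        "tables": len(soup.find_all("table")),
        "lists": len(soup.find_all(["ul", "ol"])),
        "images": len(soup.find_all("img")),
    }
    print(url, stats)
```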

I filtered what made sense and shared it with the content team.

Content optimization experiment with Gemini

This is a practical example of using AI in this type of experiment.

We made the article more skimmable by:

  • Improving its structure.
  • Clarifying key sections.
  • Refining the overall format to better align with user expectations.

Within two weeks, impressions jumped. 

And after two months, we noticed our revised content being cited in an AI Overviews summary.

Content on AI Overviews

Could I have done this analysis manually? 

Sure, I’ve been doing it for years. 

But this experiment allowed me to see how AI can support fast, focused reverse-engineering. And it worked.

3. New page indexing speed experiment

Understanding how fast different platforms index or surface content can help prioritize which pages need attention first. 

To explore this, I ran an experiment to compare how quickly and selectively traditional search engines and generative AI platforms discover new content.

I published 10 pages of different types on one of my test websites, all going live at 4 p.m. on a Saturday.

I didn’t submit these pages for indexing on Google or Bing. Instead, I waited to see which platforms would find them organically. 

Meanwhile, I shared the pages on several social media platforms.

  • Surprisingly, Bing was the first to index the pages, doing so in just 38 minutes. 
  • ChatGPT began surfacing two of the pages within relevant responses after about two hours.
  • Perplexity was even faster in some cases, showing six of the pages within three hours.

Six hours later, Google Search Console’s URL Inspection tool reported that 8 of the pages were indexed. 

But when I checked with the site: operator, I could only see five of them. 

By the next day, though, all 10 were indexed and visible in Google Search.

Below you can see all the results: 

Indexed page results per platform

For the second part of the experiment, I brought AI into the scene.

I asked ChatGPT (using the Reason function) to help analyze the crawl and indexing results, sharing the data I had and prompting it accordingly.
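If you want to hand the model structured data rather than raw notes, the prep can be as simple as this hypothetical pandas sketch: publish and first-seen timestamps per page, platform, and page type, turned into an hours-to-discovery table. The values below are placeholders, not my actual results.

```python
# Illustrative sketch: turn discovery timestamps into hours-to-discovery per platform.
# All rows below are placeholder values, not the real experiment data.
import pandas as pd

observations = pd.DataFrame([
    ("/tools/x", "tool page", "Bing",       "2025-03-01 16:00", "2025-03-01 16:38"),
    ("/tools/x", "tool page", "Google",     "2025-03-01 16:00", "2025-03-01 22:05"),
    ("/blog/y",  "blog post", "Perplexity", "2025-03-01 16:00", "2025-03-01 18:40"),
], columns=["page", "page_type", "platform", "published_at", "first_seen_at"])

for col in ("published_at", "first_seen_at"):
    observations[col] = pd.to_datetime(observations[col])

observations["hours_to_discovery"] = (
    observations["first_seen_at"] - observations["published_at"]
).dt.total_seconds() / 3600

# Average discovery time by page type and platform - the kind of table worth sharing.
print(observations.pivot_table(index="page_type", columns="platform",
                               values="hours_to_discovery", aggfunc="mean"))
```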

The result was impressive: a faster understanding of the correlation between structure, page type, and visibility across engines and generative AI platforms. 

I then used those outputs to prioritize fixing the pages that were slow to be indexed or surfaced. 

Keep in mind that indexing and visibility speed can vary widely depending on factors like a website’s authority, age, internal and external link profile, etc. 

So, the results of this kind of experiment will naturally differ for each website.

New page indexing speed experiment - Key findings

4. Crawlability priority scoring experiment

Sometimes a page that should perform well doesn’t – simply because it isn’t crawlable and therefore can’t be discovered. 

Reviewing log files is one of the best ways to diagnose this issue.

But instead of combing through thousands of lines manually, I decided to run an experiment and use ChatGPT’s Advanced Data Analysis function as my co-pilot. 

It scanned for patterns and summarized issues like: 

  • Orphan pages that bots had never visited.
  • Pages consuming crawl budget.
  • Slow-loading pages.
  • Unusual spikes or drops in crawl activity.

Crawlability priority scoring experiment
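For context, here’s roughly the shape of that analysis, sketched in pandas. The log path, the standard combined log format, and the sitemap file are assumptions; ChatGPT’s Advanced Data Analysis did the equivalent work on the actual log export.

```python
# Rough pandas sketch of the log-file checks; file paths and log format are assumptions.
import re
import pandas as pd

LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\S+).*"(?P<agent>[^"]*)"'
)

rows = []
with open("access.log") as f:                     # placeholder path
    for line in f:
        match = LOG_LINE.search(line)
        if match:
            rows.append(match.groupdict())

log = pd.DataFrame(rows)
log["time"] = pd.to_datetime(log["time"], format="%d/%b/%Y:%H:%M:%S %z", utc=True)
bots = log[log["agent"].str.contains("Googlebot|bingbot", case=False)]

sitemap_urls = set(open("sitemap_urls.txt").read().split())   # placeholder URL list

never_crawled = sitemap_urls - set(bots["url"])            # pages bots never visited
crawl_hogs = bots["url"].value_counts().head(20)           # heaviest crawl-budget consumers
daily_hits = bots.set_index("time").resample("D").size()   # spikes or drops in crawl activity

print(f"{len(never_crawled)} sitemap URLs never crawled by search bots")
print(crawl_hogs)
print(daily_hits)
```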

In the end, I had a prioritized list of crawl-related issues. 

Rather than spending hours diagnosing problems manually, I was able to use that time to fix them. That’s exactly what a co-pilot should do – make the process easier, not take it over.

5. Content velocity index experiment

We know that publishing fresh content regularly helps maintain visibility. 

But have you ever wondered whether the content velocity of your competitors could be affecting your performance as well?

I wanted to run an experiment on this, so I got a hand from Gemini to avoid disrupting my other priority tasks. 

I asked Gemini to scrape and summarize the blog publish dates from three main competitors. 

It gathered all the info I wanted and calculated how often new content was being published around key topics. 

Then, I fed it the publishing history of the blog I was focused on.
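To make the benchmark concrete: the core calculation is just posts per month, per blog. Here’s a hypothetical version with placeholder dates; the real dates came from Gemini’s scrape and our own archive.

```python
# Hypothetical content velocity benchmark: average posts per month, per blog.
# All publish dates below are placeholders.
import pandas as pd

publish_dates = {
    "our_blog":     ["2025-01-08", "2025-01-22", "2025-02-15"],
    "competitor_a": ["2025-01-03", "2025-01-10", "2025-01-24", "2025-02-07", "2025-02-21"],
    "competitor_b": ["2025-01-15", "2025-02-01", "2025-02-18"],
}

velocity = {}
for blog, dates in publish_dates.items():
    series = pd.Series(pd.to_datetime(dates))
    posts_per_month = series.groupby(series.dt.to_period("M")).size()
    velocity[blog] = posts_per_month.mean()

benchmark = pd.Series(velocity).round(1)
print(benchmark)  # average posts per month for each blog
print("Gap vs. fastest competitor:",
      round(benchmark.drop("our_blog").max() - benchmark["our_blog"], 1))
```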

Content velocity index experiment

Gemini gave me a quantified content velocity benchmark, showing how much faster (or slower) others were moving. 

Then I compared it with our own publishing frequency and worked on reallocating resources to close the gap. 

Final thoughts

These small but valuable experiments have shown me that AI-based platforms still depend heavily on human expertise to operate and interpret their outputs, make informed decisions, and guide their responsible use. 

We, not AI, should remain the true North Stars of our roadmaps, with AI serving as a helpful assistant.

That’s why it’s essential never to rely on AI blindly. 

Always question its output, especially in an environment where mistakes can cost significant revenue. 

Use AI thoughtfully and experimentally – not as a shortcut, but as a powerful tool to enhance execution and achieve better results.