The "Best Of SEO" category presents and recommends readings on technical SEO, delivered via RSS feed.
These are analyses considered among the best, sourced from the blogs and websites of renowned American consultants.
There is no better way to round out your SEO knowledge.
Google will fully transition to automatically generated publication pages next month, in March. Back in April 2024, Google told us Publisher Center would soon stop allowing you to add publications, and this is the next step. It means that all publication pages in Google News will be generated automatically by Google.
Google wrote: "Following our announcement in April 2024, Google News will fully transition to automatically generated publication pages in March. This change improves our existing publisher workflow and simplifies our current product experience."
Moving forward, all publication pages in Google News will be generated automatically. As a result, publication pages that had been created by publishers manually will no longer appear to users in Google News. Publisher Center will discontinue customization features for publication pages in Google News, and the Google News tile will no longer appear in Publisher Center.
What is not changing. Google said this has no impact on what content is eligible to appear in Google News or other Google News related surfaces. “Content from publishers that adheres to our content policies is automatically eligible for consideration in Google News and across news surfaces,” Google wrote.
Google will still use its confusing automated methods for determining what is included and not included in Google News.
Also, for Google News Showcase and Reader Revenue Manager, publishers will continue to submit logos through Publisher Center.
What is changing. Here's what publishers can expect:
Custom sections that were previously created in Google Publisher Center will no longer appear on publisher Google News landing pages.
Publishers will no longer be able to use Google Publisher Center to customize their logos and publication titles.
Google News will use a site’s favicon for the publisher logo instead.
Google News will use the site names for publication titles instead.
Why we care. Google Publisher Center, which was once a really great place for news publishers to control and maintain their publications in Google News, is becoming less and less valuable to news publishers.
Google wants to automate the process and claims, “This change improves our existing publisher workflow and simplifies our current product experience.” However, I know that news publishers continue to miss the old method for Google News and Publisher Center.
Article: "Google News automated publication pages to start in March" (2025-02-11)
Less than 1% of views of YouTube videos come from Google search clicks, according to a member of Google’s legal team.
The quote. Here’s what Attorney John Schmidtlein said, according to Courthouse News Service:
“Roughly less than 1% of views on YouTube come from people who click on [search] links,” Attorney John Schmidtlein of Williams & Connolly, who represented Google, said in court.
Why we care. This is the first time this statistic has been revealed publicly, as far as I know. It might be 100% true. The majority of people likely discover and view videos directly on YouTube, either via YouTube search or YouTube’s recommendation algorithm.
But. Google does seem to self-preference YouTube a lot. It's hard to imagine that Google would show videos in prominent places in search results if those videos weren't getting clicked on and watched.
Google vs. Rumble. The statistic was revealed in federal court last week, ahead of what would be yet another antitrust trial brought against Google – this time by Rumble, a rival video platform.
Rumble is arguing that "a rival cannot hope to compete" when Google gives preferential visibility to YouTube – especially on mobile, and especially when Google ranks YouTube videos over Rumble videos even when Rumble's name appears in the search query.
Article: "Google lawyer: Less than 1% of YouTube views come from search" (2025-02-11)
Google Ads is launching a new video enhancement feature for Demand Gen campaigns. The update automatically creates shorter versions of existing video ads to better engage diverse audiences.
Details:
The enhancement will automatically generate condensed versions of existing video ads.
The feature will be enabled by default across all Google Demand Gen ad campaigns.
Advertisers have until March 10 to opt out of the automatic enhancement.
Why we care. Short-form content consistently captures higher engagement, especially on mobile and social platforms. By automatically generating shorter video versions, this could aid in reaching diverse audiences without the added cost or effort of creating new content. This update is a low-effort way to maximize ad performance and stay competitive in an increasingly fast-paced digital landscape.
However, it may be so low-effort that quality needs to be closely monitored. Be open to testing it, but make sure the outputs match the message your original video is trying to convey.
How it works. You can manage the enhancement through ads.google.com or by working with your Google sales representative.
First seen. This update was first brought to our attention by PPC expert Julie Friedman Bacchini, who shared the message she received from Google on X:
What’s next. You have approximately one month to evaluate the feature and decide whether to keep it enabled for your campaigns.
The big picture. The feature arrives as social media platforms and advertisers increasingly pivot toward short-form video content to capture fleeting consumer attention spans.
Between the lines. This move signals Google’s growing investment in AI-powered advertising tools, helping advertisers maximize reach without increasing creative production budgets.
Article: "New in Google Demand Gen Ads: Automatically create short videos" (2025-02-11)
With marketing spending down and Google reporting higher earnings, we know that SEO is a tough sell.
Below are seven proven methods to increase clients’ payments for SEO (and the value they receive).
First, a word on value
Before I provide solid ways to increase revenue, we need to discuss value and price.
Humans are hard-wired to fear and avoid loss.
Loss aversion is heavily linked with SEO because it’s the “pay now, might gain later” of the marketing world.
So, as an offer, SEO is automatically less appealing than 99.9% of other marketing activities.
Knowing this, SEOs need to understand value like the back of their hands.
There are two components to price value:
Acquisition utility: The value you get from the product or service.
Transaction utility: How good of a deal you feel you are getting.
Research shows that losing money triggers the same area of the brain as physical pain, making financial loss feel psychologically distressing.
This is a key challenge for SEO as a service. Clients are naturally cautious, which affects how they perceive its value.
As a result, SEO pricing tends to remain low across the industry.
I'd say businesses spend more on their Christmas party than they do on their SEO.
That’s not to say we can’t increase the price we charge and earn from SEO.
It’s a gentle reminder that SEO often scares clients, especially if they are not the business owner.
Staff put their professional reputation on the line when choosing an SEO agency.
Knowing all the above, here are seven tried-and-tested methods to increase your revenue from clients.
1. Break down your services to reduce client risk
All agencies want to have retainers, but this can create barriers.
Asking a client to commit to a large total sum, even when it is spread out over monthly payments, can raise alarm bells.
The compound cost of retainers can add up, and prospects look at cancellation clauses and think, “I could sink $15,000 on you and have nothing to show for it.”
To avoid this, the first concept to cover is splitting services.
Service splitting involves breaking what you do into core deliverables or projects.
This means that clients are only on the “hook” for specific elements, and they can leave at any time they wish.
Sadly, this makes forecasting hard for agencies as payments are not monthly.
Still, it can increase conversion rates, and if you price differently, you can make more revenue in stages.
2. Sell SEO strategy as a standalone service
The next key aspect to consider is separating SEO strategy and selling it as its own service.
Many agencies rush this process, often reducing strategy to nothing more than a basic to-do list with little competitive analysis or critical thinking.
By offering strategy as a standalone service, you can price it higher – charging for your time, expertise, and insights.
Article: "7 ways to increase SEO revenue without losing clients" (2025-02-11)
1. Google Analytics (session-based attribution)
Google Analytics focuses on user sessions and uses different attribution models (e.g., last-click, first-click, or data-driven) to assign credit within a session.
The pros
Granular data: Provides detailed insights into user behavior at a session level.
Customizable models: Allows marketers to choose or customize attribution models to fit their business needs.
Real-time tracking: Captures real-time user interactions, offering immediate feedback on performance.
Cross-channel insights: Integrates data from multiple channels (organic, paid, referral, etc.), enabling better cross-channel analysis.
The cons
Limited to owned data: Relies on first-party data, making it less effective in environments with poor tracking (e.g., cookie restrictions, blocked JavaScript).
Bias toward measurable interactions: Doesn’t account for offline or untrackable influences (e.g., word of mouth).
Session-centric focus: May overlook the broader customer journey, especially for longer purchase cycles.
2. Advertising platforms (click and impression-based attribution)
PPC platforms like Google Ads and Facebook Ads attribute conversions to clicks or impressions tied to their specific ads.
The pros
Channel-specific insights: Provide detailed performance metrics for individual ad platforms.
Immediate ROI tracking: Excellent for tracking direct-response campaigns and performance-based advertising.
Impression data: Includes visibility data even if the user doesn’t click, allowing for broader analysis of brand awareness.
The cons
Walled gardens: Each platform operates within its ecosystem, often overstating its role in conversions because of a lack of cross-platform visibility.
Overlapping attribution: Different platforms may claim credit for the same conversion, leading to double-counting.
Short-term focus: Often overemphasizes direct clicks and conversions, neglecting long-term brand effects or multi-touch journeys.
3. Multi-touch attribution
MTA assigns credit to multiple touchpoints leading to a conversion rather than just the first or last interaction.
It’s typically based on clicks (sometimes impressions) but does not account for branding initiatives.
The pros
Comprehensive view: Captures the contribution of each touchpoint in the customer journey.
Optimizes campaigns: Enables better budget allocation by highlighting impactful channels.
Customizable models: Supports various methods like linear, time decay, or algorithmic models.
The cons
Complex implementation: Requires advanced tracking and integration across channels.
Tracking limitations: Cookie restrictions and data silos can hinder accuracy.
Data overload: Processing and interpreting the vast amount of data can be challenging for smaller teams.
Branding blindness: As noted above, branding campaigns without measurable clicks or impressions (think: anything analog, out-of-home, etc.) aren’t included in the analysis.
4. Salesforce (CRM-based attribution)
Salesforce uses CRM data to track the entire customer lifecycle, from lead generation to sales and retention, offering attribution for both online and offline interactions.
The pros
Full-funnel view: Tracks interactions across sales, marketing, and customer service.
Offline and online integration: Combines offline (e.g., in-person sales) and online data.
Custom reporting: Highly customizable to align with specific business goals.
Retention and LTV insights: Tracks post-conversion metrics like customer lifetime value (LTV).
The cons
Data dependency: Relies heavily on accurate and comprehensive data entry and segmentation across departments.
Complexity: Requires integration with other systems and significant setup effort.
Delayed feedback: May not be as real-time as tools focused on web analytics.
The best approach is to understand what each model captures (and what it doesn’t) so you can combine them strategically.
Here’s a quick breakdown of when each model works best:
Google Analytics is great for overall session-based behavior insights.
Ad platforms are ideal for optimizing campaigns within their ecosystems – all the way to the ad level.
MTA provides a nuanced view of the digital customer journey, and helps mitigate overlapping attribution across channels.
Salesforce is powerful for tracking the customer journey, including offline interactions and evaluating lead quality.
Shopify excels in ecommerce-specific insights for merchants within its platform, such as distinguishing one-time purchases and subscriptions.
Media mix modeling is suited for strategic, omnichannel decision-making and accounts for the entire customer journey, from branding to down-funnel activities.
The best attribution strategy: A balanced approach
At my agency, we love to run MMM regularly to give branding initiatives the credit they deserve, helping to fine-tune marketing strategies for long-term success.
However, no single model is sufficient on its own.
The best approach is integrating multiple attribution tools for a more complete view of marketing performance across platforms and touchpoints.
Attribution is an inexact science. It requires ongoing testing and adjustments.
Start by aligning on the KPIs that matter most to your marketing team, then choose the models that best assess your campaign success.
Last week, Avinash Kaushik said, “The best way to make a Super Bowl ad effective is through ‘spike and sustain’ marketing.”
He also explained that releasing teasers, ads, and extended versions before the Big Game is part of the “sustain” strategy, building momentum ahead of the “spike” in viewership.
Super Bowl LIX drew 113 million viewers, according to Nielsen.
Many brands embraced this strategy, unveiling their commercials early to build buzz and maximize exposure.
By midday Sunday, more than 40 ads had already been released, and several organizations had analyzed those aired before Feb. 9.
Pre-game Super Bowl ads that made an impact
For example, iSpot.tv identified “Budweiser | Super Bowl LIX ‘First Delivery’” as the early winner based on consumer surveys.
A young Clydesdale foal, eager to join the Budweiser delivery team, is told he’s too young. While the other horses depart, a keg falls off the wagon unnoticed.
The foal, determined to prove himself, embarks on a long journey, pushing the keg through various obstacles.
He successfully delivers the keg to the bar, interrupting a “horse walks into a bar” joke and impressing the driver who had dismissed him earlier.
The foal is rewarded with recognition and a sense of accomplishment.
“Somebody | It Takes All of Us SB LIX,” which features players mentoring kids, was the most emotionally resonant and attention-grabbing Super Bowl ad released before the Big Game, according to DAIVID.
It evoked intense positive emotions in over half of viewers, particularly feelings of warmth, inspiration, and pride, all significantly higher than the U.S. average.
The ad also held viewers’ attention better than average ads, both at the beginning and the end.
And Sprout Social's social media analysis showed strong performance for "A Century of Cravings | Uber Eats."
Their successful celebrity-filled advertisement features Matthew McConaughey, Charli XCX, and Martha Stewart and cleverly incorporates the stadium’s name into a joke about Stewart’s Caesar salad.
The best Super Bowl ads released during the Big Game
Some of the roughly 80 Super Bowl spots cost a record $8 million for 30 seconds this year, per the Associated Press.
"Budweiser | Super Bowl LIX 'First Delivery'" ranked no. 1 with a score of 3.56 out of 5, according to USA Today's Ad Meter, the granddaddy of rating TV commercials during the Big Game. (Kudos to iSpot.tv for picking this winner ahead of time.)
In second spot was “LAY’S | The Little Farmer | :60,” with a score of 3.55. As the video’s description says, “One little potato. One big dream.”
In third place was “The ULTRA Hustle | Super Bowl LIX | Michelob ULTRA” with a score of 3.52.
As the ad’s description declares, “You can’t out-hustle a hustler. Willem Dafoe, Catherine O’Hara, Sabrina Ionescu, Randy Moss, and Ryan Crouser.”
In fourth position was “Stella Artois | David & Dave: The Other David” with a score of 3.51.
So, David Beckham goes to meet his long-lost twin, played by Matt Damon.
What do they have in common? “A love for Stella? Fancy footwork?”
Ranked fifth with a score of 3.49 was “Somebody | It Takes All of Us SB LIX.” Congrats to DAIVID for picking this winner ahead of time.
It’s also worth noting that “A Century of Cravings | Uber Eats” finished no. 8 in the USA Today Ad Meter rankings. Sprout Social’s analysis of social media was in the ballpark.
There are other ways to measure the top ads released during Super Bowl 2025.
For example, DAIVID used its AI-powered platform to analyze 65 Super Bowl ads aired Sunday night, predicting their emotional impact and effectiveness.
Their AI models combine facial coding, eye tracking, survey data, computer vision, and even listening APIs.
“Somebody | It Takes All of Us SB LIX” was not only the most emotionally engaging ad of the 40+ spots released before the Big Game, but it also generated the most intense positive emotions of the 65 ads shown during Sunday’s Super Bowl broadcast.
The ad attracted the highest attention levels of any spot shown.
In second place was “Jeep | Big Game | Harrison Ford x Jeep | Owner’s Manual.”
As the video’s description acknowledges:
“Life doesn’t come with an owner’s manual – you have to write your own. And no one knows this better than Harrison Ford.”
The ad attracted an intense positive emotional response from 54.2% of viewers, according to DAIVID.
In third was "What is Greatness?" from He Gets Us.
The video’s description asks:
“Is being great, as our society defines it, really that great? Or is greatness quite the opposite of what we think it is? In this video, we explore how Jesus redefined true greatness and what it might mean for us. All of us.”
And 53.3% of people had intense positive emotional responses.
“Own the Dream | Rocket” ranked fourth. The video’s description clearly states:
“Everyone deserves their shot at the American Dream.”
This generated intense positive emotional responses with 52.6% of people.
In fifth place was “Pfizer | Big Game Commercial 2025 | Knock Out.”
As the video’s description explains, “Pfizer is fighting for 8 cancer breakthroughs by 2030.” This triggered intense positive emotional responses from 52.3% of people.
It’s also worth noting that “LAY’S | The Little Farmer | :60” ranked no. 8 and “Budweiser | Super Bowl LIX ‘First Delivery’” ranked no. 10 in DAIVID’s list of Top 10 Most Emotionally Engaging Super Bowl 2025 ads.
Different methodologies can still yield similar results.
However, DAIVID’s post-game analysis yielded significantly different insights than USA Today’s.
Super Bowl 2025 ads were the least effective in five years, per DAIVID’s Creative Effectiveness Score.
The average ad scored 6.2 out of 10, the lowest since 2020, generating less attention and positive emotion than previous years.
While many advertisers aimed for humor, serious and purpose-driven ads dominated the top 10, with the NFL’s “Somebody” ad being the most emotionally engaging.
Ian Forrester, DAIVID’s CEO and founder, observed:
“With the vast majority of Super Bowl advertisers trying to make us laugh this year, it’s interesting that brands that stepped away from the usual Super Bowl celebrity/humor trope have attracted the most positivity. It shows just how hard it is to cut-through when so many are trying the same approach. With overall effectiveness also down, maybe it’s time brands tried something different to get people’s attention on game day.”
What’s next: Insights beyond the Big Game
In the coming days, digital marketers can expect more rankings from Kantar, System1, and other organizations analyzing the impact of Big Game ads on both performance and branding.
These insights matter because metrics like Brand Lift and Engaged-View Key Events can serve as KPIs for "spike and sustain" campaigns beyond the Super Bowl – whether for back-to-school season, Thanksgiving, or a major product launch.
Article: "The top Super Bowl 2025 ads released before and during the game" (2025-02-10)
Are you ready to take your SEO and PPC campaigns to the next level of success? Tackle the challenges of the New Year with actionable tactics, expert guidance, and the inspiration you need to succeed at the spring edition of the SMX Master Classes — happening live online this March.
This spring's lineup features seven outstanding courses tackling core topics critical to 2025 success.
Your Training, Your Way.
You asked, and we listened: For the first time ever, each Master Class will take place on different days, giving you the flexibility to attend multiple classes live and customize the perfect training experience.
Each Master Class is a two-part deep dive into critical search marketing topics, with live Q&A designed to answer your specific questions and 120 days of on-demand access for deeper learning.
Why Attend?
Affordable Excellence: Just $299 per Master Class.
Exclusive Perks: Earn a certificate of completion to showcase your knowledge.
No Travel Hassles: Join from anywhere — no plane tickets or hotels needed.
Unlock 15% Off
Create the ultimate cross-training experience by purchasing more than one Master Class – and save 15% on your total registration.
Book Now For Best Rates
What are you waiting for? Choose your classes and secure your spot today!
Article: "Take your career to the next level: Become a search marketing master" (2025-02-10)
Website migrations are one of the most challenging aspects of SEO.
No matter how much experience you have in technical SEO, how detailed your plan is, or how thorough your checklist may be, unexpected issues can still arise.
That’s why post-migration monitoring is just as crucial as the migration itself – especially in the first month when hidden problems are most likely to surface.
This article tackles some of the most surprising post-launch errors I’ve encountered, along with practical tips on how to identify and resolve them before they cause serious damage.
Random 404 pages
This issue drove me crazy. It’s a nightmare for SEO testing because it skews every tool and report we rely on.
When you can’t trust the data, it’s impossible to know what’s actually broken or how it impacts performance.
During the post-migration phase of updating our JavaScript library, we noticed random 404 errors in our SEO tools and Google Search Console.
The strange part?
The affected pages weren’t consistent, and every time we checked manually, they loaded fine with a 200 status.
As a result, all other reports became unreliable, making proper analysis nearly impossible.
These random 404s often stem from server-side issues such as rate limiting, where the server denies access to bots after too many requests.
Other potential causes include:
Misconfigured caching.
Inconsistent DNS resolution.
Load balancer errors that occasionally route requests to an unavailable server.
Identifying the root cause requires detailed server log analysis to track bot request and response patterns.
And here’s the biggest lesson I learned: Without access to server logs, you’re fighting this battle blind.
Ensure your SEO team has access to the necessary server log tools and, at the very least, understands the basics of how they work.
Monitoring bot activity logs can help you demonstrate the issue to developers. Without them, you risk getting stuck in endless debates over the accuracy of SEO tools.
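If you do have log access, even a short script can surface the pattern. Here is a minimal sketch that tallies responses served to Googlebot and flags paths that returned both 200s and 404s, assuming a standard combined-format access log saved as access.log; the filename and regex are placeholders to adapt to your server's actual log layout.

```python
# Minimal sketch: tally responses served to Googlebot from a combined-format
# access log and surface paths that returned both 200s and 404s (the
# "random 404" suspects). Filename and log layout are assumptions;
# adapt the regex to your server's format.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'\[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

status_by_path = Counter()

with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LOG_LINE.search(line)
        if not match or "Googlebot" not in match.group("agent"):
            continue
        status_by_path[(match.group("path"), match.group("status"))] += 1

# Paths that answered Googlebot with both 200 and 404 are the ones to show developers.
for path in sorted({path for path, _ in status_by_path}):
    ok = status_by_path[(path, "200")]
    missing = status_by_path[(path, "404")]
    if ok and missing:
        print(f"{path}: {ok}x 200, {missing}x 404 served to Googlebot")
```

Running something like this daily during the post-migration window gives you concrete examples to hand to developers instead of arguing over SEO tool accuracy.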
Random 500 errors
At first glance, this error looks similar to random 404s, but the cause is usually entirely different and just as difficult to diagnose.
Even SEO tools like Lumar and Screaming Frog can inadvertently trigger these 500 errors while crawling.
Years ago, one of the websites I worked on had a strict rule: no crawling on weekends and no exceeding three URLs per second.
Every time we increased our crawling limits, the database server struggled, slowing down the entire site – or worse, crashing it.
These errors often result from complex database queries overloading the server or improperly configured caching.
Without proper caching, each request is processed individually, compounding the strain and leading to slow load times or intermittent crashes.
And once again, the solution starts with server log access. Without it, you’re just guessing.
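As an illustration of the kind of crawl discipline described above, here is a minimal sketch of a rate-capped check that records intermittent 5xx responses with timestamps. The urls.txt file and the three-requests-per-second cap are assumptions; use whatever limit your hosting team approves.

```python
# Minimal sketch: crawl a URL list at a capped rate and record any 5xx
# responses with timestamps, so intermittent server errors can be shown
# to developers. urls.txt and the 3-requests-per-second cap are
# assumptions; match the cap to whatever your hosting team allows.
import time
from datetime import datetime

import requests

MAX_REQUESTS_PER_SECOND = 3
MIN_INTERVAL = 1.0 / MAX_REQUESTS_PER_SECOND

with open("urls.txt", encoding="utf-8") as handle:
    urls = [line.strip() for line in handle if line.strip()]

errors = []
for url in urls:
    started = time.monotonic()
    try:
        response = requests.get(url, timeout=30)
        if response.status_code >= 500:
            errors.append((datetime.now().isoformat(), url, response.status_code))
    except requests.RequestException as exc:
        errors.append((datetime.now().isoformat(), url, str(exc)))
    # Sleep just enough to stay under the rate cap.
    elapsed = time.monotonic() - started
    if elapsed < MIN_INTERVAL:
        time.sleep(MIN_INTERVAL - elapsed)

for timestamp, url, detail in errors:
    print(timestamp, url, detail)
```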
Incorrect resource loading
This was one of those moments where I felt like a digital Sherlock Holmes.
The migration had been completed before I joined the company, and I first noticed the issue during an initial technical audit.
The first clue?
A mysterious drop in rankings and traffic shortly after the migration.
There had been a Google update around the same time, so I couldn’t immediately link the decline to the migration.
To complicate things further, this wasn’t a full migration, just a design revamp.
On the surface, everything seemed fine. Pages loaded correctly, and styles and JavaScript worked perfectly for users.
Yet, in Google Search Console’s inspection tool, the same pages often appeared broken and unstyled.
The issue was inconsistent, making it nearly impossible to replicate in front of the dev team.
As a new team member still building trust, convincing them there was a deeper problem wasn’t easy.
In hindsight, my mistake was not checking the browser console earlier.
Three months later, a single browser console message finally revealed the root cause: a script was loading out of order.
Due to caching, Googlebot sometimes saw the website correctly and other times didn’t, explaining the erratic behavior.
It was a tough reminder that small technical details – like the sequence of resource loading – and overlooking an obvious diagnostic step can significantly impact SEO performance.
My key tip: Check your website in different browsers and carefully review the error and warning messages in the console.
If you’re unfamiliar with developer terminology, consult an independent expert or even multiple AI tools for explanations.
Non-existent URLs
While investigating those frustrating random 404 errors, I stumbled upon another issue almost by accident.
While reviewing Google Search Console’s report on pages discovered but not indexed, I noticed an unusual pattern – several non-existent URLs appearing under certain sections, marked as duplicate content.
Instead of returning 404 errors as expected, these URLs resolved as normal pages with a 200 status code.
This type of error presents two major risks:
From an SEO perspective, search engines treat these URLs as legitimate, potentially indexing irrelevant or duplicate pages, wasting crawl budget, and harming rankings.
From a security standpoint, it creates a vulnerability – malicious actors could generate thousands of random URLs, overloading the server.
Unfortunately, this issue is difficult to detect before it becomes a real problem. In my case, I was just lucky.
Don’t wait to stumble upon it. Make sure to:
Regularly check whether sections of your site allow non-existent URLs to resolve with a 200 status (see the sketch after this list).
Build a list of key sections and test them monthly with your crawler. Even minor backend changes – not just full migrations – can trigger this issue.
Prioritize pages generated programmatically or dynamically, as they are the most common culprits.
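Here is a minimal sketch of that monthly check, assuming a placeholder list of section URLs: it appends a slug that should not exist to each section and flags anything that still answers with a 200.

```python
# Minimal sketch: append a slug that cannot exist to each key section and
# flag any section that still answers with a 200. The SECTIONS list is a
# placeholder; build it from your own site structure.
import uuid

import requests

SECTIONS = [
    "https://www.example.com/blog/",
    "https://www.example.com/products/",
]

for section in SECTIONS:
    fake_url = f"{section}{uuid.uuid4().hex}"
    status = requests.get(fake_url, timeout=30, allow_redirects=False).status_code
    if status == 200:
        print(f"WARNING: non-existent URL resolves with 200: {fake_url}")
    else:
        print(f"OK ({status}): {fake_url}")
```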
Hreflang tags or canonical tags to non-existing URLs
Managing hreflang tags on a multilingual website is challenging, and even small mistakes can cause big issues.
On one website I worked on, we typically created pages in English first and then localized them.
However, in some cases, only a local version existed, and the hreflang x-default was mistakenly set to an English page that didn’t exist.
Incorrect hreflang tags confuse search engines, which rely on them to identify the correct language or regional version of a page.
When these tags are wrong, search engines may struggle to understand the site’s structure or ignore the hreflang implementation entirely.
Normally, we would have caught this in our migration checks.
But at the time, we were buried in troubleshooting random 404 errors.
We also made the mistake of not manually testing localized pages across different templates.
To prevent this in future migrations:
Make a detailed list of site-specific checks. Generic migration checklists are a good starting point, but they need to be customized for the website and CMS.
Manually test localized pages across different templates to ensure correct hreflang and canonical tag implementation; a small automated check like the sketch below can supplement this.
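For the automated part, here is a minimal sketch that collects the hreflang alternates from a single page and verifies each target URL actually resolves. It assumes the requests and beautifulsoup4 libraries, and the page URL is a placeholder.

```python
# Minimal sketch: collect hreflang alternates from one page and confirm
# each target URL actually resolves. Requires requests and beautifulsoup4;
# the page URL is a placeholder.
import requests
from bs4 import BeautifulSoup

page_url = "https://www.example.com/de/some-page/"
html = requests.get(page_url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

for link in soup.find_all("link"):
    rel = link.get("rel") or []
    if "alternate" not in rel or not link.get("hreflang"):
        continue
    target = link.get("href")
    status = requests.head(target, timeout=30, allow_redirects=True).status_code
    flag = "OK" if status == 200 else "CHECK"
    print(f"{flag} hreflang={link.get('hreflang')} -> {target} ({status})")
```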
JavaScript-driven content that users can see but search bots can’t is a common and often overlooked issue.
This typically happens when widgets or content sections rely on JavaScript to render, but the scripts aren’t fully crawlable or properly executed by search engine bots.
(Google offers a great resource to help you understand JavaScript basics.)
If you’re unsure how a widget works, use this simple test:
Does it display the full content immediately, or does it require user interaction?
If it’s the latter, it likely relies on JavaScript, meaning search and AI bots might not see everything.
To catch this issue, run both a JavaScript-enabled crawl and a pure HTML crawl, then compare the results.
A quick manual test can also help.
Search for a specific sentence or element from the widget in your rendered HTML source.
If it’s missing, search bots are probably missing it too.
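That manual test is easy to script. The sketch below fetches a page without executing JavaScript and reports whether a known snippet from the widget appears in the raw HTML; the URL and snippet are placeholders.

```python
# Minimal sketch: automate the "is this widget text in the HTML?" check.
# The page is fetched without executing JavaScript, so anything missing
# here is likely invisible to a plain HTML crawl. URL and snippet are
# placeholders.
import requests

checks = {
    "https://www.example.com/product/widget-page/": "Free shipping on orders over $50",
}

for url, snippet in checks.items():
    html = requests.get(url, timeout=30).text
    found = snippet.lower() in html.lower()
    print(f"{'FOUND' if found else 'MISSING'}: '{snippet}' on {url}")
```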
Resolving this often requires improving server-side rendering or ensuring that scripts load properly for both users and crawlers.
Since website migrations often leave little time for testing, make it a priority to run these two crawls post-migration to identify and fix any rendering issues.
Tracking data loss can be a subtle yet costly post-migration issue.
In one real-world case, everything initially appeared fine. Analytics data was flowing and visits were being logged.
However, after a few days, it became clear that users arriving via paid ads were losing their tracking parameters as they navigated the site.
This meant subsequent pageviews within the same session were no longer attributed to the original paid campaign, disrupting remarketing efforts.
The cause?
Improper handling of URL parameters during the migration.
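One quick way to catch this class of problem is to follow the redirects for a landing URL that carries tracking parameters and confirm the parameters survive. A minimal sketch, with a placeholder landing URL and standard UTM parameters:

```python
# Minimal sketch: follow redirects for a landing URL that carries UTM
# parameters and report whether the final URL kept them. The landing URL
# is a placeholder; test your real paid-ads destinations.
from urllib.parse import parse_qs, urlparse

import requests

landing = "https://www.example.com/old-landing/?utm_source=google&utm_medium=cpc&utm_campaign=spring"
response = requests.get(landing, timeout=30, allow_redirects=True)

sent = parse_qs(urlparse(landing).query)
kept = parse_qs(urlparse(response.url).query)
lost = [param for param in sent if param not in kept]

print("Final URL:", response.url)
if lost:
    print("Lost tracking parameters:", ", ".join(lost))
else:
    print("All tracking parameters preserved.")
```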
Website migrations require cross-team monitoring, not just from the SEO team.
While this issue didn’t directly impact SEO rankings, it still had major consequences.
Before migration begins, triple-check your plan to ensure all relevant teams are involved.
Migration testing should go beyond SEO, incorporating analytics, development, and marketing teams to safeguard tracking parameters and user attribution.
Each team should have pre-migration reports for comparison after launch.
While planning may not fall under SEO’s direct responsibility, identifying gaps in the project plan and raising concerns is essential.
Article: "Technical SEO post-migration: How to find and fix hidden errors" (2025-02-10)
If you noticed that your local business listing on Google is showing fewer reviews, you are not alone. Since Friday, tons of reviews have been disappearing from the local listings within Google Search and Google Maps.
More details. On Friday, I reported on the issue on the Search Engine Roundtable, not knowing if it was a bug or a feature. I noticed dozens and dozens of complaint threads popping up in the Google Business Profiles forums from concerned small businesses and local SEOs.
Some businesses say they lost only a few reviews, while others say they lost dozens of positive reviews.
Others say the review count is not adding up, but the reviews themselves are not actually missing.
Likely a bug. Joy Hawkins, a local SEO and Google top contributor, later said this is a bug that Google is working to fix.
Why we care. If you noticed that you lost a lot of reviews on your local listing, you should know that you are not alone. It seems to be impacting many Google Business Profile listings and hopefully Google will restore those reviews soon.
Google has not commented on the issue.
Update – Google comment. Victoria Kroll from Google posted a statement in the forums saying:
We’re aware of an issue affecting some Google Business Profiles, causing some profiles to show lower-than-actual review counts due to a display issue. The reviews themselves have not actually been removed. We’re working hard to resolve this and restore accurate review counts as quickly as possible. We appreciate your patience and will share updates on this thread as they become available.
Before reporting missing reviews, please note that there are several reasons why reviews may be removed from maps. Usually, missing reviews are removed for policy violations like spam or inappropriate content. Read more about our Review policy guidelines here before proceeding. You can also refer to the Help Center Article for more information.
Article: "Google bug causes reviews to drop out of local listings" (2025-02-10)
One question that we’ve been hearing over and over again since the 2016 election is:
Is Google biased?
There are no shortages of opinions.
Sundar Pichai went before Congress in 2018 and swore under oath, “I’m confident we don’t approach our work with any political bias.”
He also sent an internal memo to staff warning them against letting their personal politics affect their work.
Elon Musk, on the other hand, posted to X, “Google is controlled by far left activists.”
A conservative organization, the Media Research Center, routinely posts articles that show supposed “proof” of Google’s political bias, while left-leaning Vox posted an article mocking conservatives for not understanding how SEO works.
If you’re like me, you’re just reading all the back-and-forth and getting tired of it.
Too many opinions on both sides are based on confirmation bias, sensationalism, or a fundamental misunderstanding of how SEO really works.
And so I thought I’d jump into this hornet’s nest.
Like everyone else, I have my own biases but I’m going to do my best to keep them at bay.
Instead, let’s use SEO tools and techniques to see if we can come to a definitive answer.
‘Google bias’ in the 2024 election?
Throughout the 2024 election, there were many stories about Google’s supposed “bias.”
Let’s take a look at some of the more prevalent ones.
In June, the Media Research Center accused Google of “blacklisting” President Trump’s official campaign website because it wouldn’t rank for [donald trump presidential race 2024] and [republican party presidential campaign websites].
The problem with this is that even a junior SEO could have seen that Donald Trump’s website was pretty horrifically optimized.
Their home page title tag read Home | Donald J. Trump, and most of their substantive content was hidden in a PDF.
In July, many people including Donald Trump, Jr. accused Google of “election interference” because Google autocomplete would not suggest President Trump’s name when someone typed in “assassination attempt on…”
Google’s official explanation was that they have “protections in place against autocomplete predictions associated with political violence.”
To be honest, I didn't buy that (I could see autocomplete for other contemporary figures), but I just chalked it up to Google autocomplete being embarrassingly slow to update.
The third incident to make waves was on Election Day, when searches for [how to vote harris] spawned a box that told people where to go for their nearest polling place, while [how to vote trump] did not.
Google PR explained that this was because “Harris” is also the name of a county in the U.S., while “Trump” is not.
Again, a perfectly plausible explanation.
Thousands of conservative accounts jumped on these incidents as definitive proof of Google interfering in the election.
The mistake they made was assuming that Google is infallible.
In reality, anyone with a passing understanding of Hanlon’s Razor – which suggests we should not attribute to malice what can be explained by incompetence – would see that it applied in all three cases.
Accusations of Google bias
The problem with focusing on noise like this is that it detracts from the real question.
Do Google search results have bias, and is that bias enough to unduly influence people?
Through the years, a number of whistleblowers and researchers came forward with supposed proof of Google bias. Some highlights:
In November 2016, following the presidential election, an anonymous source within Google sent a leaked video to the conservative outlet Breitbart showing Google executives’ and employees’ negative reaction to the election results.
In subsequent years, a number of whistleblowers came forward with their reports of the bias they perceived within Google.
In July 2019, senior engineer Greg Coppola came forward to publicly disagree with his CEO’s claim that searches were unbiased.
One of the more interesting presentations was about “algorithmic unfairness,” which discussed the need for search results to reflect a desired state, even if it didn’t reflect current realities.
While not a Google employee, Robert Epstein was a research scientist who went on a number of conservative outlets with research purporting to show Google manipulating public opinion.
The problem with all of this? Because this evidence was mainly hearsay, opinions were split like a Rorschach test.
Let’s take a step back and look at objective facts:
From 1998 to 2018, Google was powered by their original algorithm based mainly on PageRank. That worked really well in the beginning but as more people understood Google’s algorithm, poor quality sites began to rank. Despite their efforts with Panda and Penguin, it became clear that too many legitimately dangerous sites were making their way into Google’s results.
The Aug. 1 2018 broad core update (a.k.a., the Medic Update) was Google’s first big attempt to go beyond reactively fighting content and link manipulation and proactively combat this, starting with financial and medical topics (YMYL).
Most of us in the SEO space had our suspicions that Google was putting its finger on the scales for other types of searches. In May 2024, leaked documents from Google confirmed that Google’s organic algorithm indeed treated COVID and election-related searches differently than others through two factors called IsCovidAuthority and IsElectionAuthority, respectively.
None of this is a smoking gun either.
Those who attack Google say this circumstantial evidence is enough to prove Google’s bias.
Defenders of Google will say that all of these steps were necessary to fight the real problem of bona fide misinformation and scams.
The data
So, is Google biased?
Instead of giving you my opinion, I’m going to show you how you can use SEO tools and techniques to figure it out for yourself.
The two tools I use most often for my SEO work are Semrush and Ahrefs. Both of them have a useful feature: the ability to go back in history and see historical SERPs.
For example, these are the top 10 organic results for searches on “donald trump” that Semrush reports from October 2024, one month before Election Day.
And here’s what Ahrefs reports for October 15, 2024.
Both are similar.
The slight differences are due to variations in the way that Semrush and Ahrefs obtain their Google results.
We’re still in Rorschach test territory.
Those who accuse Google of bias will look at the results and cry foul because CNN, AP, Wikipedia, and The Guardian – all known for being left-leaning – are showing up.
Those defending Google will point to Donald Trump’s website and his multiple social media accounts showing up as proof that Google is unbiased.
What if we could take a look at every question that people asked about Donald Trump and Kamala Harris during the election, take the top 10 results for each, and run an analysis of which media outlets are cited most often?
We can. Here’s how.
For this one I’m going to use Ahrefs (which allows me to output 1,000 queries and their top 10 positions and to filter based on date).
I searched for “Questions” that people ask about “donald trump.” I filtered on searches that were seen before Election Day 2024.
Next, I exported the top 1,000 questions with the top 10 positions for each.
I uploaded the CSV file to ChatGPT and asked it to go through the list and tally up how often each news outlet or website appeared.
I repeated the process for questions containing “kamala harris” and tallied everything up. At this point I had a list of all sites that ranked in the top 10 for the top 1,000 questions about Trump and Harris.
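If you would rather not route the CSV through ChatGPT, a short script can produce the same tally. The sketch below assumes the Ahrefs exports are saved as trump_questions.csv and harris_questions.csv and that the ranking URL column is named "URL"; rename these to match your actual export.

```python
# Minimal sketch: tally how often each domain appears across the exported
# top-10 rankings. Assumes two Ahrefs exports saved locally and a column
# named "URL" holding the ranking URL; adjust names to your export.
from collections import Counter
from urllib.parse import urlparse

import pandas as pd

frames = [pd.read_csv("trump_questions.csv"), pd.read_csv("harris_questions.csv")]
rows = pd.concat(frames, ignore_index=True)

domains = Counter(
    urlparse(str(url)).netloc.removeprefix("www.")
    for url in rows["URL"].dropna()
)

for domain, count in domains.most_common(25):
    print(f"{domain}: {count}")
```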
I took every site that Ad Fontes Media listed in its chart rated 24.0 and above in News Value and Reliability.
Neither AllSides nor Ad Fontes are perfect.
For example, the left will likely disagree with AllSides’s characterization of AP as “left,” while the right will likely disagree with Ad Fontes’s characterization of RealClearPolitics as “strong right.”
But on the whole, these are the best out there (at least as far as Google is concerned).
Yes, there is pretty clear evidence that Google’s organic results demonstrate bias when it comes to political searches.
But don’t take my word for it. Repeat the process above for any political phrase you can think of.
Note that Google is not “censoring” conservative and right-leaning outlets – you can still find them in search results if you search on their brand name.
But do any kind of non-branded search and you’ll be hard-pressed to find them ranking.
It wasn’t always this way.
Note in this screenshot how in August 2016 it was fairly common to see centrist outlets like RealClearPolitics and right-leaning outlets like the Washington Times alongside left-leaning outlets like CNN and The Atlantic for searches for “donald trump.”
You can see from this chart of SEO traffic and keywords what happened to RealClearPolitics.
Somewhere around April 2020, their SEO traffic and keywords fell off a cliff.
Semrush chart of SEO performance for RealClearPolitics.com
Today, 92% of their Google traffic comes from branded searches.
Contrast that to The Atlantic, where 78.4% of SEO traffic is unbranded.
Semrush chart of Branded vs. Non-Branded Traffic for RealClearPolitics.com
You can see similar patterns around that time with other right-leaning sites like The Blaze, The Federalist, and Breitbart, as well as left-leaning sites like Mother Jones and HuffPost.
While those sites flailed in SEO, mainstream news sites like The New York Times and CNN skyrocketed.
Semrush chart of SEO performance for NYTimes.com
What happened?
In 2020, Google likely implemented changes similar to those in its 2018 Medic update.
The Medic update aimed to protect users from harmful health and finance content.
At the time, black hat SEO tactics allowed fraudulent sites to outrank legitimate ones, leading to financial scams and misinformation, in thousands of cases harming the most vulnerable populations.
Many elderly and low-income individuals were defrauded, and those with serious illnesses were misled by false medical claims.
To counter this, Google manually boosted high-authority sites to ensure reliable information surfaced.
Internally, many within Google likely viewed political content as an extension of the “Your Life” portion of YMYL.
This likely led to the creation of a list of trusted and untrusted sources.
While that information isn’t public, it’s not a stretch to assume it’s similar to the one maintained by Wikipedia editors, one which left-leaning individuals may find reasonable and right-leaning individuals would find extremely biased.
How one-sided news can affect public opinion
Here’s an example of how a lack of diverse perspectives can create a one-sided narrative.
In April 2020, during the COVID-19 lockdown, the virus was spreading rapidly, especially affecting the elderly, with no vaccine or cure in sight.
On April 23, the White House held a press conference where William Bryan from DHS shared promising research updates. (You can read the full transcript here.)
Specifically, he discussed the effects of sunlight and UV rays on the coronavirus and briefly mentioned the effectiveness of isopropyl alcohol in killing the virus on surfaces.
Following Bryan’s remarks, President Trump asked about potential clinical applications of the findings.
I think most would agree his wording was inartful and a bit bombastic, but fact-checking organizations would go on to conclude that he never suggested drinking or injecting household bleach.
A year later, peer-reviewed studies confirmed UV light as a viable concept.
However, a Google search for “trump bleach” immediately after the press conference presented a different picture:
The BBC ranked No. 1 with the headline: “Coronavirus: Trump suggests injecting disinfectant as treatment.”
The New York Times was No. 2 with: “Trump’s Suggestion That Disinfectants Could Be Used to Treat Coronavirus Prompts Aggressive Pushback,” accompanied by a stock photo of household bleach.
The Washington Post was No. 3 with: “Trump asked if disinfectants could be injected to kill coronavirus inside the body. Doctors answered: ‘People will die.’”
The rest of the top results followed the same narrative – mocking or criticizing Trump for allegedly encouraging Americans to ingest or inject household bleach.
This could be attributed to the “fog of war,” but independent and conservative outlets provided alternative perspectives that were virtually invisible in search results.
For instance, RealClearPolitics published the full video and transcript on the day of the press conference, allowing readers to judge for themselves – yet it didn’t even rank in the top 100.
Did Google do anything wrong?
Now I’m going to upset the other half of America.
Did Google do anything wrong?
Not really.
Yes, Google likely tilts the scales – especially in amplifying smaller left-leaning sites over their right-leaning counterparts.
But even if Google didn’t interfere, the mainstream media would still dominate the top 10 rankings for most searches.
Most of us in SEO have experienced the frustration of seeing a niche site with outstanding content outranked by lower-quality content from an “authority” like Reddit or YouTube.
Similarly, major outlets like CNN and The New York Times have far more links and traffic than any conservative or progressive news site.
Big brands dominate the top results, while smaller sites fight for long-tail visibility. That’s how it’s been for a long time.
It’s also worth noting that Google is a private company.
The First Amendment protects speech from government interference – it doesn’t apply to private entities.
Unless the government is compelling Google’s actions, the company is free to serve up whatever results it wants.
Conservatives who cry foul at Google’s dominant position might want to remember how they pushed back in the 1990s against those who wanted to reinstate the Fairness Doctrine when conservative talk radio gained influence.
Their argument back then was that the free market of ideas would self-correct.
This worked to some extent in broadcast and cable news.
MSNBC emerged as a counterbalance to Fox News.
Podcasters like Joe Rogan and the social media platform X attracted audiences seeking more transparency and alternative perspectives outside mainstream media and Google News.
In August 2024, Judge Amit Mehta issued a ruling confirming what many in SEO had long anticipated: Google had maintained a monopoly in General Search Services, covering both paid and organic search.
Evidentiary hearings are set for April 2025, with a final ruling expected by August 2025.
Whether these remedies will – or even can or should – compel Google to present a more diverse range of opinions remains uncertain.
But in my view, a bigger threat to Google is on the horizon.
The future of news
The bigger threat to Google is people realizing that there is a powerful alternative to their curated political content: AI.
Here’s an example: I asked xAI’s Grok to present both perspectives of a highly contentious political question.
You’ll find similar responses on ChatGPT, Claude, Perplexity, and others.
For the first time in eight years, I finally received a balanced answer – one that represents both sides fairly (or, if you prefer, equally unfairly).
Last year, I predicted that people would gradually shift to AI chatbots for search. I began that article predicting it would take three years.
But less than a year later, I find the majority of my own “searches” now happen on ChatGPT and Grok.
This shift reminds me of the search landscape in the late 1990s, when companies like Excite, Lycos, AltaVista, Yahoo, and Google were competing to be the top search engine.
Google won by offering the best experience.
It took years before content manipulation and link schemes forced algorithm updates like Panda and Penguin.
Today, a similar race is underway. ChatGPT, Gemini, DeepSeek, Claude, and Grok are vying to become the new search standard.
Unlike with Google, searches won't take the form of one or two keywords but of detailed questions, unlocking an expansive long tail of search queries.
Many assume the U.S. political landscape consists of two sides, but in reality, there are 335 million perspectives – each shaped by unique experiences and biases.
Since 1998, we’ve been conditioned to search for head terms and accept Google’s 10 organic results as the authoritative answer.
But I continue to believe that the winner of the AI wars will be the platform that, like early Google, embraces free speech and classical liberalism.
That means using training data that reflects all viewpoints – even those that company insiders might find uncomfortable – and allowing AI to answer questions honestly.
Can any American AI companies resist the temptation to limit AI’s knowledge by limiting its access to information and forcing it to follow their internal bias rather than objective truth?
Article: "Is Google biased? An SEO veteran's perspective" (2025-02-10)