The DESCRIBE framework for effective YouTube descriptions

YouTube is rolling out new features that make it easier for creators and brands to connect their collaborative content with advertising campaigns.

This comes after YouTube introduced tools last year to help brands better leverage creator content and its fast-growing Shorts format to drive measurable business outcomes.

The details. The platform has introduced two improvements:

  • Creator-initiated linking: Eligible creators can now directly send linking requests to brands for sponsored videos they’ve already published.
  • Video linking API: Brands working with multiple creators can automate the connection process through a new API integration.

Why we care. The creator economy continues to mature, but administrative friction has been a persistent obstacle in scaling partnerships between brands and content creators. YouTube’s new linking features could significantly streamline the process of turning creator partnerships into measurable campaigns.

The creator-initiated linking feature reduces administrative back-and-forth, while the API automation saves substantial time for brands managing multiple creator relationships simultaneously.

The big picture. Consumer trust in creator recommendations continues to outpace traditional advertising. YouTube is positioned as the most efficient ecosystem for formalizing and measuring these relationships.

What’s next. YouTube will likely continue expanding its BrandConnect toolkit.

Google today released the March 2025 core update. Google said this core update “rollout may take up to 2 weeks to complete.”

Google also wrote:

  • “Today we released the March 2025 core update to Google Search. This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites. We also continue our work to surface more content from creators through a series of improvements throughout this year. Some have already happened; additional ones will come later.”

Core updates happen multiple times per year. Core updates can bring significant, broad changes to Google’s search algorithms and systems, which is why Google announces them. This is the first core update of 2025.

This core update comes three months after the last core update, the December 2024 core update.

What to do if you are hit. Google didn’t share any new advice specific to the March 2025 core update. However, in the past, Google has provided advice on what to consider if you are negatively impacted by a core update:

  • There aren’t specific actions to take to recover. A negative rankings impact may not signal anything is wrong with your pages.
  • Google offered a list of questions to consider if your site is hit by a core update.
  • Google said you can see some recovery between core updates, but the biggest change would be after another core update.

In short: write helpful content for people and not to rank in search engines.

  • “There’s nothing new or special that creators need to do for this update as long as they’ve been making satisfying content meant for people. For those that might not be ranking as well, we strongly encourage reading our creating helpful, reliable, people-first content help page,” Google said previously.

For more details on Google core updates, you can read Google’s documentation.

Previous core updates. The first core update of 2024 – the March 2024 core update – was the largest core update ever, according to Google. It started March 5 and completed 45 days later on April 19.

Why we care. With any core update, we often see large volatility within the Google search results and ranking. These updates hopefully will improve the rankings of your sites or your clients’ sites. But some of you may see fluctuations or even downgrades in Google rankings and organic traffic.

We hope this update rewards you all and sends you lots of traffic and conversions.

I do wonder if this update will have any positive impact for the creators Google had meetings with months ago.

7 ways to segment Performance Max and Shopping campaigns

Google released a new help page detailing Asset testing for retailers, a specialized experiment type for Performance Max campaigns that lets you measure the effectiveness of your creative assets.

What’s new. The new experimental feature tests asset impact within a single PMax campaign:

  • A control group (feed-only) is compared against a treatment group (with added assets).
  • The results are viewable in the Experiment report.

Split testing without duplicate campaigns. Unlike traditional A/B testing that requires running parallel campaigns, this new feature splits traffic within a single Performance Max campaign:

  • Control group: Shows product feed-only ads.
  • Treatment group: Shows product feed plus additional creative assets.

This approach eliminates the need to manage duplicate campaigns while providing clear performance comparisons.

How it works. The experiment divides traffic between the two variations, allowing you to determine whether adding creative assets (like images, videos, and text) improves performance beyond what the product feed alone can deliver.

You can review results in the dedicated Experiment report section, measuring key metrics like conversions, click-through rates, and return on ad spend between the two groups.
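Google doesn’t publish the exact comparison math, but the core readout in the Experiment report boils down to a relative lift between the two arms. A minimal sketch with hypothetical numbers (the metrics and values below are illustrative, not from Google’s report):

```python
def lift(control_value, treatment_value):
    """Relative lift of the treatment arm over the control arm."""
    return (treatment_value - control_value) / control_value

# Hypothetical results from the two traffic splits of one PMax campaign.
control = {"clicks": 10_000, "conversions": 200, "cost": 5_000.0, "revenue": 16_000.0}
treatment = {"clicks": 10_000, "conversions": 230, "cost": 5_000.0, "revenue": 19_500.0}

cvr_control = control["conversions"] / control["clicks"]        # 2.0%
cvr_treatment = treatment["conversions"] / treatment["clicks"]  # 2.3%
roas_control = control["revenue"] / control["cost"]             # 3.2
roas_treatment = treatment["revenue"] / treatment["cost"]       # 3.9

print(f"CVR lift:  {lift(cvr_control, cvr_treatment):+.1%}")    # +15.0%
print(f"ROAS lift: {lift(roas_control, roas_treatment):+.1%}")  # +21.9%
```

A positive lift on the metrics you care about suggests the added creative assets are earning their keep; a flat or negative one suggests the feed alone is doing the work.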

Why we care. Performance Max campaigns have become central to Google’s advertising ecosystem, particularly for retailers. However, many advertisers struggle to understand how much value their creative assets add beyond automated feed-based ads.

This experiment feature addresses that uncertainty, giving retailers data-driven insights into whether investing in additional creative assets delivers meaningful performance improvements.

Go deeper. This launch is part of Google’s broader effort to provide more transparency and control within its automated campaign types, addressing advertiser concerns about the “black box” nature of Performance Max campaigns.

Google’s new help documentation page. About Performance Max optimization experiments: Asset testing.

Google Analytics is enhancing its reporting capabilities with three new features designed to help users better understand their data and identify potential tracking issues.

Percentage values are now included in all detailed reports, new notifications flag missing session_start events, and system alerts highlight high rates of “(not set)” values.

Percentages now standard in reports. Google has added percentage values to each row across all detailed reports in both the “Reports” and “Ads” modules.

The new percentage columns allow analysts to immediately see which traffic sources, pages, or campaigns are driving the most significant portions of their results without manual calculations.
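The calculation GA4 now does for you is simply each row’s share of the column total. A quick sketch with hypothetical session counts:

```python
# Hypothetical session counts by traffic source, mimicking a GA4 detail report.
rows = {"google": 6200, "direct": 2100, "chatgpt.com": 450, "bing": 1250}

# The new percentage column is each row's share of the column total.
total = sum(rows.values())
shares = {source: sessions / total * 100 for source, sessions in rows.items()}

for source, pct in shares.items():
    print(f"{source:12s} {pct:5.1f}%")
```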

Missing session_start notifications. Google Analytics 4 relies heavily on properly configured session_start events. When reports detect a high rate of missing session_start events, the system will now display an information icon with alert details.

Users can interact with these icons to:

  • Understand the specific issue detected.
  • Learn about potential causes.
  • Access documentation explaining how to resolve the problem.
  • Find implementation guidance to prevent future occurrences.

This proactive notification system should help catch configuration issues before they significantly impact data quality.

High “(not set)” rate alerts. Similar to the session_start notifications, Google Analytics will now flag reports showing an unusually high rate of “(not set)” values. This is a common indicator of implementation problems or tracking gaps.

These alerts provide contextual information about:

  • Why “(not set)” values appear in reports
  • What these values typically indicate
  • How to diagnose and fix the underlying tracking issues

Why we care. These updates help improve data transparency and accuracy, which are critical for optimizing campaigns. The addition of percentage values helps quickly assess the impact of different data points, while the new notifications for missing session starts and high “(not set)” rates highlight potential tracking issues that could distort performance metrics.

By addressing these issues early, advertisers can ensure their reports reflect real user behavior, leading to more informed budget allocation and campaign decisions.

Go deeper. The addition of these features comes as organizations increasingly rely on accurate analytics data for decision making and as proper GA4 implementation remains challenging for many teams.

SMB websites see rising traffic from ChatGPT and other AI engines: Study

As generative AI continues to shape website performance, much of the conversation has focused on its negative impact.

Factors like AI Overviews, increased zero-click searches, and a growing demographic that turns to ChatGPT instead of traditional research methods may all play a role.

While these shifts raise concerns, it’s equally important to examine the traffic these AI engines are driving to websites.

To better understand this, we analyzed traffic data from 391 SMB websites, breaking them down by industry and AI search engine referrals.

Our goal was to see exactly how much traffic these companies were getting – and whether it was significant enough yet to warrant a strategic pivot. 

Screenshot of our executive dashboard with a breakdown of AI referral traffic over the last six months.

Key takeaways

Our analysis of traffic data from 391 SMB websites revealed some interesting insights about how generative AI is contributing to visits. 

Key points from this study include:

  • Consistent growth in ChatGPT referrals over the last six months. While other AI engines fluctuate month to month, ChatGPT has been up 123% since September and has consistently been the largest referrer.
  • Notable increases in AI referral traffic relative to organic traffic over the past six months.
  • Industry-specific differences in AI referral traffic patterns. Travel and finance websites see higher AI referral volumes from ChatGPT, while Perplexity and Gemini play more significant roles in health and ecommerce.

Referral traffic from generative AI is on the rise

One significant insight from studying this data is the clear increase in the amount of referral traffic websites are receiving. 

Between September 2024 and February 2025, referral traffic from generative AI rose by 123%.

Comparing this to overall organic traffic highlights why this trend is worth monitoring.

Six months ago, AI referral traffic amounted to 0.54% of organic traffic. 

Today, that ratio is 1.24% of organic traffic. 

This means the share of AI traffic to organic traffic has increased by 130% in the last six months. 
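The 130% figure follows directly from those two ratios:

```python
# AI referral traffic as a percentage of organic traffic, then vs. now.
ratio_then, ratio_now = 0.54, 1.24

growth = (ratio_now - ratio_then) / ratio_then
print(f"{growth:.0%}")  # 130%
```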

While the share is still relatively small, the pace at which it is growing warrants attention. 

It’s also important to note that organic traffic remained relatively constant during this time. 

Graph showing the ratio of AI referral traffic to organic traffic for 391 SMB websites over the last six months.

Dig deeper: Google Search is 373x bigger than ChatGPT search

Referral traffic trends over time

While overall AI traffic has clearly increased recently, the consistency of referral traffic growth fluctuates depending on the AI engine. 

  • ChatGPT has consistently provided more referral traffic month over month for the last six months. 
  • Perplexity and Bing (edgeservices) fluctuate wildly month over month.
  • Gemini remains flat most months while jumping up every once in a while. 

Despite these fluctuations, ChatGPT has consistently been the largest source of AI referral traffic, adding an average of 21% more traffic each month.

Industry fluctuations

ChatGPT consistently drives the most AI referral traffic across all industries.

However, as AI referral traffic continues to grow, other platforms like Perplexity and Gemini are also contributing meaningfully.

When examining traffic by industry, notable differences emerge. 

Travel and finance websites receive the highest ratio of AI referral traffic from ChatGPT. 

Meanwhile, Perplexity accounts for nearly 20% of AI traffic to health and ecommerce websites, with Gemini also contributing a notable share to ecommerce.

These patterns raise important questions about user behavior and how AI search engines manage citations and referrals:

  • Do different demographics prefer certain AI search engines?
  • How does conversational search behavior vary when users inquire about specific industries?
  • Why does Gemini refer so little traffic to travel sites but a substantial amount to ecommerce? Could this reveal insights into how AI Overviews prioritize industries?

Industry fluctuations per AI engine

Next steps

As AI and search evolve, new questions will likely emerge alongside new insights. The impact of AI search engines on website traffic will continue to shift, requiring businesses to stay adaptable.

One key consideration is identifying the threshold at which investing in AI-specific optimization becomes worthwhile. While traditional SEO tactics and AI citation strategies often overlap, they are not identical.

For SMBs, balancing these priorities can be challenging given limited resources. Agencies, therefore, play a crucial role in staying informed and guiding businesses through these changes. 

As AI platforms continue to evolve, ongoing monitoring and strategic pivots will be essential to ensure businesses can benefit from AI-driven traffic – not just mitigate its potential downsides.

About the data

This analysis is based on data from 391 SMB websites, segmented by industry and tracked via Google Analytics to measure AI referral traffic specifically. 

The study focused on the past six months, when referral traffic grew significantly.

Only industries with statistically significant data were included, and only prominent AI search engines were highlighted. 

Engines that referred some traffic but lacked sufficient data for confident reporting include (in order of volume) Jobright AI, Blackbox AI, Allfree AI, and Careerflow AI.

Dig deeper: AI search engines often make up citations and answers: Study

Top 5 tactics to boost PPC lead quality in 2025

Based on the hundreds of conversations I’ve had with B2B and lead generation brands over the years, this could be an annual – if not monthly or even weekly – topic. 

Lead quality remains a constant challenge, regardless of shifts in the advertising landscape.

In 2025, my top PPC strategies for improving lead quality blend time-tested fundamentals with newly released features. 

If you implement everything on this list, you’ll stay ahead of the curve – and you’ll see your marketing dollars drive greater down-funnel impact.

1. Nailing the basics

This falls into the evergreen category, but I still see brands coming up short.

To assess lead quality, you must first understand which leads matter most for your business.

That means refining your buyer persona, which includes attributes such as:

  • Position or title.
  • Vertical.
  • Company size.
  • Target company list.

Our team frequently corrects several foundational issues for new clients. The most common include:

  • Understanding and leveraging the most effective targeting options.
  • Defining and clearly conveying your company’s unique value proposition.
  • Aligning the ad-to-landing page experience.
  • Establishing and optimizing a nurturing process to move leads through the funnel.
  • Understanding intent signals and aligning the call to action accordingly.

In other words, identify the leads you want, and ensure they have every reason to move through the customer journey once they enter your system. 

As you retain more high-value leads, your overall lead quality will improve.

Dig deeper: How to improve PPC lead quality for B2B campaigns

2. Offline conversion tracking

Offline conversion tracking (OCT) allows you to integrate your CRM data into ad campaigns. 

Essentially, this helps Google, Meta, and LinkedIn identify high-quality leads by modeling user data.

Using OCT means syncing a batch of users from your deepest funnel stage that still has enough data density to model on – such as SQLs or opportunities for SMBs. 

Successfully implementing OCT requires:

  • A well-organized CRM.
  • Strong data collection practices.
  • Solid bidding strategies in ad platforms.

When used effectively, OCT prevents platforms from cherry-picking the easiest (and often lowest-quality) leads. 

Instead, it trains them to target users who match the characteristics modeled by your CRM lists.

That said, a couple of caveats apply:

  • You may see slightly higher CPLs when using OCT, but these costs typically balance out when you assess CPQL (cost per quality lead) or cost per MQL, SQL, or opportunity.
  • If you’re a new or small account focused on volume or testing value propositions, you may want to observe offline data rather than optimize for it initially. In this case, casting a wider net and bringing in leads can be a necessary first step in understanding your ideal prospects – and how to attract them.
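To illustrate the first caveat with hypothetical numbers: a higher CPL can still mean a lower cost per qualified lead once qualification rates improve.

```python
def cpql(spend, leads, qualified_rate):
    """Cost per qualified lead: spend divided by the leads that actually qualify."""
    return spend / (leads * qualified_rate)

# Hypothetical before/after OCT: fewer, pricier leads, but more of them qualify.
spend = 10_000.0
before = {"leads": 500, "qualified_rate": 0.10}  # CPL $20, 50 qualified
after = {"leads": 400, "qualified_rate": 0.20}   # CPL $25, 80 qualified

print(f"CPL before:  ${spend / before['leads']:.0f}")  # $20
print(f"CPL after:   ${spend / after['leads']:.0f}")   # $25
print(f"CPQL before: ${cpql(spend, **before):.0f}")    # $200
print(f"CPQL after:  ${cpql(spend, **after):.0f}")     # $125
```

CPL rises 25% in this sketch, but the cost per qualified lead drops by more than a third – exactly the tradeoff described above.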

Dig deeper: Google Ads for lead generation: A 6-step framework for success

3. Targeting refinement

If you don’t target the right users, you won’t bring in quality leads (sorry for stating the obvious). 

To keep your budget focused and effective, take the basics from step one and critically evaluate your audiences.

  • Are your audience sizes manageable, or are they too broad to effectively assess performance?
  • Assuming your ICP is well-defined, what’s your retargeting strategy? 

I recommend building segments based on intent level (high, medium, low), tailoring creative to each segment, and starting with your highest-intent segment to gauge performance. 

If that segment generates quality leads below target CPAs, consider expanding to medium-intent audiences and analyzing the results.

Dig deeper: PPC keyword strategy: How to align search intent with funnel stages

4. Smart ad copy

As the saying goes (I’m about to misquote something I can’t attribute), good ad copy tells the right people to click, and great ad copy tells the wrong people not to.

You can achieve this by clearly defining and calling out your ICP in your ad copy, regardless of the ad type or platform.

Here’s a great example from my LinkedIn feed today:

The copy explicitly highlights a demographic, a position, and a condition (managing two or more people). 

This company has its ICP dialed in – and ad copy to match. 

Anyone who has worked with LinkedIn ads knows that even LinkedIn’s unique targeting can’t perfectly filter for only the users you want. 

However, copy like this does much of the remaining work for you.

Dig deeper: 7 LinkedIn advertising pitfalls: Where your B2B ads setup might stumble

5. More friction in your lead forms

This is the only remotely controversial topic on my list. 

Some brands with robust business development resources prefer to bring in all leads and let their sales teams separate the wheat from the chaff. 

In my experience, though, most are more interested in keeping their CRM data clean and proactively minimizing junk by making lead forms longer and harder to fill out.

Typical lead forms ask for names, emails, and phone numbers at a minimum. 

This creates a low-friction experience and certainly increases form conversion rates.

The alternative is to add additional fields to introduce friction, which can improve lead quality. 

Ensure that the fields you require add valuable information to your lead qualification process. These could include:

  • Company name.
  • Company revenue.
  • Number of employees.
  • Company industry.
  • Reason for contacting/current challenge (free-form).
  • How they heard about your business (free-form).

How much friction to introduce is up to you. 

The more friction, the fewer leads, but the higher the proportion of qualified leads. 

The less friction, the greater the overall lead volume.

LinkedIn now makes it easier to minimize freemail submissions. 

A setting allows you to turn off pre-populated emails (the emails used to sign up for a LinkedIn account, which are usually freemail, like Gmail) and require users to enter a work email instead.

Dig deeper: How to optimize PPC forms and follow-ups for lead gen in 2025

Final thoughts

Before you dive into form fields, spend extra time on the basics outlined in the first lead quality initiative – and revisit them periodically as ICPs evolve.

Also, whatever stage you’re at, having a lead qualification system that is understood and referenced by everyone in your revenue organization is essential.

Before assuming everyone is on the same page:

  • Do a quick sanity check on MQL and SQL definitions.
  • Determine how much you’re willing to pay for each.

Then, roll up your sleeves and start bringing in leads.

How to withstand algorithm updates and optimize for AI search

The SEO industry is undergoing a profound transformation in 2025. 

As large language models (LLMs) increasingly power search experiences, success now depends on withstanding traditional algorithm fluctuations and strategically positioning brands within AI knowledge systems.

This article explores key insights and practical implementation steps to navigate this evolving landscape.

Withstanding algorithm updates in 2025

Traditional algorithm updates remain a reality, but our approach to handling them must evolve beyond reactive tactics. 

The typical SEO response to traffic fluctuations follows a familiar pattern: 

  • Identify the drop date.
  • Cross-check with known updates.
  • Audit on-site changes.
  • Analyze content.
  • Review backlinks.
  • Check competitors.
  • Look for manual actions.

Volatility data via Algoroo

This reactive methodology is no longer sufficient. 

Instead, we need data-driven approaches to identify patterns and predict impacts before they devastate traffic. 

Let me share three key strategies.

Breaking down the problem with granular analysis

The first step is drilling down to understand what changed after an update. 

  • Was the entire website affected, or just certain pages? 
  • Did the drop affect specific queries or query groups? 
  • Are particular sections or content types (like product pages vs. blog posts) impacted?

Using filtering and segmentation, you can pinpoint issues with precision. 

For example, you might discover that a traffic drop:

  • Primarily affected product pages rather than blog content.
  • Or specifically impacted a single category despite maintaining rankings, potentially due to a SERP feature drawing clicks away from organic listings.

Leveraging time series forecasting

One of the most powerful approaches to algorithm analysis is using time series forecasting to establish a baseline of expected performance. 

Meta’s Prophet algorithm is particularly effective for this purpose, as it can account for:

  • Daily and weekly traffic patterns.
  • Seasonal fluctuations.
  • Overall growth or decline trends.
  • Holiday effects.

By establishing what your traffic “should” look like based on historical patterns, you can clearly identify when algorithm updates cause deviations from expected performance.

The key metric here is the difference between actual and forecasted values. 

By calculating these deviations and correlating them with Google’s update timeline, you can quantify the impact of specific updates and distinguish true algorithm effects from normal fluctuations.
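Prophet models trend, seasonality, and holiday effects jointly; the underlying idea can be sketched with a much simpler weekday-seasonal baseline (the traffic numbers below are hypothetical, and this is a stand-in for Prophet, not its actual algorithm):

```python
# A minimal stand-in for the Prophet workflow: build a weekday-seasonal
# baseline from history, then measure how far actuals deviate from it.
def weekday_baseline(history, horizon):
    """Forecast each future day as the mean of the same weekday in `history`.
    `history` is a list of daily values; the forecast continues from its end."""
    n = len(history)
    forecast = []
    for day in range(horizon):
        weekday = (n + day) % 7
        same_weekday = [v for i, v in enumerate(history) if i % 7 == weekday]
        forecast.append(sum(same_weekday) / len(same_weekday))
    return forecast

# Hypothetical daily sessions: four flat weeks of history, then a post-update week.
history = [1000, 1100, 1050, 1020, 980, 600, 550] * 4
actual_after_update = [820, 900, 860, 830, 800, 490, 450]

expected = weekday_baseline(history, horizon=7)
deviations = [a - e for a, e in zip(actual_after_update, expected)]
drop = sum(deviations) / sum(expected)
print(f"Deviation vs. baseline: {drop:.1%}")  # -18.3%
```

Correlate a deviation like this with Google’s update timeline and you can quantify the hit rather than eyeballing a traffic chart.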

SERP intent classification

As search engines’ understanding of user intent evolves, tracking intent shifts becomes crucial. 

By analyzing how Google categorizes and responds to queries over time, you can identify when the search engine’s perception of user intent changes for your target keywords.

This approach involves:

  • Classifying search queries by intent (informational, commercial, navigational, etc.).
  • Monitoring how SERP layouts change for each intent type.
  • Identifying shifts in how Google interprets specific queries.
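As a rough illustration of the first step, intent bucketing can start as simple cue-word rules before graduating to SERP-feature or ML-based classifiers. The cue words below are illustrative, not exhaustive:

```python
# A naive rule-based sketch of query intent bucketing. First matching
# bucket wins; anything without a cue falls back to informational.
INTENT_CUES = {
    "transactional": ("buy", "price", "cheap", "order", "coupon"),
    "commercial": ("best", "review", "vs", "top", "compare"),
    "navigational": ("login", "www", ".com", "official site"),
}

def classify_intent(query):
    q = query.lower()
    for intent, cues in INTENT_CUES.items():
        if any(cue in q for cue in cues):
            return intent
    return "informational"

queries = ["best running shoes 2025", "buy nike pegasus",
           "how do marathons work", "nike login"]
for q in queries:
    print(q, "->", classify_intent(q))
```

Once queries are bucketed, you can track SERP layout changes per bucket and spot when Google reinterprets a keyword’s intent.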

When you notice declining visibility despite stable rankings, intent shifts are often the culprit. 

The search engine hasn’t necessarily penalized your content. It’s simply changed its understanding of what users want when they search those terms.

The rise of AI-driven search and entity representation

While traditional algorithm analysis remains important, a new frontier has emerged: optimizing for representation within AI models themselves.

This shift from ranking pages to influencing AI responses requires entirely new measurement and optimization approaches.

Measuring brand representation in AI models

Traditional rank tracking tools don’t measure how your brand is represented within AI models. 

To fill this gap, we’ve developed AI Rank, a free tool that directly probes LLMs to understand brand associations and positioning.

Brand AI visibility tracking

Here, I’ll illustrate the approach to measuring and interpreting AI visibility for one participating brand.

We utilize two prompt modes and collect this data on a daily basis:

  • Brand-to-Entity (B→E): “List ten things that you associate with Owayo.”
  • Entity-to-Brand (E→B): “List ten brands that you associate with custom sports jerseys.”

This bidirectional analysis creates a structured approach to AI model brand perception.

The analysis performed after two weeks of data collection revealed that this brand is strongly associated with:

  • “Custom sportswear” (weighted score 0.735).
  • “Team uniforms” (0.626).

This shows strong alignment with their core business.
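AI Rank’s exact weighting isn’t documented here, but one plausible (hypothetical) scheme averages a position weight over daily top-10 responses, so a term that ranks high every day approaches 1.0 and an occasional mention stays low:

```python
# Hypothetical scoring for B→E probes: average a position weight
# ((11 - rank) / 10) over daily top-10 lists; absent days score zero.
def association_score(daily_rankings, term):
    weights = []
    for ranking in daily_rankings:
        rank = ranking.index(term) + 1 if term in ranking else None
        weights.append((11 - rank) / 10 if rank else 0.0)
    return sum(weights) / len(weights)

# Three days of (truncated) hypothetical responses to the same prompt.
days = [
    ["custom sportswear", "team uniforms", "cycling jerseys"],
    ["custom sportswear", "cycling jerseys", "team uniforms"],
    ["team uniforms", "custom sportswear", "esports jerseys"],
]
print(association_score(days, "custom sportswear"))  # ≈ 0.97: ranks 1, 1, 2
```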

Bidirectional analysis - brand performance

However, when looking at which brands AI models associate with their key product categories, dominant players like Nike (0.835), Adidas (0.733), and Under Armour (0.556) consistently outrank them.

Tracking association strength over time

In addition to an aggregate overview, tracking how these associations evolve daily is important, revealing trends and shifts in AI models’ understanding.

What do AI models associate this brand with, and how does this perception change over time?

For this brand, we observed that terms like “Custom Sports Apparel” maintained strong associations, while others fluctuated significantly. 

This time-series analysis helps identify stable brand associations and those that may be influenced by recent content or model updates.

Competitive landscape analysis

When analyzing which brands AI models associate with specific product categories, clear hierarchies emerge.

Custom Basketball Jerseys – OpenAI – Ungrounded Responses

For “Custom Basketball Jerseys,” Nike consistently holds Position 1, with Adidas and Under Armour firmly in Position 2 and Position 3, but where is Owayo? 

This visualization exposes the competitive landscape from an AI perspective, showing how challenging it will be to displace these established associations.

Grounded vs. ungrounded responses

A particularly valuable insight comes from comparing “grounded” responses (influenced by current search results) with “ungrounded” responses (from the model’s internal knowledge).

Custom Basketball Jerseys – Google – Grounded Responses
Custom Basketball Jerseys – Google – Ungrounded Responses

This comparison reveals gaps between current online visibility and the AI’s inherent understanding. 

Ungrounded responses show stronger associations with cycling and esports jerseys, while grounded responses emphasize general custom sportswear. 

This highlights potential areas where their online content might be misaligned with their desired positioning.

Strategic implications: Influencing AI representation

These measurements aren’t just academic; they’re actionable. 

For this particular brand, the analysis revealed several strategic opportunities:

  • Targeted content creation: Developing more content around high-value associations where they weren’t strongly represented.
  • Entity relationship strengthening: Creating explicit content that reinforces the connection between their brand and key product categories.
  • Competitive gap analysis: Identifying niches where competitors weren’t strongly represented.
  • Dataset contribution: Publishing structured datasets on Hugging Face that establish their expertise in specific sportswear categories.

Implementing a proactive AI strategy

Based on these insights, here’s how forward-thinking brands can adapt to the AI-driven search landscape.

Direct dataset contributions

The most direct path to influencing AI responses is contributing datasets for model training:

  • Create a Hugging Face account (huggingface.co).
  • Prepare structured datasets that prominently feature your brand.
  • Upload these datasets for use in model fine-tuning.

When models are trained using your datasets, they develop stronger associations with your brand entities.

Creating RAG-optimized content

Retrieval-augmented generation (RAG) enhances LLM responses by pulling in external information. To optimize for these systems:

  • Structure content for easy retrieval: Use clear, factual statements about your products/services.
  • Provide comprehensive product information: Include detailed specifications and use cases.
  • Craft content for direct quotability: Create concise, authoritative statements that RAG systems can extract verbatim.

Building brand associations through entity relationships

LLMs understand the world through entities and their relationships. To strengthen your brand’s position:

  • Define clear entity relationships: “Owayo is a leading provider of custom cycling jerseys.”
  • Create content that reinforces these relationships: Expert articles, case studies, authoritative guides.
  • Publish in formats that LLMs frequently index: Technical documentation, structured knowledge bases.

Measure, optimize, repeat

Implement continuous measurement of your brand’s representation in AI systems:

  • Regularly probe LLMs to track brand and entity associations.
  • Monitor both grounded and ungrounded responses to identify gaps.
  • Analyze competitor positioning to identify opportunities.
  • Use insights to guide content strategy and optimization efforts.

From SEO to AI influence

The shift from traditional search to AI-driven information discovery requires a fundamental strategic revision. 

Rather than focusing solely on ranking individual pages, forward-thinking marketers must now:

  • Use advanced forecasting to better understand algorithm impacts.
  • Monitor SERP intent shifts to adapt content strategy accordingly.
  • Measure brand representation within AI models.
  • Strategically influence training data to shape AI understanding.
  • Create content optimized for both traditional search and AI systems.

By combining these approaches, brands can thrive in both current and emerging search paradigms. 

The future belongs to those who understand how to shape AI responses, not just how to rank pages.

Future work

Savvy data scientists will notice that some data tidying is in order, starting with normalizing terms by removing capitalization and various artifacts (e.g., numbers before entities). 

In the coming weeks, we’ll also work on better concept merging/canonicalization, which can further reduce noise and perhaps even add a named entity recognition model to aid the process. 
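A minimal sketch of the concept-merging idea: map near-duplicate entity strings onto one canonical form using plain string similarity. The threshold and helper name are assumptions; a production pipeline would more likely use embeddings or the NER model mentioned above.

```python
from difflib import SequenceMatcher

# Hypothetical sketch of concept merging/canonicalization: each term is
# mapped to the first previously seen term it closely resembles.
def canonicalize(terms: list[str], threshold: float = 0.85) -> dict[str, str]:
    canonical: list[str] = []
    mapping: dict[str, str] = {}
    for term in terms:
        match = next(
            (c for c in canonical
             if SequenceMatcher(None, term, c).ratio() >= threshold),
            None,
        )
        if match is None:  # no close match: term becomes its own canonical form
            canonical.append(term)
            match = term
        mapping[term] = match
    return mapping

mapping = canonicalize(["cycling jersey", "cycling jerseys", "bike shorts"])
print(mapping)
```

Here "cycling jerseys" collapses into "cycling jersey" while "bike shorts" stays separate, which is the kind of noise reduction the paragraph above describes.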

Overall, we feel that much more can be derived from the collected raw data and invite anyone with ideas to contribute to the conversation.

Disclosure and acknowledgments: AI visibility data was collected via AI Rank with written permission from Owayo brand representatives for exclusive use in this article. For other uses, please contact the author.

[Watch] How to withstand Google algorithm updates in 2025

Watch my SMX Next session for more insights on how to improve your site and withstand future algorithm updates.

Is your landing page converting better—or worse—than your competitors’? If you’re not sure, now’s the time to find out.

Unbounce’s new Conversion Benchmark Report provides a clear, data-backed look at how landing pages are performing across industries. The report includes median conversion rates by sector, giving marketers a useful baseline to assess their own performance.

The report also offers helpful guidance for interpreting your own results—whether you’re outperforming the median or identifying areas for optimization. It reminds marketers that conversion rate is just one piece of the puzzle: lower conversion rates might still represent high-value leads, and even high-performing pages often have room to improve.

Whether you’re looking to benchmark your performance or spot opportunities to increase conversions, this report is a valuable reference for digital marketers across all industries.

Download the full report to see where your landing pages stand.

Google Ads Editor 2.9 released

Google released version 2.9 of Google Ads Editor, adding new campaign management tools, video ad enhancements, and better support for Shopping and Performance Max campaigns.

Key updates:

  • Manager Account Labels. Advertisers can now attach labels from Google Ads Manager (MCC) accounts to campaigns, ad groups, and keywords.
  • Expanded Shopping Ads. Retail Performance Max campaigns can now serve shopping ads on brand-related searches, even if those brands are typically excluded.
  • Vertical Video Generation. Responsive video ads now support automatic vertical video creation for Video Views campaigns.
  • Masthead Ads Support. Advertisers can create and manage YouTube Masthead ads directly within Ads Editor.
  • Better Measurement. Limited support for lift measurement now allows adding or removing campaigns from existing studies.
  • Performance Max Age Exclusions. Advertisers can now set negative age criteria at the campaign level.
  • VRC Campaign Conversion Tool. Standard Video campaigns with Target CPM bidding are transitioning to Video Reach Campaigns (VRC) 2.0, which includes inventory control settings and requires responsive video ads.
  • Multi-Tab Google Sheets Export. Advertisers can now export data to Google Sheets with separate tabs for different entity types, improving usability.

Why we care. The latest update helps you streamline workflows, improve video ad performance, and better manage audience targeting across multiple campaigns.

The big picture. These updates reflect Google’s push for automation, video-first advertising, and improved measurement capabilities to help advertisers optimize campaigns more efficiently.

What’s next. Google Ads Editor 2.5 and older versions will no longer be supported, making it essential for advertisers to upgrade to the latest version to access these new features.

Google TV: What you need to know about CTV buying in Google Ads

Google is rolling out performance upgrades for Display & Video 360’s connected TV (CTV) ad solutions, enhancing audience targeting, measurement, and campaign insights.

Key upgrades:

  • Enhanced audience targeting. Marketers can now reach households based on demographics, shared interests, or purchase intent.
  • Improved measurement. New conversion tracking capabilities in Display & Video 360 and Campaign Manager 360 will help advertisers connect CTV ads to household purchasing behavior.
  • Household-level insights. Reach metrics will now include household-level data alongside existing people-based reach, improving comparisons between CTV and traditional TV.

Why we care. As CTV viewership grows, advertisers need better tools to reach the right households and measure ad performance across devices. The update introduces custom bidding experiments, enabling A/B testing of bidding strategies to determine the best-performing approach.

Additionally, the new multi-goal bidding capability lets advertisers optimize for multiple objectives (e.g., conversions, viewability) within a single campaign, which could lead to better performance and ROI.

The big picture. By leveraging IP addresses and other privacy-aligned signals, Google aims to help advertisers optimize CTV campaigns while respecting user privacy.

What’s next. These updates will roll out over the coming months, with automatic benefits for advertisers using Display & Video 360 across YouTube and top streaming platforms.