How to turn Claude Code into your SEO command center

Lately, I’ve been spending most of my day inside Cursor running Claude Code. I’m not a developer. I run a digital marketing agency. But Claude Code within Cursor has become the fastest way for me to handle many of the tasks on my plate, including pulling and analyzing data from Google Search Console, GA4, and Google Ads.
The setup takes about an hour. After that, you can ask things like “which keywords am I paying for that I already rank for organically?” and get an answer in seconds instead of spending an afternoon with spreadsheets. (I wouldn’t have been the one spending an afternoon with spreadsheets anyway, but now nobody has to.)
Here’s the step-by-step process I developed while analyzing data for our agency clients. If this looks too technical, paste the URL of this article into Claude and ask it to walk you through it step by step.
What you’re building
What you end up with is a project directory where Claude Code has access to Python scripts that pull live data from your Google APIs. You fetch the data, it lands in JSON files, and then you just talk to it.
No dashboards to build. No Looker Studio templates to maintain. You’re basically giving Claude Code the same data your team would look at, and letting it do the cross-referencing.
```
seo-project/
├── config.json                 # Client details + API property IDs
├── fetchers/
│   ├── fetch_gsc.py            # Google Search Console
│   ├── fetch_ga4.py            # Google Analytics 4
│   ├── fetch_ads.py            # Google Ads search terms
│   └── fetch_ai_visibility.py  # AI Search data
├── data/
│   ├── gsc/                    # Query + page performance
│   ├── ga4/                    # Traffic by channel, top pages
│   ├── ads/                    # Search terms, spend, conversions
│   └── ai-visibility/          # AI citation data
└── reports/                    # Generated analysis
```
The SEO toolkit you know, plus the AI visibility data you need.

Step 1: Set up Google API authentication
Everything runs through a Google Cloud service account. One service account covers both GSC and GA4, which is nice. Google Ads needs its own OAuth setup, which is less nice but manageable.

Service account (for GSC + GA4)
- Create a project in Google Cloud Console.
- Enable the Search Console API and Google Analytics Data API.
- Create a service account under IAM & Admin > Service Accounts.
- Download the JSON key file.
- Add the service account email as a user in your GSC property (read access is enough).
- Add it as a Viewer in your GA4 property.
The service account email looks like your-project@your-project-id.iam.gserviceaccount.com. You’ll add this email address to each client’s GSC and GA4 properties, same way you’d add any team member.
For agencies: one service account works across all clients. Add it to each property, update a config file with the property IDs, and you’re set.
Google Ads authentication
Google Ads is different. You need:
- A developer token from the Google Ads API Center (under Tools & Settings > Setup > API Center).
- OAuth 2.0 credentials from Google Cloud (not the service account, a separate OAuth client).
- A one-time browser authentication to generate a refresh token.
The developer token requires an application. For agency use, describe it as “automated reporting for marketing clients.” Approval usually takes 24-48 hours.
If you’re using a Manager Account (MCC), one developer token and one refresh token cover all sub-accounts. You just change the customer ID per client.
If you don’t have API access or an MCC (maybe it’s a new client and you’re still getting set up), you can skip the API entirely. Download 90 days of keyword and search terms data as CSVs from the Google Ads UI, drop them in your data directory, and Claude Code will work with those just as well. That’s how we handle clients who aren’t in our MCC yet.
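For the CSV route, the one chore is getting the export into the same shape as the API data. Here’s a sketch of that normalization. The column headers (“Search term”, “Impr.”, “Clicks”, “Cost”) are assumptions based on a typical export — check them against your actual file, since the Ads UI localizes headers and appends “Total” summary rows:

```python
import csv
import json

def ads_csv_to_json(csv_path, json_path):
    """Normalize a Google Ads search-terms CSV export into JSON records.

    Column names are assumptions -- verify against your export. The Ads UI
    appends "Total" summary rows, which we skip, and formats numbers with
    thousands separators, which we strip.
    """
    records = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            term = (row.get("Search term") or "").strip()
            if not term or term.startswith("Total"):
                continue  # skip blank and summary rows
            records.append({
                "search_term": term,
                "impressions": int((row.get("Impr.") or "0").replace(",", "")),
                "clicks": int((row.get("Clicks") or "0").replace(",", "")),
                "cost": float((row.get("Cost") or "0").replace(",", "")),
            })
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)
    return records
```

Once the CSV lands in data/ads/ as JSON, Claude Code treats it exactly like API-fetched data.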
Install the Python dependencies
All the examples below assume you’re working in the terminal on a Mac or Linux machine. If you’re on Windows, the easiest path is Windows Subsystem for Linux (WSL).
```shell
pip install google-api-python-client google-auth google-analytics-data google-ads
```
Step 2: Build the data fetchers
Each fetcher is a short Python script that authenticates, pulls data, and saves JSON. I didn’t write these from scratch. I described what I wanted to Claude Code and it wrote them.
One thing that genuinely surprised me: I never had to read the API documentation. Not for GSC, GA4, or Google Ads.
I’d say something like “I want to pull the top 1,000 queries from Search Console for the last 90 days,” and Claude Code would figure out the authentication, endpoints, and query parameters. It already knows these APIs. You just tell it what data you want.
Here’s what the scripts look like.
Google Search Console fetcher
```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/webmasters.readonly']

def get_gsc_service():
    credentials = service_account.Credentials.from_service_account_file(
        'service-account-key.json', scopes=SCOPES
    )
    return build('searchconsole', 'v1', credentials=credentials)

def fetch_queries(service, site_url, start_date, end_date):
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            'startDate': start_date,
            'endDate': end_date,
            'dimensions': ['query'],
            'rowLimit': 1000
        }
    ).execute()
    return response.get('rows', [])
```
You get back queries with clicks, impressions, CTR, and average position. Save it as JSON.
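The “save it as JSON” step is a couple of lines. The filename and directory convention here is mine, not anything the API dictates — the only goal is that Claude Code can find and read the file later:

```python
import json
from pathlib import Path

def save_rows(rows, out_dir, start_date, end_date):
    """Write GSC rows into data/gsc/ as a dated JSON file.

    Dated filenames are a personal convention so monthly pulls
    don't overwrite each other.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"queries_{start_date}_{end_date}.json"
    path.write_text(json.dumps(rows, indent=2), encoding="utf-8")
    return path
```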
GA4 fetcher
```python
from google.oauth2 import service_account
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    RunReportRequest, DateRange, Metric, Dimension
)

def get_ga4_client():
    credentials = service_account.Credentials.from_service_account_file(
        'service-account-key.json',
        scopes=['https://www.googleapis.com/auth/analytics.readonly']
    )
    return BetaAnalyticsDataClient(credentials=credentials)

def fetch_traffic_by_channel(client, property_id, start_date, end_date):
    request = RunReportRequest(
        property=f"properties/{property_id}",
        date_ranges=[DateRange(start_date=start_date, end_date=end_date)],
        dimensions=[Dimension(name="sessionDefaultChannelGroup")],
        metrics=[
            Metric(name="sessions"),
            Metric(name="totalUsers"),
            Metric(name="bounceRate"),
        ]
    )
    return client.run_report(request)
```
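One wrinkle: run_report returns a protobuf-style response object, not plain dicts. Before saving to JSON I’d flatten it with a small helper like this. This is my own sketch, not part of the GA4 client library — it works on any object with the same attribute shape, which also makes it easy to test offline:

```python
def ga4_response_to_records(response):
    """Flatten a RunReportResponse into plain dicts for JSON output.

    Reads dimension/metric names from the response headers so the same
    helper works for any report shape. GA4 returns metric values as
    strings; we keep them as-is here.
    """
    dim_names = [h.name for h in response.dimension_headers]
    metric_names = [h.name for h in response.metric_headers]
    records = []
    for row in response.rows:
        rec = {n: v.value for n, v in zip(dim_names, row.dimension_values)}
        rec.update({n: v.value for n, v in zip(metric_names, row.metric_values)})
        records.append(rec)
    return records
```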
Google Ads fetcher
Google Ads uses something called Google Ads Query Language (GAQL). If you’ve ever written a SQL query, this will look familiar. If you haven’t, don’t worry: Claude Code will write it for you.
```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      search_term_view.search_term,
      metrics.impressions,
      metrics.clicks,
      metrics.cost_micros,
      metrics.conversions
    FROM search_term_view
    WHERE segments.date DURING LAST_30_DAYS
    ORDER BY metrics.impressions DESC
"""

response = ga_service.search(customer_id="1234567890", query=query)
This pulls the same data as the Search Terms report you’d download from the Google Ads UI: impressions, clicks, cost, conversions, match type, campaign, and ad group.
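One quirk worth knowing: the API reports cost in micros, millionths of the account currency, so divide by 1,000,000 to get actual spend. Here’s a sketch of flattening the result rows into JSON-ready records (my helper, not part of the official client — it runs on anything with the same attribute shape, so it tests without hitting the API):

```python
def ads_rows_to_records(rows):
    """Flatten GAQL search-term rows into plain dicts for JSON output.

    cost_micros is millionths of the account currency; dividing by
    1,000,000 converts it to dollars (or whatever the account bills in).
    """
    return [{
        "search_term": row.search_term_view.search_term,
        "impressions": row.metrics.impressions,
        "clicks": row.metrics.clicks,
        "cost": row.metrics.cost_micros / 1_000_000,
        "conversions": row.metrics.conversions,
    } for row in rows]
```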
Step 3: Create a client config
One JSON file per client. Nothing fancy, just the property IDs and some context:
```json
{
  "name": "Client Name",
  "domain": "example.com",
  "gsc_property": "https://www.example.com/",
  "ga4_property_id": "319491912",
  "google_ads_customer_id": "9270739126",
  "industry": "Higher Education",
  "competitors": [
    "https://competitor1.com/",
    "https://competitor2.com/"
  ]
}
```
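The fetchers can then share one loader. The required-keys check below is my addition — the scripts only need whichever property IDs match the sources you’re actually pulling:

```python
import json

REQUIRED_KEYS = ("name", "gsc_property", "ga4_property_id")

def load_config(path):
    """Load a client config and fail fast if a required ID is missing.

    Which keys are "required" depends on which fetchers you run; this
    list is just an example.
    """
    with open(path, encoding="utf-8") as f:
        config = json.load(f)
    missing = [k for k in REQUIRED_KEYS if not config.get(k)]
    if missing:
        raise ValueError(f"{path} is missing: {', '.join(missing)}")
    return config
```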
Step 4: Ask cross-source questions
So now you’ve got JSON files from GSC, GA4, and Ads sitting in your project directory. Claude Code can read all of them at once and answer questions that would normally mean a lot of tab-switching and VLOOKUP work.
The paid-organic gap analysis
The single most valuable question I’ve found:
- “Compare the GSC query data against the Google Ads search terms. Find keywords where we’re paying for clicks but already have strong organic positions. Also, find keywords where we’re spending on ads with zero organic visibility. Those are content gaps.”

When I ran this for a higher education client, it identified:
- 2,742 search terms with wasted ad spend (impressions but zero clicks).
- 351 opportunities to reduce paid spend on terms where organic was already strong.
- 33 high-performing organic queries that paid could amplify.
- 41 content gaps where paid was the only presence (no organic).
That analysis took about 90 seconds. The equivalent manual process (downloading CSVs from GSC and Ads, VLOOKUPing across them, categorizing the overlaps) takes most of an afternoon.
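The logic behind that analysis can be sketched in plain Python. This is a deliberately simplified version — exact-string matching and a hard position threshold, where the real prompt-driven analysis is fuzzier about both:

```python
def paid_organic_gaps(gsc_rows, ads_rows, strong_position=5.0):
    """Cross-reference organic queries against paid search terms.

    gsc_rows: [{"query": ..., "position": ...}, ...]
    ads_rows: [{"search_term": ..., "impressions": ..., "clicks": ...}, ...]
    Exact-string matching keeps the sketch simple; real queries and
    search terms rarely line up this cleanly.
    """
    organic = {r["query"]: r for r in gsc_rows}
    gaps = {"wasted_spend": [], "reduce_spend": [], "content_gaps": []}
    for term in ads_rows:
        q = term["search_term"]
        if term["impressions"] > 0 and term["clicks"] == 0:
            gaps["wasted_spend"].append(q)   # paying for impressions, no clicks
        if q in organic and organic[q]["position"] <= strong_position:
            gaps["reduce_spend"].append(q)   # already ranking well organically
        elif q not in organic:
            gaps["content_gaps"].append(q)   # paid is the only presence
    return gaps
```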
Other questions worth asking
Once you have GSC + GA4 + Ads data loaded:
- “Which pages get the most impressions in GSC but have low CTR? What’s the traffic from GA4 for those same pages?” (identifies meta description/title opportunities)
- “What are the top 20 organic queries by impression that we’re not running ads against?” (paid amplification candidates)
- “Group the GSC queries by topic cluster and show me which clusters have the most impressions but lowest average position.” (content investment priorities)
- “Which pages in GA4 have high bounce rates but strong GSC positions? Those might need content improvement.”
Claude Code isn’t doing anything a human couldn’t do with spreadsheets. It’s doing it in seconds, and you can follow up with another question without rebuilding the whole analysis from scratch.
Step 5: Add AI visibility tracking
Traditional SERP positions aren’t the whole picture anymore. Between Google’s AI Overviews, AI Mode, Copilot, ChatGPT, and Perplexity, you need to know whether AI systems are citing your content.
This is especially true in verticals like higher education, where prospective students increasingly start their research in AI search tools.
If you have a tracking platform
Tools like Scrunch, Semrush’s AI Visibility toolkit, or Otterly.ai will track your brand’s presence across ChatGPT, Perplexity, Gemini, Google AI Overviews, and Copilot.
Export the data as CSV or JSON and drop it in your data directory. Claude Code can then cross-reference AI citations against your GSC and Ads data.

When I did this for our own site, we discovered two blog posts competing for the same AI citations on GEO-related queries.
One had 12 times as many Copilot citations as the other, despite both targeting similar intent. That led to a consolidation decision we wouldn’t have made based solely on traditional rank data. This kind of AI search cannibalization is something most SEO teams aren’t yet checking for.
If you don’t have a tracking platform
You don’t need an enterprise tool to start. There are several APIs that let you pull AI search data directly, and the costs are lower than you’d think.
DataForSEO AI Overview API: The most accessible option. Pay-as-you-go at about $0.01 per query, with a $50 minimum deposit. You send a keyword, and it returns the full AI Overview content from Google SERPs, including which URLs are cited. It also has a separate LLM Mentions API that tracks how LLMs reference brands across platforms.
```python
import requests

# DataForSEO AI Overview — simplified example
payload = [{
    "keyword": "best higher education marketing agencies",
    "location_code": 2840,  # US
    "language_code": "en"
}]

response = requests.post(
    "https://api.dataforseo.com/v3/serp/google/ai_overview/live/advanced",
    auth=("your-login", "your-password"),  # DataForSEO uses HTTP Basic auth
    json=payload
)
# Returns: AI Overview text, cited URLs, references
```
SerpApi: Starts at $75/month for 5,000 searches. Returns structured JSON for the full Google SERP, including AI Overviews. Good documentation, Python client library, and a free tier for testing.
SearchAPI.io: Similar to SerpApi, starts at $40/month. Also offers a separate Google AI Mode API that captures AI-generated answers with citations.
Bright Data SERP API: Pay-as-you-go starting around $1.80 per 1,000 requests. Set brd_ai_overview=2 to increase the likelihood of capturing AI Overviews. Also has an MCP server if you want tighter agent integration.
Bing Webmaster Tools: Free, and the only first-party AI citation data available from any major platform right now. Shows how often your content appears as a source in Copilot and Bing AI responses, with page-level data and the “grounding queries” that triggered citations. No API yet (Microsoft says it’s on the backlog), but you can export CSVs.
DIY direct LLM API calls: The cheapest approach for small-scale monitoring. Write a Python script that sends a consistent set of prompts to the OpenAI, Anthropic, and Perplexity APIs, then parses responses for brand mentions. Perplexity’s Sonar API is especially useful here because it includes web citations in responses, and citation tokens are free. Total cost: under $20/month for a modest prompt library.
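The parsing half of that DIY script is the easy part. Here’s a naive whole-word counter over whatever response texts you’ve collected — a hypothetical helper, and real matching would probably want brand aliases and common misspellings too:

```python
import re

def count_brand_mentions(responses, brands):
    """Count brand mentions across a batch of LLM response texts.

    Matching is naive: case-insensitive, whole-word. `responses` is a
    list of plain strings you've already pulled from whichever APIs
    you query on a schedule.
    """
    counts = {b: 0 for b in brands}
    for text in responses:
        for brand in brands:
            counts[brand] += len(
                re.findall(rf"\b{re.escape(brand)}\b", text, re.IGNORECASE)
            )
    return counts
```

Run the same prompt set weekly and diff the counts; the trend matters more than any single number.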
The general pattern: Pick one SERP API for Google AI Overview data, use Bing Webmaster Tools (it’s free), and supplement with direct LLM API calls or a dedicated tracker if budget allows.
The workflow in practice
So what does this actually look like on a Tuesday morning?
Setup: Once per client, ~15 minutes
- Add service account email to client’s GSC and GA4
- Get their Google Ads customer ID (or export search terms if they’re not in the MCC)
- Create a config.json with property IDs
Monthly data pull: ~5 minutes
```shell
python3 run_fetch.py --sources gsc,ga4,ads
```
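run_fetch.py itself isn’t shown in this article, but its argument handling might look something like this (a hypothetical sketch — the actual dispatch into the fetcher scripts is omitted):

```python
import argparse

VALID_SOURCES = ("gsc", "ga4", "ads", "ai-visibility")

def parse_sources(value):
    """Turn a comma-separated --sources value into a validated list."""
    sources = [s.strip() for s in value.split(",") if s.strip()]
    unknown = [s for s in sources if s not in VALID_SOURCES]
    if unknown:
        raise SystemExit(f"unknown sources: {', '.join(unknown)}")
    return sources

def main(argv=None):
    parser = argparse.ArgumentParser(description="Pull data for one client")
    parser.add_argument("--sources", default="gsc,ga4,ads")
    args = parser.parse_args(argv)
    for source in parse_sources(args.sources):
        # here you'd call into fetchers/fetch_<source>.py
        print(f"fetching {source}...")
```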
Analysis (as needed): Open Claude Code in the project directory and ask questions. The data is right there.
Output: Claude Code generates a markdown report. When I need something client-facing, I push it to Google Docs using a separate tool I built called google-docs-forge. It converts markdown into a properly formatted Google Doc, so the output doesn’t look like it came from a terminal.

The whole process takes about 35 minutes for a new client: setup, fetch, analysis. Monthly refreshes take about 20 minutes, including analysis time. Compare that to the manual alternative of downloading CSVs from three different platforms, cross-referencing in spreadsheets, and writing up findings.
What this doesn’t replace
I don’t want to oversell this. Claude Code is reading your data and finding patterns across sources faster than you can manually. It’s not telling you what to do about those patterns. You still need someone who understands the client’s business, their competitive situation, and what they’re actually trying to accomplish. The tool finds the interesting data. The strategist decides what to do with it.
You also need to verify what it gives you. LLMs can hallucinate, and that includes data analysis. I’ve seen Claude Code confidently report a number that didn’t match the JSON file. It’s rare, but it happens.
Treat the output like you’d treat work from a new analyst: trust but verify, especially before anything goes to a client. Spot-check the numbers against the source data. If something looks too clean or too dramatic, go look at the raw file.
It also doesn’t replace your existing platforms. If you need historical trend data, automated alerts, or a client-facing dashboard, you still want a Semrush or an Ahrefs. What this gives you is the ability to ask ad hoc questions across multiple data sources, which none of those platforms does well on their own.
And the GEO/AI visibility tracking space is still immature. The data from AI citation tools is directionally useful. Wind sock, not GPS. Google doesn’t publish AI Overview or AI Mode citation data through any official API, so every third-party tool is approximating. Bing’s Copilot data is the most reliable because it’s first-party, but it only covers the Microsoft ecosystem.

Start with GSC, layer in the rest
If you want to give this a shot:
- Start with GSC only. It’s the easiest API to connect (service account, read-only access, free). Fetch your queries and pages for the last 90 days. Ask Claude Code to group queries by topic, identify page-2 ranking opportunities, and find pages with high impressions but low CTR.
- Add GA4 second. Same service account. Now you can ask cross-source questions: “Which pages rank well in GSC but have high bounce rates in GA4?”
- Add Google Ads when you’re ready. The OAuth setup is more involved, but the paid-organic gap analysis alone justifies the effort.
- Layer in AI visibility last. Start with Bing Webmaster Tools (free) and one SERP API for AI Overview data.
Each layer builds on the last. You don’t need all four to get value. The GSC + GA4 combination alone surfaces insights that take hours to find manually.

