I’ve been in performance marketing since 2006: managing clients, building in-house performance teams. I installed Claude Code in February, and within three weeks I had built five tools that fundamentally changed how I work with clients. Tools I use daily, on real accounts, with real budgets.
Here’s each one — what it does, how I built it, and what went wrong.
1. Google Ads audit for a new client
The situation: I had a new e-commerce client. After our initial call, we agreed on an audit deal. I would take a look at the ad account and write down what could be done to increase campaign efficiency. So I needed a full account audit — performance overview, revenue breakdown, market comparison, seasonality patterns. The kind of analysis that normally means half a day in the Google Ads interface, clicking through reports, exporting CSVs, building pivot tables.
What I did: Connected Claude Code to the Google Ads API and told it to pull the last year of data. What surprised me immediately was how easy it was to extract revenue data — even though the account had dozens of different conversion actions with different values. Normally I’d spend a significant chunk of time just figuring out which conversions and conversion values to sum and setting up a good pivot table. Claude mapped them all, pulled the numbers, and when I checked the control totals against the Google Ads dashboard — everything matched. To get a 14-day, 30-day, 90-day, or even full-year view, I just had to ask.
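For anyone curious what that pull looks like under the hood, here is a rough sketch. The GAQL fields are real; the customer ID, filename, and the aggregation helper are illustrative, not my production code:

```python
from collections import defaultdict

# GAQL: conversion value split by conversion action (fields are standard GAQL;
# the date window is one of the views I asked for).
QUERY = """
    SELECT
      segments.conversion_action_name,
      metrics.conversions,
      metrics.conversions_value
    FROM campaign
    WHERE segments.date DURING LAST_90_DAYS
"""

def sum_revenue(rows):
    """Aggregate conversion value per conversion action and return
    (per_action, control_total) so the total can be checked against
    the Google Ads dashboard."""
    per_action = defaultdict(float)
    for r in rows:
        per_action[r["action"]] += r["value"]
    return dict(per_action), round(sum(per_action.values()), 2)

# The live call, via the official google-ads Python client, is roughly:
#   client = GoogleAdsClient.load_from_storage("google-ads.yaml")
#   stream = client.get_service("GoogleAdsService").search_stream(
#       customer_id="1234567890", query=QUERY)
```

The control total is the point: if it doesn’t match the dashboard, something in the conversion mapping is off.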

The seasonality and market breakdown: This is where it got genuinely impressive. I asked for ROAS per country. The thing is, campaigns come and go throughout the year — some markets had 15+ campaigns that were created, paused, replaced. Manually, I’d need to aggregate all campaigns per market across the entire year, which is tedious and error-prone. Claude did it instantly. Immediately showed which markets were above average ROAS (worth scaling) and which were below (worth optimizing). After that, the seasonality analysis was straightforward — clear patterns that would have taken me a while to visualize.
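The per-market aggregation itself is simple once all the data is in one place. A sketch of the logic (field names and the scale/optimize labels are placeholders):

```python
from collections import defaultdict

def roas_by_market(rows):
    """Aggregate spend and conversion value per market across all campaigns
    (active, paused, or replaced), then flag each market against the
    account-average ROAS."""
    spend = defaultdict(float)
    value = defaultdict(float)
    for r in rows:
        spend[r["country"]] += r["cost"]
        value[r["country"]] += r["conv_value"]
    account_roas = sum(value.values()) / sum(spend.values())
    report = {}
    for country in spend:
        roas = value[country] / spend[country]
        verdict = "scale" if roas >= account_roas else "optimize"
        report[country] = (round(roas, 2), verdict)
    return report
```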
The presentation: I generated a Google Slides deck with the findings. I won’t lie — I had to prompt it quite a bit to get the slides looking decent. Programmatic Google Slides formatting is painful regardless of the tool. But after some back and forth, I had a presentation I could walk through on the client call.
Time saved: What would have been a full day of work became about 2-3 hours, most of which was me reviewing the output and refining the presentation. Not to mention the energy saved — I could focus on strategy instead of wrestling with reports and data splits.


2. Meta Ads audit with statistical guardrails
The problem: Same as above, but for Meta. With an extra complication — some Meta data isn’t easily available through the Graph API, so I needed browser automation too.
What I built: A system that combines Chrome DevTools automation (via MCP protocol) with Meta’s Graph API. It extracts campaign data, creative performance, audience insights, and generates an HTML report.
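The Graph API half is the straightforward part. A minimal sketch using only the standard insights endpoint; the account ID and token are placeholders, and the DevTools/MCP half isn’t shown:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

GRAPH = "https://graph.facebook.com/v19.0"

def fetch_insights(account_id, token):
    """Ad-level insights from Meta's Graph API. account_id and token are
    placeholders; anything the API won't expose comes from the browser
    automation instead."""
    params = urlencode({
        "level": "ad",
        "fields": "ad_name,spend,actions",
        "date_preset": "last_30d",
        "access_token": token,
    })
    with urlopen(f"{GRAPH}/act_{account_id}/insights?{params}", timeout=30) as resp:
        return json.load(resp)["data"]

def purchases(row):
    """Pull the purchase count out of the Graph API 'actions' list."""
    for action in row.get("actions", []):
        if action["action_type"] == "purchase":
            return int(action["value"])
    return 0
```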
The mistake that made it better: Claude analyzed a carousel campaign and confidently declared certain creatives as “top performers.” The analysis looked solid. I almost sent it to the client.
Then I checked the numbers. Some “top performers” had a sample size of one conversion. Claude had calculated percentages and drawn conclusions from statistically meaningless data.
I built guardrails. The system now checks sample sizes before making performance claims. Low-sample segments get flagged, not analyzed. I documented this in a lessons-learned file that Claude reads before every analysis.
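The guardrail itself is almost trivial, which is the point. A sketch, with an assumed minimum sample size:

```python
MIN_CONVERSIONS = 30  # assumed threshold; the right number depends on the account

def guardrail(segments, min_n=MIN_CONVERSIONS):
    """Split segments into those with enough conversions to analyze and those
    that only get flagged. A 'top performer' with one conversion never
    reaches the analysis step."""
    analyzable, flagged = [], []
    for s in segments:
        (analyzable if s["conversions"] >= min_n else flagged).append(s)
    return analyzable, flagged
```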
The takeaway: This is probably the most important thing I learned: Claude Code is extraordinarily capable, but it doesn’t know what it doesn’t know. If you don’t understand statistical significance, you won’t catch it when Claude doesn’t check for it. Domain expertise isn’t optional — it’s the whole point.
3. Campaign management CLI with safety rules
The problem: Sometimes I need to make bulk changes across campaigns — adjust bids, change strategies, pause underperformers. The Google Ads interface is slow for this. Google Ads Editor is better but still manual.
What I built: A command-line tool that connects to the Google Ads API and can create, modify, and monitor campaigns directly from the terminal.
Why it scared me: This tool makes real changes to real campaigns with real client budgets. So I built a safety rules engine:
- CRITICAL (budget changes above threshold): requires explicit confirmation + reason
- HIGH (bid strategy changes): requires confirmation
- MEDIUM (ad group modifications): logged with warning
- LOW (status changes): logged only
The tool also tracks spend thresholds and won’t let you accidentally multiply a budget by 10. It validates conversion tracking names before applying target ROAS strategies.
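A sketch of the classification logic, mirroring those four levels; the thresholds are illustrative, not my production values:

```python
from enum import Enum

class Risk(Enum):
    CRITICAL = "requires explicit confirmation + reason"
    HIGH = "requires confirmation"
    MEDIUM = "logged with warning"
    LOW = "logged only"

BUDGET_THRESHOLD = 1.5  # assumed: a >50% budget move counts as critical
HARD_STOP = 10          # never allow an accidental 10x

def classify(change):
    """Map a proposed change to a risk level, mirroring the four levels above."""
    if change["kind"] == "budget":
        ratio = change["new"] / change["old"]
        if max(ratio, 1 / ratio) >= HARD_STOP:
            raise ValueError("refusing a 10x budget change; almost certainly a typo")
        if max(ratio, 1 / ratio) >= BUDGET_THRESHOLD:
            return Risk.CRITICAL
        return Risk.MEDIUM  # small budget tweaks: logged with warning (assumed)
    if change["kind"] == "bid_strategy":
        return Risk.HIGH
    if change["kind"] == "ad_group":
        return Risk.MEDIUM
    return Risk.LOW  # status changes and everything else
```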
What surprised me: Building safety rules was the easy part. Claude understood the concept immediately and implemented risk classification in one pass. The hard part was defining what should be critical vs. high vs. medium — that’s pure domain knowledge that I had to specify explicitly.
4. Multi-client data analysis platform
The problem: I manage campaigns across multiple clients and markets. One client alone sells in 6 European countries. Weekly analysis means pulling data for each market, comparing trends, identifying gaps, generating reports. Half a day, every week.
What I built: A structured analysis system with per-client workspaces. Each client has dedicated data fetchers, analysis scripts, and presentation generators. There’s a shared methodology layer — standardized metrics, reporting conventions, A/B testing frameworks — so analysis is consistent across clients.
How it works in practice: I run a script before my morning coffee. It pulls fresh data for all markets, runs comparative analysis, flags anomalies, and generates a report I can review in 15 minutes. If something needs attention, I drill down. If not, I move on.
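The anomaly flagging boils down to a week-over-week check. A sketch, with an assumed 25% threshold:

```python
def flag_anomalies(markets, threshold=0.25):
    """Flag markets whose key metric moved more than `threshold` week over
    week. The 25% default is an assumption; tune it per client."""
    alerts = []
    for m in markets:
        if m["last_week"] == 0:
            continue  # no baseline, nothing to compare against
        delta = (m["this_week"] - m["last_week"]) / m["last_week"]
        if abs(delta) >= threshold:
            alerts.append((m["market"], round(delta, 2)))
    return alerts
```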
The real value: It’s not the time saved on any single report. It’s that I can now monitor more clients at a higher frequency without hiring an analyst. The system scales in a way that manual work never could.
5. Client offer generator (7 iterations)
The problem: Sending professional offer presentations to potential clients. Three packages, pricing, scope descriptions, nicely formatted in Google Slides.
What I built: A Python script that generates offer presentations programmatically.
Why I’m including this one: Because it took 7 iterations to get right. And that’s the honest story of working with Claude Code.
Version 1 produced empty slides. Version 2 had text overflowing the boxes. Version 3 had layout issues with the pricing table. Versions 4–6 were incremental fixes where solving one problem created another. Version 7 finally worked.
This isn’t a failure story — it’s a process story. Every iteration took 5–10 minutes. The whole thing was done in an afternoon. And now when I change my pricing or add a service, I edit one data structure and regenerate in seconds.
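The payoff looks roughly like this: the offer lives in one data structure, and a function turns it into Google Slides batchUpdate requests. A sketch with invented package contents; real createShape requests also need size and transform properties, omitted here for brevity:

```python
PACKAGES = [  # illustrative data; edit here and regenerate
    {"name": "Starter", "price": "€900/mo", "scope": ["Google Ads", "Monthly report"]},
    {"name": "Growth", "price": "€1,800/mo", "scope": ["Google + Meta", "Weekly report"]},
    {"name": "Scale", "price": "€3,500/mo", "scope": ["Full funnel", "Daily monitoring"]},
]

def slide_requests(packages):
    """Turn the package data into Slides batchUpdate requests: one slide per
    package, a text box on it, and the package text inserted into the box.
    Object IDs are placeholders."""
    reqs = []
    for i, p in enumerate(packages):
        slide_id, box_id = f"pkg_{i}", f"pkg_{i}_box"
        reqs.append({"createSlide": {"objectId": slide_id}})
        reqs.append({"createShape": {
            "objectId": box_id,
            "shapeType": "TEXT_BOX",
            "elementProperties": {"pageObjectId": slide_id},
        }})
        text = f"{p['name']} ({p['price']})\n" + "\n".join(f"• {s}" for s in p["scope"])
        reqs.append({"insertText": {"objectId": box_id, "text": text}})
    return reqs
```

The live call would hand that list to `presentations().batchUpdate(...)`; the part worth copying is that changing the offer means editing `PACKAGES`, nothing else.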
The point: If someone tells you Claude Code produces perfect results on the first try, they’re either lying or doing something very simple. Real projects require iteration. The difference is that iteration with Claude Code takes minutes, not days.
What doesn’t work
To be fair:
- Confident mistakes are the biggest risk. The Meta statistical significance issue could have embarrassed me in front of a client. Always verify client-facing outputs.
- Token limits are real. On the Pro plan I hit limits regularly. Upgrading to Max helped, but marathon sessions can still exhaust it.
- Context window has limits. On complex projects, Claude sometimes forgets decisions from earlier in the session. You learn to work with this — CLAUDE.md files, clear instructions, reference documents.
- Google Slides formatting is painful. Every project that involved programmatic Slides generation was harder than expected. This isn’t a Claude problem — it’s a Google Slides API problem.
Where to start
If you’re a performance marketer reading this, don’t start with a big project. Do this:
Connect Claude Code to the Google Ads API for your own account. And just play.
Ask questions about your own data. “Show me campaigns where CPC increased more than 20% week over week.” “Find ad groups with impressions but zero conversions.” “Compare brand vs. non-brand ROAS over 6 months.”
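The second of those questions translates almost directly into GAQL. For example (the fields are standard GAQL; the impression floor is my assumption):

```python
# Ad groups spending money with zero conversions, as a single GAQL query.
ZERO_CONV_QUERY = """
    SELECT
      ad_group.name,
      campaign.name,
      metrics.impressions,
      metrics.cost_micros
    FROM ad_group
    WHERE metrics.impressions > 100
      AND metrics.conversions = 0
      AND segments.date DURING LAST_30_DAYS
    ORDER BY metrics.cost_micros DESC
"""
```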
This is where the ideas come from. When your own data responds to natural language queries, you’ll start thinking: I could automate that report. I could build a monitor for that. I could audit all my clients with one command.
That’s how these five tools started. Not with a plan. With curiosity and problems that needed solving.
Running audits, managing campaigns via CLI, and generating reports — all built in 30 days. The code isn’t always pretty. The results are.