Kirk Williams • AI in PPC

AI Agents in Google Ads: Is Your Process Defensible?

Date Published: April 24, 2026
Last Update: April 24, 2026


Yesterday I published what I think is one of the more important posts I've written in a while, at least as it pertains to where our industry is heading, and you can read it here: AI Agents in Ad Accounts: Why the Math Doesn't Work Yet. The response was interesting enough that I wanted to spend a little more time thinking through some of the pushback, because I think there are a couple of threads worth pulling on further.

The most common counter-argument I've encountered over the last few months when discussing AI goes something like this: "Well, obviously you shouldn't let an LLM make changes without any guidance or system in place." Or this one: "I don't know of anyone utilizing AI in a Google Ads account without some sort of system of safeguards in place." And look, I understand why that feels like a sufficient answer. But here's the question I keep coming back to, one I don't think has a clean answer yet: what are the actual parameters of "guidance or system," and do those parameters reliably equate to "secure enough," "safe enough," and "protect me from liability enough" when something goes sideways?
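To make that question concrete, here's a rough sketch of what explicitly written-down parameters could look like. Everything in it is invented for illustration (the GuardrailPolicy name, the thresholds, the allowed change types); it is not ZATO's process, and it isn't any real tool's API. The point is simply that a defensible system is one whose parameters exist somewhere other than in your head:

```python
# Hypothetical sketch only -- names and thresholds are invented, not from any
# real Google Ads tooling. The point: a "system" only becomes auditable once
# its parameters are written down somewhere like this.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class GuardrailPolicy:
    # Explicit, reviewable parameters -- the opposite of "I have a system."
    allowed_change_types: frozenset = frozenset({"bid_adjustment", "budget_change"})
    max_budget_delta_pct: float = 10.0       # largest single move the agent may ever make
    require_approval_above_pct: float = 5.0  # human sign-off required past this point
    audit_log: list = field(default_factory=list)

    def evaluate(self, change_type: str, budget_delta_pct: float) -> str:
        """Return 'auto', 'needs_approval', or 'blocked' for a proposed change."""
        if change_type not in self.allowed_change_types:
            decision = "blocked"
        elif abs(budget_delta_pct) > self.max_budget_delta_pct:
            decision = "blocked"
        elif abs(budget_delta_pct) > self.require_approval_above_pct:
            decision = "needs_approval"
        else:
            decision = "auto"
        # Every decision is recorded, so the process can be reconstructed later.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "change_type": change_type,
            "budget_delta_pct": budget_delta_pct,
            "decision": decision,
        })
        return decision


policy = GuardrailPolicy()
print(policy.evaluate("budget_change", 3.0))    # auto
print(policy.evaluate("budget_change", 8.0))    # needs_approval
print(policy.evaluate("keyword_removal", 1.0))  # blocked
```

Whether 10% is the right budget cap is beside the point. What matters is that the number was chosen deliberately, can be reviewed by someone else, and leaves a trail every time the agent proposes a change.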

Who Determines the System for Determining the Systems of AI Process?

I'm genuinely skeptical that most people have thought that through carefully, and here's why. Someone can tell Claude to monitor their account "for any errors" one time and sincerely believe that constitutes a system. Someone else can take a weekend course on MCP implementation, apply a handful of safeguards they learned from it, and call that a process. Both of those people would probably say they're not "just letting it rip," and both of them are probably operating with more risk than they realize. This is all largely self-reported, and I think we're collectively quite bad at evaluating our own AI process rigor. That's especially true when we're under professional pressure to figure this out quickly, because we keep being told AI is about to take our livelihoods and financial security away.

Montana, where I live, used to have no posted speed limit. The signs literally said "reasonable and prudent." It sounds almost quaint now, but the problem was that 50 mph was reasonable and prudent to one driver, and 120 mph was reasonable and prudent to another, and anything contested eventually ended up in court where a judge had to sort it out. The state eventually dropped it... and I don't know exactly why, but I imagine it was because "reasonable and prudent" turned out to be a standard that meant almost nothing in practice when it wasn't defined with enough specificity to hold up to scrutiny.

I think we're in a very similar moment with AI process and security for autonomous agents making changes in ad accounts. "I have a system" is our industry's version of "reasonable and prudent," and I genuinely don't think most people stress-test what that means until it fails. And the professional pressure piece matters here too, because someone whose job security feels tied to figuring out AI automation quickly is going to have a very different calibration of "reasonable and prudent" than someone operating without that stress. That's just human nature, and I don't think we should pretend otherwise.


Fallacy: Humans and AI Both Carry the Same Liability

The other pushback I want to address is the "humans make mistakes too" argument, which kept coming up in various forms. And yes, of course that's true. I'm not claiming humans are infallible. But I think there's a meaningful difference in what I'd call the "power to experience ratio" between an AI agent and a human employee, and I think collapsing that distinction causes people to reach the wrong conclusion.

Think about it this way. Deploying Claude with a few safeguards from a course you took, pointed at all the accounts in your MCC, is something like hiring an eager, book-smart intern fresh out of college and then immediately giving them full autonomous access to your financial systems, your client relationships, your ad spend, and your CRM... with no manager review, no approval process, and no one checking their work until something has already gone wrong. You would not do that with a human, no matter how smart that human seemed. You'd build a structure around them. You'd have them work up to that level of trust and access over time. The safeguards you build for a new human employee before giving them significant authority are substantial, and we tend to apply them instinctively because we've had centuries of institutional knowledge about how to onboard people responsibly.
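If you wanted to write that intern analogy down as actual policy, it might look something like the sketch below. The tier names, thresholds, and capability labels are all made up for illustration, not pulled from any real onboarding framework or vendor product; the shape is what we already apply instinctively to humans, which is autonomy that expands only as a reviewed track record accumulates:

```python
# Hypothetical illustration of graduated trust -- every name and threshold
# here is invented. The lowest tier grants nothing but recommendations,
# which is exactly where we would start a human intern.
TRUST_TIERS = [
    # (reviewed changes without incident, capabilities unlocked)
    (0,   {"draft_recommendations"}),  # everything goes to a human first
    (100, {"draft_recommendations", "auto_apply_small_bid_changes"}),
    (500, {"draft_recommendations", "auto_apply_small_bid_changes",
           "auto_apply_budget_changes_under_cap"}),
]


def capabilities_for(clean_reviewed_changes: int) -> set:
    """Grant the highest tier whose track-record threshold has been met."""
    granted = set()
    for threshold, caps in TRUST_TIERS:
        if clean_reviewed_changes >= threshold:
            granted = set(caps)
    return granted


print(capabilities_for(0))    # only draft_recommendations
print(capabilities_for(250))  # small bid changes unlocked, budgets still gated
```

Again, none of those particular numbers are the point. The point is that the structure we'd never skip for a human is the same structure most AI deployments skip entirely.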

We do not have that institutional knowledge yet for AI agents making live changes in accounts, and I think pretending we do is one of the more dangerous moves an agency or consultant can make right now.

To be clear, because I've been misread on this a few times now: I use AI tools daily. I think they're genuinely remarkable and I expect that to continue. The campaign strategy work we do at ZATO has been meaningfully improved by incorporating AI into our research and analysis processes. What I'm not doing is connecting those tools to live accounts with full autonomous change-making capability myself (we do utilize Optmyzr, which employs multiple engineers keeping careful watch over the kinds of safeguards I've noted above!). That specific thing, in the current moment, with the current state of tooling and process maturity in our industry, is what I think deserves a lot more caution than it's getting.

The lawsuits, I suspect, are coming. And when they do, the standard we'll all be held to won't be "I had something I called a system." It'll be whether the process we had in place was genuinely defensible under scrutiny, in the same way that a judge eventually had to rule on what "reasonable and prudent" actually meant on a Montana highway.

I'd rather we figure out what defensible actually looks like before that moment arrives. If you're working through these questions in your own PPC practice, I think they're worth spending real time on, not as an obstacle to using AI, but as the foundation that makes sustainable use of it possible.

Kirk Williams
@PPCKirk - Owner & Chief Pondering Officer

Kirk is the owner of ZATO, his Paid Search PPC micro-agency of experts, and has been working in Digital Marketing since 2009. His personal motto (perhaps unhealthily so) is "let's overthink this some more." He even wrote a book recently on philosophical PPC musings that you can check out here: Ponderings of a PPC Professional.

He has been named one of the Top 25 Most Influential PPCers in the world by PPC Hero (now PPCSurvey) 10 years in a row (2016-2026), has written articles for many industry publications (including Shopify, Moz, PPC Hero, Search Engine Land, and Microsoft), and is a frequent guest on digital marketing podcasts and webinars.

Kirk currently resides in Billings, MT with his wife, six children, books, Trek Bikes, Taylor guitar, and little sleep.

Kirk is an avid "discusser of marketing things" on Twitter, as well as a frequent conference speaker, having traveled around the world to talk about Paid Search (especially Shopping Ads). Kirk has booked speaking engagements in London, Dublin, Sydney, Milan, NYC, Dallas, OKC, Milwaukee, and more, and his talks have been recognized in reviews as among the Top 10 conference presentations on more than one occasion.

You can connect with Kirk on Twitter or LinkedIn.

In 2023, Kirk had the privilege of speaking at TEDx Billings on one of his many passions, Stop the Scale: Redefining Business Success... which is also the title of his latest book, Stop the Scale, available now on Amazon!
