Why is it hard to trust Google Ads AI?

By steve

Digital marketer with deep experience in paid and organic search engine marketing driven by website analytics.

Google AI Gemini in Advertising

There is no doubt that Google is pushing advertisers and their agencies to implement Google Performance Max, or PMax (combined with Gemini). PMax is Google’s latest iteration of automating advertising programs for companies. Whether your program is search, display, or shopping, Google wants to move you to PMax, promising that it will outperform anything you do.

If you are new to digital advertising, you’ll see that implementing a PMax campaign is not too difficult. But that is part of the issue. As the path of least resistance, it makes it easy not to consider what you don’t see and don’t know. For advertisers who have worked through manually managed ads, fully automated campaigns, and every implementation in between, fully trusting PMax is a challenge.

Trust issue with Google Ads

When you speak with a Google rep, they are confident that the automated system is the best. They quite sincerely believe in it and recommend that advertisers use it. The confidence with which they tout PMax is borderline cult-like. If you dive into the details of the ads and run PMax for an extended period, your trust in their confidence will wane.

Some basic observations on Search Terms

In a fully automated program, Google will present your ads to users in various channels that the AI considers important and likely precursors to a conversion. With all the data that the AI is fed, it makes sense that it should be able to determine what leads to a conversion. So, why do we lack confidence?

Run a managed search campaign

Even if you use exact match or phrase match, Google will present your ads to people based on what it deems to be a close variant. While doing this, Google continually asks you to move all your terms to broad match. In theory, this allows the system to match against what it deems intent, even if the search term used doesn’t appear to match your target keywords.
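
As a reference point, here is a minimal sketch of what the three match types are supposed to mean in the textbook sense. The keyword and queries are made up for illustration, and the functions are deliberately naive: Google layers close-variant and “intent” expansion on top of all of them, which is exactly the gap described above.

```python
# Naive model of what the three positive match types are "supposed" to do.
# Google layers close-variant expansion on top of all of these, so this
# sketch shows only the textbook behavior, not Google's actual matching.

def matches_exact(query: str, keyword: str) -> bool:
    return query.lower().strip() == keyword.lower().strip()

def matches_phrase(query: str, keyword: str) -> bool:
    # The keyword phrase appears inside the query, in order.
    return keyword.lower().strip() in query.lower()

def matches_broad(query: str, keyword: str) -> bool:
    # Every keyword word appears somewhere in the query, in any order.
    return set(keyword.lower().split()) <= set(query.lower().split())

keyword = "xyz companies in chicago"              # hypothetical target keyword
queries = [
    "xyz companies in chicago",                   # exact, phrase, and broad
    "best xyz companies in chicago",              # phrase and broad
    "companies in chicago for xyz work",          # broad only
    "xyz providers near me",                      # none of the naive rules
]

for q in queries:
    print(f"{q!r:40} exact={matches_exact(q, keyword)} "
          f"phrase={matches_phrase(q, keyword)} broad={matches_broad(q, keyword)}")
```

The last query matches none of the naive rules, yet with close variants and “intent” signals it can still trigger an ad, which is the behavior the rest of this section digs into.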

Take a look at the search terms that were entered and triggered your ads. Some very basic things stand out. As an example, look at geographic terms. 

Run ads for your local market using geo-targeting. With that, you can also use geo modifiers, such as “xyz companies in Chicago.” You will notice some things:

  1. People who enter “xyz companies in Canada” will show up in your search terms.
  2. If you add “Canada” as a broad negative match, suddenly “XYZ companies in Toronto” or some other geography (France, New York, San Diego, etc.) will show up in the search query report (see the sketch after this list).
  3. Bidding on long-tail terms has not been possible for a long time, yet Google’s reports still show that people are using long-tail searches.
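
To make point 2 concrete, here is a minimal sketch of how a broad-match negative keyword behaves. It is an illustration of the token-level matching rule (no close variants, no geographic knowledge), not Google’s actual code, and the keyword and queries are hypothetical.

```python
# Minimal sketch of how a broad-match NEGATIVE keyword behaves: it blocks a
# query only when the negative's literal words all appear in the query.
# No close variants, no synonym expansion, no geographic knowledge.

def blocked_by_negative(query: str, negative: str) -> bool:
    query_words = set(query.lower().split())
    negative_words = set(negative.lower().split())
    return negative_words <= query_words

negatives = ["canada"]
queries = [
    "xyz companies in chicago",   # the market we actually serve
    "xyz companies in canada",    # blocked once "canada" is a negative
    "xyz companies in toronto",   # NOT blocked -- "toronto" is not "canada"
    "xyz companies in france",    # NOT blocked either
]

for q in queries:
    blocked = any(blocked_by_negative(q, n) for n in negatives)
    print(f"{q!r:35} -> {'blocked' if blocked else 'eligible'}")
```

That is why the ads keep serving for Toronto, France, or San Diego: the negative list has to name every unwanted geography, one string at a time.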

Google Ads Delivers to Inappropriate Searches

These are a few of the many issues that used to be fully manageable through match types. Now that Google effectively ignores match types, all we can do is add negatives. Negatives have always been part of the process, but they are now the only tool we have.

To the issue of trust in Google’s AI: a company can only work with other companies in its market, it sets up geo-targeting correctly, and it sets up target keywords correctly, yet Google still shows its ads to people who are explicitly looking for providers in other markets.

When given even a little latitude, Google’s AI spends advertising dollars on searches that cannot lead to a qualified prospect. It seems like a simple concept: when we target a geo and use geo-targeted terms, the AI should be able to weed out bad queries. It just doesn’t.

A cynical perspective is that Google is simply amping up the number of bidders for each user search, thereby driving up the average CPC. The removal of long-tail bidding has a similar implication: lump every query of four or more words into the same auction as queries of three or fewer words, and you increase the number of bidders in every auction.
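
A rough way to see the mechanics: in a simplified second-price auction, the winner pays roughly the runner-up’s bid, so adding bidders pushes that runner-up bid toward the top of the range. The sketch below uses made-up bids drawn uniformly between $1 and $5; the numbers are assumptions, but the direction of the effect is the point.

```python
# Back-of-the-envelope illustration of why more bidders per auction means a
# higher average CPC. Simplified second-price auction with bids drawn
# uniformly between $1 and $5 -- hypothetical numbers, not Google data.

import random

random.seed(42)

def average_cpc(num_bidders: int, rounds: int = 10_000) -> float:
    """Winner pays roughly the second-highest bid; average over many auctions."""
    total = 0.0
    for _ in range(rounds):
        bids = sorted((random.uniform(1.0, 5.0) for _ in range(num_bidders)),
                      reverse=True)
        total += bids[1]  # second price = what the winner is charged
    return total / rounds

for n in (2, 4, 8, 16):
    print(f"{n:2d} bidders -> average CPC ~ ${average_cpc(n):.2f}")
```

Running it shows the simulated average CPC climbing steadily as the bidder count doubles.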

Whatever the drivers behind serving ads against inappropriate queries, the effect is the same: higher CPC and fewer qualified clicks.

The Fragility of PMax

Moving from keywords to fully AI-managed campaigns through PMax, a few things show up.

Consistency matters to AI

The PMax platform depends on a learning period. Set a budget, and the system starts slowly, learning how people respond and eventually spending the entire daily budget. This can take one to two weeks. If you make a change, your PMax campaign can reset itself and start the learning period over.

Budget Matters

The representatives at Google will direct PMax users with large budgets to a special team with more experience on the platform. One thing we did notice is that performance at the higher spending level did not match performance at the lower spending level; it deteriorated.

This can happen with manually managed programs as well. However, there is a notion being presented that PMax is some kind of magic that can make your program work on any budget. 

Learning Matters

As marketers, understanding our customers and how they respond to our marketing is important. We can cross-seed our channels by applying what we learn in one channel to another. PMax campaigns (and, to a lesser extent, responsive ad campaigns) remove our ability to learn. The campaign either performs or it doesn’t. We can’t tell why, because we can’t see the nuance.

Premature Confidence in Artificial Intelligence

Over the past couple of years, AI has been hyped as the holy grail of advertising optimization. In reality, it has a long way to go. Between the basic mistakes it doesn’t know are happening, the need for a relatively steady state to maintain optimization, and the failure to truly outperform manually managed paid search programs at scale, PMax has a role in search, but it is not the only part to play.

Opaque AI Tools

As mentioned earlier, learning matters in marketing. It is not enough that the AI “learns”; the people managing the programs need to learn as well, because those lessons carry over to other marketing campaigns. Understanding how changes in your overall marketing, products, and competitors might affect campaign performance is also important. With black-box AI, not only can you not see what is working, but you can’t see what isn’t… and the AI doesn’t know either.

Take the recent problems with Google’s generative AI, Gemini, and its image generation. It produced historically inaccurate images when users asked it to create images of the pope (and other historical figures), images that were clearly not accurate historical representations (CNN). Given that only white men have been popes (for better or worse), this is a simple historical fact; there is no ambiguity. But Gemini didn’t know that, or wasn’t allowed to apply it.

If real people hadn’t reviewed the images, the AI would have continued to produce inaccurate results and never made adjustments.

So, we know Google Ads’ AI delivers ads for a Chicago company (that only wants to do business with other Chicago companies) to people interested in Canadian companies. What else is it doing that we can’t see? How much waste is there when implementing a fully AI-driven campaign with no inputs or visibility other than your URL?
