As Google continues down the path of AI/ML-driven “everything”, advertisers are left with blind spots. The latest push came when Google announced that last-click attribution will be replaced by data-driven attribution as the default for Google Ads. This, in and of itself, is neither good nor bad. What it means to you depends on your program.
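For readers who want to see the difference concretely, here is a minimal Python sketch contrasting last-click attribution with a position-weighted rule used as a simplified stand-in for data-driven attribution. The channel names and credit weights are hypothetical; Google’s actual data-driven model learns credit shares from observed conversion paths rather than using fixed weights.

```python
# Purely illustrative comparison of last-click attribution vs. a simplified
# stand-in for data-driven attribution over a single conversion path.
# Channel names and credit splits are hypothetical.
from collections import defaultdict

path = ["paid_search", "display", "email", "paid_search"]  # ordered touchpoints

def last_click(touchpoints):
    """All conversion credit goes to the final touchpoint."""
    credit = defaultdict(float)
    credit[touchpoints[-1]] = 1.0
    return dict(credit)

def position_weighted(touchpoints, first=0.4, last=0.4):
    """Spread credit across the path instead of the last click only."""
    credit = defaultdict(float)
    credit[touchpoints[0]] += first
    credit[touchpoints[-1]] += last
    middle = touchpoints[1:-1]
    for t in middle:
        credit[t] += (1.0 - first - last) / len(middle)
    return dict(credit)

print(last_click(path))         # {'paid_search': 1.0}
print(position_weighted(path))  # credit spread across every channel in the path
```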
One challenge inherent in AI/ML is that it requires consistent inputs. And the inputs encompass everything involved: user behavior, your program parameters (budgets, geo, and any setting not fully controlled by an automated part of Google Ads), as well as your website with its UX/UI, goal parameters, and more. If there are regular changes to any portion of these, then the system has to relearn what “optimized” is.
Where we see the biggest challenge is in the area of client-side decisions. If the landing pages are in constant flux or the definition of a goal changes, these throw off the inputs Google uses. Maintaining consistency is a key element in leveraging Google’s automation.
As we stabilize programs and let the AI/ML process maximize Google Ads performance, we must determine what’s next.
Optimize what is rather than being able to decide what should be
Another challenge in the AI/ML-driven world (currently, at least) is that, while these systems are great at doing what you tell them to do, they are not great at figuring out what should be done.
When it comes to paid search, sometimes the spark of an idea comes from diving into the detail of the data. The more Google automates and removes our visibility, the fewer opportunities we have to gain insights based on intuition or to have an “ah-ha moment” from some quirk in the data.
This becomes particularly acute when we see what Google is doing with ads. Soon, Google will lock Expanded Text Ads and force all new ads to be responsive. Unless there is much more transparency in Responsive Search Ads reporting, we will lose visibility into the nuances of copy that drive action.
Google will optimize the combination of inputs, but it will not allow us to “force” specific inputs in order to see what is actually happening beyond whether or not a visit converts.
Are certain phrases leading to more site searches, or navigation to other pages, or are visitors more likely to simply bounce? What other behavior are we not seeing because we can’t control our copy? What do these behaviors tell us about our understanding of the prospects? The more that is controlled (and hidden) by automation, the less chance we have of understanding why things happen.
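To make concrete the kind of analysis that gets harder, here is a minimal sketch, assuming landing URLs are tagged with the {creative} ValueTrack parameter and session-level analytics can be exported; the sessions.csv file and its column names are hypothetical.

```python
# Sketch: roll up on-site behavior by the ad creative that drove the visit.
# Assumes landing URLs are tagged with Google Ads' {creative} ValueTrack
# parameter and sessions are exported to a CSV with hypothetical columns:
#   creative_id, used_site_search (0/1), bounced (0/1), pages_viewed
import pandas as pd

sessions = pd.read_csv("sessions.csv")

by_creative = sessions.groupby("creative_id").agg(
    sessions=("bounced", "size"),            # visits per creative
    bounce_rate=("bounced", "mean"),
    site_search_rate=("used_site_search", "mean"),
    avg_pages=("pages_viewed", "mean"),
)

# With Expanded Text Ads, each creative_id is one fixed block of copy, so
# differences here can be traced back to specific phrasing. With Responsive
# Search Ads, one creative_id covers many asset combinations, and this
# rollup can no longer isolate which phrase drove the behavior.
print(by_creative.sort_values("bounce_rate"))
```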
It’s not new
Years ago we used to build paid search programs targeting long-tail searches. In fact, it was one of the strategies that Google reps encouraged. Then Google effectively stopped allowing long-tail strategies: by designating those searches as “low volume,” no auction would be created for them. Not only did this take away a key cost-optimization path, but it also blinded us to the best ways to communicate with prospective customers.
But we still had the search query report (SQR). While we couldn’t necessarily create ad groups around every query we saw, we could structure our keywords to force certain queries to trigger certain ad groups and not others. In 2020, Google removed a great number of those queries from the SQR under the guise of privacy. They recently reintroduced greater visibility into search terms after SEMs were fairly vocal in their disappointment with the decision.
With each step Google takes to advance automation, we lose some visibility and insight into our users.