If you have run Google Shopping campaigns before, you know that optimizing the product feed is crucial. In the world of Performance Max, product feed optimization is essential!
One of the key things you can do to your feed to increase exposure is to optimize the Product Title. This should come as no surprise, since Google leans heavily on the Title in its algorithms to match relevant search terms to your products. Google rightly surmises that the words you put into your title are likely the most descriptive, and so it incorporates them directly into its matching algorithms.
However, what can be difficult is knowing whether your Title tests or changes are actually doing anything. We once had a client who sold pet beds. They tended to be pricier than traditional fluffy pet beds, so when someone was simply comparing product image and price, buying our client's pet accessory seemed like an unnecessary cost. Who has money to burn on their pets (he quips, since people LOVE burning money on their pets :) )? We decided to test adding words like "durable" and "longer-lasting" to the front of the titles, and we saw an immediate increase in traffic. Once people could see in the Shopping ad a *reason* for spending more, they were willing to take more time to consider the more expensive option.
The big question is, how do you test this?
Well, unfortunately, there is no easy-button option within the Google Ads system for running Product Title tests. That means you have to do something of a hack to get as close to a true test as possible. Recently I've seen the product variant testing option grow in popularity, and it is a legitimate approach. With this option, you run two title versions on different product variants at the same time. So you might have the black variant of a popular sweatshirt carry Title A, and the red variant of that same product (within a shared item group ID) carry Title B. You would then use Custom Labels to track the performance of the two Title variants.
The problem is that you're still not getting a perfect A/B test, since product variants really can perform drastically differently, or at least draw significantly different traffic. It's probably the closest you'll get, however, and you can make the test more reliable by running the same comparison across multiple products (using individual variants) at the same time, again using your Custom Labels to track performance.
Let's say you want to know whether the Brand should sit at the front or the back of the title. You could set up a test across all of your products where one variant within each item group ID has the Brand in the front, and a second variant in each item group ID has the Brand in the back. That at least gives you an aggregated view with more data behind it, so you can make a more informed decision.
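To make that concrete, here's a minimal sketch of how you might tag one variant per item group with each title version. This is hypothetical code against a made-up feed structure — the dictionary keys mirror Merchant Center attribute names like `item_group_id` and `custom_label_0`, but the helper itself and the label values are my own invention:

```python
# Hypothetical feed rows with keys: id, item_group_id, title, brand,
# custom_label_0. For each item group, give one variant the Brand-in-front
# title (labeled "title_test_a") and a second variant the Brand-in-back
# title (labeled "title_test_b"), so Google Ads reporting can be segmented
# by custom_label_0.

def build_test_titles(rows):
    by_group = {}
    for row in rows:
        by_group.setdefault(row["item_group_id"], []).append(row)

    for variants in by_group.values():
        if len(variants) < 2:
            continue  # need at least two variants to run both versions
        a, b = variants[0], variants[1]

        base_a = a["title"].replace(a["brand"], "").strip()
        a["title"] = f"{a['brand']} {base_a}"   # Brand in the front
        a["custom_label_0"] = "title_test_a"

        base_b = b["title"].replace(b["brand"], "").strip()
        b["title"] = f"{base_b} {b['brand']}"   # Brand in the back
        b["custom_label_0"] = "title_test_b"
    return rows
```

You'd then upload the modified feed (via supplemental feed or feed rules) and segment performance by `custom_label_0` in your reports.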
Another, slightly less complicated option is to simply edit the Titles of a set of products in a particular Product Type or Brand, and then measure the change between your Test group and your Control groups (the Brands or Product Types you did not edit) to see whether a greater variance occurred during the testing period. You need to measure the variance in change in both the Control and the Test groups to ensure you're not just seeing a seasonal shift. Asynchronous testing gets especially tricky, of course: you don't want to pick a window with a sale period during (or just before) the test that would skew the figures, and many things can happen during a testing period that would nullify the test. That's why you at least keep an eye on the variance of change within your Control group, but it's still a little messy.
A final thing to keep in mind with Product Title testing is that with Standard Shopping you can identify the specific word or phrase you are testing by analyzing the change in your Search Terms report. This is a great way to keep an eye on whether you're now appearing for certain keywords you previously weren't — the keywords you added to your Product Titles.
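If you want to script that check, a hypothetical sketch might look like this — given impressions by search term from two Search Terms report exports (before and after the title change), it surfaces queries that are new or that grew meaningfully, such as queries containing a word like "durable" that you added to your titles (the threshold and function name are arbitrary choices of mine, not anything built into Google Ads):

```python
# Compare two {search_term: impressions} snapshots and return terms that are
# new (zero impressions before) or grew by at least `min_growth`x, sorted by
# absolute impression gain.

def new_or_growing_terms(before, after, min_growth=2.0):
    results = []
    for term, imps in after.items():
        prev = before.get(term, 0)
        if prev == 0 or imps / prev >= min_growth:
            results.append((term, prev, imps))
    return sorted(results, key=lambda r: r[2] - r[1], reverse=True)
```

Seeing "durable pet bed" show up with impressions where it previously had none is exactly the kind of signal that tells you the title change is being picked up by Google's matching.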
This testing analysis comes to a SCREECHING halt with Performance Max campaigns (PMax), since we can't see search term data in PMax! That in itself is a solid argument for reintroducing the search terms report for PMax campaigns, and hopefully Google hears our cries in that regard. Since we don't currently have those insights in PMax, we're forced to analyze a more holistic rise or fall in things like impressions, impression/click share, clicks, and conversions. More bad news: the analysis gets muddy, because PMax traffic will dynamically rise and fall as the algorithm sees opportunity, without warning or explanation.
At the end of the day, I look at Title testing results like this:
> No evident drop in traffic? That's totally okay, and we need to be comfortable accepting that there might not be clear evidence of success or failure. That is an answer in itself. Shrug: it wasn't a clear success or a clear failure. Leave or revert your test (it literally doesn't matter, so I'd probably just leave it), then move on to something else, or rerun the test on more products if it was a limited test.
> Obvious drop in traffic/performance/impressions/etc.? This was an obvious failure: revert the titles and run a new test. If you don't trust the result, leave the test running longer.
> Clear rise in traffic or sales after the test? It might just be correlated with something else, but at least the change didn't HURT anything, so you might as well keep it and test rolling it out to additional products in your feed.
Do you see from those three outcomes that Title testing is less scientific than a lot of people make it sound? It's not an exact science, and the sooner you accept that, the better you'll be able to focus on what matters and ignore tests that don't really tell you much.
Keep title testing! What do you think? Anything to share on LinkedIn or Twitter about this post?