Fahad H

5 ingredients for writing the perfect expanded text ad


If Google’s expanded text ads (ETAs) are supposed to perform better than standard text ads, why did Google delay the sunset for creating legacy ads from late October 2016 to the end of January 2017?

A recent conversation I had with a Google product manager shed some light on this. To start: quality matters. When expanded text ads are well-optimized, they outperform legacy ads; however, standard ads have had years to go through numerous optimizations, so in some cases a brand-new ETA may not immediately beat them.

The date was moved back to give advertisers more time to experiment with their expanded text ads, so they can test and iterate new ads while still giving holiday optimizations their proper due.

At Optmyzr, we were curious what characteristics great-performing ETAs have in common, so we looked at 700 accounts, 1.2 million ads and over one billion impressions. I presented our findings at the MarketingFestival event last week in the Czech Republic and wanted to share them here with everyone who couldn’t make it to the event.

About the Optmyzr Expanded Text Ad study

We decided to look at click-through rate (CTR) as the benchmark for performance because that’s largely how Google evaluated its own case studies, where it found that some advertisers were doubling their CTRs. We also believe a high CTR is a reasonable KPI because the ad’s job is to convince a user to click through to the site; converting that visitor into a buyer, and thus delivering a high conversion rate, is the job of the landing page.
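
If you want to check this metric against your own account, the calculation is simply clicks divided by impressions. Here is a minimal sketch using pandas; the ad IDs, column names and numbers are hypothetical placeholders, not data from our study:

```python
# Minimal sketch of the CTR calculation; the ads and figures below
# are made-up examples, not numbers from the study.
import pandas as pd

ads = pd.DataFrame({
    "ad_id": ["ad_1", "ad_2", "ad_3"],
    "impressions": [12000, 8500, 20300],
    "clicks": [480, 210, 1015],
})

# CTR = clicks / impressions, shown here as a percentage
ads["ctr_pct"] = ads["clicks"] / ads["impressions"] * 100
print(ads.sort_values("ctr_pct", ascending=False))
```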

We visualized our findings using boxplot charts, where the black line inside the box represents the median and the box itself spans the second and third quartiles of the data. Outliers were removed from the charts.
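
For readers who want to recreate this kind of view for their own data, here is a rough sketch of how such a chart could be produced with matplotlib; the two groups and their CTR values are randomly generated stand-ins, not our dataset:

```python
# Minimal sketch of the boxplot style described above, using randomly
# generated CTR values for two hypothetical ad groups.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = [rng.normal(2.0, 0.6, 500), rng.normal(2.4, 0.7, 500)]  # fake CTR %

fig, ax = plt.subplots()
# showfliers=False drops outliers from the chart; the box spans the second
# and third quartiles, and the line inside the box marks the median.
ax.boxplot(data, labels=["Group A", "Group B"], showfliers=False,
           medianprops={"color": "black"})
ax.set_ylabel("CTR (%)")
plt.show()
```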

The data represents aggregate findings from hundreds of advertisers. To state the obvious, your account is unique — so results may vary, and you should run your own experiments to discover what works best for your situation.

Also note that we looked for correlations; we don’t imply causality. Our analysis does suffer from a self-selection bias: advertisers who A/B test ads gravitate toward better-performing variations, so there is less data about elements that perform poorly, simply because advertisers tend to phase those out over time.

We filtered the data to include only impressions where an ad appeared on Google Search, so the results would not be skewed by the different ways ads can be shown on search partner sites.
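
If you pull a segmented report for your own account, the equivalent filter might look something like the sketch below; the "network" column name and rows are hypothetical stand-ins for whatever your export contains:

```python
# Minimal sketch of the network filter, assuming a hypothetical "network"
# column like the one in a network-segmented report export.
import pandas as pd

report = pd.DataFrame({
    "ad_id": ["ad_1", "ad_1", "ad_2"],
    "network": ["Google search", "Search partners", "Google search"],
    "impressions": [12000, 3000, 8500],
    "clicks": [480, 60, 210],
})

search_only = report[report["network"] == "Google search"]
print(search_only)
```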

So, on to the questions we set out to answer and the results we found.
