There’s a lot of chatter about 100-percent viewability, and even more slamming the Media Ratings Council’s 50-percent viewability standard (PDF). Neither seems grounded in any data, which raises the question: where is the real intersection of viewability and performance?
The fact is, if you’re a programmatic media buyer measuring return on investment or return on ad spend (ROI/ROAS) by calculating online sales revenue/lifetime value (LTV) minus the cost of media, then your intuition is probably right: 100-percent viewability is the wrong metric. Instead, you should adjust your target to sub-50-percent viewability.
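For concreteness, here is a minimal sketch of that ROI/ROAS framing in Python. The function names and all figures are hypothetical placeholders for illustration, not data from our tests.

```python
def roi(revenue_ltv: float, media_cost: float) -> float:
    """Return on investment: online sales revenue/LTV minus the cost of media."""
    return revenue_ltv - media_cost


def roas(revenue_ltv: float, media_cost: float) -> float:
    """Return on ad spend: revenue generated per dollar of media spend."""
    return revenue_ltv / media_cost


# Made-up example: $12,000 in attributed revenue/LTV on $5,000 of media.
print(roi(12_000.0, 5_000.0))   # 7000.0
print(roas(12_000.0, 5_000.0))  # 2.4
```

Under this framing, anything that raises the cost of media (such as paying a premium for guaranteed viewability) must lift revenue by at least as much to keep ROI flat.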
Why 100% viewability doesn’t make sense
I don’t make this recommendation lightly. It’s based on multiyear data analysis and six iterative in-market tests that our agency (IMM) has conducted to arrive at this conclusion. As a disclaimer, let me say that this recommendation should only be considered directional, given the fluctuations in the media marketplace. That said, our opinion is clear: If you’re measuring media ROI by online sales revenue/LTV minus the cost of media, stop obsessing about 100-percent viewability. Here’s why.
Back in 2012, comScore first sparked interest in viewability, suggesting a relationship between viewability and performance. Amusingly, it’s taken some time for the market to warm up to the idea that ads can’t make an impression unless they’re seen.
At IMM, our initial hypothesis was that conversion rates would be higher, and effective cost per action (eCPA) would be lower as viewability increased. We embarked on historical data analyses, learning what we could from what we had on hand through existing viewability measures, such as Demand Side Platform (DSP) integrated technologies.
At a very high level, we saw that viewability costs more, and the longer someone views an ad, the more likely they are to take an action.
How I came to this conclusion
You might be thinking, no duh. So did we, but we knew there was more to it, so we dug deeper. Our methodology evolved over the years, but all experiments shared the same fundamental “practical scientific” approach: Ask a question, do some background research, run a test to address the question, (hopefully) uncover an insight, roll out the applied learnings in market, then continue to iterate our testing.
Ultimately, we landed on a solid approach that required controlling for even impression distribution across the range of viewability and letting the conversion rate and CPA fluctuate. With a little polish from our data scientists, five key insights emerged:
Optimizing to CTR works, but it’s not scalable. We briefly focused on the maxim, “You can’t click an ad unless it’s viewed.” We tested optimization of click-through conversions (users who click, then convert) as a proxy for viewability. We discovered that optimizing to click-through conversions certainly increases viewability while decreasing eCPA, but with so few people clicking on display ads, this isn’t a scalable solution for viewability optimization.
Hover rate is directionally correlated, but this approach has limited scale. We also realized that users can only hover and interact with ads they see, which should lead to higher conversion rates. We were able to confirm this relationship; however, like clicks, few people hover within ads.
The long tail of programmatic yields conversions that are few and far between, making large-scale viewability-based optimizations difficult. We were stymied by the fact that the programmatic long tail is so large and fragmented (one test included tens of thousands of placements), and optimization is difficult with low volumes of impressions and conversions per placement. After all, it may take months for a small site to generate a single conversion.
Measurability remains a core, underlying challenge. In desktop and mobile web, our display tests yielded unmeasurable impressions as high as 68 percent for mobile web and 34 percent for desktop web across a variety of third-party technologies. Mobile app measurement is so nascent that we could not reliably execute a structured viewability test.
There is no magic number; optimal viewability depends on each campaign. We secured a variety of private deal ID inventory buys with guaranteed higher viewability across a spectrum of publishers. After six weeks of testing, we concluded that the ROI on the incremental CPM paid for greater viewability tapers off after the ~50-percent viewability mark. This result was once again confirmed in a second, alternate advertiser test with similar results for desktop and mobile web, albeit with a higher conversion rate for mobile web.
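The bucketed analysis behind these insights can be sketched as follows: group impressions by measured viewability band, hold impression counts comparable across bands, and let conversion rate and eCPA vary. All numbers below are invented for illustration; they are not IMM test data, but they show the taper pattern described above, where eCPA bottoms out below the 50-percent mark and rises as you pay more for higher-viewability inventory.

```python
# Hypothetical per-band results: (viewability_band, impressions,
# conversions, media_cost_usd). Figures are placeholders, not real data.
buckets = [
    ("0-25%",   100_000,  40,   700.0),
    ("25-50%",  100_000,  90,   900.0),
    ("50-75%",  100_000, 100, 1_400.0),
    ("75-100%", 100_000, 105, 2_100.0),
]

for band, imps, convs, cost in buckets:
    cvr = convs / imps    # conversion rate per band
    ecpa = cost / convs   # effective cost per action per band
    print(f"{band}: CVR={cvr:.4%}, eCPA=${ecpa:.2f}")
```

In this toy data, conversions keep climbing with viewability, but the premium CPMs above 50-percent viewability grow faster than conversions do, so eCPA worsens — the diminishing-returns shape our tests surfaced.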
In the end, we conclude that the value of viewability is subjective and will depend on the numerous factors of a campaign. Given this “no one size fits all” outcome, it makes sense for marketers to benchmark their own advertising, determine the importance of viewability to their campaign, and continually test and evolve, especially as market conditions and technologies change.