Fahad H

Achieve big testing wins by analyzing results by device type

Responsive design has been all the rage for some time now, and the trend shows no sign of letting up: it remains the dominant approach for mobile websites.

Responsive design allows us to present a consistent site experience to users regardless of screen size. Because the HTML is the same for all devices, it also reduces development and maintenance time. However, experiences that differ by screen size or device type can pose challenges when running tests and analyzing results.

Testing challenges

For our discussion, we will consider the three most common device types that visitors use when accessing a website:

  1. Mobile

  2. Tablet

  3. Desktop

Even with responsive design, the user experience of a website can be radically different on each of these devices. Running a test without considering and analyzing results specific to each device type can lead to erroneous conclusions and poor decisions that can have a disastrous impact on conversion rates.

Understand test results by device type

Many testing tools allow us to segment results by device type in order to get a more granular picture of performance on our website. However, depending on how your responsive site is coded, this may not be the best indicator of the visitor’s experience.

For example, our landing pages are coded responsively based on the width of the visitor’s browser. So, if a visitor has their browser open on only half of their desktop screen, they would actually be experiencing the “tablet” version of our page, even though their device type is still “desktop.”

Understanding how pages behave and how devices are labeled is critical for accurate analysis of your testing data.
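As an illustration, here is a minimal sketch of how a page could report the layout it actually rendered, so results can be segmented by breakpoint rather than by device type. The 768px and 1024px breakpoints and the trackDimension() helper are assumptions for the example, not part of our setup; substitute your site's CSS breakpoints and your testing tool's custom-dimension API.

```typescript
// Minimal sketch: record the breakpoint the visitor actually experiences,
// so test results can be segmented by rendered layout rather than device type.
// The breakpoint values (768 / 1024) and trackDimension() are placeholders.

type Breakpoint = "mobile" | "tablet" | "desktop";

function currentBreakpoint(width: number = window.innerWidth): Breakpoint {
  if (width < 768) return "mobile";
  if (width < 1024) return "tablet";
  return "desktop";
}

// Hypothetical analytics helper; replace with your testing tool's API.
declare function trackDimension(name: string, value: string): void;

// Tag the session with the layout actually rendered, and re-tag if the
// visitor resizes the window across a breakpoint.
let lastBreakpoint = currentBreakpoint();
trackDimension("layout", lastBreakpoint);

window.addEventListener("resize", () => {
  const bp = currentBreakpoint();
  if (bp !== lastBreakpoint) {
    lastBreakpoint = bp;
    trackDimension("layout", bp);
  }
});
```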

Test example

Let’s finish up with a real-world example that shows the dramatic difference this can make to the performance of a website or landing page.

We recently ran an A/B/C test of a new look and feel for a responsive landing page. After about six weeks, both test variations were outperforming the control (by 51.79 percent and 51.23 percent, respectively) for all traffic. We could say with statistical confidence that both test variations were an improvement over the control; however, there was no statistically significant difference between test variations to help us determine which one to use.

We could have simply run the test longer until statistical confidence was achieved between the test variations, but before doing so, we decided to look at the results by browser width (labeled mobile, tablet and desktop below). What we found was startling.
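For context, the significance check between two variations typically comes down to a two-proportion test on conversion counts. The sketch below uses purely hypothetical visitor and conversion numbers, not the figures from our test, just to show the kind of calculation involved.

```typescript
// Minimal sketch of a two-proportion z-test: the kind of check a testing tool
// runs when asking whether two variations truly differ. The visitor and
// conversion counts below are hypothetical.

function twoProportionZ(conv1: number, n1: number, conv2: number, n2: number): number {
  const p1 = conv1 / n1;                      // conversion rate, variation 1
  const p2 = conv2 / n2;                      // conversion rate, variation 2
  const pooled = (conv1 + conv2) / (n1 + n2); // pooled rate under "no difference"
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  return (p1 - p2) / se;                      // |z| >= 1.96 is roughly 95% confidence (two-sided)
}

// Hypothetical example: 510 vs. 505 conversions out of 10,000 visitors each.
const z = twoProportionZ(510, 10_000, 505, 10_000);
console.log(z.toFixed(2)); // small |z|, so no significant difference between the variations
```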

  1. Test Variation 1 actually had a 41 percent decrease in conversions for mobile visitors, but a 148.03 percent increase in conversions for desktop visitors.

  2. Test Variation 2 had a 42 percent increase in conversions for mobile visitors and a 62.64 percent increase in conversions for desktop visitors.

  3. Tablet data for both variations was too small to yield statistically significant results.

Our conclusion

Based on this data, we concluded the test with a hybrid winning page, combining the desktop experience of Variation 1 and the mobile experience of Variation 2.

This hybrid approach resulted in an overall 86.40 percent increase in conversions instead of the ~51 percent increase that we were looking at previously. The hybrid solution also allowed us to conclude the test sooner than would have otherwise been possible.
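Serving the hybrid can be as simple as branching on the same breakpoint used for segmentation. The sketch below is only an illustration: renderVariation1() and renderVariation2() are hypothetical helpers standing in for however your page swaps in each design, and the 1024px cutoff is an assumed desktop breakpoint.

```typescript
// Minimal sketch of serving a hybrid winner: the Variation 1 experience for
// desktop-width layouts and the Variation 2 experience for narrower layouts.

declare function renderVariation1(): void; // desktop winner (hypothetical helper)
declare function renderVariation2(): void; // mobile winner (hypothetical helper)

function renderHybrid(width: number = window.innerWidth): void {
  if (width >= 1024) {
    renderVariation1();
  } else {
    renderVariation2();
  }
}

renderHybrid();
```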

I recommend that testers analyze results by device type or screen width whenever possible and implement hybrid solutions utilizing the best results for each experience. The results may surprise you!
