
Conversion Optimization: Measuring Usability In The User Experience (UX) – Part 3

One of the joys of measuring usability on websites is being able to continually improve the user experience. During usability testing, I often observe the effect of improved interfaces first hand.


Additionally, I commonly observe a shift in perception. Before usability testing, many users believe they are at fault, that they are the ones making mistakes. After usability testing, users realize that they did not make mistakes: the poor or substandard interface is what compromised the user experience.

In Part 1 and Part 2 of this article series, I went over four of the six items that usability professionals measure to ensure a positive user experience.

  1. Effectiveness: Can users achieve their desired goals on your website?

  2. Efficiency: How quickly and easily can users achieve their goals on your website?

  3. Learnability: Is the website easy to learn the first time users encounter it?

  4. Memorability: When users return to the website after a period of not using it, how easily can they reestablish proficiency?

For today’s article, I will go over the final two usability items: Error Handling and User Satisfaction.

Error Handling, Prevention & Recovery

When I analyze a website, I inevitably spend considerable time on error prevention and defensive design. No website is perfect for all users, because users bring different mental models and contexts. Nevertheless, through heuristic evaluation and performance tests, website owners can improve the user experience on their web pages.


For a professional evaluation, I generally identify and measure the following items:

  1. What errors do users typically make?

  2. How many errors do they make?

  3. How severe are the errors?

     - Deal breakers

     - Infrequent, easily recoverable errors

  4. How easily can users recover from the errors?

Some errors are deal breakers, and they absolutely must be addressed and fixed. Other errors are infrequent and not severe, errors from which site visitors can easily recover, such as fixing a simple typo.
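A lightweight way to organize these observations during testing is to log each error with a severity label and a recoverability flag, then tally the results. Here is a minimal sketch in Python; the error log and severity labels are made up for illustration:

```python
from collections import Counter

# Hypothetical error log from a usability test session. Severity labels
# follow the distinction above: "deal-breaker" errors block the task,
# while "minor" errors (like a simple typo) are easy to recover from.
observed_errors = [
    {"task": "checkout", "error": "forced account creation", "severity": "deal-breaker", "recovered": False},
    {"task": "search",   "error": "typo in query",           "severity": "minor",        "recovered": True},
    {"task": "checkout", "error": "unclear ZIP format",      "severity": "moderate",     "recovered": True},
]

# How many errors, and how severe?
print(Counter(e["severity"] for e in observed_errors))

# How easily can users recover? (Here: share of errors users recovered from.)
recovery_rate = sum(e["recovered"] for e in observed_errors) / len(observed_errors)
print(f"Recovery rate: {recovery_rate:.0%}")
```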

If I am unable to conduct usability tests on a website, I make deliberate errors on the site to see how well the interface has (or has not) implemented defensive design.

For example, if a site has a search engine, I will deliberately misspell a word to see whether the search engine has a spelling-correction algorithm. Here is an example of search results for a typo in the U.S. Centers for Disease Control and Prevention's site search engine:


Figure 1: In this search engine results page (SERP), the word "flu" is misspelled. The search results page offers both a spelling correction and a best bet as a form of error prevention.


Here is an example of spelling correction on a web search engine, Google:


Figure 2: In this Google SERP, a student is looking for information about Amelia Earhart but does not know how to correctly spell her name. Google’s autosuggestion tool puts the correct spelling at the top of the list and offers reasonable links at the top of search results.
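Implementations vary widely, but the basic "did you mean" pattern is to match a query against a vocabulary of known-good terms. Here is a minimal sketch using Python's difflib; the vocabulary and similarity cutoff are assumptions for illustration, not how the CDC or Google actually do it:

```python
import difflib

# Hypothetical vocabulary of known-good query terms, e.g. harvested from
# past searches that returned results.
KNOWN_TERMS = ["flu", "influenza", "fever", "vaccine", "measles"]

def did_you_mean(query, cutoff=0.6):
    """Return the closest known term for a misspelled query, or None."""
    matches = difflib.get_close_matches(query.lower(), KNOWN_TERMS, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(did_you_mean("fluu"))    # -> "flu"
print(did_you_mean("xqzzy"))   # -> None (no close match; show normal results)
```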


Ideally, if I were to do a usability evaluation on a site, I would have a minimum of three professional evaluators. Usability guru Jakob Nielsen determined that a single evaluator finds approximately 35% of the usability problems in an interface, whereas five evaluators find between 55% and 90% of the usability problems.
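Those figures are consistent with Nielsen's model of problem discovery: if one evaluator finds a proportion lam of the problems, n independent evaluators find roughly 1 - (1 - lam)^n of them. A quick check using the 35% single-evaluator figure from above:

```python
# Nielsen's discovery model: n independent evaluators find roughly
# 1 - (1 - lam)**n of the usability problems, where lam is the
# single-evaluator rate (35% per the figure above).
lam = 0.35

for n in (1, 3, 5):
    found = 1 - (1 - lam) ** n
    print(f"{n} evaluator(s): ~{found:.0%} of problems found")
# 1 evaluator(s): ~35% of problems found
# 3 evaluator(s): ~73% of problems found
# 5 evaluator(s): ~88% of problems found
```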

I also make deliberate errors on forms (such as a Contact Us page) and shopping cart pages to see how the interface responds. Here are some before-and-after interfaces:


Figure 3a – Before: One of the most common defensive design issues is forcing users to create an account in order to make a purchase. Also, in this interface, the options look so similar in format that they are difficult to distinguish.




Figure 3b – After: Options are more distinguishable using radio buttons. Additionally, the Guest checkout option allows users to make a purchase without having an account.


Another option is to simply allow users to click or press the Continue button if they do not wish to create an account at that time. Usability expert Jared Spool calls this option The $300 Million Button.


Figure 4a – Before: Forcing users to create an account often leads to cart abandonment.



Figure 4b – After: Adding the Guest checkout option can increase sales and other conversions. In this example, I would probably make the Guest checkout link more noticeable.


It is best if the site implements a careful, defensive design from the outset. In other words, it is best to prevent usability and findability problems from occurring in the first place.
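In code, defensive design on a form means validating input up front and returning specific, recoverable messages rather than a generic failure. A minimal sketch follows; the field names, rules, and message wording are illustrative assumptions, not any particular site's implementation:

```python
import re

def validate_checkout_form(form):
    """Return a field -> message map of specific, recoverable errors."""
    errors = {}

    email = form.get("email", "").strip()
    if not email:
        errors["email"] = "Please enter your email address."
    elif not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors["email"] = "That email looks incomplete (e.g. name@example.com)."

    zip_code = form.get("zip", "").strip()
    if not re.fullmatch(r"\d{5}(-\d{4})?", zip_code):
        errors["zip"] = "Please enter a 5-digit ZIP code, such as 30333."

    return errors

# An empty dict means the form passes; otherwise, show each message
# next to its field so users can recover easily.
print(validate_checkout_form({"email": "user@example", "zip": "303"}))
```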

I find that the combination of a professional heuristic evaluation and subsequent usability testing yields the best results in terms of both the quantity and the severity of errors uncovered. It is imperative that website owners understand the mental models of their target audience. Anything you can do to improve your interface for a positive UX is a step in the right direction.

User Satisfaction

The user satisfaction metric can be tricky to measure because people often confuse a usability evaluation with a focus group.

Usability tests tend to be task- and performance-oriented; their results are not a collection of focus-group opinions. Usability tests are often conducted one person at a time, so there is no herd mentality: no user influences the behavior of another.


Like usability tests, focus groups can provide direct, rapid feedback to help identify website problems. However, focus groups are more about opinions than tasks. All too frequently, the more outspoken people in the focus group can influence others’ feedback.

User satisfaction is directly related to task completion. If users can complete their assigned tasks quickly and easily, they often report higher satisfaction, especially if there is an element of delight in the interaction. If users have a difficult time completing their tasks or cannot complete them at all, they often report low satisfaction.

Not only do we want the quantitative data (for example, ratings on a scale from 1 to 7, averaged into a single satisfaction score), we also want to understand the contextual data. Why did users report satisfaction or dissatisfaction with a website? The hows of usability are just as important as the whys.
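The quantitative side is simple arithmetic; the contextual side comes from keeping each rating paired with the participant's comment. A short sketch with hypothetical post-task ratings on a 1-to-7 scale:

```python
from statistics import mean

# Hypothetical post-task satisfaction ratings (1-7), each kept
# alongside the participant's comment for context.
responses = [
    (6, "Checkout was quick once I found the guest option."),
    (3, "I didn't realize I had to create an account."),
    (7, "Search corrected my typo right away."),
    (5, "The error messages were clear, but I had to retype my card number."),
]

scores = [score for score, _ in responses]
print(f"Mean satisfaction: {mean(scores):.2f} / 7 (n={len(scores)})")
for score, comment in responses:
    print(f"  {score}/7: {comment}")
```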

Context matters.

Effectiveness, Efficiency, Learnability, Memorability, Error Handling, and User Satisfaction: these are the six items usability professionals measure to improve the user experience. How does your website measure up?

 (Stock images via Shutterstock.com. Used under license.)
