6 Usability Testing Examples & Case Studies

Interested in analyzing real-world examples of successful usability tests?

In this article, we'll examine six real-world usability tests that delivered substantial results.

Conducting usability testing takes only seven simple steps and does not have to require a massive budget. Yet it can achieve remarkable results for companies across all industries.

If you’re someone who cannot be convinced by theory alone, this is the guide for you. These are tried-and-tested case studies from well-known companies that showcase the true power of a successful usability test.

Here are the usability testing examples and case studies we’ll be covering in this article:

  • Ryanair
  • McDonald's
  • SoundCloud
  • AutoTrader.com
  • Udemy
  • Halo: Combat Evolved

Example #1: Ryanair

Ryanair is one of the world’s largest airline groups, carrying 152 million passengers each year. In 2014, the company launched Ryanair Labs, a digital innovation hub seeking to “reinvent online traveling”. To make this dream a reality, they went on a recruiting spree that resulted in a team of 200+ members. This team included user experience specialists, data analysts, software developers, and digital marketers – all working towards a common goal of improving the user experience of the Ryanair website.

What made matters more complicated, however, was that Ryanair's website and app together received 1 billion visits per year. With a website this large, combined with the airline industry's paper-thin profit margins of around 5%, Ryanair had no room for error. To make matters even more stressful, one of the first missions for the new team was launching an entirely new website with a superior user experience.

To give you a visual idea of what they were up against, take a look at their old website design:

Ryanair's old website design

Not great, not terrible. But the website undoubtedly needed a redesign for the 21st century.

This is what the Ryanair team set out to accomplish:

  • Reducing the number of steps needed to book a flight on the website;
  • Allowing customers to store their travel documents and payment cards on the website;
  • Delivering a better mobile device user experience for both the website and app.

With these goals in mind, they chose remote, unmoderated usability testing for their user tests. This in itself was a change, as the Ryanair team had relied on in-lab, face-to-face testing until that point.

By collaborating with the UX agency UserZoom, however, new opportunities opened up for Ryanair. With UserZoom's massive roster of user testers, Ryanair could access large amounts of qualitative and quantitative usability data – data that it badly needed during the design process of the new website.

By going with remote unmoderated usability testing, the Ryanair team managed to:

  • Reduce the time spent on usability testing;
  • Conduct simultaneous usability tests with hundreds of users and without geographical barriers;
  • Increase the overall reach and scale of the tests;
  • Carry out tests across many devices, operating systems, and multiple focus groups.

With continuous user testing, the new website was taken through alpha and beta testing in 2015. The end result of all this work was the vastly improved look, functionality, and user experience of the new website:

Ryanair's new website design

Even before launch, Ryanair knew that the new website was superior. Usability tests had shown that to be the case and they had no need to rely on “educated guesses”. This usability testing example demonstrates that a well-executed testing plan can give remarkable results.

Source: Ryanair case study by UserZoom

Example #2: McDonald’s

McDonald's is one of the world's largest fast-food restaurant chains, with a staggering 62 million daily customers. Yet McDonald's was late to embrace the mobile revolution: its smartphone app launched rather recently, in August 2015. In comparison, Starbucks' smartphone app was already a booming success, accounting for 20% of the company's overall revenue in 2015.

Considering the competition, McDonald's had some catching up to do. Before the launch of their app in the UK, they decided to hire UK-based SimpleUsability to identify any usability problems before release. The test plan involved conducting 20 usability tests, with task scenarios covering the entire customer journey from end to end. In addition, the test plan included 225 end-user interviews.

Not exactly a large-scale usability study considering the massive size of McDonald’s, but it turned out to be valuable nonetheless. A number of usability issues were detected during the study:

  • Poor visibility and interactivity of the call-to-action buttons;
  • Communication problems between restaurants and the smartphone app;
  • Lack of order customization and favoriting impaired the overall user experience.

Here’s what the McDonald’s mobile app looks like today:

The McDonald's mobile app

This case study demonstrates that investing even a tiny percentage of a company’s resources into usability testing can result in meaningful insights.

Source: McDonald's case study by SimpleUsability

Example #3: SoundCloud

SoundCloud is the world's largest music and audio distribution platform, with over 175 million unique monthly listeners. In 2019, SoundCloud hired test IO, a Berlin-based usability testing agency, to conduct continuous usability testing for the SoundCloud mobile app. With SoundCloud's rigorous development schedule, the company needed regular human user testers to make sure that all new updates worked across all devices and OS versions.

The key research objectives for SoundCloud’s regular usability studies were to:

  • Provide a user-friendly listening experience for mobile app users;
  • Identify and fix software bugs before wide release;
  • Improve the mobile app development cycle.

The very first usability tests uncovered more than 150 usability issues, including 11 critical ones – issues that likely wouldn't have been found through internal bug testing alone. That is because the user testers exercised the app across a plethora of devices and geographical locations (144 devices and 22 countries). Without remote usability testing, tests at this scale would have been very difficult and expensive to achieve.

Today, SoundCloud’s mobile app looks like this:

SoundCloud usability testing example

This case study demonstrates the power of regular usability testing in products with frequent updates. 

Source: SoundCloud case study (.pdf) by test IO

Example #4: AutoTrader.com

AutoTrader.com is one of the world's largest online marketplaces for buying and selling used cars, with over 28 million monthly visitors. The mission of AutoTrader's website is to empower car shoppers during the research process by giving them all the tools necessary to make informed decisions about vehicle purchases.

Sounds fantastic.

However, with competitors such as CarGurus gaining increasing market share in the online car shopping industry, AutoTrader had to reinvent itself to stay competitive.

In e-commerce, competitors with a superior website can gain massive followings in an instant. Fifty years ago this was not the case – well-established car marketplaces had massive car parks all over the country, and a newcomer had few ways to compete.

Nowadays, however, it’s all about user experience. Digital shoppers will flock to whichever site offers a better user experience. Websites unwilling or unable to improve their user experience over time will get left in the dust. No matter how big or small they are.

Going back to AutoTrader, the majority of its website traffic comes from organic Google search, meaning that in addition to website usability, search engine optimization (SEO) is a major priority for the company. According to John Mueller of Google, changing the layout of a website can affect rankings, which is why AutoTrader had to be careful about making any large-scale changes to its website.

AutoTrader did not have a large team of user researchers nor a massive budget dedicated to usability testing. But they did have Bradley Miller – Senior User Experience Researcher at the company. To test the usability of AutoTrader, Miller decided to partner with UserTesting.com to conduct live user interviews with AutoTrader users.

Through these live user interviews, Miller was able to:

  • Find and connect with target personas;
  • Communicate with car buyers from across the country;
  • Reduce the costs of conducting usability tests while increasing the insights gained.

From these remote live interviews, Miller learned that the customer journey almost always begins at a single source: search engines. It's important to note that search engines rarely direct users to the homepage. Instead, they drive traffic to the inner pages of websites. In the case of AutoTrader, for example, only around 20% of search engine traffic goes to the homepage (data from SEMrush).

These insights helped AutoTrader redesign their inner pages to better match the customer journey. They no longer assumed that an inner-page visitor already had contextual knowledge of the website. Instead, they started treating each page as a potential initial point of entry, providing more contextual information right there on the inner page.

This usability testing example demonstrates not only the power of user interviews but also the importance of understanding your customer journey and SEO.

Source: AutoTrader case study by UserTesting.com

Example #5: Udemy

Udemy is one of the world's largest online learning platforms, with over 40 million students across the world. The e-learning giant also has a massively popular smartphone app, and the usability testing example in question was aimed at Udemy's smartphone users.

To find out when, where, and why Udemy users chose to opt for the mobile app rather than the desktop version, Udemy conducted user tests. As Udemy is a 100% digital company, they chose fully remote unmoderated user testing as their testing method. 

Test participants were asked to record short videos showing where they were located and what tasks they were focused on at the time of learning and recording.

The user researchers found that their initial theory – "users prefer the mobile app while on the go" – was false. Instead, the majority of mobile app users were stationary: for various reasons, they used the app at home on the couch or in a cafeteria. The key findings of this user test informed the next year's product and feature development.

This is what Udemy’s mobile app looks like today:

The Udemy mobile app

This usability testing case study demonstrates that a company’s perception of target audience behavior does not always match the behavior of the real end-users. And, that is why user testing is crucial.

Source: Udemy case study by UserTesting.com

Example #6: Halo: Combat Evolved

"Halo: Combat Evolved" was the first video game in the massively popular Halo franchise, developed by Bungie and published by Microsoft Game Studios in 2001. Within 10 years of its release, the Halo games had sold more than 46 million copies worldwide and generated more than $5 billion in video game and hardware sales for Microsoft. Crediting it all to the usability test we're about to discuss may be a bit of a stretch, but usability testing the game during development was undeniably one of the factors that helped the franchise take off like a rocket.

In this usability study, the Halo team gathered a focus group of console gamers to try out the game's prototype and see whether they had fun playing it. And if they did not have fun, the team wanted to find out what prevented them from doing so.

In the usability sessions, the researchers placed test subjects (players) in a large outdoor environment with enemies waiting for them across the open space.

The designers of the game expected the players to sprint closer to the enemies, sparking a massive battle full of action and excitement. But the test participants had a different plan in mind. Instead of putting themselves in danger by sprinting closer, they stayed at maximum distance from the enemies and shot from far across the outdoor space. While this was a safe and effective strategy, it proved rather uneventful and boring for the players.

To entice players to enjoy combat up close, the user researchers decided that changes would have to be made. Their solution – changing the size and color of the aiming indicator in the center of the screen to notify players when they were too far away from enemies. 

Here, you can see the finalized aiming indicator in action:

Halo's finalized aiming indicator

Subsequent usability tests proved these changes to be effective, as the majority of user testers now engaged in combat from a closer distance.

User testing is not restricted to any particular industry, OS, or platform. Testing user experience is an invaluable tool for any product – not just for websites or mobile apps. 

This example of usability testing from the video game industry shows that players (users) will optimize the fun out of a game if given the chance. It’s up to the designers to bring the fun back through well-designed game mechanics and notifications.

Source: "Designing for Fun – User-Testing Case Studies" by Randy J. Pagulayan

The Beginner’s Guide to Usability Testing [+ Sample Questions]

Clifford Chi

Published: July 28, 2021

In practically any discipline, it's a good idea to have others evaluate your work with fresh eyes, and this is especially true in user experience and web design. Otherwise, your partiality for your own work can skew your perception of it. Learning directly from the people that your work is actually for — your users — is what enables you to craft the best user experience possible.

Implementing feedback from usability testing

UX and design professionals leverage usability testing to get user feedback on their product or website’s user experience all the time. In this post, you'll learn:

  • What usability testing is
  • Its purpose and goals
  • Scenarios where it can work
  • Real-life examples and case studies
  • How to conduct one of your own
  • Scripted questions you can use along the way

What is usability testing?

Usability testing is a method of evaluating a product or website’s user experience. By testing the usability of their product or website with a representative group of their users or customers, UX researchers can determine if their actual users can easily and intuitively use their product or website.

UX researchers will usually conduct usability studies on each iteration of their product from its early development to its release.

During a usability study, the moderator asks participants in their individual user session to complete a series of tasks while the rest of the team observes and takes notes. By watching their actual users navigate their product or website and listening to their praises and concerns about it, they can see when the participants can quickly and successfully complete tasks and where they’re enjoying the user experience, encountering problems, and experiencing confusion.

After conducting their study, they’ll analyze the results and report any interesting insights to the project lead.


What is the purpose of usability testing?

Usability testing allows researchers to uncover any problems with their product's user experience, decide how to fix these problems, and ultimately determine if the product is usable enough.

Identifying and fixing these early issues saves the company both time and money: Developers don’t have to overhaul the code of a poorly designed product that’s already built, and the product team is more likely to release it on schedule.

Benefits of Usability Testing

Usability testing has five major advantages over the other methods of examining a product's user experience (such as questionnaires or surveys):

  • Usability testing provides an unbiased, accurate, and direct examination of your product or website's user experience. Because the sample of actual users testing it doesn't share your team's emotional investment in creating and designing the product or website, their feedback can resolve most of your team's internal debates.
  • Usability testing is convenient. To conduct your study, all you have to do is find a quiet room and bring in portable recording equipment. If you don’t have recording equipment, someone on your team can just take notes.
  • Usability testing can tell you what your users do on your site or product and why they take these actions.
  • Usability testing lets you address your product’s or website’s issues before you spend a ton of money creating something that ends up having a poor design.
  • For your business, intuitive design boosts customer usage and the results customers achieve, driving demand for your product.

Usability Testing Scenario Examples

Usability testing sounds great in theory, but what value does it provide in practice? Here's what it can do to actually make a difference for your product:

1. Identify points of friction in the usability of your product.

As Brian Halligan said at INBOUND 2019, "Dollars flow where friction is low." This is just as true in UX as it is in sales or customer service. The more friction your product has, the more reason your users will have to find something that's easier to use.

Usability testing can uncover points of friction from customer feedback.

For example: "My process begins in Google Drive. I keep switching between windows and making multiple clicks just to copy and paste from Drive into this interface."

Even though the product team may have had that task in mind when they created the tool, seeing it in action and hearing the user's frustration uncovered a use case that the tool didn't account for. It might lead the team to solve this problem by creating an easy import feature or a way to access Drive within the interface, reducing the number of clicks the user needs to accomplish their task.

2. Stress test across many environments and use cases.

Our products don't exist in a vacuum, and sometimes development environments can't account for all the variables. Getting the product out and tested by users can uncover bugs that you may not have noticed while testing internally.

For example: "The check boxes disappear when I click on them."

Let's say that the team investigates why this might be, and they discover that the user is on a browser that's not commonly used (or a browser version that's outdated).

If the developers only tested across the browsers used in-house, they may have missed this bug, and it could have resulted in customer frustration.

3. Provide diverse perspectives from your user base.

While individuals in our customer bases have a lot in common (in particular, the things that led them to need and use our products), each individual is unique and brings a different perspective to the table. These perspectives are invaluable in uncovering issues that may not have occurred to your team.

For example: "I can't find where I'm supposed to click."

Upon further investigation, it's possible that this feedback came from a user who is color blind, leading your team to realize that the color choices did not create enough contrast for this user to navigate properly.

Insights from diverse perspectives can lead to design, architectural, copy, and accessibility improvements.

4. Give you clear insights into your product's strengths and weaknesses.

You likely have competitors in your industry whose products are better than yours in some areas and worse than yours in others. These variations in the market lead to competitive differences and opportunities. User feedback can help you close the gap on critical issues and identify what positioning is working.

For example: "This interface is so much easier to use and more attractive than [competitor product]. I just wish that I could also do [task] with it."

Two scenarios are possible based on that feedback:

  • Your product can already accomplish the task the user wants. You just have to make it clear that the feature exists by improving copy or navigation.
  • You have a really good opportunity to incorporate such a feature in future iterations of the product.

5. Inspire you with potential future additions or enhancements.

Speaking of future iterations, that comes to the next example of how usability testing can make a difference for your product: The feedback that you gather can inspire future improvements to your tool.

It's not just about rooting out issues but also envisioning where you can go next that will make the most difference for your customers. And who best to ask but your prospective and current customers themselves?

Usability Testing Examples & Case Studies

Now that you have an idea of the scenarios in which usability testing can help, here are some real-life examples of it in action:

1. User Fountain + Satchel

Satchel is a developer of education software, and their goal was to improve the experience of the site for their users. Consulting agency User Fountain conducted a usability test focusing on one question: "If you were interested in Satchel's product, how would you progress with getting more information about the product and its pricing?"

During the test, User Fountain noted significant frustration as users attempted to complete the task, particularly when it came to locating pricing information. Only 80% of users were successful.

Usability Test Example: User Fountain + Satchel


This led User Fountain to create the hypothesis that a "Get Pricing" link would make the process clearer for users. From there, they tested a new variation with such a link against a control version. The variant won, resulting in a 34% increase in demo requests.

By testing a hypothesis based on real feedback, friction was eliminated for the user, bringing real value to Satchel.

2. Kylie.Design + Digi-Key

Ecommerce site Digi-Key approached consultant Kylie.Design to uncover which site interactions had the highest success rates and what features those interactions had in common.

They conducted more than 120 tests and recorded:

  • Click paths from each user
  • Which actions were most common
  • The success rates for each

Usability Test Example: Kylie.Design + Digi-Key

This, along with the written and verbal feedback provided by participants, informed the new design, which increased purchaser success rates from 68.2% to 83.3%.

In essence, Digi-Key was able to identify their most successful features and double down on them, improving the experience and their bottom line.

3. Sparkbox + An Academic Medical Center

An academic medical center in the Midwest partnered with consulting agency Sparkbox to improve the patient experience on their homepage, where some features were suffering from low engagement.

Sparkbox conducted a usability study to determine what users wanted from the homepage and what didn't meet their expectations. From there, they were able to propose solutions to increase engagement.

Usability Test Example: Sparkbox + Medical Center

For example, one key action was the ability to access electronic medical records. The new design based on user feedback increased the success rate from 45% to 94%.

This is a great example of putting the user's pains and desires front-and-center in a design.
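If you want to sanity-check a jump like that in your own studies, a pooled two-proportion z-test will tell you whether the improvement is bigger than chance. Here's a rough pure-Python sketch; the sample sizes are hypothetical, since the case study doesn't report them:

```python
# A rough sketch of checking whether a before/after change in task
# success rate is statistically significant. Sample sizes below are
# hypothetical, invented for illustration.
import math

def two_proportion_z(s1, n1, s2, n2):
    """Pooled two-proportion z statistic for success counts s1/n1 vs s2/n2."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# e.g. 9/20 successes before the redesign vs 19/20 after (~45% vs ~95%)
z = two_proportion_z(9, 20, 19, 20)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```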

The 9 Phases of a Usability Study

1. Decide which part of your product or website you want to test.

Do you have any pressing questions about how your users will interact with certain parts of your design, like a particular interaction or workflow? Or are you wondering what users will do first when they land on your product page? Gather your thoughts about your product or website’s pros, cons, and areas of improvement, so you can create a solid hypothesis for your study.

2. Pick your study’s tasks.

Your participants' tasks should be your user’s most common goals when they interact with your product or website, like making a purchase.

3. Set a standard for success.

Once you know what to test and how to test it, make sure to set clear criteria to determine success for each task. For instance, when I was in a usability study for HubSpot’s Content Strategy tool, I had to add a blog post to a cluster and report exactly what I did. Setting a threshold of success and failure for each task lets you determine if your product's user experience is intuitive enough or not.
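One way to keep that threshold consistent is to encode each task's pass/fail criteria in one place, so every participant is scored against the same standard. Here's a minimal sketch; the task name and limits below are invented for illustration:

```python
# A hypothetical sketch of per-task success criteria, so every
# session run is scored the same way. Names and limits are invented.
success_criteria = {
    "add blog post to topic cluster": {
        "max_seconds": 180,     # longest acceptable time on task
        "max_wrong_clicks": 3,  # tolerated missteps before failure
    },
}

def passes(task, completed, seconds, wrong_clicks):
    """Return True if a session run meets the task's success standard."""
    c = success_criteria[task]
    return (completed
            and seconds <= c["max_seconds"]
            and wrong_clicks <= c["max_wrong_clicks"])

print(passes("add blog post to topic cluster", True, 150, 2))  # True: pass
print(passes("add blog post to topic cluster", True, 300, 1))  # False: too slow
```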

4. Write a study plan and script.

At the beginning of your script, you should include the purpose of the study, if you’ll be recording, some background on the product or website, questions to learn about the participants’ current knowledge of the product or website, and, finally, their tasks. To make your study consistent, unbiased, and scientific, moderators should follow the same script in each user session.

5. Delegate roles.

During your usability study, the moderator has to remain neutral, carefully guiding the participants through the tasks while strictly following the script. Whoever on your team is best at staying neutral, not giving in to social pressure, and making participants feel comfortable while pushing them to complete the tasks should be your moderator.

Note-taking during the study is also just as important. If there’s no recorded data, you can’t extract any insights that’ll prove or disprove your hypothesis. Your team’s most attentive listener should be your note-taker during the study.

6. Find your participants.

Screening and recruiting the right participants is the hardest part of usability testing. Most usability experts suggest you only need to test five participants during each study, but your participants should also closely resemble your actual user base. With such a small sample size, it's hard to replicate your actual user base in your study.
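The five-user guideline traces back to Nielsen and Landauer's problem-discovery model, which assumes each tester uncovers roughly the same fraction of a product's usability problems. Here's a rough sketch of that curve, using the commonly cited 31% discovery rate per tester:

```python
# A minimal sketch of the problem-discovery curve behind the
# "five users" guideline (Nielsen & Landauer): each tester is
# assumed to uncover a fraction L (~31%) of all usability problems.
def problems_found(n, L=0.31):
    """Expected share of usability problems found by n testers."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems found")
# With L = 0.31, five testers surface roughly 85% of the problems,
# which is why small samples are standard for qualitative studies.
```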

To recruit the ideal participants for your study, create as detailed and specific a persona as you possibly can, and incentivize people to participate with a gift card or another monetary reward.

Recruiting colleagues from other departments who would potentially use your product is also another option. But you don’t want any of your team members to know the participants because their personal relationship can create bias -- since they want to be nice to each other, the researcher might help a user complete a task or the user might not want to constructively criticize the researcher’s product design.

7. Conduct the study.

During the actual study, you should ask your participants to complete one task at a time, without your help or guidance. If the participant asks you how to do something, don’t say anything. You want to see how long it takes users to figure out your interface.

Asking participants to “think out loud” is also an effective tactic -- you’ll know what’s going through a user’s head when they interact with your product or website.

After they complete each task, ask for their feedback, like if they expected to see what they just saw, if they would’ve completed the task if it wasn’t a test, if they would recommend your product to a friend, and what they would change about it. This qualitative data can pinpoint more pros and cons of your design.

8. Analyze your data.

You’ll collect a ton of qualitative data after your study. Analyzing it will help you discover patterns of problems, gauge the severity of each usability issue, and provide design recommendations to the engineering team.

When you analyze your data, make sure to pay attention to both the users’ performance and their feelings about the product. It’s not unusual for a participant to quickly and successfully achieve your goal but still feel negatively about the product experience.
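Even a simple tally script can keep this analysis consistent. Here's a minimal sketch that turns hypothetical session records into per-task success rates and average time on task:

```python
# A minimal sketch of turning raw session notes into per-task numbers.
# The participants, task, and figures below are hypothetical.
from collections import defaultdict

sessions = [
    {"participant": "P1", "task": "make a purchase", "success": True,  "seconds": 95},
    {"participant": "P2", "task": "make a purchase", "success": False, "seconds": 240},
    {"participant": "P3", "task": "make a purchase", "success": True,  "seconds": 130},
]

by_task = defaultdict(list)
for s in sessions:
    by_task[s["task"]].append(s)

for task, runs in by_task.items():
    rate = sum(r["success"] for r in runs) / len(runs)
    avg = sum(r["seconds"] for r in runs) / len(runs)
    print(f"{task}: {rate:.0%} success rate, {avg:.0f}s average time on task")
```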

9. Report your findings.

After extracting insights from your data, report the main takeaways and lay out the next steps for improving your product or website’s design and the enhancements you expect to see during the next round of testing.

The 3 Most Common Types of Usability Tests

1. Hallway/Guerrilla Usability Testing

This is where you set up your study somewhere with a lot of foot traffic. It allows you to ask randomly selected people who have most likely never even heard of your product or website -- like passers-by -- to evaluate its user experience.

2. Remote/Unmoderated Usability Testing

Remote/unmoderated usability testing has two main advantages: it uses third-party software to recruit target participants for your study, so you can spend less time recruiting and more time researching. It also allows your participants to interact with your interface by themselves and in their natural environment -- the software can record video and audio of your user completing tasks.

Letting participants interact with your design in their natural environment with no one breathing down their neck can give you more realistic, objective feedback. When you're in the same room as your participants, they may put more effort into completing your tasks because they don't want to seem incompetent around an expert. Your perceived expertise can also lead them to please you instead of being honest when you ask for their opinion, skewing their reactions and feedback.

3. Moderated Usability Testing

Moderated usability testing also has two main advantages: interacting with participants in person or through a video call lets you ask them to elaborate on their comments if you don't understand them, which is impossible in an unmoderated usability study. You'll also be able to help your users understand the task and keep them on track if your instructions don't initially register with them.

Usability Testing Script & Questions

Following one script or even a template of questions for every one of your usability studies wouldn't make any sense -- each study's subject matter is different. You'll need to tailor your questions to the things you want to learn, but most importantly, you'll need to know how to ask good questions.

1. When you [action], what's the first thing you do to [goal]?

Questions such as this one give insight into how users are inclined to interact with the tool and what their natural behavior is.

Julie Fischer, one of HubSpot's Senior UX researchers, gives this advice: "Don't ask leading questions that insert your own bias or opinion into the participants' mind. They'll end up doing what you want them to do instead of what they would do by themselves."

For example, "Find [x]" is better than "Are you able to easily find [x]?" The latter inserts a connotation that may affect how they use the product or answer the question.

2. How satisfied are you with the [attribute] of [feature]?

Avoid leading the participants by asking questions like "Is this feature too complicated?" Instead, gauge their satisfaction on a Likert scale that provides a number range from highly unsatisfied to highly satisfied. This will provide a less biased result than leading them to a negative answer they may not otherwise have had.

3. How do you use [feature]?

There may be multiple ways to achieve the same goal or utilize the same feature. This question will help uncover how users interact with a specific aspect of the product and what they find valuable.

4. What parts of [the product] do you use the most? Why?

This question is meant to help you understand the strengths of the product and what about it creates raving fans. This will indicate what you should absolutely keep and perhaps even lead to insights into what you can improve for other features.

5. What parts of [the product] do you use the least? Why?

This question is meant to uncover the weaknesses of the product or the friction in its use. That way, you can rectify any issues or plan future improvements to close the gap between user expectations and reality.

6. If you could change one thing about [feature], what would it be?

Because it's so similar to #5, you may get some of the same answers. However, you'd be surprised about the aspirational things that your users might say here.

7. What do you expect [action/feature] to do?

Here's another tip from Julie Fischer:

"When participants ask 'What will this do?' it's best to reply with the question 'What do you expect it to do?' rather than telling them the answer."

Doing this can uncover user expectations as well as clarity issues with the copy.

Your Work Could Always Use a Fresh Perspective

Letting another person review and possibly criticize your work takes courage -- no one wants a bruised ego. But most of the time, when you allow people to constructively criticize or even rip apart your article or product design, especially when your work is intended to help these people, your final result will be better than you could've ever imagined.

Editor's note: This post was originally published in August 2018 and has been updated for comprehensiveness.



Desktop Usability Test Example: Zomato

You’re on a business trip in Oakland, CA. You've been working late in downtown and now you're looking for a place nearby to grab a late dinner. You decided to check Zomato to try and find somewhere to eat. (Don't begin searching yet).

  • Look around on the home page. Does anything seem interesting to you?
  • How would you go about finding a place to eat near you in Downtown Oakland? You want something kind of quick, open late, not too expensive, and with a good rating.
  • What do the reviews say about the restaurant you've chosen?
  • What was the most important factor for you in choosing this spot?
  • You're currently close to the 19th St Bart station, and it's 9PM. How would you get to this restaurant? Do you think you'll be able to make it before closing time?
  • Your friend recommended you to check out a place called Belly while you're in Oakland. Try to find where it is, when it's open, and what kind of food options they have.
  • Now go to any restaurant's page and try to leave a review (don't actually submit it).

What was the worst thing about your experience?

It was hard to find the bart station. The collections not being able to be sorted was a bit of a bummer

What other aspects of the experience could be improved?

Feedback from the owners would be nice

What did you like about the website?

The flow was good, lots of bright photos

What other comments do you have for the owner of the website?

I like that you can sort by what you are looking for and i like the idea of collections

Mobile Usability Test Example: Duolingo

You're going on a vacation to Italy next month, and you want to learn some basic Italian for getting around while there. You decided to try Duolingo.

  • Please begin by downloading the app to your device.
  • Choose Italian and get started with the first lesson (stop once you reach the first question).
  • Now go all the way through the rest of the first lesson, describing your thoughts as you go.
  • Get your profile set up, then view your account page. What information and options are there? Do you feel that these are useful? Why or why not?
  • After a week in Italy, you're going to spend a few days in Austria. How would you take German lessons on Duolingo?
  • What other languages does the app offer? Do any of them interest you?

Post-test feedback from the tester:

I felt like there could have been a little more of an instructional component to the lesson.

It would be cool if there were some feature that could allow two learners studying the same language to take lessons together. I imagine that their screens would be synced and they could go through lessons together and chat along the way.

Overall, the app was very intuitive to use and visually appealing. I also liked the option to connect with others.

Overall, the app seemed very helpful and easy to use. I feel like it makes learning a new language fun and almost like a game. It would be nice, however, if it contained more of an instructional portion.


A case study in competitive usability testing (Part 1)


This post is Part 1 of a 2-part competitive usability study. In Part 1, we deal with how to set up a competitive usability testing study. In Part 2, we showcase results from the study we ran, with insights on how to approach your data and what to look for in a competitive UX study.

There are many good reasons to do competitive usability testing. Watching users try out a competitor’s website or app can show you what their designs are doing well, and where they’re lacking; which features competitors have that users really like; how they display and organize information and options, and how well it works.

A less obvious, but perhaps even more valuable reason, is that competitive usability testing improves the quality of feedback on your own website or app. By giving users something to compare your interface to, it sharpens their critiques and increases their awareness.

Read more: 5 secrets to comparative usability testing

If a user has only experienced your website’s way of doing something, for example, it’s easy for them to take it for granted. As long as they were able to complete what was asked of them, they may have relatively little to say about how it could be improved. But send them to a competitor’s site and have them complete the same tasks, and they’ll almost certainly have a lot more to say about whose way was better, in what ways, and why they liked it more.

Thanks to this effect alone, the feedback you collect about your own designs will be much more useful and insight-dense.

Quantifying the differences

Not only can competitive user testing get you more incisive feedback on what and how users think, it’s also a great opportunity to quantitatively measure the effectiveness of different pages, flows, and features on your site or app, and to quantify users’ attitudes towards them.

Quantitative metrics and hard data provide landmarks of objectivity as you plan your roadmap and make decisions about your designs. They deepen your understanding of user preferences and strengthen your ability to gauge the efficacy of different design choices.

When doing competitive UX testing – whether between your products and a competitor’s, or between multiple versions of your own products – quantitative metrics are a valuable baseline that provide quick, unambiguous answers and lay the groundwork for a thorough qualitative analysis.

Domino's vs Pizza Hut pizzas

Domino’s vs Pizza Hut: A competitive user testing case study

We revisited our old Domino's vs Pizza Hut UX faceoff, this time with 20 test participants, to see what we would find – not just about the UX of ordering pizza online, but also about how to run competitive usability tests, and how to use quantitative data in your competitive study.

Why 20 users? It’s the minimum sample size to get statistically reliable quantitative data, as NNGroup and other UX research experts have demonstrated. In our post-test survey, we included a number of new multiple choice, checkbox-style, and slider rating questions to get some statistically sound quantitative data points.
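One way to see why 20 is a reasonable floor: confidence intervals around an observed success rate are very wide at small sample sizes and only tighten as the sample grows. Here's a rough sketch using the Wilson score interval (the 80% observed success figure is hypothetical):

```python
# A rough sketch: the 95% Wilson score interval around an observed
# task success rate, at different sample sizes. The 80% observed
# success rate below is hypothetical.
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

for n in (5, 10, 20, 40):
    lo, hi = wilson_interval(round(0.8 * n), n)
    print(f"n={n:2d}: observed 80% success -> 95% CI {lo:.0%} to {hi:.0%}")
```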

Read more: How many users should I test with?

Setup of the study

The first choice you need to make when setting up a competitive UX study is whether to test each interface with separate groups of users, or send the same users to each one.

As described above, we prefer sending the same users to both if possible, so that they can directly compare their experiences with a sharp and keenly aware eye. We recommend trying this method if it’s feasible for your situation, but there are a few things to consider:

1. Time: How long will it take users to go through both (or all) of the interfaces you’re testing? If the flows aren’t too long and the tasks aren’t too complicated, you can safely fit 2 or even 3 different sites or apps into a single session.

The default session duration for TryMyUI tests is 30 minutes, which we’ve found to be a good upper limit. The longer the session goes, the more your results could degrade due to tester fatigue, so keep this in mind and make sure you’re not asking too much of your participants.

2. Depth: There will necessarily be a trade-off between how many different sites or apps users visit in a single session, and how deeply they interact with each one. If you need users to go into serious depth, it may be better to use separate groups for each different interface.

3. Scale: To get statistically reliable quantitative data, at least 20 users should be reviewing each interface. If every tester tries out both sites during their session, you only need 20 in all. If you use different batches of testers per site, you would need 40 total users to compare two sites.

So if you don’t have the ability or bandwidth to recruit and test with lots of users, you may want to simplify each flow such that they can fit into a single session; but if your team can handle larger numbers, you can have 20 visit each site separately (or even have some users visit multiple sites, and others go deeper into a single one).

For our Domino’s vs Pizza Hut test, we chose to send the same users to both sites so they could directly compare their experience on each. This wasn’t too much of a challenge, as ordering pizza is a relatively simple flow that doesn’t require intense or deep interaction, and the experience of both sites could fit easily into a 30-minute window.

Learn more: user testing better products and user testing new products


Accounting for bias

As with any kind of usability testing, it’s critical to be aware of potential sources of bias in your test setup. In addition to the typical sources, competitive testing can also be biased by the order of the websites.

There are several ways this bias can play out. In many cases, users are biased in favor of the first site they use, as this site gets to set their expectations of how things will look and work, and where different options or features might be found. When the user moves on to the next website, they may have a harder time simply because it's different from the first one.

On the other hand, users may end up finding the second site easier if they had to struggle through a learning curve on the first one. In such cases, the extra effort they put in to understand key functions or concepts on the first site might make it seem harder, while simultaneously giving them a jump-start on understanding the second site.

Lastly, due to simple recency effects, the last interface might be more salient in users' minds and therefore viewed more favorably (or perhaps just more extremely).

To account for bias, we set up 2 tests: one going from A→B, and one from B→A, with 10 users per flow. This way, both sites would get 20 total pairs of eyes checking them out, but half of the users would see each site first and the other half second.

No matter whether the site order would bias users in favor of the second platform or the first, the 10/10 split would balance these effects out as much as possible.
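Mechanically, this counterbalancing is just a random even split of the participant pool into the two flows. A minimal sketch, using hypothetical participant IDs:

```python
# A minimal sketch of the 10/10 counterbalanced assignment described
# above, using hypothetical participant IDs.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]
random.seed(7)  # fixed seed so the assignment is reproducible
random.shuffle(participants)

half = len(participants) // 2
flows = {
    "Domino's -> Pizza Hut": sorted(participants[:half]),
    "Pizza Hut -> Domino's": sorted(participants[half:]),
}
for order, group in flows.items():
    print(order, group)
```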

The other benefit of setting up the study this way is that we would get to observe how brand new visitors and visitors with prior expectations would view and interact with each site. Both Domino’s and Pizza Hut would get their share of open-minded new orderers and judging, sharp-eyed pizza veterans.


Writing the task script

We re-used the same task script from our previous Domino's vs Pizza Hut test, which has been dissected and explained in an old blog post here. You can read all about how we chose the wording for those tasks in that post.

You can do a quick skim of the task list below:

Scenario: You’re having a late night in with a few friends and people are starting to get hungry, so you decide to order a couple of pizzas for delivery.

  • Have you ordered pizza online before? Which website(s) did you use?
  • Does this site have any deals you can take advantage of for your order?
  • Customize your pizzas with the toppings, sauce, and crust options you would like.
  • Finalize your order with any other items you want besides your pizzas.
  • Go through the checkout until you are asked to enter billing information.
  • Please now go to [link to second site] and go through the pizza ordering process there too. Compare your experience as you go.
  • Which site was easier to use, and why? Which would you use next time you order pizza online?

We also could have broken down Task 6 into several more discrete steps – for example, mirroring the exact same steps we wrote for the first website. This would have allowed us to collect task usability ratings, time on task, and other user testing metrics that could be compared between the sites.

However, we decided to keep the flow more free-form and let users chart their own course through the second site. You can choose between a looser task script and a more structured one based on the kinds of data you want to collect for your study.


The post-test survey

After users complete the tasks during their video session, we have them respond to a post-test survey. This is where we posed a number of different rating-style and multiple-choice questions to try and quantify users' attitudes and determine which site performed better in which areas.

Our post-test survey:

After completing both flows and giving feedback on each step, we wanted the users to unequivocally choose one of the websites. This way we could instantly see the final outcome from each of the tests, without trying to parse unemphatic verbal responses from the videos.

For each test, we listed the sites in the order they were experienced, to avoid creating any additional variables between the test.

  • How would you rate your experience on the Domino’s website, on a scale of 1 (Hated it!) to 10 (Loved it!)? (slider rating, 1-10)
  • How would you rate your experience on the Pizza Hut website, on a scale of 1 (Hated it!) to 10 (Loved it!)? (slider rating, 1-10)

Here again we showed the questions in an order corresponding to the order from the video session. First users rated the site they started on, then they rated the site they finished on.
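With paired 1-10 ratings like these (the same 20 users rated both sites), a paired test such as the Wilcoxon signed-rank test is a reasonable way to check whether one site's edge is more than noise. Here's a sketch with hypothetical ratings, not the study's actual data:

```python
# A sketch of comparing the two 1-10 slider ratings. Since the same
# 20 users rated both sites, a paired (signed-rank) test fits.
# The rating lists below are hypothetical, not the study's data.
from scipy import stats

dominos   = [8, 7, 9, 6, 8, 7, 9, 8, 5, 7, 8, 9, 6, 7, 8, 7, 9, 8, 7, 6]
pizza_hut = [6, 7, 7, 5, 8, 6, 7, 7, 5, 6, 7, 8, 6, 6, 7, 6, 8, 7, 6, 5]

stat, p = stats.wilcoxon(dominos, pizza_hut)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.4f}")
# A small p (e.g. < .05) would suggest the rating gap isn't noise.
```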

  • Overall mood/feel of the site
  • Attractive pictures, images, and illustrations
  • Ease of navigating around the site
  • Clarity of information provided by the site
  • None of the above

For the fourth question, we listed several different aspects of the user experience to see which site held the edge in each. Users could check any number of options, and we also included a “none of the above” option.

In this case, we asked users to select areas in which the second site they had tested was superior to the first. We felt that since users might tend to favor the first site they experienced, it would be most illuminating to see where they felt the second site had been better.

If we were to run this test again, we would include more options that came up later in our results, such as the availability of appealing promotions/deals and the choice of pizza toppings, customizations, and other food options.

  • What is the #1 most important thing about a pizza website, to you? (free response)

Since we knew that we probably wouldn’t think of every possible area of the experience that users cared about, we followed up by asking a free-response question about what users prioritized the most in their online ordering experience. This allowed us to get more insight into the previous question, and build a deeper understanding of each user’s mindset while viewing their videos and other responses.

  • Several times a week
  • About once a week
  • Once or twice a month
  • Less than once a month
  • Domino’s
  • Papa John’s
  • Little Caesars

The final 2 questions were just general information-gathering questions. We were interested to see what kind of backgrounds the testers had (and were maybe also a little excited to try out more of the new post-test question types ).

Besides expanding the options in question 4, the other thing we would change about the post-test survey if we ran this study again would be to ask more free-response type questions. We found that with so many quantitative type questions, we actually missed out on some qualitative answers that would have been useful (especially in conjunction with the data we did get).

Some example questions we would add, which we thought of after getting the results in, are:

  • What did you like the best about the [Domino’s/Pizza Hut] website?
  • What did you like the least about the [Domino’s/Pizza Hut] website?
  • Do you feel that your experience on the two sites would influence your choice between Domino’s and Pizza Hut if you were going to order online from one of them in the future?

Wrapping up Part 1

Besides the task script and post-test survey, the rest of the setup just consisted of choosing a target demographic – we selected users in the US, ages 18-34, making under $50,000. Once the tests were finalized, we launched them and collected the 20 results in less than a day.

In Part 2 of this series, we’ll go over the results of the study, including the quantitative data we got, the contents of the videos, and what we learned about doing competitive usability testing.

Part 2: Results

Sign up for a free user testing trial of our usability testing tools

By Tim Rotolo

Tim Rotolo is a co-founder at Trymata, and the company's Chief Growth Officer. He is a born researcher whose diverse interests include design, architecture, history, psychology, biology, and more. Tim holds a Bachelor's Degree in International Relations from Claremont McKenna College in southern California. You can reach him on Linkedin at linkedin.com/in/trotolo/ or on Twitter at @timoroto


Usability Testing: Everything You Need to Know (Methods, Tools, and Examples)

As you crack into the world of UX design, there’s one thing you absolutely must understand and learn to practice like a pro: usability testing.

Precisely because it’s such a critical skill to master, it can be a lot to wrap your head around. What is it exactly, and how do you do it? How is it different from user testing? What are some actual methods that you can employ?

In this guide, we’ll give you everything you need to know about usability testing—the what, the why, and the how.

Here’s what we’ll cover:

  • What is usability testing and why does it matter?
  • Usability testing vs. user testing
  • Formative vs. summative usability testing
  • Attitudinal vs. behavioral research
  • Five essential usability testing methods: performance testing, card sorting, tree testing, the 5-second test, and eye tracking
  • How to learn more about usability testing

Ready? Let’s dive in.

1. What is usability testing and why does it matter?

Simply put, usability testing is the process of discovering ways to improve your product by observing users as they engage with the product itself (or a prototype of the product). It's a UX research method specifically trained on—you guessed it—the usability of your products. And what is usability? Usability is a measure of how easily users can accomplish a given task with your product.

Usability testing, when executed well, uncovers pain points in the user journey and highlights barriers to good usability. It will also help you learn about your users’ behaviors and preferences as these relate to your product, and to discover opportunities to design for needs that you may have overlooked.

You can conduct usability testing at any point in the design process once you've turned initial ideas into design solutions, but the earlier the better. Test early and test often! You can conduct some kind of usability testing with low- and high-fidelity prototypes alike—and testing should continue after you've got a live, out-in-the-world product.

2. Usability testing vs. user testing

Though they sound similar and share a somewhat similar end goal, usability testing and user testing are two different things. We’ll look at the differences in a moment, but first, here’s what they have in common:

  • Both share the end goal of creating a design solution to meet real user needs
  • Both take the time to observe and listen to the user to hear from them what needs/pain points they experience
  • Both look for feasible ways of meeting those needs or addressing those pain points

User testing essentially asks if this particular kind of user would want this particular kind of product—or what kind of product would benefit them in the first place. It is entirely user-focused.

Usability testing, on the other hand, is more product-focused and looks at users’ needs in the context of an existing product (even if that product is still in prototype stages of development). Usability testing takes your existing product and places it in the hands of your users (or potential users) to see how the product actually works for them—how they’re able to accomplish what they need to do with the product.

3. Formative vs. summative usability testing

Alright! Now that you understand what usability testing is, and what it isn’t, let’s get into the various types of usability testing out there.

There are two broad categories of usability testing that are important to understand—formative and summative. These have to do with when you conduct the testing and what your broad objectives are: what overarching impact the testing should have on your product.

Formative usability testing: 

  • Is a qualitative research process 
  • Happens earlier in the design, development, or iteration process
  • Seeks to understand what about the product needs to be improved
  • Results in qualitative findings and ideation that you can incorporate into prototypes and wireframes

Summative usability testing:

  • Is a research process that’s more quantitative in nature
  • Happens later in the design, development, or iteration process
  • Seeks to understand whether the solutions you are implementing (or have implemented) are effective
  • Results in quantitative findings that can help determine broad areas for improvement or specific areas to fine-tune (this can go hand in hand with competitive analysis); see the sketch below for one way to report such findings
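Because summative findings are quantitative, they are only as useful as the sample behind them, so it helps to report success rates with a confidence interval. Here's a minimal Python sketch of the adjusted-Wald (Agresti-Coull) interval often recommended for small usability samples; the task and counts are invented for illustration.

```python
import math

def adjusted_wald_ci(successes: int, n: int, z: float = 1.96):
    """95% adjusted-Wald (Agresti-Coull) interval for a task success rate.

    A common choice for the small samples typical of usability studies.
    """
    # Add z^2/2 pseudo-successes and z^2 pseudo-trials, then compute
    # an ordinary Wald interval on the adjusted proportion.
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical summative result: 14 of 18 participants completed checkout.
low, high = adjusted_wald_ci(successes=14, n=18)
print(f"Success rate: {14 / 18:.0%}, 95% CI: {low:.0%}-{high:.0%}")
```

With n = 18, an observed 78% really means "somewhere between roughly 54% and 92%", which is exactly the kind of context a summative report should carry.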

4. Attitudinal vs. behavioral research

Alongside the timing and purpose of the testing (formative vs. summative), it’s important to understand two broad categories that your research (both your objectives and your findings) will fall into: behavioral and attitudinal.

Attitudinal research is all about what people say—what they think and communicate about your product and how it works. Behavioral research focuses on what people do—how they actually interact with your product and the feelings that surface as a result.

What people say and what people do are often two very different things. These two categories help us define those differences, choose our testing methods more intentionally, and categorize our findings more effectively.

5. Five essential usability testing methods

Some usability testing methods are geared more towards uncovering either behavioral or attitudinal findings, but many have the potential to result in both.

Of the methods you’ll learn about in this section, performance testing has the greatest potential for targeting both—and will perhaps require the greatest amount of thoughtfulness regarding how you approach it.

Naturally, then, we’ll spend a little more time on that method than the other four, though that in no way diminishes their usefulness! Here are the methods we’ll cover:

  • Performance testing
  • Card sorting
  • Tree testing
  • The 5-second test
  • Eye tracking

These are merely five common and/or interesting methods—it is not a comprehensive list of every method you can use to get inside the hearts and minds of your users. But it’s a place to start. So here we go!

Performance testing

In performance testing, you sit down with a user and give them a task (or set of tasks) to complete with the product.

This is often a combination of methods and approaches that will allow you to interview users, see how they use your product, and find out how they feel about the experience afterward. Depending on your approach, you’ll observe them, take notes, and/or ask usability testing questions before, after, or along the way.

Performance testing is by far the most talked-about form of usability testing—especially as it’s often combined with other methods. It’s what most commonly comes to mind in discussions of usability testing as a whole, and it’s what many UX design certification programs focus on—because it’s so broadly useful and adaptable.

While there’s no one right way to conduct performance testing, there are a number of approaches and combinations of methods you can use, and you’ll want to be intentional about it.

It’s a method that you can adapt to your objectives—so make sure you do! Ask yourself what kind of attitudinal or behavioral findings you’re really looking for, how much time you’ll have for each testing session, and what methods or approaches will help you reach your objectives most efficiently.

Performance testing is often combined with user interviews. For a quick guide on how to ask great questions during this part of a testing session, watch this video:

Even if you choose not to combine performance testing with user interviews, good performance testing will still involve some degree of questioning and moderating.

Performance testing typically results in a pretty massive chunk of qualitative insights, so you’ll need to invest a fair amount of intention and planning before you jump in.

Maximize the usefulness of your research by being thoughtful about the task(s) you assign and what approach you take to moderating the sessions. As your test participants go about the task(s) you assign, you’ll watch, take notes, and ask questions either during or after the test—depending on your approach.

Four approaches to performance testing

There are four ways you can go about moderating a performance test, and it’s worth understanding and choosing your approach (or combination of approaches) carefully and intentionally. As you choose, take time to consider:

  • How much guidance the participant will actually need
  • How intently participants will need to focus
  • How guidance or prompting from you might affect results or observations

With these things in mind, let’s look at the four approaches.

Concurrent Think Aloud (CTA)

With this approach, you’ll encourage participants to externalize their thought process—to think out loud. Your job during the session will be to keep them talking through what they’re looking for, what they’re doing and why, and what they think about the results of their actions.

A CTA approach often uncovers a lot of nuanced details in the user journey, but if your objectives include anything related to the accuracy or time for task completion, you might be better off with a Retrospective Think Aloud.

Retrospective Think Aloud (RTA)

Here, you’ll allow participants to complete their tasks and recount the journey afterward. They can complete tasks in a more realistic time frame and with more realistic accuracy, though there will certainly be nuanced details of participants’ thoughts and feelings you’ll miss out on.

Concurrent Probing (CP)

With Concurrent Probing, you ask participants about their experience as they’re having it. You prompt them for details on their expectations, reasons for particular actions, and feelings about results.

This approach can be distracting, but when used in combination with CTA, you can let participants complete the tasks and prompt them only when you see a particularly interesting aspect of their experience that you’d like to know more about. Again, if accuracy and timing are critical objectives, you might be better off with Retrospective Probing.

Retrospective Probing (RP)

If a participant says or does something interesting as they complete their task(s), you can note it and ask them about it later—this is Retrospective Probing. It’s very often combined with CTA or RTA to ensure that you’re not missing out on those nuanced details of their experience without distracting them from actually completing the task.

Whew! There’s your quick overview of performance testing. To learn more about it, read on to the final section of this article: How to learn more about usability testing.

With this under our belts, let’s move on to our other four essential usability testing methods.

Card sorting

Card sorting is a way of testing the usability of your information architecture. You give users cards labeled with the names and short descriptions of the main items/sections of the product and ask them to sort the cards into piles according to which items seem to go best together. In an open card sort, participants create and name the piles themselves; in a closed card sort, they place the cards into categories you’ve defined in advance. You can go even further by asking them to sort the piles into larger groups.

Rather than structuring your site or app according to your understanding of the product, card sorting allows the information architecture to mirror the way your users are thinking.

This is a great technique to employ very early in the design process as it is inexpensive and will save the time and expense of making structural adjustments later in the process. And there’s no technology required! If you want to conduct it remotely, though, there are tools like OptimalSort that do this effectively.
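Card-sort results are usually analyzed by counting how often participants put the same two cards in one pile. Here's a minimal Python sketch of that co-occurrence count; the card names and piles are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical card-sort results: each participant's piles of cards.
sorts = [
    [{"Flights", "Check-in"}, {"Hotels", "Car hire"}, {"My bookings"}],
    [{"Flights", "Check-in", "My bookings"}, {"Hotels", "Car hire"}],
    [{"Flights", "Hotels"}, {"Check-in", "My bookings"}, {"Car hire"}],
]

# Count how often each pair of cards lands in the same pile.
pair_counts = Counter()
for piles in sorts:
    for pile in piles:
        for pair in combinations(sorted(pile), 2):
            pair_counts[pair] += 1

# Pairs grouped together most often are candidates for the same category.
for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {count}/{len(sorts)} participants")
```

Dedicated tools compute this for you, but the underlying similarity matrix is no more than pair counts like these.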

For more on how to conduct card sorting, watch this video:

Tree testing

Tree testing is a great follow-up to card sorting, but it can be conducted on its own as well. In tree testing, you create a visual information hierarchy (or “tree”) and ask users to complete a task using the tree. For example, you might ask users, “You want to accomplish X with this product. Where do you go to do that?” Then you observe how easily users are able to find what they’re looking for.

This is another great technique to employ early in the design process. It can be conducted with paper prototypes or spreadsheets, but you can also use tools such as TreeJack to accomplish this digitally and remotely.
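Under the hood, tree-testing tools mostly score each participant's click path against a known correct path. Here's a minimal sketch, with an invented task and paths, of how one attempt might be classified:

```python
# Hypothetical tree-testing task: "Where would you change your seat?"
# The correct path through the IA tree is known in advance.
correct_path = ["Home", "Manage booking", "Seats"]

def score_attempt(clicks: list) -> dict:
    """Classify one participant's click path through the tree."""
    success = bool(clicks) and clicks[-1] == correct_path[-1]
    # A "direct" success never strays from the correct path.
    direct = success and clicks == correct_path
    return {"success": success, "direct": direct, "clicks": len(clicks)}

# A direct hit, and an indirect one that backtracked through "Book".
print(score_attempt(["Home", "Manage booking", "Seats"]))
print(score_attempt(["Home", "Book", "Home", "Manage booking", "Seats"]))
```

Aggregating success, directness, and click counts across participants gives you the usual tree-testing metrics.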

The 5-second test

In the 5-second test, you expose your users to one portion of your product (one screen, probably the top half of it) for five seconds and then interview them to see what they took away regarding:

  • The product/page’s purpose and main features or elements
  • The intended audience and trustworthiness of the brand
  • Their impression of the usability and design of the product

You can conduct this kind of testing in person rather simply, or remotely with tools like UsabilityHub.

Eye tracking

This one may seem somewhat new, but it’s been around for a while, though the tools and technology around it have evolved. Eye tracking on its own isn’t enough to determine usability, but it’s a great complement to your other usability testing measures.

In eye tracking, you literally track where most users’ eyes land on the screen you’re designing. This matters because you want to make sure that the elements users’ eyes are drawn to are the ones that communicate the most important information. Eye tracking is difficult to conduct in any kind of analog fashion, but there are a lot of tools out there that make it simple—CrazyEgg and HotJar are both great places to start.
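If you ever need to sanity-check raw gaze data yourself, the core of a heatmap is just binning samples into screen cells. A toy Python sketch with invented coordinates:

```python
from collections import Counter

# Hypothetical gaze samples: (x, y) screen coordinates in pixels, as an
# eye tracker might emit them many times per second.
gaze_samples = [(112, 80), (130, 95), (118, 88), (620, 400), (640, 390)]

CELL = 100  # bin size in pixels; smaller cells give a finer-grained heatmap

# Bucket each sample into a grid cell and count gaze density per cell.
heatmap = Counter((x // CELL, y // CELL) for x, y in gaze_samples)

# The hottest cells are where eyes land most often; check that they hold
# the elements meant to communicate the most important information.
for (cx, cy), hits in heatmap.most_common(3):
    print(f"cell x={cx * CELL}-{(cx + 1) * CELL}px, "
          f"y={cy * CELL}-{(cy + 1) * CELL}px: {hits} samples")
```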

6. How to learn more about usability testing

There you have it: your 15-minute overview of the what, why, and how of usability testing. But don’t stop here! Usability testing and UX research as a whole have a deeply humanizing impact on the design process. It’s a fascinating field to discover, and the results of this kind of work have the power to keep companies, design teams, and even the lone designer accountable to what matters most: the needs of the end user.

If you’d like to learn more about usability testing and UX research, take the free UX Research for Beginners Course with CareerFoundry. This tutorial is jam-packed with information that will give you a deeper understanding of the value of this kind of testing as well as a number of other UX research methods.

You can also enroll in a UX design course or bootcamp to get a comprehensive understanding of the entire UX design process (of which usability testing and UX research are an integral part). For guidance on the best programs, check out our list of the 10 best UX design certification programs. And if you’ve already started your learning process and you’re thinking about the job hunt, here are the top 5 UX research interview questions to be ready for.

For further reading about usability testing and UX research, check out these other articles:

  • How to conduct usability testing: a step-by-step guide
  • What does a UX researcher actually do? The ultimate career guide
  • 11 usability heuristics every designer should know
  • How to conduct a UX audit

Usability Testing

What is usability testing?

Usability testing is the practice of testing how easy a design is to use with a group of representative users. It usually involves observing users as they attempt to complete tasks and can be done for different types of designs. It is often conducted repeatedly, from early development until a product’s release.

“It’s about catching customers in the act, and providing highly relevant and highly contextual information.”

— Paul Maritz, CEO at Pivotal


Usability Testing Leads to the Right Products

Through usability testing, you can find design flaws you might otherwise overlook. When you watch how test users behave while they try to execute tasks, you’ll get vital insights into how well your design/product works. Then, you can leverage these insights to make improvements. Whenever you run a usability test, your chief objectives are to:

1) Determine whether testers can complete tasks successfully and independently.

2) Assess their performance and mental state as they try to complete tasks, to see how well your design works.

3) See how much users enjoy using it.

4) Identify problems and their severity.

5) Find solutions.

While usability tests can help you create the right products, they shouldn’t be the only tool in your UX research toolbox. If you just focus on the evaluation activity, you won’t improve the usability overall.


There are different methods for usability testing. Which one you choose depends on your product and where you are in your design process.

Usability Testing is an Iterative Process

To make usability testing work best, you should:

1) Plan –

a. Define what you want to test. Ask yourself questions about your design/product. What aspect(s) of it do you want to test? You can make a hypothesis from each answer. With a clear hypothesis, you’ll have the exact aspect you want to test.

b. Decide how to conduct your test – e.g., remotely. Define the scope of what to test (e.g., navigation) and stick to it throughout the test. When you test aspects individually, you’ll eventually build a broader view of how well your design works overall.

2) Set user tasks –

a. Prioritize the most important tasks to meet objectives (e.g., complete checkout), no more than 5 per participant. Allow a 60-minute timeframe.

b. Clearly define tasks with realistic goals.

c. Create scenarios where users can try to use the design naturally. That means you let them get to grips with it on their own rather than direct them with instructions.

3) Recruit testers – Know who your users are as a target group. Use screening questionnaires (e.g., Google Forms) to find suitable candidates. You can advertise and offer incentives. You can also find contacts through community groups, etc. If you test with only 5 users, you can still reveal 85% of core issues (the arithmetic behind this rule of thumb is sketched after this list).

4) Facilitate/Moderate testing – Set up testing in a suitable environment. Observe and interview users. Notice issues. See if users fail to see things, go in the wrong direction, or misinterpret rules. When you record usability sessions, you can more easily count the number of times users become confused. Ask users to think aloud and tell you how they feel as they go through the test. From this, you can check whether your designer’s mental model is accurate: does what you think users can do with your design match what these test users show?

If you choose remote testing, you can moderate via Google Hangouts, etc., or use unmoderated testing. Dedicated remote-testing software lets you carry out both moderated and unmoderated testing, with the benefit of tools such as heatmaps.
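The "5 users reveal 85% of core issues" rule of thumb above comes from Nielsen and Landauer's problem-discovery model: the share of problems found by n users is 1 - (1 - L)^n, where L is the probability that a single user surfaces a given problem (around 31% in their original studies). A quick Python sketch reproduces the numbers:

```python
# Nielsen & Landauer's problem-discovery model: share of usability
# problems found by n test users, where L is the chance that a single
# user surfaces any given problem (~0.31 in their original studies).
def problems_found(n: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> {problems_found(n):.0%} of problems found")
```

At n = 5 the model gives about 84%, which is where the familiar 85% figure comes from; note that it assumes L is roughly constant across problems and users.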


Keep usability tests smooth by following these guidelines.

1) Assess user behavior – Use these metrics:

Quantitative – time users take on a task, success and failure rates, effort (how many clicks users take, instances of confusion, etc.)

Qualitative – users’ stress responses (facial reactions, body-language changes, squinting, etc.), subjective satisfaction (which they give through a post-test questionnaire) and perceived level of effort/difficulty

2) Create a test report – Review video footage and analyze the data. Clearly define design issues and best practices. Involve the entire team.

Overall, you should test not your design’s functionality, but users’ experience of it. Some users may be too polite to be entirely honest about problems. So, always examine all data carefully.

Learn More about Usability Testing

Take our course on usability testing.

Here’s a quick-fire method to conduct usability testing.

See some real-world examples of usability testing.

Pick up some helpful usability testing tips.

Questions related to Usability Testing

To conduct usability testing effectively:

Start by defining clear, objective goals and recruit representative users.

Develop realistic tasks for participants to perform and set up a controlled, neutral environment for testing.

Observe user interactions, noting difficulties and successes, and gather qualitative and quantitative data.

After testing, analyze the results to identify areas for improvement.

For a comprehensive understanding and step-by-step guidance on conducting usability testing, refer to our specialized course on Conducting Usability Testing.

Conduct usability testing early and often, from the design phase to development and beyond. Early design testing uncovers issues when they are easier and less costly to fix. Regular assessments throughout the project lifecycle ensure continued alignment with user needs and preferences. Usability testing is crucial for new products and when redesigning existing ones to verify improvements and discover new problem areas. Dive deeper into optimal timing and methods for usability testing in our detailed article “Usability: A part of the User Experience.”

Incorporate insights from William Hudson, CEO of Syntagm, to enhance usability testing strategies. William recommends techniques like tree testing and first-click testing for early design phases to scrutinize navigation frameworks. These methods are exceptionally suitable for isolating and evaluating specific components without visual distractions, focusing strictly on user understanding of navigation. They're advantageous for their quantitative nature, producing actionable numbers and statistics rapidly, and being applicable at any project stage. Ideal for both new and existing solutions, they help identify problem areas and assess design elements effectively.

To conduct usability testing for a mobile application:

Start by identifying the target users and creating realistic tasks for them.

Collect data on their interactions and experiences to uncover issues and areas for improvement.

For instance, consider the concept of ‘tappability’ as explained by Frank Spillers, CEO of Experience Dynamics: focusing on creating task-oriented, clear, and easily tappable elements is crucial.

Employing correct affordances and signifiers, like animations, can clarify interactions and enhance user experience, avoiding user frustration and errors. Dive deeper into mobile usability testing techniques and insights by watching our insightful video with Frank Spillers.

For most usability tests, the ideal number of participants depends on your project’s scope and goals. Our video featuring William Hudson, CEO of Syntagm, emphasizes the importance of quality in choosing participants as it significantly impacts the usability test's results.

He shares insightful experiences and stresses carefully selecting and recruiting participants to ensure constructive and reliable feedback. The process involves meticulous planning and execution to identify and discard data from non-contributive participants, so that the insights gathered to improve the interactive solution, be it an app or a website, are meaningful and trustworthy. Remember the emphasis on participants' attentiveness and consistency while performing tasks to avoid compromising the results. Watch the full video for a more comprehensive understanding of participant recruitment and usability testing.

To analyze usability test results effectively, first collate the data meticulously. Next, identify patterns and recurrent issues that indicate areas needing improvement. Utilize quantitative data for measurable insights and qualitative data for understanding user behavior and experience. Prioritize findings based on their impact on user experience and the feasibility of implementation. For a deeper understanding of analysis methods and to ensure thorough interpretation, refer to our comprehensive guides on Analyzing Qualitative Data and Usability Testing. These resources provide detailed insights, aiding in systematically evaluating and optimizing user interaction and interface design.

Usability testing is predominantly qualitative, focusing on understanding users' thoughts and experiences, as highlighted in our video featuring William Hudson, CEO of Syntagm. 

It enables insights into users' minds, asking why things didn't work and what's going through their heads during the testing phase. However, specific methods, like tree testing and first-click testing, present quantitative aspects, providing hard numbers and statistics on user performance. These methods can be executed at any design stage, providing actionable feedback and revealing navigation and visual design efficacy.

To conduct remote usability testing effectively, establish clear objectives, select the right tools, and recruit participants fitting your user profile. Craft tasks that mirror real-life usage and prepare concise instructions. During the test, observe users’ interactions and note their challenges and behaviors. For an in-depth understanding and guide on performing unmoderated remote usability testing, refer to our comprehensive article, Unmoderated Remote Usability Testing (URUT): Every Step You Take, We Won’t Be Watching You.

Some people use the two terms interchangeably, but User Testing and Usability Testing, while closely related, serve distinct purposes. User Testing focuses on understanding users' perceptions, values, and experiences, primarily exploring the 'why' behind users' actions. It is crucial for gaining insights into user needs, preferences, and behaviors, as elucidated by Ann Blandford, an HCI professor, in our enlightening video.

She elaborates on the significance of semi-structured interviews in capturing users' attitudes and explanations regarding their actions. Usability Testing primarily assesses users' ability to achieve their goals efficiently and complete specific tasks with satisfaction, often emphasizing the ease of interface use. Balancing both methods is pivotal for comprehensively understanding user interaction and product refinement.

Usability testing is crucial as it determines how usable your product is, ensuring it meets user expectations. It allows creators to validate designs and make informed improvements by observing real users interacting with the product. Benefits include:

Clarity and focus on user needs.

Avoiding internal bias.

Providing valuable insights to achieve successful, user-friendly designs. 

By enrolling in our Conducting Usability Testing course, you’ll gain insights from the extensive experience of Frank Spillers, CEO of Experience Dynamics, learning to develop test plans, recruit participants, and convey findings effectively.

Explore our dedicated Usability Expert Learning Path at Interaction Design Foundation to learn Usability Testing. We feature a specialized course, Conducting Usability Testing , led by Frank Spillers, CEO of Experience Dynamics. This course imparts proven methods and practical insights from Frank's extensive experience, guiding you through creating test plans, recruiting participants, moderation, and impactful reporting to refine designs based on the results. Engage with our quality learning materials and expert video lessons to become proficient in usability testing and elevate user experiences!



Learn more about Usability Testing

Take a deep dive into Usability Testing with our course Conducting Usability Testing.

Do you know if your website or app is being used effectively? Are your users completely satisfied with the experience? What is the key feature that makes them come back? In this course, you will learn how to answer such questions—and with confidence too—as we teach you how to justify your answers with solid evidence.

Great usability is one of the key factors to keep your users engaged and satisfied with your website or app. It is crucial you continually undertake usability testing and perceive it as a core part of your development process if you want to prevent abandonment and dissatisfaction. This is especially important given that 79% of users will abandon a website if the usability is poor, according to Google! As a designer, you also have another vital duty—you need to take the time to step back, place the user at the center of the development process, and evaluate any underlying assumptions. It’s not the easiest thing to achieve, particularly when you’re in a product bubble, and that makes usability testing even more important. You need to ensure your users aren’t left behind!

As with most things in life, the best way to become good at usability testing is to practice! That’s why this course contains not only lessons built on evidence-based approaches, but also a practical project. This will give you the opportunity to apply what you’ve learned from internationally respected Senior Usability practitioner Frank Spillers, and carry out your own usability tests.

By the end of the course, you’ll have hands-on experience with all stages of a usability test project—how to plan, run, analyze, and report on usability tests. You can even use the work you create during the practical project to form a case study for your portfolio, to showcase your usability test skills and experience to future employers!

All open-source articles on Usability Testing

  • 7 Great, Tried and Tested UX Research Techniques
  • How to Conduct a Cognitive Walkthrough
  • How to Conduct User Observations
  • Mobile Usability Research – The Important Differences from the Desktop
  • How to Recruit Users for Usability Studies
  • Best Practices for Mobile App Usability from Google
  • Unmoderated Remote Usability Testing (URUT) - Every Step You Take, We Won’t Be Watching You
  • Making Use of the Crowd – Social Proof and the User Experience
  • Agile Usability Engineering
  • Four Assumptions for Usability Evaluations
  • Transform Your Creative Process with Design Thinking
  • Revolutionize UX Design with VR Experiences
  • Start Your UX Journey: Essential Insights for Success





8 Essential usability testing methods for UX insights

There are various usability testing methods available, from lab-based usability testing to in-house heuristic evaluation—in this chapter, we look at the top techniques you need to know when running a usability test. We explore the difference between quantitative and qualitative data, how to choose between moderated and unmoderated usability testing, and how to pick the right UX research method.

What to do before your usability test

Before we get into the different usability methods available, let’s look at how to prepare for your usability test. Here are our top tips to consider before delving into your testing period:

  • What’s your goal?
  • What results do you expect?
  • Who will conduct the test?
  • Where are you finding participants?
  • What tool are you using (if any)?
  • How will you analyze the results?

Once you’ve thought about the above questions, it’s time to move on to the final decision.

  • What method are you using?

Selecting your usability testing method is a challenging, but critical, decision. Work with your team and research experts to determine the best method to achieve the insights you’re looking to gather.

To help make your choice, here’s our breakdown of usability testing methods.

Quantitative vs. qualitative usability testing


Anytime you collect data when usability testing, it's going to be in one of two types of studies: qualitative or quantitative. Neither is empirically better, though there are specific use cases where one may be more beneficial than the other. Most research benefits from having both types of data, so it's important to understand the differences between the two methods and how best to employ each. Think about them as different players on the same team. Their goal is the same–gain valuable insights–but their approach varies.

We do both types of usability testing: qualitative and quantitative. I think it's really important to have a mix of both. Quantitative testing gives us hard numbers, and those metrics are key in making data-driven design decisions. On the other hand, qualitative testing is incredibly useful because you get the voice of the user, and they're not just a "metric".

Matt Elbert, Senior Design Manager at Movista


In the end, qualitative and quantitative usability testing are both valuable tools in the user research toolkit, but which one works for you will depend on your research goal.


The difference between qualitative and quantitative data

All usability testing involves participants attempting to complete assigned tasks with a product. Though the format of qualitative and quantitative tests doesn't change that much for the participant, how and what kind of data you collect will differ significantly.

Qualitative data

Qualitative data consists of observational findings. That means there isn't a hard number or statistic assigned to the data. This type of data may come in the form of notes from observation or comments from participants. Qualitative data requires interpretation, and different observers could come to different conclusions during a test.

The main distinction between quantitative and qualitative testing is in the way the data are collected. With qualitative testing, data about behaviors and attitudes are directly collected by observing what users do and how they react to your product.

In contrast, quantitative testing accumulates data about users' behaviors and attitudes in an indirect way. With usability testing tools , quantitative data is usually recorded automatically while participants complete the tasks.

Examples of qualitative usability data: product reviews, user comments during usability testing, descriptions of the issues encountered, facial expressions, preferences, etc.

Quantitative data

Quantitative data consists of statistical data that can be quantified and expressed in numerical terms. This data comes in the form of metrics like how long it took for someone to complete a task, or what percentage of a group clicked a section of a design, etc.

With quantitative data, you need the context of the test for it to make sense. For example, if I simply told you, "50% of participants failed to complete the task," it doesn't give much insight as to why they had trouble.

Examples of quantitative usability data: completion rates, mis-click rates, time spent, etc.
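To make these concrete, here's a minimal Python sketch, with an invented export format and invented numbers, of how the three example metrics might be computed from raw session records:

```python
from statistics import median

# Hypothetical session records, as a usability testing tool might export them.
sessions = [
    {"completed": True,  "clicks": 7,  "misclicks": 1, "seconds": 48},
    {"completed": True,  "clicks": 5,  "misclicks": 0, "seconds": 35},
    {"completed": False, "clicks": 12, "misclicks": 6, "seconds": 93},
    {"completed": True,  "clicks": 9,  "misclicks": 2, "seconds": 61},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
misclick_rate = (sum(s["misclicks"] for s in sessions)
                 / sum(s["clicks"] for s in sessions))
median_time = median(s["seconds"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")   # 3 of 4 -> 75%
print(f"Mis-click rate: {misclick_rate:.0%}")      # 9 of 33 clicks -> 27%
print(f"Median time on task: {median_time}s")      # 54.5s
```

Medians are often preferred over means for time on task, since one slow session can badly skew a small sample.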

When to do qualitative or quantitative testing

You can cut your grass with a pair of scissors, but a lawnmower is far more efficient. The same is true with qualitative and quantitative usability testing. In most scenarios, you could use either to collect data, but one will be better depending on the task at hand.

The type of data you collect will depend a lot on the goal and hypothesis of your test. When you know what you want to test and you identify the goals of your test, you will know what type of data you need to collect.

Vaida Pakulyte, UX Researcher and Designer at Electrolux


Below we provide some example scenarios for both. It's not meant to be exhaustive, but representative of when you may employ one type of testing over the other.

Quantitative usability testing: Measuring user experience with data

With quantitative testing, the goal is to uncover what is happening in a product. Quantitative testing works well when you're looking to find out how your design performs, and whether users encounter major usability problems while using your product.

For example, let's say you just released a reminder function in your app. You can run a test where you ask participants to set a reminder for a day in the week. You want to know if participants are able to complete the task within two minutes. Quantitative usability testing is great for this scenario because you can measure the time it takes for a participant to complete the task.

Let's say you find only 30% of participants are able to complete the task within two minutes. Now that you have that data, you can study the heatmaps of the journey users take to understand what usability issues they encountered when trying to complete the task. Or, you can follow up the quantitative study with a few user interviews to dive deeper into the experience of those users who struggled to complete the task.

Product tip ✨

Maze automatically collects quantitative data such as time spent, success rates, and mis-click rates, and gives you heatmaps for each session so you can dive deeper into the test results and improve the UX of your product. Try it out for free.

Qualitative usability testing: Understanding the why behind actions

Qualitative user testing enables you to understand why someone does something in a product, and to research your target audience's pain points, opinions, and mental models. Qualitative usability testing usually employs the think-aloud method during testing sessions: participants are asked to voice the thoughts going through their minds as they complete the tasks.

This way, you get access to users' opinions and comments, which can be very useful in trying to understand why an experience or design doesn't work for them, or what needs to be changed. As you collect more qualitative data, you may start to discover trends among users, which you can use to make changes in the next design iteration.

Qualitative and quantitative user testing perform best when used in conjunction with one another. So, while they are separate methods, thinking about them as two pieces of a whole may be the best approach.

Moderated vs. unmoderated usability testing


The mantra of usability testing is, "We are testing the product, not you."

— Carol Barnum, Usability Testing Essentials: Ready, Set...Test!

When running a usability study, you have to decide on one of two approaches: moderated or unmoderated. Both are viable options and have their advantages and disadvantages depending on your research goals. In this section, we talk about what moderated and unmoderated usability testing is, the pros and cons of each, and when to use each type of usability testing.

Moderated usability testing

As the name suggests, in a moderated usability test there's a moderator who's with the participant during the test to guide them through it. The role of the moderator is to facilitate the session, instruct the user on the tasks to complete, ask follow-up questions, and provide relevant guidance.

Moderated usability tests can happen either in-person or remotely. When it's done remotely, the moderator usually joins through a video call, and the participant uses screen sharing to display their screen, so the moderator can see and hear exactly what the participant is doing during the test.

You should be careful not to introduce your own bias into the user test. Phrasing your questions and your tasks to allow for clear, open-ended, and non-biased responses is very important.

When running a moderated usability test, it's important to be aware of a few best practices. The first is not to lead the participant towards an answer or action. That means you have to carefully phrase questions in a way that prompts users to complete the tasks but leaves enough room for them to find out how to do it. Make sure you're not asking them things like "Click here" or "Go to that page." Even if the intention is good, these types of instructions bias the results.

Another best practice to keep in mind is to encourage participants to explore the product or prototype as it naturally comes to their mind. There is no wrong or right answer—the goal of usability testing is to understand how they experience your product and improve accordingly. As a moderator, you should clearly explain that the thing being tested is the product, not the user.

I know the strong desire to leap to a user’s aid, defend your design, and convince the world you’re right. It’s okay to let users struggle, to accept moments of silence or thought, and let a design fail a test. Interrupting that process might be one of the worst things you can do for your design.

Taylor Palmer, Product Design Lead at Range


Examples of moderated usability tests are lab tests, guerrilla testing, card sorting, user interviews, screen sharing, etc.

The benefits of moderated usability testing

One of the biggest advantages of moderated usability testing is control. Since these tests are guided, you're able to keep participants focused on completing the tasks and answering your questions. If you're conducting the experiment in a lab, you're able to control for environmental factors and make sure those don't skew your results.

Most importantly, the biggest benefit of moderated tests is that you're able to ask the participant follow-up questions about why they did something. In an unmoderated test you're sometimes left guessing, so having the chance to dive deeper into an issue or question with participants helps you uncover learnings about user behavior and pain points.

By moderating the usability tests, I can observe body language and understand the pain points better. I also get to know the user, build trust, and show that their insights are truly important to the team and me.

For example, if you're running a moderated session to test a new user interface for a check-out process—and notice a participant struggling with a part of the process—you can ask them what they thought about the process, how they would improve it, and why they struggled to use the product. Such opportunities rarely arise in an unmoderated session, making moderated usability testing essential if you're looking to get rich, qualitative insights from your users.

The disadvantages of moderated usability testing

Moderated tests require investment, both in terms of resources like a tool or a lab to organize the tests, but also an investment of time. Moderated usability testing sessions take time to plan, organize, and run, as each individual session needs to be facilitated by a researcher or someone with experience in the field.

With these constraints, your pool of possible participants may also shrink. Finding participants to come to your lab or join a user interview call can be a hassle, so usually you can only collect qualitative feedback from a small group. For those reasons, moderated user tests work best at the start of the UX design process, usually when doing formative research.

When to run moderated usability testing

Moderated tests work best at the initial stages of the design process, as they allow you to dig deeper into the experience of the participants, and get early feedback to inform the overall direction of the design.

You can run moderated usability tests with low- to mid-fidelity prototypes or wireframes to collect users' opinions, comments, and reactions to a first iteration of the design. At this stage of the process, you'll usually be testing the information architecture or the layout of the webpage, or running focus groups to research whether your solution works with real users.

By asking usability testing questions before, during, and after the test, you and your team can uncover insights that help you make better design decisions.

As you move through the design process, you can continue doing moderated usability tests with users after each iteration, and based on the results, design the final iteration.

Unmoderated usability testing

An unmoderated usability test happens without the presence of a moderator. The participant is given instructions and tasks to complete beforehand, but there's no one present as they're completing the assigned tasks.

Unmoderated user tests happen mostly at the place and time of the participant's choosing. Similar to moderated testing, you can run an unmoderated test either in-person or remotely. Depending on your resources, one might be better than the other. We look at remote vs. in-person usability testing in more detail in the next chapter.

Examples of unmoderated usability tests are first-click tests, session recordings, eye-tracking, 5-second tests, etc.

Maze allows you to run unmoderated tests with unlimited users. Get started by importing your prototype into Maze.

The benefits of unmoderated usability testing

One of the advantages of unmoderated usability testing is that it has a lower cost and quicker turnaround. The obvious reason for this is that you don't have to hire a moderator, find a dedicated lab space, and look for test participants who are willing to come to your lab.

Along those same lines, unmoderated remote tests are a bit more advantageous, as the participant can complete the assigned tasks at the time and place of their choosing. The benefit of having a participant complete tasks in an uncontrolled environment is that it more closely resembles how someone would use your product naturally, yielding more realistic results.

Last but not least, one of the biggest benefits of unmoderated usability testing is the ability to collect results from a larger sample size of test participants. Because you don't have to moderate each session, measuring usability metrics or running A/B tests is much easier in an unmoderated environment.

Unmoderated tests are useful for large-sample studies, such as quantitative testing, or when I need the results quicker.

Vaida Pakulyte, UX researcher and Designer at Electrolux


With unmoderated remote usability testing, you can collect results in hours or even minutes. Plus, testing with a global user base in different time-zones is possible with unmoderated usability testing.
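Because unmoderated testing makes large samples cheap, it pairs naturally with the A/B testing mentioned above. Here's a minimal Python sketch, with invented counts, of the standard two-proportion z-test you might run on completion rates from two design variants:

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """z-statistic for comparing task success rates between two variants."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pool the two samples to estimate the standard error under the
    # null hypothesis that both variants perform equally well.
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical unmoderated A/B test: 152/200 participants completed
# checkout on design A, 121/200 on design B.
z = two_proportion_z(152, 200, 121, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

With samples of this size, a difference of 76% vs. 61% gives z ≈ 3.3, comfortably past the usual 1.96 threshold; with five participants per variant, the same gap would tell you almost nothing.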

The disadvantages of unmoderated usability testing

On the other hand, unmoderated usability testing has a couple of drawbacks you'll need to keep in mind when choosing this method. Since unmoderated tests happen without your presence, they can limit the types of insights and data you gather, as you won't be there to delve into users' actions in real time.

When to run unmoderated usability testing

When you need to collect quantitative data, consider doing an unmoderated usability test. In those scenarios, you're looking for statistically relevant data, so testing a large sample size is faster and easier with unmoderated tests.

Additionally, unmoderated usability testing works best towards the end of the product development process. When you finish designing a final solution, you can run an unmoderated test with a high-fidelity prototype that resembles the final product. This will ensure the solution works before you move on to the development process.

Another use case for unmoderated usability testing is measuring the performance of tasks within the product. Typical examples are signing up, subscribing to the newsletter, or creating a new project. For each of these tasks, you can run quick unmoderated usability tests to make sure the user flow is intuitive and easy to use.

Unmoderated tests are usually run remotely, using a combination of prototyping and testing tools. These allow you to create a test based on a prototype, share links to the test with participants, and even hire testers from a specialized testers’ panel.

6 additional usability testing methods


Lab usability testing

In a lab usability test, participants attempt to complete a set of tasks on a computer or mobile device under the supervision of a trained moderator who observes their interactions, asks questions, and replies to their feedback in real-time.

As the name suggests, lab usability testing takes place in a purpose-built laboratory. Typically, this has two rooms divided by a one-way mirrored window to allow note-takers and observers to watch the test without being seen by the participants. Sessions can also be recorded for later review and in-depth analysis.

One main advantage of lab usability testing is that it provides extensive information on how users experience your product. Since there is a moderator, you can collect more qualitative data to get in-depth insights into users’ behaviors, motivations, and attitudes.

On the downside, this type of testing can be expensive and time-consuming because you need a dedicated environment, test participants, and a moderator. Also, it usually involves a small number of participants (5-10 per research round) in a controlled environment, with the risk that results won't reflect your wider user base.

When to use lab usability testing: If you’re looking to maximize in-depth, extensive feedback from participants

Contextual inquiry

The contextual inquiry method involves observing people in their natural contexts, such as their office or home, as they interact with the product the way they would usually do. The researcher watches how users perform their activities and asks them questions to understand the reasons behind those actions.

As the research takes place in the users’ natural environment, this method is an excellent way to get rich, reliable information about users—their work practices, behaviors, and personal preferences. These insights are essential at the beginning of a project when evaluating requirements, personas, features, architecture, and content strategy. However, you can also use this method after a product release to test the success and efficiency of your solution.

There are four main principles of contextual inquiries:

  • Context : Interviews are conducted in the user's natural environment
  • Partnership : The researcher should collaborate with the user, observe their actions, and discuss what they did and why
  • Interpretation : During the interview, the researcher should share interpretations and insights with the user, who may expand or correct the researcher's understanding
  • Focus : It's important to plan for the inquiry and have a clear understanding of the research goal

When to use contextual inquiries: If it’s important that results reflect an organic scenario in a user’s real-world circumstance

Remote usability testing

Remote usability testing is a research method where the participant and the researcher are in different locations. In a remote test, the participants complete the tasks in their natural environment using their own devices. Sessions can be moderated or unmoderated and can be conducted over the phone or through a usability testing platform like Maze.

Among the main benefits of remote usability testing, there's the opportunity to recruit a large number of participants coming from different parts of the world. Also, this method is faster and cheaper than in-person testing. However, you'll have less control over the test environment and procedure overall. That is why it's crucial to choose the right tools and devices.

When to use remote usability testing: If you want to test a large group of users, or it’s important to test in multiple user locations


Phone interview

A phone interview is a remote usability test where a moderator instructs participants to complete specific tasks on their device and give feedback on the product. All feedback is collected automatically as the user’s interactions are recorded remotely.

Phone interviews work best to collect feedback from test participants scattered around different parts of the world. Another benefit is that they are less expensive and time-consuming than face-to-face interviews, allowing researchers to gather more information in a shorter amount of time.

When to use phone interviews: If you need to gather information from participants in different locations, in a short amount of time

Session recording

Session recordings are a fantastic way to see exactly how users interact with your site. They use software to record the actions that real, anonymous visitors take on a website, including mouse movement, clicks, and scrolling. Session recording data can help you understand the most interesting features for your users, discover interaction problems, and see where they stumble or leave.

This type of testing requires a software recording tool such as Maze. To get the most out of your results, you should also consider combining session recordings with other testing methods. In this way, you can gather more insights into why users performed certain actions.

When to use session recordings: If you want to see how users naturally interact with and navigate your product

Guerrilla usability testing

Guerrilla usability testing is a quick and low-cost way of testing a product with real users. Instead of recruiting a specific targeted audience to participate in the research, participants are approached in public places and asked to perform a quick usability test in exchange for a small gift such as a coffee. The sessions usually last between 10 and 15 minutes and cover fewer tasks than some other approaches.

This usability testing method is particularly useful to gather quick feedback without investing too much time and resources and works best at an early stage of the design process when you need to validate assumptions and identify core usability issues. However, if you need more finely-tuned feedback, it’s better to complement your research with other methods.

When to use guerrilla testing: If you’re looking for a low-cost testing method to gather results quickly

Moving forward

That’s it for our chapter on usability testing methods. Head on to our next chapters for in-depth breakdowns of remote usability testing and guerrilla testing, plus how to write your test questions.


Frequently asked questions about usability testing methods

What is usability testing?

Usability testing is the process of evaluating how easy to use and intuitive a product is by testing it with real users. Usability testing usually involves getting participants to complete a list of tasks while observing and noting their interactions to identify usability issues and areas of improvement.

What are the benefits of usability testing?

One of the benefits of usability testing is that you can fix usability issues before launching your product and make sure you create the best possible user experience. Ultimately, ensuring users can accomplish their goals smoothly will lead to long-term customer success.


What are the different types of usability testing methods?

Quantitative or qualitative: Quantitative testing measures users’ performance on a given task, such as the percentage of users who completed it. Qualitative testing involves observing users to understand how they experience your product and why they performed certain actions.

Moderated or unmoderated: In a moderated usability test, a moderator guides the participants through the test. In an unmoderated test, the participants receive instructions and tasks to complete beforehand, but there's no one present as they accomplish the tasks.

Remote or in-person: Remote usability tests happen when the participant and the researcher are in separate locations. In-person tests take place in a testing lab or office under the guidance of a moderator.

Other types of usability testing methods include guerrilla testing, lab usability testing, contextual inquiry, phone interviews, and session recordings.


Usability Testing: Its Methods, Types, Examples, And Best Practices

Learn more about usability testing, including methods, real-world examples, challenges, best practices, and more for a comprehensive understanding.


Usability testing is a testing technique for evaluating the user experience of software applications by allowing real users to test the application. It is an approach to identifying issues that users may encounter when testing a web application or a mobile app and collecting feedback about enhancing its usability.

Functional testing verifies that the software aligns with its specified requirements and operates correctly, whereas usability testing concentrates on evaluating the user experience and interface.

In this tutorial, we will explore usability testing in detail, walking through its various types, methodologies, strategies, key concepts, and best practices. Let us start by understanding what usability testing means.

  • What is Usability Testing?

Usability testing is a method organizations use to gain direct insight into how real people interact with software applications while performing tasks based on the functionalities of the website or app. It is a qualitative research approach that helps identify usability issues and assess whether the software application is user-friendly and follows modern web design.

In the testing world, 'usability' denotes a quality attribute that evaluates how easy a user interface is for a given user, covering the effectiveness, efficiency, and satisfaction with which specific users achieve their objectives. Assessing it during the design phase yields valuable insights before a website or app is fully developed.

Over the years, as competition has increased, organizations have actively researched and invested in understanding usability. Adequate design and content are no longer enough; an engaging, intuitive, and responsive user experience that an average person can use to achieve specific goals has become just as important. To address this, QAs perform usability testing to check the user-friendliness of the software application.

The primary objectives of usability testing are to ascertain whether software applications are easy to use and to find ways to enhance interaction design. This process involves testing prototypes of software applications with real users, who provide feedback on their experience. Conducting these tests in person is essential, as it is challenging to replicate the exact conditions under which users will use the software application, such as time constraints, distractions, or environmental factors.

It provides valuable information about what users require from an interface, which can guide developers in creating a new interface. Additionally, it offers insights into user behavior while interacting with interfaces, informing future modifications to existing interfaces.

Usually, the tests use learnability, memorability, efficiency, satisfaction, and errors as parameters (a small scoring sketch follows the list below).

  • Learnability: This parameter evaluates how easily a new user can perform tasks successfully on a software application for the first time. It measures the intuitiveness and ease of learning of the interface.
  • Memorability: This parameter assesses how easily users can recall how to use the software application after some time away. It indicates how well the design helps users remember how to navigate and perform tasks on subsequent visits.
  • Efficiency: This parameter measures how quickly and effectively users can accomplish tasks once familiar with the software application. It gauges the speed of task completion, indicating how well the interface design supports users' workflow.
  • Satisfaction: This parameter captures users' comfort and overall positive experience when using the software application. It evaluates users' emotional responses to the design, including aesthetics, ease of use, and whether the interface meets their expectations.
  • Errors: This parameter assesses the frequency, severity, and nature of errors users encounter while interacting with the software application. It aims to identify usability issues, understand the root causes of errors, and determine how to address them to improve the user experience.
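To make these parameters concrete, here is a minimal Python sketch of how raw observations from a test could be rolled up into them. The record fields (first_visit, completed, and so on) are assumptions for illustration; adapt them to whatever your own test log captures:

```python
# Sketch: computing the five usability parameters from (made-up)
# per-session test records.

from statistics import mean

sessions = [
    # first_visit=True -> a participant's first session (learnability);
    # first_visit=False -> a return session (memorability)
    {"first_visit": True,  "completed": True,  "seconds": 95,  "errors": 2, "satisfaction": 4},
    {"first_visit": True,  "completed": False, "seconds": 180, "errors": 5, "satisfaction": 2},
    {"first_visit": False, "completed": True,  "seconds": 60,  "errors": 0, "satisfaction": 5},
]

def completion_rate(rows):
    return sum(r["completed"] for r in rows) / len(rows)

learnability = completion_rate([s for s in sessions if s["first_visit"]])      # 0.5
memorability = completion_rate([s for s in sessions if not s["first_visit"]])  # 1.0
efficiency   = mean(s["seconds"] for s in sessions if s["completed"])          # mean time on task
errors       = mean(s["errors"] for s in sessions)                             # avg errors per session
satisfaction = mean(s["satisfaction"] for s in sessions)                       # e.g., 1-5 survey scale

print(learnability, memorability, efficiency, errors, satisfaction)
```

Tracked over successive test rounds, these numbers show whether design changes are actually moving the parameters in the right direction.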

Note: According to ISO 9241, usability refers to the degree to which specified users can utilize a product to accomplish particular goals effectively, efficiently, and satisfactorily within a defined context.

Why perform Usability Testing?

With rapid technology growth, organizations heavily rely on their online presence to achieve their objectives. With many websites available, users have plenty of alternatives, and if a website proves challenging to navigate, they are likely to move on quickly. Therefore, it's essential to prioritize user-friendliness through usability testing.

One common mistake is delaying usability testing until the end of the development lifecycle. This oversight can lead to late detection of bugs or issues, increasing costs and project delays. Early detection allows for prompt rectification, saving time and resources.

Apart from web application testing , mobile app testing is also gaining significance, as users expect seamless performance from their apps. A user-friendly app enhances customer engagement and retention. Providing clear instructions during onboarding is crucial to prevent user frustration. Mobile app usability testing is instrumental in understanding and optimizing the user experience.

Usability testing serves three primary objectives:

  • Problem Identification: Through comprehensive review and research, it helps uncover challenges and obstacles within the software application.
  • Opportunity Discovery: Analyzing results reveals new opportunities for enhancing website and application functionality and user experience.
  • Behavioral Insights: It facilitates a deeper understanding of the behavior and preferences of target users, enabling organizations to tailor their products to better meet user needs and expectations.

Along with these three primary objectives, let us look at the advantages and disadvantages below.

Advantages:

Usability tests are performed with users who have no prior experience with the product, so their responses are unbiased and help pinpoint where modifications are required.

  • It verifies that the software application meets users’ expectations.
  • It helps discover usability bugs before the final release.
  • It helps create a highly effective and efficient website and improves the end-user experience.
  • Even for a large-scale release, involving a fresh set of eyes makes it easier to catch and rectify minor errors.
  • It saves cost and time: executed from the initial stages, it detects poor UI experiences early so you can build something more suitable.
  • It gives a clear and broader picture of what users do on your site and why they take those actions.

Implementing usability testing results improves the satisfaction and consistency of the software application.

Disadvantages:

  • It can be time-consuming and costly, especially when utilizing paid services. Allocating time for implementing design changes based on received feedback is crucial, so dedicate time and resources to this activity from the outset.
  • It can be challenging to act on feedback: even when following best practices, there's no assurance that changes will enhance the user experience, especially with complex software applications.
  • It's a non-functional test, relying on manual testing of a software application, making the process more laborious than automated tests and demanding considerable time and effort from the QA team to test and analyze the data.
  • It is unrealistic to expect completely accurate results: outcomes are debatable because randomly selected users never fully represent real-life usage, which may compromise the results.
  • It requires a specific level of expertise to conduct user testing for a software application, potentially necessitating external help if you lack familiarity with the process, which adds to the overall cost and time investment.

When performing usability testing, it's helpful to consider the 5 E's; in the below section, let us learn more about these 5 E's in detail.

  • 5 E's of Performing Usability Testing

User experience is widely acknowledged as one of the most critical factors in the success of an online business. Consequently, usability testing has become an indispensable practice for all organizations. It enables a deep understanding of the target audience, allowing businesses to uncover their needs, behaviors, challenges, and preferences by directly involving them in evaluating software applications. Skipping this crucial step can significantly impact your conversion rate, ultimately determining the success or failure of your business.

Here are the 5 E's of usability testing:


  • Engagement: It measures how captivating your application is and how much time users spend on it. The application should get even the smallest details right (design elements, micro-interactions, chatbots, etc.) to elevate the user's experience.
  • Effectiveness: It ensures the application delivers on its core purpose and that the enabled features work as intended.
  • Efficiency: It monitors how your navigation works and how concise and well-structured the design layout is. Efficiency is often measured in keystrokes; the fewer they are, the faster users can achieve their goals.
  • Error Tolerance: It checks how errors surface and how quickly they can be resolved, i.e., how fast and efficiently your application lets users reverse their mistakes.
  • Ease of Learning: Its goal is to make the application easy for new and returning users to learn and remember.

Suppose you're planning to launch an online streaming (OTT) platform, a category that has quickly become a favorite pastime. Given the complexity of OTT platforms, with multiple filters and searchability tabs, it's crucial to conduct thorough usability testing to ensure an optimal user experience.

For this purpose, you'll guide users through a series of tests:

  • Search Functionality: Test the effectiveness of searching by genres or language preferences.
  • Sharing Feature: Evaluate how well the sharing function works.
  • Save, Upvote, And Downvote Buttons: Assess the functionality of these buttons for user interaction.
  • Performance Across Network Speeds: Test the application performance under various network conditions.
  • Sign-in and Sign-out Processes: Ensure seamless functionality of these essential user actions.
  • "Remember Me" Feature: Check the functionality of the Remember Me option for user convenience.
  • Language Selection: Evaluate the feasibility of choosing and continuing in the preferred language.

These questions will serve as a guide for users during usability testing. Conduct qualitative remote testing to evaluate performance across diverse locations, devices, operating systems, and networks. Geolocation testing can provide valuable insights for OTT platforms, where content availability may vary due to geographical restrictions or licensing agreements. By assessing how the platform performs in different regions, you can ensure a natural and seamless user experience tailored to each user's location.
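One task from this plan, performance across network speeds, can even be scripted. The sketch below uses Selenium's set_network_conditions API (available for Chromium-based browsers) to reload a page under different throttling profiles; the profile values and the URL are illustrative assumptions, not recommendations:

```python
# Sketch: timing page loads under throttled network profiles with
# Selenium's Chromium-only network-conditions API.

import time
from selenium import webdriver

PROFILES = {  # illustrative values; throughput is in bytes per second
    "slow-3g": {"latency": 300, "download_throughput": 96_000,    "upload_throughput": 32_000},
    "wifi":    {"latency": 5,   "download_throughput": 3_750_000, "upload_throughput": 1_875_000},
}

driver = webdriver.Chrome()
for name, profile in PROFILES.items():
    driver.set_network_conditions(offline=False, **profile)
    start = time.time()
    driver.get("https://your-ott-platform.example")  # hypothetical app under test
    print(f"{name}: page loaded in {time.time() - start:.1f}s")
driver.quit()
```

Combined with manual observation of the streaming experience, this gives a repeatable baseline for the network-speed portion of the test.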

  • Who Does Usability Testing?

A team of experts with experience in user experience (UX) design and related tests is essential to conduct adequate testing. Here are the key groups involved in performing usability tests:

  • UX Designers: Responsible for creating user interfaces for software applications, UX designers focus on ensuring user-friendliness and intuitive navigation. They play a crucial role in crafting designs that prioritize user experience.
  • Usability Engineers: These professionals specialize in designing and conducting usability studies. Based on their findings or research, they gather user feedback, analyze results, and collaborate with other team members to enhance the overall user experience.
  • Product Managers: Product managers oversee the development process and ensure the software application meets its specified requirements and objectives. They are pivotal in prioritizing features and functionalities that enhance user satisfaction and usability.
  • Developers: While developers are primarily responsible for coding and programming the software application, they also play a vital role in usability testing. By participating in usability tests, developers gain insights into the performance of the software and can address any issues or optimize functionalities accordingly.
  • Quality Assurance (QA) Team: The QA team is tasked with testing the functionality and performance of the software application being developed. They rigorously test for bugs, inconsistencies, and usability issues, providing valuable feedback to developers for early-stage rectification.

Note: Fix bugs, inconsistencies, and usability issues with better testing infrastructure. Try LambdaTest Now!

Now that we are familiar with what usability testing is, who does it, and what its 5 E's are, let's dig deeper by understanding how it differs from user testing in the section below.

  • Usability Testing vs. User Testing

While usability and user testing may appear similar and share a common ultimate objective, they use distinct approaches. Below, we will outline the differences to provide a comprehensive understanding of usability testing and user testing:

| Aspect | Usability Testing | User Testing |
| --- | --- | --- |
| Ultimate objective | Evaluating users' needs within the context of an existing software application, even at prototype stage | Assessing the desirability of a specific software application for users |
| Focus | More software application-focused | Entirely user-focused |
| Approach | Examines how users interact with and accomplish tasks using an existing software application | Asks whether users want a specific software application |
| Purpose | Improving the usability and functionality of an existing software application | Identifying the type of software application that would benefit users |
| Observation | Evaluates user interactions and task completion with the software application | Measures user preferences and desires |
| Timing | Applied to existing software applications, including those at prototype stage | Often used for new software application ideas or concepts |
| Emphasis | Software application usability and functionality | Software application desirability |
  • Usability Testing vs. Accessibility Testing

Having compared usability testing and user testing, you might also assume accessibility testing is similar to usability testing. It is not. Below are the key differences between usability testing and accessibility testing:

| Aspect | Usability Testing | Accessibility Testing |
| --- | --- | --- |
| Focus | Ensures the application is easy and intuitive to use | Ensures the application is accessible to people with disabilities |
| Purpose | Verifies the user-friendliness of the software application | Measures the extent of accessibility an application can provide |
| Methodology | Identifies and fixes usability issues for a seamless user experience | Follows the WCAG (Web Content Accessibility Guidelines) to analyze accessibility strategically |
| Key considerations | Flexibility, learnability, functionality, and industrial design | Robustness, understandability, operability, and perceptibility |
| User involvement | Involves real users to test and evaluate user-friendliness | Requires reviewing website code against WCAG guidelines, with knowledge of CSS, HTML, and JavaScript |
| Common tools | Optimizely, Crazy Egg, Userlytics, Qualaroo, Usabilla, UserFeel | Google Lighthouse, Wave, Dynomapper, Accessibility Checker, Axe Chrome plugin |
  • Usability Testing in Agile Development

Performing usability tests within Agile methodologies enables the timely detection of bugs or issues at every stage of the software development process. To facilitate this, create an easy-to-understand, informative usability testing plan so the entire team knows what needs testing and why. Developing a test script that outlines the steps of the usability test is advisable, as it guides the team throughout the process and leads to better test results.

Agile usability tests often involve repetitive tasks, which can be addressed using templates such as Test Notes, Testing Plan, and Finding Table. These templates streamline the development process of software applications, making it more efficient.

Get access to all the test case templates to enhance your test planning process and effectively avoid repetitive tasks. These templates enable smoother navigation through test scenarios , ensuring comprehensive coverage and efficient execution.

Agile usability testing also enables the evaluation of user experience based on QA metrics at various stages of the Software Development Life Cycle (SDLC). It involves collecting quantitative information on user experience, such as the number of clicks on a submit button or the average purchase time, through surveys and analytics.

  • Real-World Usability Testing Examples

Usability testing is critical to evaluate the effectiveness and user-friendliness of software applications. Below are three real-world examples highlighting its diverse applications and benefits:

Usability Testing Example #1: Zara

An illustrative instance of usability testing comes from Zara, the renowned clothing brand. A user's frustration with the Zara app's shopping experience prompted usability studies to identify common challenges and enhance the interface.

Usability Test:

Guerrilla usability testing was chosen as the primary method. The researcher engaged seven regular Zara shoppers at a mall for the test.

Tasks included:

  • Search for a winter coat suitable for the recent cold weather.
  • Evaluate a coat to determine its suitability.

The researcher gathered valuable information about users' behavior on the app. They reviewed recordings to pinpoint the application's main usability issues. Utilizing affinity mapping, they categorized problems into groups like search/filter, cart edits, picture carousel, dropdown menu, and others. The prototype underwent testing with seven additional users to ensure no usability issues remained.

Usability Testing Example #2: Quora

The second usability study example is of Quora, a popular Q&A social network. The objectives include identifying usability issues on the Quora website, uncovering improvement opportunities, and gaining insights into user interactions with the platform.

Rangga Ray Irawan conducted the test during the COVID-19 pandemic, opting for remote usability testing. Five users with diverse backgrounds participated in the testing session.

Participants were initially posed with a few pre-test questions about their use of social media, Quora, opinions, and standard demographic information. The test comprised nine tasks designed to simulate real-life scenarios of users engaging with the platform and testing its key features.

An Example Task Was:

You're a beginner in stock investing and unsure which stocks suit beginners, so you decide to ask on Quora. Demonstrate how you would ask a question on Quora. Following the test, participants answered post-study interview questions.

The overall success rate of the test was approximately 71%, with 45 task attempts and 10 outright failures.

Uncovered usability problems included:

  • Icons and buttons needed to be larger and easier to identify.
  • Confusing terms such as "downvote."
  • The "find room" button did not meet users' expectations.

Researchers devised solutions to these problems, detailed in the Quora Usability Testing Case Study.

Usability Testing Example #3: McDonald's

In 2016, McDonald's in the UK launched its mobile ordering app and enlisted user testing to uncover issues and evaluate the user-friendliness of their interface. SimpleUsability, a behavioral research company, managed the study.

A total of 15 usability sessions were conducted, supplemented by a survey completed by 150 participants who interacted with the McDonald's app.

Using the gathered information, SimpleUsability identified several notable usability issues. These included poorly designed CTAs, communication breakdowns between the app and the restaurant, and a need for more personalized options for specific orders. Recommendations were provided to enhance the application's UI.

The above examples underscore the importance of conducting usability testing during the development process to address issues and gain valuable insights. By analyzing the results and identifying potential usability challenges early on, teams can make informed decisions to improve the user experience before the software is fully built and released.

This proactive approach helps resolve issues promptly and ensures that the final product effectively meets user expectations and requirements.

  • Usability Testing Types

Understanding how users interact with digital products is essential for optimizing their experience. Various methods assess user interactions, uncover issues, and enhance overall usability. Let's explore the main dimensions along which usability tests differ.

Moderated vs. Unmoderated

In a moderated process, a facilitator guides participants through testing, providing assistance and instructions as needed. On the other hand, in an unmoderated process, participants complete tasks independently without direct guidance from a facilitator. Below are the differences between moderated and unmoderated for better understanding.

| Aspect | Moderated | Unmoderated |
| --- | --- | --- |
| Facilitator presence | Administered with a facilitator present (on-site or virtually) | Done without direct facilitator presence, often remotely or with minimal involvement |
| Interaction and queries | The facilitator interacts with participants, answers queries, and asks follow-up questions | Queries may be addressed afterwards but lack real-time interaction, limiting the depth of insight |
| Testing environment | Typically performed in controlled environments like labs | Can occur on participants' own devices, allowing testing in real-world scenarios |
| Depth of results | Results tend to be in-depth due to real-time interaction and facilitator guidance | May still yield comprehensive results but can lack the depth gained from real-time facilitation |
| Cost and time | May be more expensive and time-consuming due to facilitator involvement and specialized environments | Generally more cost-effective and faster due to reduced facilitator involvement and fewer resource requirements |
| Participant recruitment | Participants are usually selected through direct interaction or facilitator recruitment | Participants may be recruited via third-party software or platforms without direct facilitator interaction |

Remote vs. In-person

Remote testing conducts usability evaluations remotely, typically through online platforms or software, while in-person testing involves in-person assessments in a physical location. Below are the differences between remote and in-person for better understanding.

| Aspect | Remote | In-person |
| --- | --- | --- |
| Control over the test environment | Limited control, as participants test remotely | Full control in a controlled setting |
| Website accessibility | Participants access and analyze the website remotely; locally hosted builds may be unreachable | Can test locally hosted websites that aren't reachable from participants' own devices |
| Scalability | Allows a large number of tests to be conducted simultaneously | Conducting tests in large numbers can be expensive and logistically challenging |
| Geographical coverage | Enables checking websites across multiple geographical locations | Restricted to testing at a specific site |
| Device familiarity | Tests run on participants' familiar devices, giving them freedom to explore | Participants may feel restricted on unfamiliar devices |
| Technical glitches | Technical glitches can occur and are harder to resolve remotely | If technical glitches arise, the device can be swapped quickly in person |

Qualitative vs. Quantitative

Qualitative research or analysis focuses on understanding experiences, behaviors, and phenomena through subjective interpretations rather than numerical measurements. Quantitative research or analysis focuses on measuring and analyzing numerical data to draw objective conclusions and make statistical inferences. Below are the differences between qualitative and quantitative for better understanding.

| Aspect | Qualitative | Quantitative |
| --- | --- | --- |
| Focus | Gathers insights, opinions, and subjective feedback | Gathers numerical data and objective metrics |
| Methods | Interviews, observations, and open-ended surveys | Task completion rates, error rates, time on task, etc. |
| Emphasis | Understanding the "why" behind user behavior | Statistical analysis and data-driven decision-making |
| Insights | Detailed insights into user perceptions and attitudes | Precise measurements of usability metrics |
| Example | Understanding user preferences through in-depth interviews | Measuring task completion rates and error rates during usability testing |
  • Usability Testing Techniques

Several usability testing methods are designed to yield behavioral or attitudinal insights, and many can uncover both. Let’s discuss some of the popular usability testing techniques:

  • Performance testing
  • Card sorting
  • Tree testing
  • 5-second test
  • Eye tracking

Performance Testing

In performance testing, users engage in tasks using the system, often through a combination of methods: user interviews, observation of interactions, and post-experience debriefs. Depending on the chosen approach, observations, notes, and usability testing questions may be administered before, during, or after the session.

This testing provides valuable qualitative insights, requiring careful planning before initiation. Moderating a performance test can be approached in four ways, each demanding careful consideration.

  • Concurrent Thinking (CT): Participants express their thoughts during the session, detailing actions, reasons, and feelings about the results.
  • Retrospective Thinking (RT): Participants complete tasks and later share their insights, providing information about thoughts and feelings, although potentially missing fine distinctions.
  • Concurrent Probing (CP): Participants are prompted for details about their experience performing tasks, including expectations, reasons for actions, and feelings about results.
  • Retrospective Probing (RP): Participants' notable actions or statements during tasks are gathered and later discussed. This approach is often combined with Concurrent Thinking (CT) or Retrospective Thinking (RT) to capture subtle aspects without distracting participants from task completion.

Card Sorting

Card sorting is a method used to evaluate the usability of information architecture. Users receive cards labeled with the names and brief descriptions of primary items/sections and organize them based on perceived connections between the items, either creating and naming their own groups (open card sorting) or sorting the cards into predefined categories (closed card sorting). Users may also be prompted to merge cards into larger groups and assign names to those groups.

This technique aligns the information architecture with users' thought processes rather than structuring a site or app according to a designer's understanding. Implemented early in the design phase, card sorting proves beneficial, offering cost-effectiveness and avoiding the need for later structural adjustments, thus saving time and resources.
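Analysis of an open card sort usually starts by counting how often participants placed two cards in the same group; pairs grouped by most participants suggest categories that match users' mental models. A minimal Python sketch, where the cards and the data shape (one list of groups per participant) are made up for illustration:

```python
# Sketch: card-by-card co-occurrence counts from an open card sort.

from collections import Counter
from itertools import combinations

sorts = [
    [{"coat", "scarf"}, {"checkout", "cart"}],   # participant 1's groups
    [{"coat", "scarf", "cart"}, {"checkout"}],   # participant 2's groups
]

pair_counts = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```

The same counts can feed a full similarity matrix or dendrogram when the number of cards grows.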

Tree Testing

Tree testing is a follow-up to card sorting, yet it can be performed independently. In tree testing, a visual information hierarchy, or "tree," is created, and users are instructed to complete a task using this hierarchy. For instance, users might be asked, "To achieve X with this product, where would you navigate?" The goal is to observe how easily users can locate their desired information.

This technique is valuable early in the design process and can be executed with paper prototypes, spreadsheets, or digitally using tools like TreeJack.
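Because a tree test boils down to walking a hierarchy, results can be scored programmatically even when the tree lives in a spreadsheet. A small Python sketch, with an invented tree and target path for illustration:

```python
# Sketch: scoring tree-test answers against a nested-dict hierarchy.

tree = {
    "Home": {
        "Shop": {"Men": {}, "Women": {}},
        "Support": {"Returns": {}, "Contact": {}},
    }
}

def path_exists(tree, path):
    """True if `path` is a valid walk from the root of `tree`."""
    node = tree
    for step in path:
        if step not in node:
            return False
        node = node[step]
    return True

target = ["Home", "Support", "Returns"]   # the correct location
answers = [                                # one clicked path per participant
    ["Home", "Support", "Returns"],        # success
    ["Home", "Shop", "Men"],               # failure
]

successes = sum(a == target and path_exists(tree, a) for a in answers)
print(f"task success: {successes}/{len(answers)}")  # -> task success: 1/2
```

Low success on a task like this points to labels or groupings in the hierarchy that don't match users' expectations.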

5-Second Test

In the 5-second test, users are exposed to a specific section of the software application (typically the top half of a screen) for five seconds. Subsequently, they are interviewed to gather their thoughts regarding the software application's purpose, main features, intended audience, brand trustworthiness, and perceptions of usability and design.

This test can be executed in person or remotely using tools such as UsabilityHub.

Eye Tracking

While eye tracking may seem relatively recent, it has been used for a while, with advancements in tools and technology. Eye tracking alone doesn't determine usability but complements other usability testing measures.

In eye tracking, the focus is monitoring where users' eyes land on the designed screen. The significance lies in ensuring that elements drawing users' attention convey crucial information. Although challenging to perform without dedicated software, numerous tools like Crazy Egg and Hotjar simplify the process.

Now that we are familiar with usability testing techniques, let's look at the strategies teams use to shape the testing process.

  • Usability Testing Strategies

Usability is an area of expertise for UX/UI designers and developers, and teams collect the necessary details about a website's usability using various strategies:

  • A/B Testing: Also known as split testing, this is an experimental analysis method where two versions of a website or its components (such as color, text, or interface differences) are compared to determine which performs best, and therefore which version should be used on the website (a minimal significance-check sketch follows this list).
  • Hallway Testing: It is conducted with individuals who have no prior experience or knowledge of the product. This approach yields unbiased responses and helps identify fundamental issues early, though sessions can occasionally be unproductive.
  • Expert Review: It involves selecting skilled professionals to assess a website's or software application's usability. This method is efficient and rapid compared to other types of testing, as experts can quickly identify loopholes and flaws. However, it is more costly due to the requirement for skilled personnel.
  • Automated Expert Review: Automation testing is a technique where automated scripts are executed using automation testing frameworks and tools. These frameworks provide a structured approach to automating the testing process, allowing for efficient and repeatable testing of software applications. Automation testing helps increase test coverage, reduces the time and effort required for testing, and identifies defects early in the development lifecycle.
  • Moderated Usability Testing: It focuses on obtaining live feedback from participants. This method allows moderators to observe users' reactions, hear their live comments, and answer their questions in real time. This interactive approach increases participant engagement and interest in the study.
  • Unmoderated Remote Usability Testing: An efficient method where users independently carry out tasks and report their experiences in real time; the data gathered during testing is then analyzed. It is typically conducted remotely without human moderators, making it simpler, faster, and easier to manage.
  • Surveys: A widely used method for conducting usability tests, surveys use questionnaires with various question patterns for users to answer, providing developers with the necessary information. They offer scalability, allowing data collection on a large scale, which aids comprehensive analysis.
  • User Persona: It involves creating a fictional representation of an ideal consumer, reflecting their goals, features, and attitudes. This approach focuses on understanding user needs and expectations, ensuring that development aligns with user-centric objectives. Developers analyze various aspects influencing users and incorporate them into the development process to meet user expectations effectively.
  • Eye-Tracking: It captures users' unconscious and conscious experiences while using a website, tracking the eye's motion, movement, and position. This data is then analyzed in real-time to provide immediate feedback on users' reactions, allowing for insights into their interaction with the interface.
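For the A/B testing strategy above, the usual way to decide whether one version genuinely outperforms the other is a two-proportion z-test on the conversion counts. A self-contained Python sketch with made-up numbers; in practice, fix the sample size and significance level before the experiment starts:

```python
# Sketch: two-proportion z-test for an A/B experiment (invented counts).

from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # B beats A at the 0.05 level if p < 0.05
```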
  • Usability Testing Phases

Usability testing is crucial in ensuring a product meets user needs and expectations. It involves evaluating a product by testing it with representative users to uncover usability issues and gather feedback. Here are the different phases involved while performing usability testing:

  • Plan The Test
  • Recruit Participants
  • Prepare Materials
  • Set Up Environment
  • Conduct The Tests
  • Analyze Data
  • Report Results


  • Plan The Test: Define your goals clearly before initiating tests; established goals will guide the selection of appropriate test types. The objective is not just executing tests but determining the essential functionalities and objectives of the system. Be specific about your goals so objectives stay aligned, and choose your test format accordingly.
  • Recruit Participants: Finding the right participants is crucial for successful usability testing. Identify a diverse user mix that matches your target audience's demographic (age, sex, etc.) and professional profile (education, job, etc.). Select candidates who can relate to the issues you're trying to address, and screen the desired number of testers accordingly. Be careful not to recruit biased participants just to meet response quotas, as this hinders your ability to gather honest feedback and criticism.
  • Prepare Materials: Conduct usability tests early in the software development lifecycle to identify potential challenges or changes that could become costly later. Be precise about the functionalities and features whose performance you want to verify. Early testing allows for timely adjustments and optimizations, reducing the risk of costly revisions later.
  • Set Up Environment: Testers and developers must plan the layout and design of the environment where the tests will be conducted. Document everything according to the defined purpose, including decisions on recording tests and on tracking and executing tasks.

Moderators should follow the same document in each user session to ensure consistency and impartiality. Decide on the usability testing method, whether lab testing (in-person at your offices) or remote testing (where participants can log in from anywhere). Choose the style that aligns best with your goals and objectives.

  • Conduct The Tests: Execute the tests in a quiet room or distraction-free environment to minimize bias. Avoid influencing participants' responses by asking leading questions or expressing opinions.

For example, refrain from asking questions like "Do you like the flow of subcategories?" Instead, focus on observing participants' natural reactions. Before the final tests, run a dry run to ensure everything is set up correctly. During the tests, avoid seeking feedback and instead focus on observing participants' responses.

  • Analyze Data: Collaborate with your team to thoroughly analyze all data collected from the usability test. Use the findings to generate meaningful hypotheses and actionable recommendations for improving the overall usability of your software application. Stay open-minded toward the test results, as they may highlight potential issues and areas for improvement, and use these insights as a starting point for refining future software versions (a small prioritization sketch follows this list).
  • Report Results: Present the key insights from the data analysis to relevant stakeholders across your team. Provide actionable recommendations for improving the design based on the findings. This report will guide the next steps for enhancing your website's usability and user experience.
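Bridging the analyze and report phases, teams commonly rank findings by severity and by how many participants hit them, so the report leads with what matters most. A minimal Python sketch; the severity scale (1 = cosmetic, 4 = blocker) and the sample issues are invented:

```python
# Sketch: prioritizing usability findings for the results report.

findings = [  # invented findings from a 15-participant study
    {"issue": "CTA label unclear",         "severity": 3, "affected": 9},
    {"issue": "Cart edit option hidden",   "severity": 4, "affected": 6},
    {"issue": "Carousel arrows too small", "severity": 2, "affected": 11},
]
total_participants = 15

ranked = sorted(findings, key=lambda f: (f["severity"], f["affected"]), reverse=True)
for f in ranked:
    share = f["affected"] / total_participants
    print(f"[S{f['severity']}] {f['issue']}: affected {share:.0%} of participants")
```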

Now that we have a better understanding of usability testing, its types, methods, strategies, and process, let us delve further into usability testing tools.

  • Tools for Usability Testing

In this section, we'll explore various software and resources used to assess the effectiveness and user-friendliness of digital interfaces. The following are some of the most popular usability testing tools that streamline the process of remote testing. Choose the one that best fits your requirements.

LambdaTest

LambdaTest is an AI-powered test orchestration and execution platform that lets you run manual and automated tests at scale across 3000+ real devices, browsers, and OS combinations. It enables team collaboration across time zones, offering a centralized area for testing activities.

This platform has a convenient feature that allows it to record tests and generate detailed test results. It allows for easy tracking of the testing process and enables easy sharing of results among team members. Leveraging LambdaTest for usability testing proves to be cost-effective by eliminating the need for an in-house lab and reducing operational costs.

Through the utilization of LambdaTest, teams can improve their remote testing capabilities, streamline collaboration, and ensure a testing process that is both efficient and effective.

Based on your requirements, you can also use the LambdaTest platform to test mobile apps on cloud-based Android Emulators and iOS Simulators.

You can subscribe to the LambdaTest YouTube Channel and stay updated with the latest Selenium testing, Cypress testing, Playwright testing, and more tutorials.

This innovative, easy-to-use tool allows you to track your customers' scroll and click behavior on your website or mobile app. It provides features like User Recordings, A/B testing , and Heatmaps. With filters, you can easily understand detailed customer segments & use cases, while the recording will help you learn an individual's journey through your website. Evaluate how they engage with your site, experience it the user's way, and discover the shortcomings where they get stuck.

Lookback

It is a reliable remote usability testing tool, handy when you have your pool of testers. It provides both moderated and unmoderated usability tests, along with interview features. Lookback allows you to track users' interactions with your website effortlessly.

The tool supports note-taking on participant actions and enables direct insights recording within the app. Lookback promotes team collaboration by allowing researchers to invite others from the product team to observe the test, leave comments, and tag colleagues on an internal hub without interrupting the participant's progress.

UserZoom

It is a UX research solution designed for larger enterprises. It analyzes and enhances your website's UX, helping create software applications that resonate with users. UserZoom offers moderated and unmoderated usability testing options, featuring surveys, card sorting, tree testing, and click testing.

Loop11

It is another noteworthy online tool that facilitates website usability testing, A/B testing, and prototype testing. It leverages GPT-4 for AI-powered summaries, audio transcripts, and reports. It allows your team to blend tasks and questions in tests for quantitative and qualitative results. Additionally, it provides various video, audio, and screen recording features, making it suitable for unmoderated usability testing. Loop11 also offers a participant pool for recruitment.

UserTesting

It is a comprehensive platform offering diverse remote usability testing services and solutions. They present specialized packages with customized tools for UX and product designers, marketers, executives, and design teams. Depending on the chosen tier or package, features may include mobile and website testing, prototype testing, integration and collaboration tools, card sorting, tree testing, etc.

This tool is known for its extensive and unique features that allow users to dig deep into test response data based on details about each participant in every study. Its advanced feature can help set tests up in minutes, and results can be concluded in hours. One of its kind, it is a platform that records the webcam, audio, and screen of the user's device while also permitting an infinite number of annotations, users/admins, testers per session, highlight reels, and the number of concurrent studies.

This automated research platform is designed to help teams collect valuable insights from their users at scale. It prompts site visitors to answer surveys and helps teams understand their users in real time. It relieves teams of the burdens that come with user research. It can be deployed from mobile to web apps to use features like advanced targeting, dynamic insight reporting, sentiment analysis, and more.

One of its kind, this platform allows you to report “crash trends” to help research which user actions cause your application to crash. It provides visual reports such as session replays, heatmaps, error reports, form and funnel analysis, and more to help brands see and understand the individuals behind their numbers. It allows you to build multiple dashboards for different users or departments in your organization. Apart from that, the platform records participants' behavior in detail and offers several different ways of processing that information. Users can see what participants did and how by recording a chronological breakdown of participants’ actions within the app.

...

  • Common Mistakes in Usability Testing

While performing usability testing, testers are expected to observe, report, and analyze the application to search for potential defects. However, they can make mistakes that can be costly and time-consuming to fix. Therefore, here are a few common usability testing mistakes you must avoid.

  • Lack of Proper Planning: Planning is one of the most critical aspects of the testing process, requiring active involvement and contributions from every team member to ensure effectiveness. Proper planning entails mapping out all stages of the testing phase meticulously. Without this essential step, usability testing may fail to yield results, whether in identifying critical bugs or enhancing the user experience.
  • Misinterpretation of the Goal: Many testers misunderstand usability testing, thinking it solely focuses on improving an application's look, feel, and design. However, the goal extends beyond aesthetics; the primary objective is to assess how users interact with the software and identify areas where they experience frustration, whether due to design elements or functionality issues. Understanding this broader objective is crucial for conducting effective testing.
  • Testing with Incorrect Audience: The testing process involves engaging with the application's users. Testers sometimes conduct usability testing with friends or co-workers to expedite the process. However, this approach often yields results that lack proper validation. Therefore, it's essential to conduct thorough screening before selecting users. If clients provide users, ensure they are equipped with precise requirements specifying the types of users to choose and those to avoid.
  • Last Moment of Testing: A significant percentage of projects fail, often because bugs are only identified at the final moment. While technical issues may contribute, many failures stem from inadequate usability testing. Therefore, it is crucial to conduct usability testing ideally from the development phase onwards. This approach allows for informed decision-making throughout the project, leading to faster, more accurate identification of usability issues and increased productivity.
  • Creating Interruptions: The goal is to gather feedback from end users to develop a more desirable software application. It's important to note that users should not be pushed to follow specific guidelines or forced to complete tasks in any way. Allowing users to explore the software application independently is preferable if they are comfortable doing so. While instructions and attention to specific features may be provided as needed, avoiding interrupting their exploration process is essential.
  • Conducting Only One Test: To maximize effectiveness, it's crucial to conduct tests at multiple stages throughout development. This approach saves time by identifying potential problems early, making the development process more efficient. For instance, fixing a bug encountered during the design phase is more effortless than addressing it after completion. Regular testing throughout the development lifecycle ensures a smoother and more successful outcome.
  • Missing Pilot: Pilot testing is crucial, allowing designers to evaluate a software application at its early stages of development, typically using prototypes or mockups. Conducting pilot tests helps identify and address potential issues before the final software application is completed. This proactive approach helps prevent last-minute planning errors during the final testing procedure, leading to a smoother and more effective testing process.
  • Improperly Designed Tasks: The design of test tasks significantly impacts test outcomes. It is crucial to design each task carefully so participants understand precisely what they need to do; proper guidelines streamline the testing process.
  • Testing the Wrong Potential Solutions: Make sure tests target the right things: the performance of new features, whether interface elements work, and tasks that matter to all types of users. Approached this way, usability testing becomes a valuable exercise that contributes to better software applications and increased revenue by making it easier for customers to accomplish their goals.

In the next section, we will learn how to perform usability testing on a cloud platform like LambdaTest.

  • How to Perform Usability Testing?

There are two approaches to performing usability testing on your website.

  • Set up an in-house physical device lab, which can be costly and comes with on-premise challenges and scalability issues.
  • Utilize a cloud platform, eliminating the need for an in-house lab and instantly reducing operational costs.

We will perform usability tests over the LambdaTest cloud platform. However, before starting the testing process, you will need to follow the steps below:

  • Create a LambdaTest account. You can log in to the LambdaTest dashboard directly if you already have an account.
  • From the dashboard left menu, click on Real Time.


  • Click on Desktop under the Web Browser Testing option from the left menu.
  • Enter the URL, select the operating system, browser, browser version, and resolution, and click Start.


Upon selecting browser-OS combinations, a new virtual machine will open, allowing you to test your software application for usability issues.


When performing usability testing, you have several options available from the left-side menu. You can use Mark as Bug to capture and report any bugs you encounter to your team members. You can also utilize Record Session to document all website activities. Moreover, you can switch the browser version, operating system, and resolution using the Switch option from the same menu to simulate different environments for testing purposes.
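The same cloud grid also lets you script the walkthrough. As a hedged sketch (the hub URL pattern and "LT:Options" capabilities follow LambdaTest's public Selenium documentation, but verify the exact keys against their capability generator), a Selenium 4 session could look like this:

```python
# Sketch: running scripted usability tasks on a LambdaTest cloud browser.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

USERNAME, ACCESS_KEY = "your-username", "your-access-key"  # from your LambdaTest profile
HUB = f"https://{USERNAME}:{ACCESS_KEY}@hub.lambdatest.com/wd/hub"

options = Options()
options.browser_version = "latest"
options.platform_name = "Windows 11"
options.set_capability("LT:Options", {   # verify keys in the LambdaTest docs
    "build": "usability-run-1",
    "name": "checkout-task",
    "video": True,                       # record the session for later review
})

driver = webdriver.Remote(command_executor=HUB, options=options)
driver.get("https://your-app.example")   # hypothetical app under test
# ... walk through your scripted usability tasks here ...
driver.quit()
```

The recorded video then doubles as a session recording you can review alongside your notes.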


  • Challenges in Usability Testing

Teams may encounter challenges that could lead to test failure. It's crucial to be aware of these challenges to address them proactively before they escalate into serious issues in the usability testing process.

  • Securing the Appropriate User Sample

In larger organizations, it's common to have a random group of individuals selected based on availability for the test. However, meaningful insights can only be gained when the user sample consists of individuals who intend to use the application. An inappropriate sample could distort the results and mislead the design direction.

  • Recruiting the Right Testers

Involving expert testers who professionally test websites may uncover fundamental usability issues, but they may fail to provide insight into how your actual users perceive the content's usefulness; inappropriate testers are even less likely to do so.

  • Define Appropriate Tasks

To gauge the effectiveness of the site and the value derived from your research project, testers must explore your site realistically. Assigning predetermined tasks may not capture a user's perspective on how well the site meets their goals: while you can assess their ability to complete those tasks, the tasks may not align with what users genuinely want to accomplish.

  • Secure Organizational Support

Understanding users' behaviors and perspectives, which can be challenging to articulate, may be overlooked without adequate support from your organization. Without this support, insights from testing may go unaddressed, and implementing changes may face significant resistance, hindering progress in improving the user experience.

  • Addressing Overconfidence

Stakeholders may act as subject matter experts (SMEs) while positioning themselves as end users, leading to potential biases in decision-making. There's a risk of becoming too comfortable making decisions on behalf of users and assuming knowledge of their preferences. While stakeholder input can clarify workflow complexities, validating ideas through user testing is essential to ensure alignment with actual user needs and preferences.

  • Best Practices for Usability Testing

Testing with real users allows you to compile the data needed to identify usability issues, improve the design, and ensure it's easy to use. Here are some of the best tricks and practices to adhere to when validating your software application with real users, which can help you avoid potential pitfalls and ensure a successful testing process.

  • Test Early: Testing early in the development process is crucial as it allows the team to make changes quickly; delaying testing has a greater impact on the quality of the software application. There's no need to wait for a prototype or final product; testing can begin as soon as the idea is conceived.
  • Stick To Common Design Elements: When conducting a usability test, ensure consistency in design elements across the platform to facilitate easy navigation for users.
  • Establish Evaluation Criteria: Utilize your knowledge of the product while maintaining an open mind to define benchmark criteria for determining the website's success.
  • Ensure Your Content Is Unambiguous: Make your content clear and easy to understand by crafting straightforward and neatly organized messaging.
  • Remember, It’s A Constant Loop of Learning: Launching a product is often seen as a linear process, starting with research, then prototyping, and ending with testing. However, it's crucial to recognize that it's an iterative process. Testing should occur in every phase to ensure success for the team.
  • Be Inclusive: To gain diverse perspectives, conduct usability testing with a wide audience. However, prioritize testing with the target audience who will benefit most from your website's solutions.
  • Be Mindful: Respect your participants' time, as they are investing their valuable time in the testing process. Avoid overly lengthy tests, as they may leave less time for valuable feedback. Similarly, avoid overwhelming participants with too many questions, as it may result in less precise data.
  • Testing Environment: Before deploying your software application, ensure thorough testing across various environments to assess its performance and compatibility with diverse audiences.
  • Think Quality Over Quantity: Usability testing doesn't require many users; instead, focus on testing at each phase with a targeted group. This approach allows for testing at multiple stages, saving data analysis time while yielding valuable insights.
  • Solve One Bug At A Time: Trying to solve everything at once is practically impossible. Instead, fix one issue at a time, prioritizing the most critical ones first. It is a constant learning process, so fix issues to the best of your ability, ship the software application, learn from the feedback, and iterate accordingly.

Performing usability testing is a great way to discover unexpected bugs, identify what is unnecessary or unused before the product reaches actual users, and get unbiased opinions from outsiders.

Though often considered expensive and time-consuming, usability testing is well worth running before final implementation. It doesn't have to happen at a large scale: an internal team or a close group can run tests on different prototypes, from rough ideas to fully functioning software applications. At the end of each test, always ask participants for recommendations (be polite and take feedback positively).

Remember to include your quality assurance team in usability testing. It gives them a fresh perspective on users, shows them how users actually operate the product, and surfaces ways to bridge the gap between what users need and what the software application can do. Users benefit as well, since quality assurance insights help them learn how to engage with features. This continuous loop will help you build a better software application and understand what your target audience needs.


Usability Testing Methods: the Ultimate Guide


One of the fundamental functions of a product or service is solving user pain points and addressing their needs. Theoretically, we all get that, right? We all understand that a product needs to provide its users with value while also being intuitive and easy to use. 

But how does one know that their product satisfies the criteria above? Usability testing is a practice that a wide array of companies need to start investing their time in.

To many, this is still an untrodden path, which is why we've decided to put together a comprehensive usability testing guide. We'll discuss the entire process: how to ask better questions, user recruitment, the tools you need for usability testing, and much, much more.

Let’s get right into it!


What is usability testing?

Before we start exploring the subtleties of the practice, let’s look into its fundamental functions first. 

What does this type of testing aim to achieve in the first place? Essentially, its goal is to assess how usable a particular piece of design is.

Generally speaking, usability testing comes in two types: moderated and unmoderated. Moderated sessions are guided by a researcher or a designer, while the unmoderated ones rely on users’ own unassisted efforts. 

Moderated tests are an excellent choice if you want to observe users interact with prototypes in real-time. This approach is more goal-oriented — it lets you confirm or disconfirm existing hypotheses with more confidence. 

On the other hand, unmoderated usability tests are convenient when working with a substantial pool of subjects. A large number of participants allows you to identify a broader spectrum of issues and points of view. 

However, it’s important to underline that testing isn’t that black and white. It’s best to look at this practice as a spectrum between moderated and unmoderated testing. 

Sometimes, during unmoderated sessions, we like to nudge our subjects in the right direction through mild moderation when necessary.

Why do you need usability testing?

Testing our prototypes can provide us with a wide array of insights. Fundamentally, it helps us spot flaws in our designs and identify potential solutions to the issues we’ve uncovered. 

We learn about the parts of our product that confuse or frustrate our users. By disregarding this step, we open ourselves up to releasing a product that causes too much friction, increasing the chances of an unsuccessful release.

We can boil down the main benefits of usability testing to the following: 

It allows us to identify usability issues early in the design process. As a result, we can address them promptly. Fixing usability problems in a live product is considerably more expensive and time-consuming. It puts your product at risk and your company through unnecessary financial stress ;

It validates the product and solution concepts. Plus, it helps us answer some essential questions about the product: “Is it something users need?” and “Is the current design solution the optimal way for users to go about solving this problem?”;

Not only does testing help us spot problems, but it also allows us to find better solutions quicker. The insight we extract from these tests will enable us to quickly brainstorm, prototype, and validate concepts at the very early stages;

It’s important to also mention the things that this type of testing is not suitable for:

Identifying the emotions and associations that arise from your prototypes;

Validating desirability;

Identifying market demand & doing market validation ;

Why you need usability testing: identify issues early, save time and money, and validate your assumptions.

The grand scheme of things

Let’s take a bird’s eye view of the Design Thinking model to better understand the value usability testing brings to the table. Below, you’ll find an illustration of how agile ideas should be executed . 

Let’s do a quick recap of the process: 

We start by gaining an empathetic understanding of the user’s problem we’re trying to solve via design;

Once we’ve identified the problem, it’s essential to define it as best as we can;

We brainstorm a broad spectrum of solutions to that problem; 

We then shortlist the most feasible and practical solutions and leverage them to create rough prototypes;

In turn, these prototypes are then tested on actual users to validate our solutions . This brings us to usability testing. This is one of the ways you validate your design; 

Design thinking diagram illustrating the path from empathy to definition to ideation to prototyping and, ultimately, testing

When should it be conducted?

Usability testing is useful in multiple phases of the design process and many contexts. What’s certain is that it should be conducted early on and at least a few times—as a result, ensuring that our decisions are founded in empirical evidence.

The way we see it, it’s essential to distance testing from simply fishing for bugs later in the design process. That’s QA, not UX. 

The most valuable part of it isn’t just identifying problems. Instead, we’re looking to learn from our mistakes and use this information to create superior iterations of our designs.

Here are a few common scenarios when testing usability is essential: 

Any time we develop a prototype for a solution;

At the beginning of a project that has a legacy design. This allows us to get an idea of the initial benchmarks;

Validating our assumptions about metrics. When your tools indicate unsatisfactory metrics and there could be multiple (often conflicting) reasons for it, usability testing sessions help identify the root cause of the problem;

Usability Testing Algorithm

One of the most critical components of a successful usability assessment is a detailed plan. This document will allow you to prescribe the goals, number of participants, scenarios, and metrics for your usability testing sessions . 

Traditionally, usability experts collect this information from the product owner and/or other stakeholders. The experts then draft a preliminary version of the usability testing plan and submit it for feedback. As is customary in UX circles, the process is iterative.
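To make this concrete, here is a minimal sketch of what such a preliminary plan might capture, written as a plain Python structure; the field names and sample values are illustrative rather than any standard:

```python
# A minimal usability test plan skeleton (illustrative fields, not a standard).
test_plan = {
    "goals": [
        "Assess whether first-time users can complete checkout unaided",
    ],
    "hypotheses": [
        "At least 4 of 5 participants complete checkout in under 3 minutes",
    ],
    "participants": {"count": 5, "profile": "first-time buyers, ages 25-45"},
    "scenarios": [
        "You need a gift under $40 for a friend. Find one and buy it.",
    ],
    "metrics": ["task completion rate", "time on task", "error count"],
}
```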

Usability testing algorithm: 1. Formulate a plan. 2. Recruit users. 3. Conduct at least 5 interviews. 4. Document the findings

Let’s take a more detailed look at the process of drafting a test plan.

1. Formulate a plan

While running usability tests without a clear scope can provide us with some insight, it can also end up being a burden. 

Start with your goals

Tests that don’t seek to validate or invalidate specific hypotheses can often produce lots of chaotic data. As a result, we fail to connect the dots due to the incoherent nature of the information we’ve collected. 

Start by establishing clear goals and identifying the hypotheses you're looking to test.

Flesh out your hypotheses

It can be argued that the quality of our usability assessment partly depends on the quality of the hypotheses we’ve developed. Here are a few necessary steps we recommend going through when drafting them: 

Bear in mind that a hypothesis is an assumption that can be tested;

Document your assumptions about the product’s usability. Think about the outcome of your test;

Formulate hypotheses based on your assumptions and their potential outcomes;

Establish the outcomes and metrics that will validate your hypotheses;

Test your hypotheses during usability sessions; 

Create scripts with goals and hypotheses in mind

Once you’ve developed your hypotheses, use them to formulate your research questions. Try to break them down into logical divisions and write your questions around them. Here are a few essential things to keep in mind: 

Make sure that your questions are specific and answerable. It’s imperative to eliminate any possible confusion and ambiguity in the subjects’ answers; 

Your questions need to be rooted in your hypotheses and must address them directly. Asking questions that are remotely connected to them could make your analysis more complicated and less precise; 

Ask as many questions related to a hypothesis as necessary. Clarity and precision are key;

Once formulated, make sure that the answers to all of your questions provide you with “the big picture.”;

Make sure that your questions leave little wiggle room. Qualitative research is often subject to bias. Provide subjects with the necessary guidance to make them comfortable with answering your questions truthfully; 

Protip : It’s essential not to overwhelm users with too many questions. This is often the case when we commit to too many goals. By broadening the scope of your usability testing, you’re risking diminishing the quality of your data and insight. 

2. Recruit your users

An important part of usability studies is working with users who are representative of your user personas. Focus on recruiting people who overlap significantly with your potential user demographics. Look for similar goals, aspirations, attitudes, and so forth.

Failing to do so might result in misleading data, which renders your usability test pointless. Of course, this depends on the nature of your product as well. Specialized products demand a specialized audience and vice versa. 

Where do I look for users? 

There are multiple sources of subjects for your usability tests, depending on the type of product you're looking to test. Below, you'll find the list of platforms we use to find suitable users for our tests:

Ideally, the product team should be able to help you find users. In turn, you could also ask whether the interviewed person has any acquaintances that fit your user profile. If that doesn’t work for you, consider the resources below;

Facebook and LinkedIn groups — works for both B2B and B2C;

Upwork — works better for B2C than B2B since business owners and stakeholders rarely look for odd jobs;

respondent.io — This tool is generally best-suited for finding busy users that are hard to come by;

userinterview.com — This tool boasts a large pool of potential respondents, making it easy to find just the ones you need. It should work for both B2B and B2C.

Do I need incentives?

Incentivizing your users is a common practice in usability testing.

Typically, the higher the requirements and more challenging they are to come by (CEOs, executive board members, senior software engineers, etc.), the higher the incentive you’d generally need to make it worthwhile.

Some platforms (such as Upwork or respondent.io) allow you to pay users directly, but you can also consider gift cards. Another common practice is providing your respondents with a premium version of your product if it’s something they might find useful. 

The economics of test incentives

According to a study published in 2003 by the Nielsen Norman Group, the average per-user cost was about $171. The economics of test incentives haven't changed dramatically since then.

Sometimes companies don’t offer monetary incentives to participants. They can be remunerated with gift cards, coupons, and so forth.

The study above mentions that the average compensation for external test subjects was about $65 per hour. The United States West Coast averages a little over $80 per hour.

The discrepancy between “general population” and highly-qualified users is something worth noting as well. The latter are typically paid around $120 per hour, while the former are rewarded with approximately $30 per hour.

The advent of user testing platforms has made it significantly easier for companies to reach users, allowing businesses to pay somewhat less for users' time.
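As a rough back-of-the-envelope exercise, here is how figures like those above translate into a study budget; the hourly rate echoes the study cited earlier, while the session length and recruiting fee are hypothetical placeholders:

```python
# Rough usability-study budget estimate using the hourly rate cited above.
# The session length and platform_fee_per_user are hypothetical placeholders.
participants = 5
session_hours = 0.75           # 45-minute sessions
hourly_incentive = 65          # average external-subject rate from the study
platform_fee_per_user = 40     # hypothetical recruiting-platform fee

incentives = participants * session_hours * hourly_incentive
total = incentives + participants * platform_fee_per_user
print(f"Incentives: ${incentives:.2f}, total with recruiting: ${total:.2f}")
# Incentives: $243.75, total with recruiting: $443.75
```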

3. Run usability interviews

Before you run the actual tests, there’s plenty of prep work that needs to be done. In this section, we’ll take a closer look at the optimal number of tests that need to be conducted, tools, questions, and so forth. 

How many tests? How many people?

Back in 2000, the Nielsen Norman Group posted yet another essential article on strategizing usability testing . The piece reports on their findings on the optimal number of users necessary to test a product's usability. 

“Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford.”

— Jakob Nielsen

A study conducted by Jakob Nielsen and Tom Landauer illustrates the number of usability issues you can identify with a varying number of users. 

Their report suggests that with just 3 users, you'll be able to uncover over 60% of usability problems. Beyond a handful of users you hit diminishing returns: each additional participant uncovers fewer new problems, so your ROI continuously decreases.

By testing 5 people, you’ll be able to uncover 85% of usability issues, which is generally considered to be the sweet spot in terms of price-to-value ratio. 
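This finding follows from a simple model: with n test users, the share of problems found is approximately 1 − (1 − L)^n, where L is the proportion of usability problems a single user uncovers (about 31% in Nielsen and Landauer's data). A quick sketch reproduces the figures above:

```python
# Nielsen & Landauer's model: share_found = 1 - (1 - L)**n,
# where L ~= 0.31 is the share of problems one user uncovers.
L = 0.31

for n in range(1, 9):
    share_found = 1 - (1 - L) ** n
    print(f"{n} users -> {share_found:.0%} of problems found")

# 3 users -> ~67%, 5 users -> ~84%: close to the figures cited above,
# with clearly diminishing returns after the fifth user.
```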

Rather than investing in one or two elaborate tests, approach design iteratively. We recommend running 5 tests for a given set of tasks. 

This number provides you with the most insight from the fewest people. You could run more tests than that, but doing so would take a toll on your finances while providing little additional value.

What tools should I use?

There is great variety when it comes to tools. The type of products you’ll be using depends on the kind of tests you’re looking to conduct. 

You can always opt for platforms like Maze and UserTesting that specialize specifically in testing and research. However, if that’s not essential — Zoom, Skype, Google Meet are a more cost-efficient option. 

One vital feature that needs to be taken into account is the recording function. You’ll want to have your sessions recorded and saved to allow you to access them at all times, thus minimizing bias. 

More importantly, it will allow you to better empathize with your users in the long run. 

4. Remote or in-person interviews?

At the time of publishing this article, we’re still in the midst of the COVID-19 pandemic. So, it’s safe to assume that we won’t and shouldn’t be doing any in-person interviews anytime soon. 

Let’s take a second to explore the benefits and drawbacks of remote and face-to-face testing. 

In-person interviews

While lab testing typically demands more work, it’s considered to yield a broader range of data. The fact that you’re there with the user ensures that you can closely observe their body language and more subtle reactions. Remote testing has its limitations in this regard.

Respectively, this allows us to better understand our users’ behavior and dig for deeper insights. Plus, the fact that we’re in the user's immediate presence helps us ask the right questions at the right time. 

Going remote

Given that we are a globally-distributed team at the Adam Fard Studio, we’re all for remote testing. We consider it more advantageous for a variety of reasons. 

First off, it’s much easier to recruit users and run tests. Testing platforms provide companies with immense databases of users categorized in a wide array of parameters. Plus, the large size of the user bases allows you to conduct usability tests concurrently and around the clock. 

Secondly, we believe that tests have to be conducted in a “natural environment,” which is specifically at a person’s device, not in a sterile lab. 

Last but not least, it’s a more cost-effective way of conducting tests. There are simply fewer expenses compared to driving a person to an on-site lab. It takes up less of their time, which results in a smaller cost per conducted test. 

5. What questions should I ask?

It’s important to underline that usability interviews are more than just the questions you ask during the test. Typically, interviewers need to ensure proper communication before and after the test as well. 

Before the interview

“Test” is generally a triggering word. We associate it with our knowledge or understanding of something being assessed, so it's imperative to stress to participants that this is not the case here. Underline the idea that you're testing designs, not users, and make them feel comfortable by mentioning that there are no wrong answers.

We like to kick things off with some small talk. It allows users to relax and be more comfortable with the entire process. 

There are a variety of useful questions that can be asked before the interview. Generally, they can be grouped into two categories: background and demographic questions. 

Background questions have to do with the users' interaction with products similar to yours and their preferences regarding them.

Demographic questions allow us to identify potential usability trends across ages, genders, income groups, and so forth. Needless to say, we need to ask them in a thoughtful and careful manner.

During the interview

Given that you’re looking to learn about your users’ experience, there are two central things to keep in mind: 

Don’t ask leading questions;

Ask questions that encourage open-ended answers; 

Leading questions are extremely harmful to a usability test. It’s safe to say that they can pretty much invalidate your results, rendering the test pointless. To make sure that doesn’t happen, we should always invest enough time and effort to formulate questions that are neutral and open.

Here are a few examples of questions you should avoid:

What makes this experience good?

How intuitive was the interface?

Was the product copy easy to understand?

It’s common knowledge that people want to be polite to other people. Such loaded questions will involuntarily skew the users’ opinions on the experience. As a result, that will stop you from extracting real insight from the test. 

Here are a few examples of questions we’d recommend asking your users:

What’s your opinion on the navigation within the product?

How did you feel about using this particular feature?

Would you prefer to perform this action differently?

What would you change in the product’s workflow? 

After the interview

Once the test comes to an end, it’s always a good idea to ask some general questions about the person’s experience with the product. Feel free to ask more open-ended questions that will allow you to extract even more insight from the test.

Is there anything we didn’t ask you that you’d like to share your opinion on?

How did you like the experience in general on a scale from 1 to 10? 

What are the things you enjoyed most/least about the product? 

How long should the sessions last?

Generally, 30-40 minutes per session is a good ballpark. Try not to go beyond one hour. Fatigue can be a factor when it comes to the objectivity of the user’s experience. Furthermore, longer sessions typically result in greater amounts of data, making it more complicated to analyze.

Document your findings

If you’ve conducted a session that was recorded, go through it once again and document the answers, the user’s body language, etc. It’s essential to explore any linguistic and behavioral peculiarity that can bring you closer to more in-depth insight. 

Using spreadsheets or tools like Miro can help you structure your findings in a meaningful manner: for example, one row per observation, with columns for the task, the issue observed, the participant, and a severity rating.


Plus, you can also extract quantitative data from usability tests by quantifying the frequency of certain events. 
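As a minimal sketch of that quantification, assuming session notes have been transcribed into (participant, issue) pairs, you could tally how many distinct participants hit each issue:

```python
# Tally how many distinct participants ran into each observed issue.
from collections import defaultdict

# (participant, issue) observations transcribed from session recordings,
# assuming 5 sessions in total. The data below is hypothetical.
observations = [
    ("P1", "missed the search field"),
    ("P2", "missed the search field"),
    ("P2", "confused by 'Submit' label"),
    ("P4", "missed the search field"),
]

participants_per_issue = defaultdict(set)
for participant, issue in observations:
    participants_per_issue[issue].add(participant)

for issue, who in sorted(participants_per_issue.items(),
                         key=lambda kv: -len(kv[1])):
    print(f"{len(who)}/5 participants: {issue}")
```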

What’s next?

Now that the bulk of the work is done, it’s time to analyze the data you’ve collected and extract insight. At this stage, we typically create usability reports that allow us to communicate our findings and devise a plan of action.

Define the issues

Look into the data you've collected and define the most critical issues users came across. It's always a good idea to complement each issue with a clear description, as well as when and where it occurred.

Prioritize the issues

Not all usability problems are equal. This is the phase where you prioritize them based on their criticality. As a result, your team’s time will be invested responsibly by favoring the most pressing issues and pushing less important ones further in the backlog. 

Consider prioritizing each issue by weighing its severity against how many participants it affected.

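One common heuristic, sketched below with hypothetical issues, is to score each issue for severity, count how many participants it affected, and rank by the product of the two:

```python
# Rank usability issues by severity x frequency (a common prioritization heuristic).
issues = [
    # (description, severity 1-4, participants affected out of 5)
    ("Checkout button hidden below the fold", 4, 4),
    ("Ambiguous 'Submit' label on contact form", 2, 3),
    ("Logo not linked to the homepage", 1, 2),
]

ranked = sorted(issues, key=lambda i: i[1] * i[2], reverse=True)
for description, severity, affected in ranked:
    print(f"score {severity * affected:>2}: {description}")
```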

The bottom line

Usability testing plays a crucial role in creating an excellent user experience. Fortunately, more and more companies recognize its value, and it seems like this practice is gradually becoming the new norm. 

However, as we’ve mentioned previously, conducting usability tests just for the sake of conducting them will yield little useful data. 

By following the principles above, you’ll be able to get a better understanding of your users, their preferences, and the shortcomings in the current iteration of your design. As a result, you’ll be steadily moving towards creating a useful product that’s also a pleasure to use. 

Good luck! 


Usability Testing Case Studies: Validate Assumptions and Build Software with Confidence

We define usability and examine some usability testing case studies to demonstrate the benefits.  

As we’ve said before, one of the most important benefits of software prototyping is the early ability to conduct usability testing. The truth of the matter is that no one will use your product if it’s not easy and intuitive or if it doesn’t solve a problem that users have in the first place.

The easiest way to make sure your software project meets these requirements is with usability testing, and the most effective way to implement usability testing early in the development process is with a prototype .

What Is Usability Testing?

Usability testing is the process of studying potential end-users as they interact with a product prototype. Usability testing occurs before you develop and launch a product, and is an essential planning step that can guide a product’s features, functions and purpose. Developing with a clear purpose and research-based data will ensure your goals and plans are in alignment with what an end user wants and needs, and as a result that your product will be more likely to succeed. Usability testing is a type of user research, and like all user research is instrumental in building more informed products that contribute to a business’ long term success.

Intentionally observing real-life people as they interact with a product is an important step in effective user experience design that should not be missed. Without usability testing, it’s very difficult to determine or validate that your product will provide something people are willing to pay for. Companies that don’t invest in this type of upfront testing often create products that are built around their own goals, as opposed to those of their customers, which do not always align. People don’t simply want products just because they exist, and users sometimes approach applications in unexpected ways. Thus, usability testing is key for confidence building during product development.

In this post, we look at a few usability testing examples to illustrate how the process works and why it’s so essential to the overall development process.


User Testing Case Studies

Usability Testing Case Study #1: Cisco

Usability Testing for User Experience

We worked with Cisco’s developer program group to craft a new, more immersive user experience for Cisco DevNet, their developer resources website. Their usability case study illustrates how we tackled their challenge, and the instrumental role that an effective prototyping strategy played in the process.

The Challenge

The depth and breadth of content on Cisco’s DevNet had spawned hundreds of micro-sites, each with different organizational structures and their own navigation paradigms. Existing visitors to the site would only visit a few specific pages, meaning they were never exposed to newly released tools and technologies. Also, new visitors struggled to discover where to begin or how to find the resources most relevant to them. Users were missing out on a lot of valuable resources, and the user experience was less than ideal.

ClickModel® Usability Testing

Cisco wanted to implement a new user experience for the DevNet homepage, making it easier to dive from the homepage deep into the site's resources to find information on a particular tool or technology. We were charged with prototyping the proposed user experience so that Cisco could conduct usability testing with developer focus groups. To build our prototype, we implemented our ClickModel tool.

At Praxent, prototyping the user experience allows stakeholders and users to give feedback before the software development process begins.

Confidence to Move Forward with Development

The ClickModel prototype emulated the new site that would appear to users. The prototype prompted insightful feedback from the developer focus groups regarding both the proposed information architecture and the priority and placement of various navigational elements on the homepage and subsequent interior landing pages. The prototype also made it easier to collect feedback on the utility of a proposed color-coding scheme for sorting resources into major technology categories.

This feedback and testing allowed Cisco’s DevNet project to course correct in the Structure, Skeleton, and Surface areas before they spent significant money building in the wrong direction. Cisco took their prototype in-house and moved forward decisively and with confidence to create better resources for the developer community.

DeveloperProgram.com runs developer programs for some of the world’s largest technology and telecoms companies. We rely on our partner Praxent who understands our business, our clients, the developer’s needs, and are able to articulate that into a portal design that is easy to navigate and understand, with the foresight to create an infrastructure that allows for untethered growth. The design team is a pleasure to work with, quickly comprehending our needs and converting that to tangible deliverables, on time and always outstanding.

— Steve Glagow, Executive Vice President • DeveloperProgram.com

Usability Testing Case Study #2: NORCAL

Responsive Data Displays with Usability Testing

In the wake of a corporate merger, NORCAL, a provider of medical professional liability insurance, was looking to build a new online portal. The portal would allow their insurance brokers to review their book of business and track which policyholders were behind on payments. Their billing department was inundated with phone inquiries from brokers who needed information about specific policyholder accounts, which was hindering their ability to attend to important billing tasks.

NORCAL’s insurance brokers are constantly on the go, so it was crucial that the proposed portal not just be accessible on smartphones and tablets, but be optimized specifically for use on those devices.

A native app solution was discussed, but NORCAL determined early on that they wanted to invest in a responsive web application that could be accessed on desktops and mobile devices by both their internal teams and brokers in the field.

Prototyping to the Rescue

The primary user experience challenge tackled during the engagement was how to display complex data tables in a way that would be equally useful on large desktop screens and handheld smartphone screens. Since multi-touch smartphone devices don't have cursors, they can't display information using hover states the way a desktop computer can.

During the ClickModel process, we prototyped various on- and off-screen methods of data interaction displays for NORCAL’s team to review and test. This provided a few real-life usability testing examples of how they might tackle their problem.

Praxent prototypes the user experience across smartphone, tablet, laptop, and desktop devices to arrive at a responsive web design that works in various contexts.

Interacting with the clickable, tappable prototype on both desktop and mobile devices gave NORCAL crucial insight to determine what pieces of data were most essential to be displayed on the smaller smartphone screens and which additional data fields would be displayed only on desktop screens.

The ClickModel iterative prototyping process provided a clear-cut way for stakeholders from billing, marketing, and engineering to communicate effectively about the user experience. This led to important consensus and direction regarding feature requirements and scope, which was able to guide their project as they moved forward.

What Next? Getting Started With Usability Testing Studies for UX

As you can see, there are many benefits to having a prototype that looks, feels, and acts real. In the two usability testing case studies above, ClickModel was an effective tool for building such prototypes, helping clients garner the information and data-backed insight they needed to proceed with confidence. Learn more about our testing process and how it also leads to the reliable project estimates that are so important as you move forward with development.


Turn User Goals into Task Scenarios for Usability Testing


The most effective way of understanding what works and what doesn't in an interface is to watch people use it. This is the essence of usability testing. When the right participants attempt realistic activities, you gain qualitative insights into what is causing users to have trouble. These insights help you determine how to improve the design.

Also, you can measure the percentage of tasks that users complete correctly as a way to communicate a site’s overall usability.
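For instance, here is a minimal sketch of computing that completion rate from per-participant pass/fail results (the data is hypothetical):

```python
# Task completion rate: share of attempted tasks completed correctly.
# Hypothetical results: one list of pass/fail outcomes per participant.
results = {
    "P1": [True, True, False],
    "P2": [True, False, False],
    "P3": [True, True, True],
}

attempts = [outcome for outcomes in results.values() for outcome in outcomes]
completion_rate = sum(attempts) / len(attempts)
print(f"Overall task completion rate: {completion_rate:.0%}")  # 67%
```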


In order to observe participants you need to give them something to do. These assignments are frequently referred to as tasks. (During testing I like to call them “activities” to avoid making the participants feel like they’re being tested.)

Rather than simply ordering test users to "do X" with no explanation, it's better to situate the request within a short scenario that sets the stage for the action and provides a bit of explanation and context for why the user is "doing X."

What Users Need to Be Able to Do

Before you can write the task scenarios used in testing, you have to come up with a list of general user goals that visitors to your site (or application) may have. Ask yourself: What are the most important things that every user must be able to accomplish on the site?

For example, nngroup.com users must be able to accomplish 3 main goals:

  • Find articles on a specific topic
  • Sign up for UX Week seminars
  • Learn about our consulting services

Engage Users with Task Scenarios

Once you’ve figured out what the users' goals are, you need to formulate task scenarios that are appropriate for usability testing. A task scenario is the action that you ask the participant to take on the tested interface. For example, a task scenario could be:

You're planning a vacation to New York City, March 3 − March 14. You need to buy both airfare and hotel. Go to the American Airlines site and jetBlue Airlines site and see who has the best deals.

Task scenarios need to provide context so users engage with the interface and pretend to perform business or personal tasks as if they were at home or in the office.

Poorly written tasks often focus too much on forcing users to interact with a specific feature, rather than seeing if and how the user chooses to use the interface. A scenario puts the task into context and, thus, ideally motivates the participant.

The following 3 task-writing tips will improve the outcome of your usability studies.

1. Make the Task Realistic

User goal: Browse product offerings and purchase an item.

Poor task: Purchase a pair of orange Nike running shoes.

Better task: Buy a pair of shoes for less than $40.

Asking a participant to do something that he wouldn’t normally do will make him try to complete the task without really engaging with the interface. Poorly written tasks make it more difficult for participants to suspend disbelief about actually owning the task. In the example, the participant should have the freedom to compare products based on his own criteria.

Coming up with realistic tasks will depend on the participants that you recruit and on the features that you test. For example, if you test a hotel website, you need to make sure that the participants would be the ones in their family responsible for travel research and reservations.

Alternatively, you can decide to let the participants define their own tasks. For example, you could recruit users who are in the process of buying a car and let them continue their research during the session, instead of giving them a task scenario. ( Field studies are ideal for observing users in their own environment as they perform their own tasks, but field studies are more expensive and time consuming.)

2. Make the Task Actionable

User goal: Find movie and show times.

Poor task: You want to see a movie Sunday afternoon. Go to www.fandango.com and tell me where you’d click next.

Better task: Use www.fandango.com to find a movie you’d be interested in seeing on Sunday afternoon.

It’s best to ask the users to do the action , rather than asking them how they would do it. If you ask “How would you find a way to do X?” or “Tell me how you would do Y” the participant is likely to answer in words, not actions. And unfortunately, people’s self-reported data is not as accurate as when they actually use a system. Additionally, having them talk through what they would do doesn’t allow you to observe the ease or frustration that comes with using the interface.

You can tell that the task isn’t actionable enough if the participant turns to the facilitator, takes her hand off the mouse, and says something like “I would first click here, and then there would be a link to where I want to go, and I’d click on that.”

3. Avoid Giving Clues and Describing the Steps

User goal: Look up grades.

Poor task: You want to see the results of your midterm exams. Go to the website, sign in, and tell me where you would click to get your transcript.

Better task: Look up the results of your midterm exams.

Step descriptions often contain hidden clues as to how to use the interface. For example, if you tell someone to click on Benefits in the main menu, you won’t learn if that menu label is meaningful to her. These tasks bias users’ behavior and give you less useful results.

Task scenarios that include terms used in the interface also bias the users. If you’re interested in learning if people can sign up for the newsletter and your site has a large button labeled Sign up for newsletter, you should not phrase the task as "Sign up for this company's weekly newsletter." It's better to use a task such as: “Find a way to get information on upcoming events sent to your email on a regular basis.”

Avoiding words used in the interface is not always easy or natural and can even be confusing to users, especially if you try to derive roundabout ways to describe something that already has a standard, well-known name. In that case, you may want to use the established term. Avoiding clues does not mean being vague. For example, compare the following 2 tasks:

Poor task: Make an appointment with your dentist.

Better task: Make an appointment for next Tuesday at 10am with your dentist, Dr. Petersen.

You might think that this second task violates the guideline for tasks to be realistic if the user's dentist isn't really Dr. Petersen. However, this is one of those cases in which users are very good at suspending disbelief and proceeding to make the appointment just as they would with a differently-named dentist. You might need to have the user pretend to be seeing Dr. Petersen if you're testing a paper prototype or other early prototype design that includes only a few dentists.

If the task scenario is too vague, the participant will likely ask you for more information or will want to confirm that she is on the right path. Provide the participant with all the information that she needs to complete a task, without telling her where to click. During a usability test, mimic the real world as much as possible. Recruit representative users and ensure that each task scenario:

  • is realistic and typical for how people actually use the system when they are on their own time, doing their own activities
  • encourages users to interact with the interface
  • doesn’t give away the answer.


How usability testing works


What is usability testing?

If you've ever spoken to your users about how they interact with your product, chances are you've probably conducted some type of usability testing . Usability testing is a form of user research where test participants use your product to complete tasks. It examines your product's functionality and validates the intuitiveness of your user interface and design. Your product must operate on a high level of functionality because if users cannot complete tasks without difficulty or frustration, they may switch gears and work with a competitor. These areas of confusion pinpoint where your product has room for improvement. Check out some usability testing examples to see how it works in the real world.

When should you do usability testing?

When it comes to the question of when you should do usability testing, the short answer is: early and often. Usability testing is an iterative process, from the prototyping phase to post-launch. At the conceptual stage, before making any design decisions, test your low-effort prototypes; doing so will reveal user feedback and pain points that you can work to resolve. But it doesn't stop there. Test at every phase of the product development cycle to learn user behavior and understand what works well, what needs improvement, and how you can fix it. Continuous testing keeps your customer at the forefront and enables you to design products with vast human insight.

How to run a usability test

Usability testing can be done remotely or in person, in either a moderated or unmoderated format. Although it's conducted differently, the usability testing framework will usually be the same. Determine your research goals, resources—how much time and money you're willing to spend—and your target audience to decide what usability testing method suits your needs. There are a wide variety of usability testing tools that can help you get started.

Here are some steps to follow on how to run a usability test:

  • Plan the test

Determine the nature of your study by defining the purpose of your test. What areas of your product do you want to focus on? If you have any pressing questions about how users interact with an aspect of your product, this is the time to gather feedback. Prepare questions to ask test participants, but don't script the session so tightly that you can't also ask questions organically.

Find a place to conduct the study. You can run the study in an office, a research lab, or remotely. If you decide to do the study in person, the room must have limited to no distractions or interruptions so participants can stay focused while completing tasks. 

  • Find participants

Most usability experts recommend testing up to five participants that closely match your target customer. Include incentives—gift cards, cash payments, or other monetary payments—to entice people to participate in your study. To find participants, use a recruitment agency, pop-ups on your website, or social media. 

  • Recruitment agencies tend to charge more, but they reduce your workload by finding and selecting desirable candidates. 
  • Add pop-ups to your website for website visitors to view. If they already use your products or services, chances are they'll be interested in providing in-depth feedback to help improve your products further.
  • If you have a large following and plenty of social media engagement, post on social media to find potential participants for your study. Add relevant hashtags to reach more participants who resonate with your target user base.

  • Plan the tasks for your study

Ask participants to complete tasks they'll regularly encounter while navigating your website. For example, if you own an e-commerce business, one of your tasks may look like this: "You're buying a dress for your daughter's birthday party, but you're on a budget, and she loves the color purple. Try to find a purple dress that's less than $100." A task like this lets you test the most important functions of your site; for an e-commerce site, finding and purchasing products is essential. It also shows whether participants can easily find certain filters, like those for color or price.

  • Conduct the study

During the study, allow participants to complete tasks without your assistance or guidance. Try to see how long it takes users to navigate your site and complete tasks. 

Ask participants to think out loud to observe their thoughts while they use your product. This way, you'll know their thoughts and feelings as they interact with your product, deepening your human insight and feedback. 

After the completion of every task, ask participants for feedback. Gauge whether they find your product functional, whether they're able to complete tasks successfully, and whether they enjoyed interacting with your product.

  • Analyze test results

After conducting the study and gathering your data, analyze the results. Aim to do this soon after the completion of your study, so the results are still fresh and relevant in your mind. If multiple participants experienced repeated issues during the study, examine the problem further to make any adjustments or improvements. When you analyze the results, you can identify any problematic patterns with your product's usability and implement your findings to improve the overall user experience of your product.
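As one small example of such analysis: time-on-task numbers tend to be skewed by a few slow sessions, so a geometric mean often summarizes them better than a plain average. A sketch with hypothetical timings:

```python
# Summarize time-on-task per task; the geometric mean dampens the effect of
# a few unusually slow sessions (timings below are hypothetical, in seconds).
from statistics import geometric_mean, median

time_on_task = {
    "find a purple dress under $100": [48, 52, 61, 180, 55],
    "apply a discount code at checkout": [95, 120, 88, 102, 410],
}

for task, seconds in time_on_task.items():
    print(f"{task}: geo-mean {geometric_mean(seconds):.0f}s, "
          f"median {median(seconds):.0f}s")
```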


Usability Testing – Software Engineering

Usability testing is a method used to evaluate the user experience and navigation of websites, apps, and digital products.

In this guide, we’ll explore the basics of usability testing, its significance in software development, and how it enhances user engagement. Whether it’s through in-person sessions or remote testing, we’ll delve into how real users’ interactions provide invaluable insights for product improvement.

Table of Contents

  • What is Usability Testing?
  • Types of Usability Testing
  • Difference Between Usability Testing and User Testing
  • Why is Usability Testing Important?
  • Phases of Usability Testing
  • Advantages of Usability Testing
  • Disadvantages of Usability Testing
  • Factors Affecting the Cost of Usability Testing
  • Techniques and Methods of Usability Testing
  • Frequently Asked Questions on Usability Testing

What is Usability Testing?

Usability testing is a type of software testing done from an end user’s perspective to determine whether the system is easily usable. It is the practice of testing how easy a design is to use with a group of representative users. Several tests are performed on a product before deploying it. You need to collect qualitative and quantitative data and satisfy customers’ needs with the product. A final report documents the changes required in the product (software).

Usability testing involves evaluating the functionality of a website, app, or digital product by observing real users as they navigate through it. Typically conducted by researchers, either in-person or remotely, the aim is to identify any areas of confusion or difficulty users encounter while completing tasks.

The ultimate goal of usability testing is to uncover pain points in the user experience, revealing opportunities for improvement. By assessing how efficiently users achieve their goals within the product, usability testing helps in enhancing its overall functionality and user satisfaction.

Types of Usability Testing

Here are some common types of usability testing, explained simply:

  • Remote Usability Testing: Participants use a product or website from their own location while researchers observe and gather feedback remotely. It’s convenient and allows testing with diverse users without geographical constraints.
  • Moderated Usability Testing: A researcher guides participants through tasks, observes their interactions, and collects feedback in real-time. It’s helpful for understanding user behavior and thoughts as they navigate through the product.
  • Unmoderated Usability Testing: Participants complete tasks independently, without direct guidance from a researcher. They usually record their screen and verbalize their thoughts while interacting with the product. It’s efficient for gathering feedback from a large number of users quickly.
  • Comparative Usability Testing: This involves testing multiple versions of a product or interface to determine which performs better in terms of usability. It helps in making informed design decisions by identifying the strengths and weaknesses of each version.
  • Think-Aloud Testing: Participants verbalize their thoughts and actions as they interact with the product. This provides insights into their decision-making process and helps identify usability issues that might not be obvious otherwise.
  • A/B Testing: Also known as split testing, it involves presenting users with two (or more) versions of a product or interface and measuring which one performs better based on predefined metrics such as conversion rate or user engagement (a minimal significance-check sketch follows this list).
  • Guerrilla Usability Testing: Conducted informally in public spaces or online communities, often with minimal planning and resources. It’s useful for gathering quick feedback from a diverse range of users in a natural setting.
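
To make the A/B testing item above concrete, here is a minimal sketch of how split-test results might be checked for statistical significance using a standard two-proportion z-test. The sample sizes and conversion counts are hypothetical, made up purely for illustration.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-statistic and two-sided p-value for the
    difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: version A vs. version B of a checkout page.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2380)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```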

Difference Between Usability Testing and User Testing

Usability testing and user testing are often confused, but they have different purposes. Both are part of UX testing, which aims to understand the user experience comprehensively. User testing involves real people using a product or service and providing feedback. It helps understand what users think about the product, how they perceive it, and what their needs are.

Usability Testing & User Testing

Usability testing, on the other hand, focuses on specific aspects like finding bugs or errors that affect user flow, checking if users can complete tasks easily, and ensuring they understand how to navigate the site.

Why is Usability Testing Important?

When software is ready, it is important to make sure the user experience with the product is seamless. It should be easy to navigate, and all functions should work properly; otherwise, a competitor’s website will win the race. That is why usability testing is performed. The objective of usability testing is to understand customers’ needs and requirements, and how users interact with the product (software). With the test, all the features, functions, and purposes of the software are checked.

The primary goals of usability testing are discovering problems (hidden issues) and opportunities, comparing results against benchmarks, and comparing against other websites. The parameters tested during usability testing are efficiency, effectiveness, and satisfaction. It should be performed before any new design is finalized, and iterated until all the necessary changes have been made. Consistently improving the site through usability testing enhances its performance, which in turn keeps it competitive.

Phases of Usability Testing

There are five phases in usability testing, given below:

  • Prepare your product or design to test: The first phase is choosing a product or design and getting it ready for usability testing: decide which functions and operations participants will exercise. Because everything that follows depends on this choice, it is one of the most significant phases of usability testing.
  • Find your participants: The second phase is recruiting the participants who will help you perform the usability testing. The number of participants you need is based on several case studies; mostly, five participants can find almost as many usability problems as you’d find using many more test participants.
  • Write a test plan: Developing a plan is one of the first steps in each round of usability testing. The main purpose of the plan is to document what you are going to do, how you are going to conduct the test, what metrics you are going to capture, the number of participants you are going to test, and what scenarios you will use.
  • Take on the role of the moderator: In the fourth phase, the moderator plays a vital role that involves building a partnership with the participant. Most of the research findings are derived by observing the participant’s actions and gathering verbal feedback. To be an effective moderator, you need to be able to make instant decisions while simultaneously overseeing various aspects of the research session.
  • Present your findings/final report: This phase generally involves combining your results into an overall score and presenting it meaningfully to your audience. An easy method is to compare each data point to a target goal and represent this as a single metric based on the percentage of users who achieved this goal (see the sketch after this list).
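
As a concrete illustration of the reporting approach in the final phase above, here is a minimal sketch that compares each participant’s result to a target goal and rolls everything up into a single success-rate metric. The task names and outcomes are hypothetical.

```python
# Hypothetical per-participant results: True = participant achieved the goal.
results = {
    "find a product":    [True, True, False, True, True],
    "add to cart":       [True, True, True, True, False],
    "complete checkout": [True, False, False, True, True],
}

for task, outcomes in results.items():
    rate = 100 * sum(outcomes) / len(outcomes)
    print(f"{task}: {rate:.0f}% of users achieved the goal")

# One overall metric: the percentage of all task attempts that succeeded.
attempts = [o for outcomes in results.values() for o in outcomes]
print(f"overall success rate: {100 * sum(attempts) / len(attempts):.0f}%")
```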

Advantages of Usability Testing

Usability testing is the preferred way to evaluate a product or service by testing it with representative users. Development and design teams use it to identify issues before coding, so problems are found and solved earlier. Usability testing offers the following advantages:

  • User-Centric Design: By involving actual users in the testing process, you ensure that your product or website is designed with their needs and preferences in mind.
  • Identifying User Pain Points: Usability testing helps uncover areas where users struggle or encounter difficulties while interacting with your product. This insight allows you to address these pain points and improve the overall user experience.
  • Optimizing User Interface: Through usability testing, you can evaluate the effectiveness of your user interface (UI) design, including layout, navigation, and interactive elements. This enables you to refine and optimize the UI for better usability.
  • Enhancing User Satisfaction: By addressing usability issues and making improvements based on user feedback, you can enhance user satisfaction and loyalty, leading to increased engagement and retention.
  • Reducing Development Costs: Identifying usability issues early in the development process helps prevent costly redesigns and rework later on. This ultimately saves time and resources during product development.

Disadvantages of Usability Testing

The biggest cons of usability testing are cost and time: the more usability testing you perform, the more time and money it consumes. Other drawbacks include:

  • Bias and Subjectivity: Testers’ biases, preferences, and interpretations can influence the results of usability testing. Additionally, participants may alter their behavior when they know they are being observed, leading to results that do not accurately reflect real-world usage.
  • Influence of Testing Environment: Usability testing often takes place in controlled environments, such as labs or testing facilities, which may not accurately replicate the conditions in which the product will be used. This can impact the validity of the test results.
  • Difficulty in Capturing Emotions and Context: Usability testing may struggle to capture users’ emotions, motivations, and the context in which they are using the product. This qualitative aspect of user experience can be challenging to measure objectively.
  • Limited Scope of Testing: Usability testing typically focuses on specific tasks or scenarios, which may not fully capture the overall user experience or uncover all potential usability issues.
  • Difficulty in Identifying Solutions: While usability testing can identify usability problems, it may not always provide clear solutions or recommendations for improvement. Additional analysis and interpretation may be required to address identified issues effectively.

Factors Affecting the Cost of Usability Testing

The testing cost will depend on the following factors:

  • The number of participants;
  • The number of days needed for testing;
  • The type of testing performed;
  • The size of the team conducting the testing.

Remember to budget for usability testing: building it into a product or website is an iterative process, and the elements you need to budget for are as follows (a rough cost sketch follows this list):

  • Time: Usability specialists and the wider team need time to evolve the site and to prepare and run the test scenarios. Be sure to budget time for test preparation as well as for running test cases, analysing the data, writing the report, and presenting the findings.
  • Rental costs: If you don’t already own the equipment, you will need to budget for it, as well as for a location for the testing, for example, a rental room such as a conference room where the sessions are run.
  • Recruiting costs: Consider how and where you will recruit your participants. You may need to engage a recruiting team to screen and schedule participants based on your requirements.
  • Participant compensation: You will need to compensate participants for their time and travel, which is also important to factor into the testing budget.
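
To show how these elements add up, here is a minimal back-of-the-envelope budget sketch. Every rate and quantity below is a made-up placeholder, not a real market price; substitute your own figures.

```python
# All figures are hypothetical placeholders -- substitute your own rates.
participants        = 8
incentive_per_user  = 75    # compensation per participant
recruiting_per_user = 40    # recruiting-agency fee per participant
days                = 3
room_rental_per_day = 200   # e.g. a rented conference room
team_size           = 2
day_rate_per_person = 500   # internal cost of each team member per day

total = (participants * (incentive_per_user + recruiting_per_user)
         + days * room_rental_per_day
         + days * team_size * day_rate_per_person)
print(f"estimated usability testing budget: ${total:,}")
```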

Techniques and Methods of Usability Testing

There are various usability testing techniques that, when performed, lead to efficient software. The most widely used are discussed here.

1. Guerilla Testing

It is a type of testing where testers go to public places and ask random users to try a prototype. A thank-you gift is offered to users as a token of appreciation. It is the best way to perform usability testing during the early phases of the product development process. Users mostly spare 5–10 minutes and give instant feedback on the product, and the cost is comparatively low as you don’t need to hire participants. It is also known as corridor or hallway testing.

2. Usability Lab

Usability lab testing is conducted in a lab environment where moderators (who ask for feedback on the product) hire participants and ask them to complete tasks and surveys on the product. The test is typically performed on a tablet or desktop. The participant count is usually 8–10, which makes it costlier than guerrilla testing, as you need to hire participants, arrange a venue, and conduct the sessions.

3. Screen or Video Recording

In screen or video recording testing, the screen is recorded as the user acts (navigating and using the product). This shows how the user’s mind works while using the product. It typically involves around 10 users for 15 minutes each, and it helps surface the issues users face while interacting with the product.

Generally, there are two kinds of studies in usability testing:

  • Moderated – the moderator guides the participant and probes for the changes required in the product (software);
  • Unmoderated – there is no moderator (no human guidance); participants get a set of tasks or questions to work through on their own.

While performing usability testing, all kinds of participant bias (friendly bias, social bias, etc.) should be avoided so that the feedback on the product is honest and genuinely useful for improving it.

In summary, usability testing is vital for evaluating user experience and enhancing digital product navigation. While it brings advantages such as an optimized interface design and improved user satisfaction, it can be resource-intensive and subject to bias. Even so, it remains a valuable tool for enhancing product usability and identifying areas for improvement.

Frequently Asked Questions on Usability Testing

What is the difference between user validation and usability testing?

User validation focuses on confirming if the product meets user needs, while usability testing assesses the ease of use and efficiency of interacting with the product.

What is known as usability testing?

Usability testing is the evaluation of a product’s ease of use and efficiency by observing real users as they interact with it.

Are usability testing and user testing the same?

No. They differ in scope: user testing explores what users think of the product and what they need, while usability testing evaluates how easily they can use it.


12 Usability Testing Templates: Checklist & Examples


Usability testing can be tough. 

That’s why it’s important to pick the right tools and templates for your specific use case. 

In this post, we are giving away 12 free usability testing templates for various use cases that you can easily copy or download to implement immediately with your team. 

Find the right usability testing template for your use case in this list:

  • Usability Test Plan Template
  • Usability Checklist Template
  • Usability Task Template
  • Prototype Usability Template
  • New Product/Feature Usability Template
  • Sign-up Usability Template
  • Checkout Process Usability Template
  • Content Navigation Usability Template
  • Accessibility Testing Template
  • Usability Survey Template
  • Helpdesk Usability Template
  • Homepage Usability Template

In usability testing, templates are useful for:

  • Consistency: Re-using the same template or usability testing script for different test cases means that every aspect of usability testing is consistently covered. This way, your team never misses essential details and actions that need to be taken.
  • Efficiency: With a clear, simple, structured format such as a template, you can customize each one for specific use cases. And, in case you don’t have time for that, we’ve got 12 different usability testing templates for the most common use cases that dev agencies and in-house teams are testing for regularly.
  • Communication: Between your team and any testers, templates ensure everyone is on the same page, with a straightforward process, methodology, and goals.

Usability testing is a crucial step of software development.

Based on the outcome of the usability testing report, your product or website’s functionality might still need several iterations.

All of this is made easier when using the right tools to collect quantitative and qualitative data from every user during testing.

Tools with features like session replay and automatic collection of technical data—like Marker.io —are useful for usability testing. 

General Usability Testing Templates

We’ve included two categories of free templates for usability testing and collecting user feedback in this post: general and use-case specific. 

For more general use cases, we’ve included:

  • A usability test plan template (standard operating procedure for all usability testing);
  • A usability checklist template (a simple-form version of the above you can check off during a project);
  • And a usability task template (that can be adapted and customized for specific use cases).

We’ve also included task examples for common use cases, such as checkout process, website navigation, and more!

Either way, we have you covered.

Let’s dive in.

Usability Test Plan Template

What’s this template for?

A usability test plan template is the working document or standard operating procedure (SOP) that is a single source of truth for your entire usability testing process. 

What’s included? 

Within this template, you need to include: 

  • Goals, scope, business requirements, and key performance indicators (KPIs). Include everything, such as why we are building a new website, app, or new product features. What user/client pain point are we solving? What are the technical objectives?
  • Who’s on the usability test team? Include everyone involved and relevant information: internal QA team members, testers, demographics, experience level, and skillset—the project's who, what, why, and when.
  • Testing environment: Hardware, operating systems, browsers, and devices. List them here, including how you will establish a suitable testing environment and tools to monitor testers and collect feedback.
  • Usability testing project milestones and deliverables. Outline every phase of the usability testing plan here.
  • User profiles and personas. Who are our end-users? Fill in detailed demographic information to determine who should be testing this website, software product, or app.
  • How to write test tasks. Clear instructions on how to write a usability testing script. Include everything that should go into it.
  • Recording test results. How are we recording usability test results? What tool(s) are we using? Document this here (a minimal logging sketch follows this list).
  • Implementation of usability testing results. And finally, the process for implementing any bug fixes and UX changes users notice, including informing them and any clients or stakeholders once the usability testing phase is complete.
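
For the “Recording test results” item above, here is a minimal sketch of one way session observations could be logged to a CSV file. The field names are an illustrative assumption, not a prescribed format.

```python
import csv
import os

FIELDS = ["participant", "task", "completed", "time_seconds", "notes"]

def log_result(path, participant, task, completed, time_seconds, notes=""):
    """Append one observation to a CSV results file, writing the header
    row the first time the file is created."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"participant": participant, "task": task,
                         "completed": completed,
                         "time_seconds": time_seconds, "notes": notes})

log_result("results.csv", "P01", "sign up", True, 48,
           "hesitated on the password rules")
```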

How to use this usability test plan template?

You can download the Usability Test Plan Template here.

This document is 100% editable—simply: 

  • Make a copy;
  • Fill in the blanks with your details;
  • Save and share internally!

Need a usability testing tool? 

Try Marker.io free for 15 days: Collect visual feedback, technical data, bug reports, and loads more with our powerful, easy-to-use usability testing tool.

Pricing: From $39/mo.

Usability Testing Checklist


What’s this template for? 

This template is a general usability checklist. As every product, SaaS tool, app, and website is different, it will need to be adapted for your project(s). 

As a starting point, you can include the following in a usability checklist: 

  • Navigation: Is the app or software navigation and UX intuitive and easy to use? Can testers find their way around the app without difficulty?
  • Readability: Is the text/copy legible and easily understood? Are fonts and colors consistent across all pages?
  • Accessibility: Is the product or app accessible to users with disabilities, such as visual impairments? Are we adhering to web accessibility guidelines and best practices?
  • Error handling and bugs: Are error messages, such as 404 links, correctly displayed, and are they clear and helpful for users?
  • Performance: Does the software/product run as expected across different devices, operating systems, or browsers? If so, does it perform well across these testing environments?
  • Load times and speed: Are page load times as expected across every page of the app or product? (A minimal timing sketch follows this list.)
  • User control: Does the product give users control over their interactions? Does it feel responsive? Can they take the actions you want them to take?
  • User flow and UX: Is the user flow logical and intuitive? Can users complete tasks within a reasonable timeframe? When do they experience frustration?
  • Self-help and documentation: Can users easily find self-help documents and content? Are they accessible and understandable for non-technical users?
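
For the load-times item in the checklist above, here is a minimal sketch for spot-checking page response times from a script. The URLs are placeholders, and this measures server response time only, not full in-browser rendering.

```python
import time
import urllib.request

# Placeholder URLs -- replace with the pages under test.
PAGES = ["https://example.com/", "https://example.com/pricing"]

for url in PAGES:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # download the full body
    elapsed = time.perf_counter() - start
    print(f"{url}: {elapsed * 1000:.0f} ms")
```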

How to use this usability checklist template?

You can download the Usability Checklist Template here.

This document is 100% editable—simply make a copy, fill in the blanks with your details, save, and share internally!

Or you can copy and paste the checklist above into a Google or Notion Doc, and then use it whenever it’s needed. 

Usability Task Template

A usability task template is also known as a usability testing script. It’s designed to evaluate the product’s usability and observe user interactions and decision-making processes.

A usability task template always includes:

  • Task name, product, testing environment, and the user performing the test tasks. 
  • Instructions: What tasks should users perform, and how should they perform them?
  • Success criteria: What’s a pass/fail mark for each task? 
  • Space for notes. 

To achieve the results you want from usability testing, it’s mission-critical that the following hold (a minimal task-definition sketch follows this list):

  • Tasks should reflect the most common user scenarios/actions. Out-of-the-ordinary scenarios shouldn’t be included, because the aim is to find out how the average user navigates and performs actions on your product or website.
  • Instructions should be clear, concise, and as simple as possible—no jargon or niche language, particularly if you’re dealing with a non-technical audience.
  • Instructions should be unbiased. Otherwise, you could lead the testers to the wrong conclusions/actions, or lead them too easily to perform the tasks as expected.
  • Include a range of tasks of varying degrees of difficulty (e.g., from “Login” to “Create a new filter for [X] on your dashboard”). This way, you get the widest range of data possible.
  • Tasks should be replicable. Make your tasks more general and not specific to a certain type of user—again, for a wider range of data.
  • Measurable success criteria. What’s a pass/fail, and ultimately, how does this test help us improve our website, app, or software?
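
As a sketch of what one entry in a usability testing script might look like in structured form (the fields mirror the list above; the task itself and all its values are hypothetical):

```python
# One hypothetical entry from a usability testing script.
task = {
    "name": "Create a saved search",
    "product": "Web app (staging)",
    "environment": {"device": "desktop", "browser": "Chrome"},
    "participant": "P04",
    "instructions": "Find laptops under $800 and save the search "
                    "so you can reuse it later.",
    "success_criteria": {
        "pass": "Search saved within 3 minutes without help",
        "fail": "Gives up, needs help, or exceeds 3 minutes",
    },
    "notes": "",
}

# A moderator could print this as a quick session checklist.
for field, value in task.items():
    print(f"{field}: {value}")
```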

How to use this usability task template?

It’s easy. You can:

Download the Usability Task Template here.

Use Case-specific Templates

You can use these premade templates for specific user experience (UX) workflows, such as signing up to a web app, onboarding, product search, accessibility, and loads more. 

We’ve made each template as easy to customize as possible.

Prototype Usability Testing Template


This template is for running usability testing on your early-stage product or app prototypes. Source the insights you need from alpha and beta testers to see whether they’re able to navigate the user flow and UX and perform the tasks you expect.

User research at this stage will influence the product development roadmap, feature and functionality iterations, and even the go-to-market strategy. 

For almost any product, you need to test for these user experience expectations: 

  • Navigation, UX, and user flow: Is it easy enough for users to navigate? 
  • User experience across different devices, browsers, and operating systems; 
  • Accessibility and readability of text and copy within the product or app; 
  • Can users perform the expected tasks that align with the business goals for the product or app?
  • Do users have enough control within the app? 
  • Can users easily find and use self-help documentation or contact customer support? 

How to use this prototype usability template?

It’s simple to get started with this: 

Download our free Prototype Usability Testing Template here. 

New Feature Usability Testing Template


This template is for finding out whether your product would benefit from new features and functionality.

Whether you’ve already developed new features or have an MVP (minimum viable product) version of a new feature, it can be helpful to source user feedback before committing to the development phase. 

You can even use this template to validate user feedback and usability with simple wireframes of proposed new features in the product roadmap. 

As a minimum, you can ask users during the testing phase: 

  • Do you understand what pain point we are trying to solve with this new product or feature? 
  • Can you navigate the UX and user flow easily enough? 
  • Would you use this new product or feature? 
  • Can users perform the expected tasks that align with the business goals for the product or app?

How to use this new product/feature usability testing template?

Get started by downloading our free New Product/Feature Usability Testing Template here. 

Need a usability testing tool?

Collect visual feedback, technical data, bug reports, and loads more with our powerful, easy-to-use usability testing tool— try Marker.io for free today .

Sign Up Usability Testing Template


Sign-up flows are crucial—and testing the most basic feature of your app can uncover friction points you may not have thought of during development. 

You want to make sure your users understand and can fill out fields easily, see where they get stuck, and find areas of improvement for your sign-up process.

What’s included?  

For this template, there are simple questions that need answering: 

  • Can you sign up for our product or app?
  • How easy did you find the sign-up process? Did you get frustrated at any point?
  • Are there any fields you don’t find important to fill out? Why?
  • Does it work across different devices, browsers, or operating systems? 

How to use our free web app sign-up usability testing template?

Download our Web App Sign-up Usability Testing Template here. 

Checkout Process Usability Testing Template


For any eCommerce website or app, the checkout process is crucial. Online stores live or die according to whether users complete a purchase or abandon the cart at checkout. 

Anything you can do to improve the checkout conversion rate will increase top-line revenue, sales, and, ultimately, profits. 

A checkout process usability testing template tests the UX and user flow of an eCommerce site’s checkout. It includes:

  • A series of checkout tasks that align with the different ways customers can buy products (e.g., card, PayPal, or others).
  • Different checkout user flows, including Guest or Logged-in users. 
  • Pass/fail tasks to identify pain points or anything that causes friction when users are going through checkout. 
  • Quantitative, data-driven feedback and qualitative questions about how users feel about the checkout user flow. 

How to use this checkout process usability testing template?

All you need to do is download our Checkout Process Usability Testing Template here. 

Try Marker.io free for 15 days: Collect visual feedback, technical data, bug reports, and loads more with our powerful, easy-to-use usability testing tool.

Content Navigation Usability Testing Template


How easy is your website or in-app content to navigate? That’s the question you can answer with the right tools and our usability testing template for content navigation. This template includes:

  • Simple navigational tasks and questions to identify whether users can navigate around the website, app, or product easily enough. 
  • Questions about whether users have found what you’ve asked them to look for.
  • Navigational tasks aligned with different user personas and for numerous testing environments. 

How to use this content navigation usability testing template?

Download our Content Navigation Usability Testing Template here. 

Accessibility Testing Template

Web accessibility is a way of ensuring anyone with a disability, such as auditory, cognitive, neurological, physical, speech, or visual, can access and use any website and app as easily as someone without a disability. 

Find out more about what this means here: W3C Website Accessibility Initiative (WAI).

W3C sets the gold standard for global website accessibility initiatives.

As the WAI states: “Accessibility is essential for developers and organizations that want to create high-quality websites and web tools, and not exclude people from using their products and services.” 

Making websites and apps accessible to everyone is a smart business move and, in many countries, is a legal requirement. 

What’s included?

As this is a more specific use case, you might need to partner with a provider who can have your website or app tested by users with disabilities, temporary disabilities, and situational limitations. 

In general, you’ll be looking to test things like the following (a minimal contrast-check sketch follows this list):

  • Website is usable while zoomed in;
  • Links are clearly recognizable;
  • Clear color contrast across the entire site;
  • Logical structure.
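
One item above, color contrast, can even be checked programmatically. Here is a minimal sketch that computes the WCAG contrast ratio between two colors; WCAG 2.x recommends at least 4.5:1 for normal body text.

```python
def relative_luminance(hex_color):
    """Relative luminance of an sRGB color, per the WCAG definition."""
    rgb = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
           for c in rgb]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between foreground and background colors."""
    lighter, darker = sorted(
        [relative_luminance(fg), relative_luminance(bg)], reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#767676", "#ffffff")  # grey text on a white background
print(f"contrast {ratio:.2f}:1 -> "
      f"{'passes' if ratio >= 4.5 else 'fails'} WCAG AA for body text")
```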

At the same time, we’ve included a useful template you can use, following the same outline as the Usability Checklist Template, adapted for the disabilities outlined above. 

How to use this accessibility testing template?

Get started by downloading our free Accessibility Testing Template here. 

Usability Survey Template

A usability survey asks users for qualitative feedback during user testing. It’s also helpful for understanding how they rate your product compared to similar products, such as competitors’.

Getting user opinions can help shape the product roadmap, functionality, features, and even the go-to-market strategy. 

Include questions such as the following (a minimal scoring sketch follows this list):

  • How positive or negative was the experience of using our product or app? (A rating scale, e.g., 1–5 or 1–10, is useful for gauging opinions.)
  • How likely are you to use our product or app again?
  • Would you recommend us to a friend?
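
Here is a minimal sketch of how answers to the rating-scale question above might be summarized after a round of testing. The response values are hypothetical.

```python
from statistics import mean

# Hypothetical 1-10 ratings from the "how positive or negative" question.
ratings = [8, 9, 6, 7, 10, 5, 8, 9, 7, 8]

print(f"average rating: {mean(ratings):.1f} / 10")
print(f"rated 8 or higher: {100 * sum(r >= 8 for r in ratings) / len(ratings):.0f}%")

# "Would you recommend us?" answers, tallied the same way.
recommend = ["yes", "yes", "no", "yes", "maybe"]
print(f"would recommend: {100 * recommend.count('yes') / len(recommend):.0f}%")
```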

How to use our free usability survey template?

Download our Usability Survey Template here. 

Help and Support Docs Usability Template


How easily can users find self-help and support documents and content? 

The last thing you want is for users to churn because they don’t understand how to seek the help they need. 

Included in this template are simple tasks that test whether users can find self-help and support documents and content. It includes questions such as: 

  • What problems have you encountered? 
  • Did you find the right self-help support easily enough? 
  • Were the self-help documents and content understandable? 
  • Could you follow the steps in the self-help section to resolve your problem? 
  • If not, what else could we include to make this process easier? 

How to use this help & support docs usability template?

Download our Help & Support Docs Usability Template here. 

Are you using unmoderated testers? Get the feedback you need, including bug reports, technical data, and loads more— try Marker.io for free today .

Website Homepage Usability Template


This is a template for finding out how users feel about a website homepage, including whether it’s visually appealing and easy to navigate. 

It’s useful to include a series of questions, such as: 

  • Based on our homepage, do you understand what our company does? 
  • Did you find the homepage easy to navigate? 
  • Can you find everything we asked you to?

How to use this website homepage usability template?

Download our Website Homepage Usability Template here. 

Frequently Asked Questions

What is usability testing?

You’ve finished building an amazing new website, app, or software solution. 

What's next?

It needs testing. That’s where usability testing comes into the picture. You can do this internally, as part of your usual QA testing , and many web dev agencies and in-house teams do that. 

But you also need to see how real users navigate your website or use your product. 

Product or project managers can give testers a checklist of tasks to see whether their interactions align with expected outcomes. 

Usability testing can be conducted remotely, in-person, crowd-sourced, moderated, or unmoderated, and there are numerous tools, checklists, and templates you can use for usability testing. 

What are the benefits of usability testing?

The benefit of usability testing is that you can see, in real-time, whether users can complete tasks on a new website, app, or software product. 

Usability testing gives web dev agencies and QA teams crucial feedback to improve their UX.

This ensures the product is easy to use and navigate, accessible, and free from bugs.

How do you structure a usability test?

Before implementing any usability test, you need to be clear on the specific goals you want to achieve.

Once those are clear, there are dozens of templates you can use (like those in this article) and tools, such as Marker.io, for usability testing. 

And then, follow this simple usability testing process: 

  • Plan the test and goals. 
  • Provide a timescale for the testing phase. 
  • Source testers (such as those you can hire through testing tools and platforms).
  • Prepare the usability testing script, or questionnaire, based on any of the free templates and checklists above.
  • Invite testers to try out (specific areas of) the product or website, following the instructions in the testing questionnaire. 
  • Get quantitative and qualitative feedback from the testers via the questionnaire and any usability testing tools you’ve deployed. 
  • Implement the relevant improvements, and inform testers that their feedback was appreciated. 
  • Let the client or internal customers know that the usability testing phase is complete and relevant fixes/changes have been made. 

Which usability testing template should you use?

As you can see, every usability testing template is different.

For basic website testing, we recommend the usability checklist template. It covers everything you need from your testers. 

For checking how accessible a website or app is, you’d need the accessibility testing template, and for testing eCommerce website checkouts, you can use the checkout process usability testing template. 

We hope you found this list of usability checklists and templates helpful!

Let us know if we missed something!


The Best Usability Testing Questions to Ask + Examples (2023 Guide)

Need some help writing usability testing questions? We've got you covered. Dive into our array of usability testing question examples to create top-notch questions for your usability testing.

Reshu Rathi

May 11, 2023


Usability testing is a great way to understand how real users interact with your product or service and get real user feedback. But you only get accurate and actionable insights from usability tests if you ask the right questions.  

Learning how to write usability testing questions is both an art and a science. The wording you choose can make all the difference between getting accurate, valuable data that helps you improve the user experience or just the opposite.  

Fortunately, we've got a raft of tips and examples to help as we've talked to many UX researchers, searched the web, and put together the best usability testing questions you can ask to get deep user insights.

So, if you want to create good usability testing questions, this blog is for you. But before we share usability testing questions that you can directly use for your business, you need to know what makes a good usability testing question.

What Is a Good Usability Testing Question?

Framing a good usability testing question is not rocket science. It just needs to be simple, open-ended, and actionable.

To frame a good usability testing question, keep the following tips in mind:

  • Be specific: Use clear and concise language to avoid confusion or misunderstanding.
  • Avoid leading questions: Do not ask questions that guide the user to a specific answer, as it can lead to bias.
  • Focus on tasks: Frame questions around a specific task that the user needs to complete. This will help you identify pain points and areas of improvement.
  • Avoid technical jargon: Use language that is easy to understand. Don't use jargon or technical terms that might confuse the users.
  • Use open-ended questions: Avoid yes/no questions as they do not provide detailed insights. Instead, use open-ended questions that encourage users to share their thoughts and feelings.

For example, instead of asking, "Did you find the website easy to use?" which is a yes/no question, a better question would be, "How was your experience navigating the website? Were there any areas that were particularly difficult or confusing for you?"  

Open-ended questions like this will encourage the user to provide more detailed feedback about their experience, which can be valuable for identifying areas for improvement.

Related Read: Why every UXer should leverage eye tracking in usability testing?

Types of Usability Testing Questions

This is what you came for—the good stuff. Here are the four types of usability testing questions you should be using to get valuable user insights:

  • Screening questions
  • Pre-test questions
  • In-test questions
  • Post-test questions

Before we dive into examples of usability test questions, it's important to remember that a mix of open-ended questions with follow-ups and multiple-choice questions can help stimulate conversation and elicit valuable feedback.  

The more detailed information you gather, the better insights you can gain. So, be sure to include various question types to encourage meaningful responses. We'll provide you with several examples of usability test questions to help you get started.

20+ Usability Testing Questions That You Can Use Today

With clear insights into how to frame a good usability testing question and the types of usability testing questions, it is time to delve into examples.

Let's say you are an online retailer and want to conduct remote usability testing - you can ask these questions depending on the type of data you want to collect.  

  • To collect quantitative data, you can utilize scales, such as a range of numbers from 0 to 10, and multiple-choice questions where respondents select from various options, such as choosing between options a, b, or c. (e.g., you can tell users to add a product to the cart and ask them - How was the process? (Super unintuitive, unintuitive, intuitive, super intuitive)
  • To collect qualitative data, you can ask open-ended questions (e.g., you can tell them to place an order on the website and ask them - how was the process of making an order.
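
As a sketch of how the quantitative, multiple-choice answers described above might be tallied after a round of remote tests (the responses are hypothetical):

```python
from collections import Counter

SCALE = ["super unintuitive", "unintuitive", "intuitive", "super intuitive"]

# Hypothetical answers to "How was the add-to-cart process?"
responses = ["intuitive", "super intuitive", "intuitive",
             "unintuitive", "intuitive", "super intuitive"]

counts = Counter(responses)
for option in SCALE:
    share = 100 * counts[option] / len(responses)
    print(f"{option:>17}: {counts[option]} ({share:.0f}%)")
```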

Given below are 20 usability testing questions divided into four categories that will prompt participants to provide useful insights without introducing bias.

Screening Questions: These questions are used to check the user's background and existing knowledge of the brand or products. This helps to identify potential biases and ensure that the participant is a suitable candidate for the study.

  • What is your age range?
  • How often do you typically shop online?
  • Are you familiar with our brand?
  • Do you have any experience with similar brands?
  • Have you shopped online on our website before?

Pre-Test Questions: These questions are used to gather information about the user's prior experience with similar products or tasks and their expectations for the current test. This information helps to establish a baseline for comparison and identify potential areas of confusion or difficulty.

  • What do you hope to find on our website today?
  • How do you usually go about searching for items online?
  • What is important to you when shopping online?
  • What are your expectations for the checkout process?
  • Have you experienced any difficulties or frustrations with online shopping in the past?

In-Test Questions: These questions are used to gather feedback during testing and assess the user's understanding and experience with the product.

  • Can you walk me through your thought process while searching for a specific product?
  • How easy or difficult was finding the product you were looking for?
  • Were you able to navigate the website easily?
  • Were the product descriptions and images helpful?
  • Did you encounter any issues with the website's functionality?

Post-Test Questions: These questions are used to gather overall impressions and feedback after the testing is completed and can help identify improvement areas in the product or testing process.

  • How satisfied are you with your shopping experience on our website?
  • Was there anything you particularly liked or disliked about our website?
  • Would you recommend our website to a friend or family member?
  • Is there anything we can change or improve about our website?
  • Do you have any additional feedback or comments about your experience shopping on our website?

Ask the Right Usability Testing Questions and Get Valuable Insights

When it comes to usability testing, asking the right questions is critical to obtaining valuable insights. By crafting effective usability testing questions, you can gather accurate data that can help you improve the usability of your product or service.  

But even with years of experience with all stages of a usability test, people still fail to obtain helpful information from these tests. Lucky for you, it doesn't take much time to learn how to get valuable insights by framing and asking the right usability testing questions.

Also, if you want to avoid mistakes while conducting usability testing, take a look at these usability testing mistakes (and how to fix them).



Reshu Rathi is an online marketing and conversion rate enthusiast. She specializes in content marketing, lead generation, and engagement strategy. Her byline can be found all over the web.

Product Marketing Specialist


the case study of usability testing

What is Exploratory Research, and How To Conduct It?

Read this blog to understand how exploratory research can help you uncover new insights, patterns, and hypotheses in a subject area.

the case study of usability testing

First Impressions & Why They Matter in User Research

Ever wonder if first impressions matter in user research? The answer might surprise you. Read on to learn more!

the case study of usability testing

Cluster Sampling: Definition, Types & Examples

Read this blog to understand how cluster sampling tackles the challenge of efficiently collecting data from large, spread-out populations.

the case study of usability testing

Top Six Market Research Trends in 2024

Curious about where market research is headed? Read on to learn about the changes surrounding this field in 2024 and beyond.

the case study of usability testing

Lyssna Alternative

Meet Qatalyst, your best lyssna alternative to usability testing, to create a solution for all your user research needs.

the case study of usability testing

What is Feedback Loop? Definition, Importance, Types, and Best Practices

Struggling to connect with your customers? Read the blog to learn how feedback loops can solve your problem!

the case study of usability testing

UI vs. UX Design: What’s The Difference?

Learn how UI solves the problem of creating an intuitive and visually appealing interface and how UX addresses broader issues related to user satisfaction and overall experience with the product or service.

the case study of usability testing

The Impact of Conversion Rate Optimization on Your Business

Understanding conversion rate optimization can help you boost your online business. Read more to learn all about it.

the case study of usability testing

Insurance Questionnaire: Tips, Questions and Significance

Leverage this pre-built customizable questionnaire template for insurance to get deep insights from your audience.

the case study of usability testing

UX Research Plan Template

Read on to understand why you need a UX Research Plan and how you can use a fully customizable template to get deep insights from your users!

the case study of usability testing

Brand Experience: What it Means & Why It Matters

Have you ever wondered how users navigate the travel industry for your research insights? Read on to understand user experience in the travel sector.

the case study of usability testing

Validity in Research: Definitions, Types, Significance, and Its Relationship with Reliability

Is validity ensured in your research process? Read more to explore the importance and types of validity in research.

the case study of usability testing

The Role of UI Designers in Creating Delightful User Interfaces

UI designers help to create aesthetic and functional experiences for users. Here's all you need to know about them.

the case study of usability testing

Top Usability Testing Tools to Try in 2024

Using usability testing tools can help you understand user preferences and behaviors and ultimately, build a better digital product. Here are the top tools you should be aware of.

the case study of usability testing

Understanding User Experience in Travel Market Research

Ever wondered how users navigate the travel industry for your research insights? Read on to understand user experience in the travel sector.

the case study of usability testing

Top 10 Customer Feedback Tools You’d Want to Try

Explore the top 10 customer feedback tools for analyzing feedback, empowering businesses to enhance customer experience.

the case study of usability testing

10 Best UX Communities on LinkedIn & Slack for Networking & Collaboration

Discover the significance of online communities in UX, the benefits of joining communities on LinkedIn and Slack, and insights into UX career advancement.

the case study of usability testing

The Role of Customer Experience Manager in Consumer Research

This blog explores the role of Customer Experience Managers, their skills, their comparison with CRMs, their key metrics, and why they should use a consumer research platform.

the case study of usability testing

Product Review Template

Learn how to conduct a product review and get insights with this template on the Qatalyst platform.

the case study of usability testing

What Is the Role of a Product Designer in UX?

Product designers help to create user-centric digital experiences that cater to users' needs and preferences. Here's what you need to know about them.

the case study of usability testing

Top 10 Customer Journey Mapping Tools For Market Research in 2024

Explore the top 10 tools in 2024 to understand customer journeys while conducting market research.

the case study of usability testing

Generative AI and its Use in Consumer Research

Ever wondered how Generative AI fits in within the research space? Read on to find its potential in the consumer research industry.

the case study of usability testing

All You Need to Know About Interval Data: Examples, Variables, & Analysis

Understand how interval data provides precise numerical measurements, enabling quantitative analysis and statistical comparison in research.

the case study of usability testing

How to Use Narrative Analysis in Research

Find the advantages of using narrative analysis and how this method can help you enrich your research insights.

A Guide to Asking the Right Focus Group Questions

Moderated discussions with multiple participants to gather diverse opinions on a topic.

the case study of usability testing

From Idea to Impact: Demystifying the Process of New Product Development

What are the stages to be undertaken during a new product development? Read all about it here.

the case study of usability testing

How to Conduct Agile UX Research?

Navigating the Agile landscape: A comprehensive guide to conducting Agile UX Research with AI-powered research platforms

the case study of usability testing

How Chief Product Officers Leverage User Research for Business Success

Understand the changing role of Chief Product Officers and how they should respond to evolving customer needs with user research.

the case study of usability testing

Top 10 Tree Testing Tools in 2024

This blog will help you pick the best tree testing tool for you that can be utilized to identify usability issues in the website or app navigation.

the case study of usability testing

Top 10 UX Design Trends in 2024

What are some of the top UX design trends that will be at the forefront in 2024? Read on to find out.

the case study of usability testing

From Vision to Execution: The Essential Role of Brand Strategists in Building Strong Brands

Brand strategists help to shape the identity, perception, and market positioning of a brand. Here’s everything you need to know about them.

the case study of usability testing

Conducting a Descriptive Research Design for Consumer Research

Discover the advantages of descriptive market research and why you should implement it to create an impact within your industry domain.

Maximize Your Research Potential

Experience why teams worldwide trust our Consumer & User Research solutions.

Book a Demo

the case study of usability testing

Are you an agency specialized in UX, digital marketing, or growth? Join our Partner Program

Learn / Guides / Usability testing guide

Back to guides

How to run moderated usability testing

When you're doing user testing, it's important to standardize the process so you end up with consistent and reliable results.

In this chapter, we take a deep dive into moderated testing—arguably, one of the most complex testing methods—with the help of four experienced moderators who share their five-step process and help you consider all the variables involved. You’re going to be in good hands!


Conducting a moderated usability test

At the beginning of your usability testing process, you'll need to decide what usability testing method is right for you based on:

Your research goals (what you want to achieve)

Your resources (how much time and money you can invest)

The audience you want to test

If you need a quick refresher, we've written a separate page about the usability testing methods and tools at your disposal.

One of the most thorough and in-depth methods for gaining user insight is moderated usability testing, during which a trained moderator observes the participants’ behaviors and interacts with them directly. This method is also one of the most complex ones because there are a lot of variables involved—from the location you pick to the moderator’s ability to get valuable answers from the participants.

To write the following section, we relied on the help of four veteran usability testers who talked us through their process to make sure you get the full picture: Els Aerts (founder of usability and conversion company AGConsult, with almost 20 years' experience in the field), our editor Fio (who has run over 200 usability testing sessions since 2016), and our product designers Craig and Daniel (who are in charge of running usability testing at Hotjar).

A 5-step process for usability testing

We’re going to use an e-commerce website as our example throughout the page, but the points below apply if you want to test a prototype, a non-transactional website, or a product.

Step 1: planning the session


Planning the details of the usability testing session is, in some ways, the most crucial part of the entire process. The decisions you make at the start of the testing process will dictate the way you proceed and the results you end up with.

Determine the nature of your study

Define the problems/areas you want to focus on: what is the purpose of the test? Which areas of your e-commerce website would benefit most from usability testing?

Type of users you want to test: typically, these are representative of your user personas, but you may want to drill down on a specific segment (e.g., users who have completed a purchase in the past 30 days).

Questions you want to ask: what are the specific questions you want to ask users about your website? What are you trying to find out? (Note: we go into this subject in depth in the usability testing questions chapter.)

Logistical details of your usability testing sessions

Location: will you do the testing in your office? At a research lab? Over the internet?

Timetable: when will you run the testing sessions? (This is particularly crucial if you are inviting participants to a research lab, because you'll also need to know when to book the lab.)

Moderators: who will run the testing sessions? As we will see below, moderating user testing without influencing the results requires skill and practice, so you should consider either hiring trained moderators or arranging training for yourself.

Recording setup: recording testing sessions gives you the chance to review them later and catch all kinds of data that the moderator might miss or not have time to record. If you decide to record video or audio, you'll need to be familiar with the equipment and its installation. Ideally, you want to record the participants' screens, their speech, and their body language, all of which you can easily do in a testing lab.

Collect all this information in one centralized place (bonus points for creating a one-page template you can reuse multiple times), and use it as your main guide towards the next steps: recruiting participants and designing the actual session.
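If it helps to make this concrete, here's a rough sketch of that one-page plan as a structured record. This is our own illustration, not a prescribed format; the fields and example values are invented:

```python
# A minimal, reusable structure for a usability test plan.
# Field names and example values are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    focus_area: str           # the problem or area under study
    user_segment: str         # the type of users you want to test
    research_questions: list  # what you want to find out
    location: str             # office, research lab, or remote
    timetable: str            # when the sessions run
    moderator: str            # who runs the sessions
    recording: list = field(default_factory=list)  # what you capture

plan = TestPlan(
    focus_area="Checkout flow of the e-commerce website",
    user_segment="Users who completed a purchase in the past 30 days",
    research_questions=["Can users find and apply the product filters?"],
    location="Remote (video call)",
    timetable="Week of 10 June, five 45-minute sessions",
    moderator="In-house, trained",
    recording=["screen", "audio", "webcam"],
)
print(plan)
```

Keeping the plan in a structured form like this makes it easy to duplicate and adapt for your next round of testing.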

Step 2: recruiting participants


Whom you recruit, and how, depends on your testing goals (for example, how much information you want and therefore how long your sessions need to be) and your budgetary constraints.

The most popular ways to find participants for your study:

Hire an agency: if you're looking for a very specific subsection of the population (like web-savvy oncologists, or single mothers under 35), the most efficient way to find them is to hire a specialized recruitment agency. These companies have vast resources for finding desirable candidates and can do so very efficiently.

Use your website: if you already have an established user base, recruit people there. Use a pop-up poll (Hotjar can help with this) to find users who are willing to participate; a quick sketch of this screening logic follows at the end of this step.

Note: this is why you first need to take step 1 and plan how you're going to run the test. An on-page poll can put you in touch with volunteers from all over the world, but if you're testing in a lab, only a tiny fraction of them will be able to join you.

Use social media: if you have a social media following, use your social channels to reach out to potential participants.

Recruit your clients: reach out to your clients/customers directly and ask if they would be willing to help (provided they've consented to being contacted for initiatives like this; you don't want to spam them unnecessarily!).

Pro insight from Fio: “you may think you are ‘bothering’ your customers and be hesitant to reach out, but I’ve often found that the opposite is true. People are generally flattered when you ask for their opinion, and genuinely curious to see how their thoughts can help you.”

However you recruit your participants, you'll need to compensate them for their efforts. Els recommends gift cards if you're based in the United States, and cash for most other parts of the world. The amount you pay is up to you; usually, anywhere between $30 and $50 for a non-specialist audience is acceptable.
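On the screening side, if you export your poll or survey responses, a few lines of scripting can shortlist the respondents who match your segment. The sketch below is a toy example; the respondent fields are invented, and this isn't a built-in Hotjar feature:

```python
# Toy screener: keep poll respondents who match the target segment.
# The respondent fields below are invented for illustration.
respondents = [
    {"name": "Ana",  "bought_last_30_days": True,  "can_attend_lab": False},
    {"name": "Ben",  "bought_last_30_days": True,  "can_attend_lab": True},
    {"name": "Cleo", "bought_last_30_days": False, "can_attend_lab": True},
]

def matches_segment(r, in_lab=True):
    # Remote tests only need the purchase criterion; in-lab tests
    # also need the participant to be able to travel to you.
    return r["bought_last_30_days"] and (r["can_attend_lab"] or not in_lab)

invitees = [r["name"] for r in respondents if matches_segment(r)]
print(invitees)  # ['Ben']
```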

Step 3: designing the task(s)


This step (designing the task) and the previous one (recruiting participants) really happen around the same time. Once you've worked out the why and how of your research, and while you wait for participant confirmation, it's time to design the test itself.

What this means: you're going to carefully plan the specific scenarios you'll take your participants through, and the tasks they'll be required to complete, so you end up with clear and actionable results. Els does this by writing specific scenarios that provide a context for the testing tasks. For example, let's say you're testing an e-commerce shop that sells clothes:

Make a scenario like: “You've been invited to a theme party, and the theme is red. Everybody has to come dressed all in red. You look in your wardrobe, and you don’t have anything that would work. Well, it’s time to buy something red. How do you go about it?” So now your participants will go to the website, and you know that there is a product filter for ‘red’. Can they find the filter easily? Do they know they can use filters? Et cetera, et cetera. Then, you can just watch them use (or not use) the red filter.

This scenario allows you to test the participants’ ability to use the filters on your website and is open-ended enough to apply to anyone's preferences. You want to avoid using scenarios that are too specific (e.g., asking a participant to pick out a milk foamer when they may not even like milk in their coffee).

Another pro tip: when designing scenarios, keep the most important functions of the website in mind. For an e-commerce website, the top task is usually buying something, so you would probably want to include a scenario that nudges the user through the purchasing process. Els recommends giving the tester real money to spend during the user test. “When they're spending their own money, they get a lot more critical,” she says. “We sometimes have to interrupt people, because they will happily spend 45 minutes choosing the right pair of shoes—much like they would do in real life.”
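If you like keeping your scenarios in a structured format, each one can carry its context alongside the observable signals you plan to watch for. Here's a minimal sketch under our own assumptions; the structure is just one way to stay organized, not a standard:

```python
# Each scenario pairs a realistic context with the signals of success
# you'll watch for. The content below is invented for illustration.
scenarios = [
    {
        "context": "You've been invited to a theme party where everybody "
                   "has to dress all in red, and you own nothing red.",
        "task": "Find and buy something red to wear.",
        "signals": [
            "Discovers the colour filter unprompted",
            "Applies the 'red' filter",
            "Reaches the checkout page",
        ],
    },
]

for s in scenarios:
    print(f"{s['task']} ({len(s['signals'])} signals to observe)")
```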

Step 4: running the session


When it’s time to conduct the usability testing session, you or your moderators should follow a set protocol with each participant. This protocol leaves some room for customization but still guarantees an overall standardized experience for each test subject.

Let’s assume you are running the session yourself: here is what you do.

Introductions and warm-up

If you’re doing an in-person test, make sure your participant is physically comfortable with the testing setup (chair, desk height, mouse placement, etc.) and that they understand what's going to happen during the session; if you’re doing it remotely, make sure they can hear you properly.

If you are recording the session (for example, because you want to review it later), ask for their permission at this point; if you are running in-person testing, you may even have a printed-out consent form for them to sign.

To get your subject to loosen up, ask them some friendly conversational questions, such as how far they’ve traveled to get to the lab, if they’ve done user testing before, etc. For Craig, “the important thing for a new interviewee coming in is to feel relaxed and comfortable in the testing environment. A big goal of the first few minutes is getting to know each other, build rapport. This helps you make a smooth transition into the testing phase, when the tester hopefully doesn't even really realize that you're shifting gears, as you start to collect more specific information from them.”

Collect pre-testing data

During your conversation, collect demographic and psychographic information using predetermined questions. In the case of your e-commerce test, you might want to ask:

- When they last shopped online

- How often they've bought something online in the past six months

- How they generally go about finding the products they want

- What influences their buying decisions

Transition into the first task

Use the rapport you've built to transition the participant into the first testing task. You would usually have three or four scenarios you want to go through, but the order in which you complete them may depend on your participant's mood and skill level. This is where it pays off to be a trained moderator. Fio notes: “you must be able to sense if your participants are getting frustrated, which may indicate you need to switch them to an easier task to build their confidence; or you may also find the super-skilled participant who completes the task in a really short time, which is where you need to be good at probing and investigating why they did what they did.”

Taking notes

In an ideal scenario, you have a second person taking notes for you so you can be 100% focused on the relationship with your participant; when Daniel and Craig run user tests, they record the sessions and get them transcribed so they can later go through them and highlight relevant parts or sentences.
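If you go the record-and-transcribe route, even a simple tagging pass makes the later review manageable. Below is a toy sketch of grouping highlighted moments by tag; the timestamps, tags, and quotes are all invented, and this isn't the exact system Daniel and Craig use:

```python
# Toy transcript tagging: group noteworthy moments by tag for review.
# Timestamps, tags, and quotes are invented for illustration.
from collections import defaultdict

highlights = [
    ("03:12", "confusion", "Hmm... where would the filters be?"),
    ("07:45", "confusion", "I expected the colour option on the left."),
    ("12:30", "delight",   "Oh nice, it remembered my size."),
]

by_tag = defaultdict(list)
for timestamp, tag, quote in highlights:
    by_tag[tag].append((timestamp, quote))

for tag, moments in by_tag.items():
    print(f"{tag}: {len(moments)} moment(s)")
```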

Follow-up questions and wrap-up

Reserve some time at the end of the session to ask any follow-up questions and collect the participant's final feedback. Be sure to thank them for their help.

Dos and don'ts of session moderation

There's an art to running a moderated test session that involves establishing a rapport with the subject and naturally guiding them through the tasks. Our four veteran testers gave us their top tips for being an effective moderator:

Do use clear, neutral instructions. You have to make absolutely sure that your question is not open to interpretation.

Don't write the tasks down—or, at the very least, don’t follow the task list verbatim, as it can give the proceedings too much formality. The participant will feel more at ease if you personalize the task and your wording based on the context.

Do watch for verbal cues and body language. Sometimes users won't explicitly say they are confused, but a skilled moderator can tell by their actions. An example would be someone who was previously silent saying “Hmm . . .” or sighing in frustration.

Don't speak too much. You want to interfere with the user's thought processes as little as possible; set the task, ask them to say out loud what they're thinking and doing, and quietly observe how they go about it.

Do keep an even tone. Don't agree or disagree with the user too much; doing so might influence their final opinions (as in: they might try to ‘please’ you and tell you what they think you want to hear). Try to be as neutral as possible with your speech and body language.

Don't take control of the task. As soon as the test starts, the user should be in total control. Never take their mouse or navigate a step for them.

Do know the best ways to interject. If you do need to interrupt or answer a question, deflect as much as you can by using the echo, boomerang, or Columbo techniques that are explained in depth in the testing questions chapter.

Don't look at their screen too much. Sometimes, observing the user too closely can influence their behavior. “I tend to just pretend I'm writing something,” Els says.

Step 5: analyzing the insights


Finally, after you've collected all your data, it's time to analyze the results and draw conclusions. Try to do this as soon as possible after testing, while the observations are fresh in your mind.

As you go over the data, pull out the most serious or frequent problems that users encountered for further examination. Daniel uses a color-coded system when he reviews his transcripts, so he can group similar situations together. 

Don't address every single thing that went wrong; instead, prioritize the issues that were most problematic and need to be workshopped and resolved.
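One lightweight way to do that prioritization is to score each issue by how many participants ran into it and how severe it was. The sketch below uses invented issues and an arbitrary severity scale of 1 to 3, so treat it as a starting point rather than a rule:

```python
# Rank observed issues by frequency x severity (invented data).
issues = [
    {"issue": "Colour filter not noticed",  "participants": 5, "severity": 3},
    {"issue": "Size guide link overlooked", "participants": 2, "severity": 2},
    {"issue": "Typo on confirmation page",  "participants": 1, "severity": 1},
]

for i in issues:
    i["priority"] = i["participants"] * i["severity"]

for i in sorted(issues, key=lambda x: x["priority"], reverse=True):
    print(f"{i['priority']:>3}  {i['issue']}")
```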

Read more: see the chapter on analyzing and evaluating usability testing data for a straightforward, five-step approach.


  • Skip to main content
  • Keyboard shortcuts for audio player

Shots - Health News

  • Your Health
  • Treatments & Tests
  • Health Inc.
  • Public Health

Shots - Health News

Mammograms have pros and cons. women can handle the nuance, study argues.

Ronnie Cohen

An African American woman is shown getting a mammogram. She is photographed from behind, so we see the back of her head and body as she stands facing a large X-ray machine. A health care professional wearing pink scrubs positions the woman in the machine.

The most recent recommendation of the U.S. Preventive Services Task Force is that all women 40 to 74 get mammograms every other year. A previous recommendation said screening should start at 50. One doctor suggests that people "test smarter, not test more." Heather Charles/Tribune News Service via Getty Images hide caption

New research makes the case for educating women in their 40s — who've been caught in the crossfire of a decades-long debate about whether to be screened for breast cancer with mammograms — about the harms as well as the benefits of the exam.

After a nationally representative sample of U.S. women between the ages of 39 and 49 learned about the pros and cons of mammography, more than twice as many elected to wait until they turn 50 to get screened, a study released Monday in the Annals of Internal Medicine found.

Most women have absorbed the widely broadcast message that screening mammography saves lives by the time they enter middle age. But many remain unaware of the costs of routine screening in their 40s — in false-positive results, unnecessary biopsies, anxiety and debilitating treatment for tumors that left alone would do no harm.

"In an ideal world, all women would get this information and then get to have their further questions answered by their doctor and come up with a screening plan that is right for them given their preferences, their values and their risk level," said social psychologist Laura Scherer , the study's lead author and an associate professor of research at the University of Colorado School of Medicine.

Of 495 women surveyed, only 8% initially said they wanted to wait until they turned 50 to get a mammogram. After researchers informed the women of the benefits and the harms, 18% said they would wait until 50.

"We're not being honest"

Learning about the downsides of mammograms did not discourage women from wanting to get the test at some point, the study showed.

The benefits and the harms of mammography came as a surprise to nearly half the study's participants. More than one-quarter said what they learned from the study about overdiagnosis differed from what their doctors told them.

"We're not being honest with people," said breast cancer surgeon Laura Esserman , director of the University of California San Francisco Breast Care Center, who was not involved with the research.

"I think most people are completely unaware of the risks associated with screening because we've had 30, 40 years of a public health messaging campaign: Go out and get your mammogram, and everything will be fine," she said in an NPR interview.

What is the breast cancer calculator actor Olivia Munn suggested after her mastectomy?

What is the breast cancer calculator actor Olivia Munn suggested after her mastectomy?

Esserman sees women who are diagnosed with slow-growing tumors that she believes in all likelihood would never harm them. In addition, mammography can give women a false sense of security, she said, like it did for Olivia Munn .

The 44-year-old actress had a clean mammogram and a negative test for cancer genes shortly before her doctor calculated her score for lifetime breast cancer risk, setting off an alarm that led to her being treated for fast-moving, aggressive breast cancer in both breasts.

Toward a personalized plan for screening

Esserman advocates for a personalized approach to breast cancer screening like the one that led to Munn's diagnosis. In 2016, she launched the WISDOM study , which aims to tailor screening to a woman's risk and, in her words, "to test smarter, not test more."

The National Cancer Institute estimates that more than 300,000 women will be diagnosed with breast cancer and 42,250 will die in the U.S. this year. Incidence rates have been creeping up about 1% a year, while death rates have been falling a little more than 1% a year.

We're not dying of metastatic breast cancer. We're living with it

We're not dying of metastatic breast cancer. We're living with it

For the past 28 years, the influential U.S. Preventive Services Task Force has been flip-flopping in its recommendations about when women should begin mammography screening.

From 1996 until 2002, the independent panel of volunteer medical experts who help guide physicians, insurers and policymakers said women should begin screening at 50. In 2002, the task force said women in their 40s should be screened every year or two. In 2009, it said that 40-something women should decide whether to get mammograms based on their health history and individual preferences.

The new study was conducted in 2022, when the task force guidelines called for women in their 40s to make individual decisions.

New guidelines

In 2024, the panel returned to saying that all women between the ages of 40 and 74 should be screened with mammograms every other year. Rising breast cancer rates in younger women, as well as models showing the number of lives that screening might save, especially among Black women, drove the push for earlier screening.

An editorial accompanying the new study stresses the need for education about mammography and the value of shared decision-making between clinicians and patients.

"For an informed decision to be made," states the editorial written by Dr. Victoria Mintsopoulos and Dr. Michelle B. Nadler, both of the University of Toronto in Ontario, "the harms of overdiagnosis — defined as diagnosis of asymptomatic cancer that would not harm the patient in the future — must be communicated."

  • breast cancer
  • mammography
  • Introduction
  • Conclusions
  • Article Information

Selection criteria at reference date included being aged 52 to 85 y during 2011 to 2017, 5 or more years prior enrollment in the health plan, and no prior gastrointestinal cancers, colectomy, inflammatory bowe disease, or genetic colorectal cancer syndromes in the 10 years prior to cohort entry.

a The date of colorectal cancer diagnosis was set as the reference and if unknown, the death date was used (32 participants).

b Matching was based on age, sex, health plan membership duration, and geographic region in a 1:8 ratio of case to control persons. Geographic regions were determined based on the medical center or facility in which a person received most of their care or were assigned for care by the health plan.

c The location was defined using the splenic flexure such that cancers that form in the colon proximal to the splenic flexure were classified as right colon cancers and those distal were left colon and rectum.

eMethods. Screening History

eTable 1. Frequency of Fecal Immunochemical Test (FIT) Screening Among Case and Control Persons Over the 10-Year Ascertainment Window

eFigure 1. Plot of Screening Prevalence by Years From Reference Date During the 10-Year Period Among Control Persons

eTable 2. Characteristics of Study Population of People 52-85 Years Excluding People With FIT Exposure Prior to the Reference Date, KPNC/KPSC 2011-2017

eTable 3. Association Between Completion of Mailed FIT and Risk of Colorectal Cancer Death Overall and by Location Excluding People Who Received FIT Prior to the 5-Year Window

eTable 4. Association Between Screening FIT and Risk of Death From Colorectal Cancer According to Race and Ethnicity, Excluding People With Prior FIT Exposure

eFigure 2. Plot of the Association Between FIT Screening and Death From Colorectal Cancer Using Differing Lookback Periods and Inclusion Criteria Based on Age and Prior Screening

Data Sharing Statement

See More About

Sign up for emails based on your interests, select your interests.

Customize your JAMA Network experience by selecting one or more topics from the list below.

  • Academic Medicine
  • Acid Base, Electrolytes, Fluids
  • Allergy and Clinical Immunology
  • American Indian or Alaska Natives
  • Anesthesiology
  • Anticoagulation
  • Art and Images in Psychiatry
  • Artificial Intelligence
  • Assisted Reproduction
  • Bleeding and Transfusion
  • Caring for the Critically Ill Patient
  • Challenges in Clinical Electrocardiography
  • Climate and Health
  • Climate Change
  • Clinical Challenge
  • Clinical Decision Support
  • Clinical Implications of Basic Neuroscience
  • Clinical Pharmacy and Pharmacology
  • Complementary and Alternative Medicine
  • Consensus Statements
  • Coronavirus (COVID-19)
  • Critical Care Medicine
  • Cultural Competency
  • Dental Medicine
  • Dermatology
  • Diabetes and Endocrinology
  • Diagnostic Test Interpretation
  • Drug Development
  • Electronic Health Records
  • Emergency Medicine
  • End of Life, Hospice, Palliative Care
  • Environmental Health
  • Equity, Diversity, and Inclusion
  • Facial Plastic Surgery
  • Gastroenterology and Hepatology
  • Genetics and Genomics
  • Genomics and Precision Health
  • Global Health
  • Guide to Statistics and Methods
  • Hair Disorders
  • Health Care Delivery Models
  • Health Care Economics, Insurance, Payment
  • Health Care Quality
  • Health Care Reform
  • Health Care Safety
  • Health Care Workforce
  • Health Disparities
  • Health Inequities
  • Health Policy
  • Health Systems Science
  • History of Medicine
  • Hypertension
  • Images in Neurology
  • Implementation Science
  • Infectious Diseases
  • Innovations in Health Care Delivery
  • JAMA Infographic
  • Law and Medicine
  • Leading Change
  • Less is More
  • LGBTQIA Medicine
  • Lifestyle Behaviors
  • Medical Coding
  • Medical Devices and Equipment
  • Medical Education
  • Medical Education and Training
  • Medical Journals and Publishing
  • Mobile Health and Telemedicine
  • Narrative Medicine
  • Neuroscience and Psychiatry
  • Notable Notes
  • Nutrition, Obesity, Exercise
  • Obstetrics and Gynecology
  • Occupational Health
  • Ophthalmology
  • Orthopedics
  • Otolaryngology
  • Pain Medicine
  • Palliative Care
  • Pathology and Laboratory Medicine
  • Patient Care
  • Patient Information
  • Performance Improvement
  • Performance Measures
  • Perioperative Care and Consultation
  • Pharmacoeconomics
  • Pharmacoepidemiology
  • Pharmacogenetics
  • Pharmacy and Clinical Pharmacology
  • Physical Medicine and Rehabilitation
  • Physical Therapy
  • Physician Leadership
  • Population Health
  • Primary Care
  • Professional Well-being
  • Professionalism
  • Psychiatry and Behavioral Health
  • Public Health
  • Pulmonary Medicine
  • Regulatory Agencies
  • Reproductive Health
  • Research, Methods, Statistics
  • Resuscitation
  • Rheumatology
  • Risk Management
  • Scientific Discovery and the Future of Medicine
  • Shared Decision Making and Communication
  • Sleep Medicine
  • Sports Medicine
  • Stem Cell Transplantation
  • Substance Use and Addiction Medicine
  • Surgical Innovation
  • Surgical Pearls
  • Teachable Moment
  • Technology and Finance
  • The Art of JAMA
  • The Arts and Medicine
  • The Rational Clinical Examination
  • Tobacco and e-Cigarettes
  • Translational Medicine
  • Trauma and Injury
  • Treatment Adherence
  • Ultrasonography
  • Users' Guide to the Medical Literature
  • Vaccination
  • Venous Thromboembolism
  • Veterans Health
  • Women's Health
  • Workflow and Process
  • Wound Care, Infection, Healing

Get the latest research based on your areas of interest.

Others also liked.

  • Download PDF
  • X Facebook More LinkedIn

Doubeni CA , Corley DA , Jensen CD, et al. Fecal Immunochemical Test Screening and Risk of Colorectal Cancer Death. JAMA Netw Open. 2024;7(7):e2423671. doi:10.1001/jamanetworkopen.2024.23671

Manage citations:

© 2024

  • Permissions

Fecal Immunochemical Test Screening and Risk of Colorectal Cancer Death

  • 1 Department of Family and Community Medicine, The Ohio State University College of Medicine, Columbus
  • 2 Center for Health Equity, The Ohio State University Wexner Medical Center, Columbus
  • 3 Division of Research, Kaiser Permanente Northern California, Oakland
  • 4 Department of Health Systems Science, Kaiser Permanente Bernard Tyson School of Medicine, Pasadena, California.
  • 5 Department of Quality and Systems of Care, Kaiser Permanente Southern California, Pasadena
  • 6 Department of Research & Evaluation, Kaiser Permanente Southern California, Pasadena
  • 7 Center for Primary Care and Public Health (Unisanté), University of Lausanne, Lausanne, Switzerland
  • 8 Mayo Clinic, Phoenix, Arizona
  • 9 Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, New York
  • 10 Department of Population Medicine, Harvard Medical School, Boston, Massachusetts
  • 11 Department of Epidemiology, University of Washington, Seattle

Question   What is the colorectal cancer mortality benefit of screening with fecal immunochemical tests (FITs)?

Findings   In this nested case control study of 10 711 individuals, completing a FIT to screen for colorectal cancer was associated with a reduction in risk of dying from colorectal cancer of approximately 33% overall, and there was a 42% lower risk for left colon and rectum cancers. FIT screening was also associated with lower risk of colorectal cancer death among non-Hispanic Asian, non-Hispanic Black, and non-Hispanic White people.

Meaning   This study provides US community-based evidence that suggests FIT screening lowers the risk of dying from colorectal cancer and supports the strategy of population-based screening using FIT.

Importance   The fecal immunochemical test (FIT) is widely used for colorectal cancer (CRC) screening, but evidence of its effectiveness is limited.

Objective   To evaluate whether FIT screening is associated with a lower risk of dying from CRC overall, according to cancer location, and within demographic groups.

Design, Setting, and Participants   This nested case-control study in a cohort of screening-eligible people was conducted in 2 large, integrated health systems of racially, ethnically, and socioeconomically diverse members with long-term programs of mailed FIT screening outreach. Eligible participants included people aged 52 to 85 years who died from colorectal adenocarcinoma between 2011 and 2017 (cases); cases were matched in a 1:8 ratio based on age, sex, health-plan membership duration, and geographic area to randomly selected persons who were alive and CRC-free on case’s diagnosis date (controls). Data analysis was conducted from January 2002 to December 2017.

Exposures   Completing 1 or more FIT screenings in the 5-year period prior to the CRC diagnosis date among cases or the corresponding date among controls; in secondary analyses, 2- to 10-year intervals were evaluated.

Main Outcomes and Measures   The primary study outcome was CRC death overall and by tumor location. Secondary analyses were performed to assess CRC death by race and ethnicity.

Results   From a cohort of 2 127 128 people, a total of 10 711 participants (3529 aged 60-69 years [32.9%]; 5587 male [52.1%] and 5124 female [47.8%]; 1254 non-Hispanic Asian [11.7%]; 973 non-Hispanic Black [9.1%]; 1929 Hispanic or Latino [18.0%]; 6345 non-Hispanic White [59.2%]) was identified, including 1103 cases and 9608 controls. Among controls during the 10-year period prior to the reference date, 6101 (63.5%) completed 1 or more FITs with a cumulative 12.6% positivity rate (768 controls), of whom 610 (79.4%) had a colonoscopy within 1 year. During the 5-year period, 494 cases (44.8%) and 5345 controls (55.6%) completed 1 or more FITs. In regression analysis, completing 1 or more FIT screening was associated with a 33% lower risk of death from CRC (adjusted odds ratio [aOR], 0.67; 95% CI, 0.59-0.76) and 42% lower risk in the left colon and rectum (aOR, 0.58; 95% CI, 0.48-0.71). There was no association with right colon cancers (aOR, 0.83; 95% CI, 0.69-1.01) but the difference in the estimates between the right colon and left colon or rectum was statistically significant ( P  = .01). FIT screening was associated with lower CRC mortality risk among non-Hispanic Asian (aOR, 0.37; 95% CI, 0.23-0.59), non-Hispanic Black (aOR, 0.58; 95% CI, 0.39-0.85) and non-Hispanic White individuals (aOR, 0.70; 95% CI, 0.57-0.86) ( P for homogeneity = .04 for homogeneity).

Conclusions and Relevance   In this nested case-control study, completing FIT was associated with a lower risk of overall death from CRC, particularly in the left colon, and the associations were observed across racial and ethnic groups. These findings support the use of FIT in population-based screening strategies.

Colorectal cancer (CRC) is a major contributor to cancer deaths worldwide, and an estimated 152 810 individuals in the US received a diagnosis of CRC and 53 010 died from it in 2024. 1 The US Preventive Services Task Force and other US organizations recommend annual fecal immunochemical test (FIT) screening among average-risk individuals to reduce the risk of death from CRC. 2 , 3 FIT is simple to use, usually completed at home without the need for an in-person visit, and can be analyzed in a standardized way. FIT is more sensitive for both CRC and adenomas than guaiac-based fecal occult blood tests (g-FOBT) while being highly specific, 4 - 7 and g-FOBT screening has also been shown to reduce the risk of CRC mortality. 4

FIT screening programs have reported reduced CRC incidence and mortality, 8 , 9 but further evidence on effectiveness is limited. Observational studies of biennial FIT screening in Europe and Taiwan have compared CRC mortality risk between screened and unscreened individuals or people invited vs not invited to screen in people aged 50 to 65 or 50 to 71 years. 10 - 13 However, those studies did not verify eligibility in all individuals and/or used incidence-based mortality rates that is subject to lead-time bias. Current trials of FIT have limited power, 14 and/or are not designed to compare FIT screening with unscreened individuals. 15 This contrasts with observational and randomized clinical trial evidence on g-FOBT, sigmoidoscopy, and colonoscopy. 4 , 16 - 19 Also, there are reasons to believe that FIT effectiveness may vary according to colon site 20 - 22 and by race and ethnicity given differences in social and structural barriers that influence care quality across the screening continuum. 2 , 23

We previously reported improved CRC incidence and mortality rates and narrowing of racial disparities in the Kaiser Permanente Northern California (KPNC) FIT-based screening program. 8 , 9 This study examined whether completing FIT screening is associated with a lower risk of death from CRC overall, according to location in the colon, and by race and ethnicity. The approaches used in this study have previously generated findings that approximated randomized clinical trial results on sigmoidoscopy and g-FOBT. 24 , 25

This was a nested case-control study conducted among members of KPNC and KP Southern California (KPSC). The study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline for case-control studies. KPNC and KPSC are multicenter, integrated, community-based health systems that provide both health insurance and clinical care to large, socially and geographically diverse populations. Both KPNC and KPSC implemented CRC screening programs starting in 2006 and 2007 that use proactive outreach with FIT (OC-AUTO FIT [Polymedco]) for all screening-eligible members who are not up to date by other means, such as colonoscopy. The health systems have standardized processes for delivering care across the screening continuum, from identifying eligible people to treatment for detected cancers, thusly facilitating the evaluation of screening test outcomes. All persons with a positive FIT are referred for follow-up colonoscopy and tracked to clinical resolution. Colonoscopy for primary screening is performed on request through referral to gastroenterology. 26 , 27 The study was deemed exempt for review and the requirement of informed consent by the institutional review boards at The Ohio State University, KPNC and KPSC.

The study population included adults aged 52 to 85 years with at least 5 years of health plan membership prior to a reference date, which was the date of diagnosis for people who died of adenocarcinoma of the colon or rectum as the immediate or underlying cause between January 1, 2011, and December 31, 2017, as ascertained from tumor registry and state mortality files 25 , 28 , 29 or a comparable reference date for selecting control persons. There was reasonable agreement in cause of death information among chart audits, registries, and the death certificate. 29

The age criterion was based on screening guidelines during the study period, which recommended initiating screening at age 50 years and allowed 2 years (through age 52 years) for people to have opportunities to initiate screening. Extension to age 85 years was based on consideration of potential ongoing screening exposure and potential lagged (up to 5 years or longer) FIT screening mortality benefits. 2 We used administrative codes in electronic databases to exclude people with a history of colectomy or increased-risk conditions for CRC, including gastrointestinal cancers, inflammatory bowel disease, inheritable genetic syndromes, or family history of CRC. 29 , 30

Each case patient was individually matched using an incidence-density approach to 8 randomly selected people who were alive and not known to have CRC at the index date based on birth year (±1 year), sex, health plan membership duration prior to diagnosis (±1 year), and medical center and geographic region ( Figure ). 20 , 24 The 1:8 matching ratio enhances statistical power. This approach enabled comparable periods of screening eligibility among case and control persons prior to the date of CRC diagnosis. To focus on the comparison of FIT screening exposure to those unscreened, we excluded people who had colonoscopy as the primary (or initiating) screening test during the 10-year period ( Figure ) but examined FIT to colonoscopy (485 participants) or sigmoidoscopy (51 participants) crossovers in sensitivity analyses. For more information on screening, see the eMethods in Supplement 1 .

Electronic databases that are derived from the electronic health record and administrative databases were used to obtain patients’ birth date, sex, race, ethnicity, health plan membership information, place of residence, and clinical information. Race and ethnicity data in the electronic health records are based on self-report (but may occasionally be assigned by an observer) and classified according to categories consistent with the US Office of Management and Budget 1997 Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity. Consideration of race and ethnicity as a covariate was planned a priori , given differing social and economic experiences, colorectal adenoma prevalence in some studies, and CRC mortality risk. 2 , 23 , 31 , 32 Socioeconomic status (SES) was assessed using data from the US Census Bureau at the tract level close to the midpoint of case ascertainment (2015). 33 , 34 Clinical information included health care utilization, the clinician specialty and health care facilities visited, testing history, and diagnoses and procedures. 29 Cancer diagnosis date, tumor location, and histologic examination findings were obtained from surveillance, epidemiology and end results program–affiliated tumor registries. 29

Exposure to FIT was defined as completing 1 or more FITs with a valid laboratory result for screening prior to the reference date. FIT is primarily delivered by mail using systematic approaches. A positive result is based on a cutoff level of 20 or more μg of hemoglobin per gram of stool. We only included completed FITs documented as being performed in an outpatient setting. For each person included in the study, data on completion of FIT and other screening tests, including colonoscopy and cancer-related symptoms or signs, were ascertained during the 10-year period preceding the reference date. Electronic data were obtained on medical diagnoses, imaging studies, gastrointestinal endoscopies, and laboratory studies such as FIT. 25 , 28 , 29 FITs preceded by colonoscopy and those performed in people with documented CRC-associated signs or symptoms were classified as nonscreening. We also applied a previously validated electronic algorithm as part of a multistep process to classify testing indication as diagnostic (eg, performed in people with iron deficiency anemia), surveillance, screening, or unknown. 28 , 29 The algorithm used electronic data to assign indication as screening after excluding other indications. Thus, we selected people classified as receiving screening or surveillance colonoscopy along with a sample of people completing FIT for chart audit. 24 , 25 , 35

We considered matching variables (including study site and geographic regions) as covariates. Race and ethnicity were categorized as non-Hispanic Asian, non-Hispanic Black, Hispanic or Latino, non-Hispanic White, or other (defined as Native American and Alaskan Native, multiracial and/or multiethnic, or unknown race and ethnicity). SES was measured using the percentage of people aged 25 years or older in a census tract without a high school diploma and categorized according to quartiles. 33 , 34 Healthcare utilization was evaluated to assess health-seeking behavior by enumerating and categorizing (according to quartiles) the number of outpatient primary care clinician encounters in the 5 years prior to the reference date. The Charlson comorbidity score 36 (0, 1, or ≥2) was used as a proxy of wellness to undergo screening. To evaluate whether the association of FIT screening with mortality varied across colon locations, we categorized cancers located proximal to the splenic flexure as right colon, those in the splenic flexure and distal locations as left colon and rectum, and others as not specified.

The primary analysis examined the association of completing FIT screening during the 5-year period prior (beginning from January 2007 through January 2013) to the reference date with the risk of CRC death during 2011 to 2017. In all analyses, people who did not receive any recommended CRC screening tests or received a CRC test for an indication other than screening during the relevant observation window served as the reference category. We also considered both shorter and longer intervals prior to the reference date during which FIT plausibly could identify preclinical cancer or precancer (ie, detectable preclinical period [DPP]). 20 , 25 , 37 Because of the relatively high sensitivity of FIT for identifying advanced adenomas 4 - 7 , 38 and because there is a greater threat to validity from underestimating than from overestimating the DPP duration, 39 our analyses used a 5-year window as the primary approach. 20

Analyses were performed overall (primary analysis) and by colon and rectum location. Secondary analyses were performed by race and ethnicity because protection from FIT depends on receiving follow-up colonoscopy for abnormal test result and social factors influence follow-up colonoscopy completion. 40

In sensitivity analyses, we excluded people who had completed FIT screening before the relevant window due to the potential that people with prior negative FIT tests may represent a lower-risk population. Thus, in these additional analyses using the 5-year ascertainment window, we excluded people who completed FIT more than 5 years prior to the reference date. We also examined restrictions to people aged 55 years and older to maximize the available lookback period.

We evaluated both conditional and unconditional logistic regression analyses; given these yielded similar estimates, unconditional models were used to retain control persons for whom matched case persons were ineligible. The models were adjusted for matching variables (ie, age, sex, health plan membership duration, and geographic region), race and ethnicity, SES, comorbidity score, and wellness visits. All analyses were performed using the Stata statistical software version 18.0 (StataCorp). The threshold for significance was a 2-sided P  < .05. Data analysis occurred from January 2002 to December 2017.

From an underlying population of 2 127 128 members during 2011 to 2017, we identified 1279 patients who died of CRC and 10 226 matched CRC-free persons ( Figure ). We excluded 129 case persons with a non-adenocarcinoma histologic profile and 24 case and 45 control persons with missing age information. We also excluded 23 case and 573 control persons who had only screening colonoscopy, resulting in a final study sample of 10 711 patients (3529 aged 60-69 years [32.9%]; 5587 male [52.1%] and 5124 female [47.8%]; 1254 non-Hispanic Asian [11.7%]; 973 non-Hispanic Black [9.1%]; 1929 Hispanic or Latino [18.0%]; 6345 non-Hispanic White [59.2%], 210 other race or ethnicity [2.0%]), including 1103 case and 9608 control patients, of whom 1109 did not have a matched case ( Figure ). Among those included in the analyses, there were fewer health care visits among case than control persons ( Table 1 ).

During the 10-year period prior to the reference date, among control persons, 6101 (63.5%) completed at least 1 FIT screening, 4404 (45.8%) completed 2 or more FITs (eTable 1 in Supplement 1 ), and the FIT screening prevalence was relatively stable in 5 years preceding the reference date (eFigure 1 in Supplement 1 ). The cumulative FIT positive rate among control persons was 12.6% (768 controls), of whom 610 (79.4%) had colonoscopy within 12 months of the result date; the corresponding positivity and follow-up colonoscopy rates among control persons with ascertainment restricted to the 5-year period before the reference date were 10.6% (562 controls) and 76.8% (431 controls), respectively.

During the 5 years prior to the reference date, 494 case persons (44.8%) and 5345 control persons (55.6%) completed 1 or more FIT screenings ( Table 2 ). In unconditional logistic regression analyses, completing FIT screening was associated with a 33% lower risk of death from overall CRC (adjusted odds ratio [aOR], 0.67; 95% CI, 0.59-0.76) ( Table 2 ).

In stratified analyses, there was no statistically significant difference in CRC for right colon cancers (aOR, 0.83; 95% CI, 0.69-1.01), but there was a significant 42% lower risk of death for left colon and rectum cancers (aOR, 0.58; 95% CI, 0.48-0.71), and the difference in the estimates between the right colon and left colon or rectum was statistically significant ( P  = .01). In analyses stratified by race and ethnicity, completed FIT screenings were associated with a 63% lower risk of death for non-Hispanic Asian individuals (aOR, 0.37; 95% CI, 0.23-0.59), 42% lower risk among non-Hispanic Black individuals (aOR, 0.58; 95% CI, 0.39-0.85), and 29% lower risk among non-Hispanic White individuals (aOR, 0.71; 95% CI, 0.60-0.83). There was a 22% lower risk of death among Hispanic or Latino individuals, but this finding was not significant (aOR, 0.78; 95% CI, 0.57-1.08). There was statistically significant heterogeneity of the estimates across the groups ( P for heterogeneity = .04) ( Table 3 ).

In analyses that excluded people with exposure to FIT prior to the 5-year period (892 cases and 7144 controls; eTable 2 in Supplement 1), FIT exposure was associated with a significant 31% lower risk of death from overall CRC (aOR, 0.69; 95% CI, 0.60-0.81) and left colon or rectum cancers (aOR, 0.69; 95% CI, 0.55-0.87). The difference in risk for right colon cancers was not significant (aOR, 0.81; 95% CI, 0.65-1.01) (eTable 3 in Supplement 1). Estimates stratified by race and ethnicity were similar to the unrestricted analysis (eTable 4 in Supplement 1).

Findings were robust to excluding crossovers to colonoscopy or sigmoidoscopy (aOR for overall CRC death, 0.67; 95% CI, 0.58-0.76) and to excluding people younger than 55 years at death (eFigure 2 in Supplement 1). Sensitivity analyses with differing windows for FIT exposure ascertainment and differing age criteria produced results for overall CRC mortality similar to those of the primary analysis (eFigure 2 in Supplement 1). For instance, in analyses using the entire 10-year (rather than 5-year) observation period prior to the reference date, completing 1 or more FIT screenings was associated with a lower risk of death from CRC overall (aOR, 0.66; 95% CI, 0.58-0.75).
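
A window-sensitivity analysis of this kind can be expressed as a loop that redefines the exposure indicator and refits the same adjusted model. As before, the file and column names are hypothetical, and this is a sketch of the general technique rather than the study's code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("analytic_file.csv")  # hypothetical, as in the earlier sketch
    formula = (
        "crc_death ~ fit_screen + age + C(sex) + membership_years"
        " + C(region) + C(race_ethnicity) + C(ses_quintile)"
        " + comorbidity_score + wellness_visits"
    )

    # Redefine FIT exposure over different ascertainment windows and refit.
    for window_years in (3, 5, 10):
        df["fit_screen"] = (df["years_since_last_fit"] <= window_years).astype(int)
        fit = smf.logit(formula, data=df).fit(disp=0)
        print(window_years, round(float(np.exp(fit.params["fit_screen"])), 2))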

In this nested case-control study, we found that completing 1 or more FIT screenings within the prior 5 years was associated with a 33% lower risk of death from colorectal adenocarcinoma. The reduction in mortality risk was significant for those with left colon or rectum cancers (42%). The results are broadly similar to those obtained in randomized and nonrandomized studies of the association of g-FOBT with mortality from CRC and are consistent with observed reductions in CRC mortality rates following the initiation of organized screening.8,9 FIT has several practical advantages over g-FOBT for screening delivery, including improved adherence.41 Together with prior studies, the findings provide strong evidence for the long-term systematic delivery of FIT to reduce population rates of death from CRC, with evidence of benefit across the racial and ethnic groups examined.

The magnitude of the association differed according to tumor location within the colon and rectum, which is consistent with the results of prior CRC screening studies of both fecal testing and colonoscopy.20-22,42,43 A study by Selby and colleagues21 found that the mean stool hemoglobin concentration was 60.0 μg/g for left colon cancers but only 12.4 μg/g for right colon cancers; thus, more cancers in the right colon than in the left colon would be expected to generate hemoglobin concentrations below the positivity threshold. It is also possible that tumors in the right colon grow more rapidly, that they are more often preceded by precursor lesions, such as sessile serrated polyps, that are less likely to bleed and may be less detectable by FIT,38,44 or that the longer transit time from the right colon leads to degradation of the blood that is shed.22

This study was conducted in 2 health care systems that have systematically delivered organized screening to a well-defined member population using population health management strategies for about 1.5 decades.8,27,45 The effectiveness of FIT in clinical practice depends on receipt of follow-up colonoscopy when the FIT result is abnormal. In the population studied, about 20% of people had not undergone a follow-up colonoscopy within 12 months of the result date. Although that follow-up rate is among the highest reported in the US,46,47 any failure to receive follow-up could diminish the potential effectiveness of FIT screening in reducing CRC mortality.

Our results are also similar to those of prior observational studies despite differences in settings, methods, populations, screening delivery (annual vs biennial), and age groups studied. Chiu et al12 found a lower risk of CRC death overall (adjusted rate ratio, 0.60; 95% CI, 0.57-0.64), for left colon cancers (adjusted rate ratio, 0.56; 95% CI, 0.53-0.69), and for right colon cancers (adjusted rate ratio, 0.72; 95% CI, 0.66-0.80) after up to 10 years of follow-up in Taiwan's national biennial screening program. A pooled analysis of g-FOBT trials reported a 25% reduction in CRC death among people who completed at least 1 screening round.4 Our estimate of a 33% lower risk may reflect the higher sensitivity and specificity of FIT. We also leveraged the diversity of our population: the lower risk of CRC death associated with FIT screening ranged from 29% to 63% across the racial and ethnic groups.

Well-designed case-control studies can produce valid estimates of the association of cancer screening with mortality but require the ability to distinguish screening tests from tests performed to work up CRC-related symptoms or signs. We used the complementary approaches of electronic classification algorithms and medical record audits to assign test indication. Any residual misclassification would likely lead to underestimation of the true efficacy of FIT screening.37 Case-control studies of cancer screening and mortality also require assumptions about the detectable preclinical phase (DPP). Our findings were largely unchanged in sensitivity analyses using varying FIT exposure ascertainment windows. Validity is also enhanced if, during the time period corresponding to the DPP, screening prevalence is stable in the population from which the cases and controls are selected.48 With 1.5 decades of programmatic screening outreach, our study population had relatively stable screening prevalence during the FIT ascertainment window for our primary analyses, and analyses using differing cutoff points and DPP assumptions yielded similar results.
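
As an illustration of what an electronic indication-classification rule can look like, the sketch below flags a FIT as diagnostic when CRC-related symptom codes appear in a look-back window before the test. The codes, window, and function are illustrative assumptions for exposition, not the study's actual algorithm.

    from datetime import date

    LOOKBACK_DAYS = 90                    # illustrative look-back window
    SYMPTOM_CODES = {"K92.1", "D50.0"}    # eg, melena; iron deficiency anemia

    def classify_fit(test_date, dx_events):
        """dx_events: iterable of (ICD code, diagnosis date) pairs."""
        for code, dx_date in dx_events:
            days_before = (test_date - dx_date).days
            if code in SYMPTOM_CODES and 0 <= days_before <= LOOKBACK_DAYS:
                return "diagnostic"       # work-up of symptoms, not screening
        return "screening"

    print(classify_fit(date(2016, 5, 1), [("K92.1", date(2016, 4, 10))]))  # diagnostic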

This study has limitations. Almost one-half of the people in our analyses had completed 2 or more FITs, but the case-control design is not suitable for assessing the impact of repeated screening (ie, a strategy of annual FIT with perfect adherence), in part because positive screening results, which preclude further screening, are more common among case persons (eTable 1 in Supplement 1). In contrast, control persons without cancer are more likely to undergo repeated screening, which could lead to spurious or exaggerated associations in analyses based on the frequency of FIT screening. Although our findings may underestimate the effectiveness of FIT under conditions of perfect adherence, they reflect the benefits likely to be observed in organized population-based screening; they may not directly apply to populations with lower adherence to screening or follow-up colonoscopy.

Our analysis accounted for potential confounders through exclusion, matching, stratification (eg, by race and ethnicity and by colon site), and model-based adjustment for socioeconomic factors and health care utilization history. However, the potential for confounding by healthy-screenee effects remains. Our previous studies49 found that the likely magnitude of bias from residual confounding by unmeasured factors, such as lifestyle, is small and unlikely to change our findings. This study was conducted prior to the US Preventive Services Task Force recommendation to start screening at age 45 years2; thus, the findings may not directly apply to people aged 45 to 49 years.

In conclusion, this population-based nested case-control study found that screening with 1 or more FITs was associated with a lower risk of dying from CRC, particularly for cancers of the left colon and rectum, with benefits observed across the racial and ethnic groups examined. The findings support strategies for coordinated, equitable, large-scale population-based delivery of FIT screening, with follow-up of abnormal screening results, to help avert preventable premature CRC deaths.

Accepted for Publication: May 24, 2024.

Published: July 19, 2024. doi:10.1001/jamanetworkopen.2024.23671

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2024 Doubeni CA et al. JAMA Network Open.

Corresponding Author: Chyke A. Doubeni, MD, MPH, Center for Health Equity, The Ohio State University Wexner Medical Center, Columbus, OH 43201 ([email protected]).

Author Contributions: Dr Doubeni and Ms Zhao had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs Doubeni and Corley are co–first authors.

Concept and design: Doubeni, Corley, Jensen, Levin, Selby, Zauber, Fletcher, Weiss.

Acquisition, analysis, or interpretation of data: Doubeni, Corley, Jensen, Levin, Ghai, Cannavale, Zhao, Buckner Petty, Zauber, Schottinger.

Drafting of the manuscript: Doubeni, Jensen, Selby, Buckner Petty, Fletcher, Weiss.

Critical review of the manuscript for important intellectual content: Corley, Jensen, Levin, Ghai, Cannavale, Zhao, Selby, Buckner Petty, Zauber, Fletcher, Weiss, Schottinger.

Statistical analysis: Doubeni, Zhao, Buckner Petty, Zauber.

Obtained funding: Doubeni, Corley.

Administrative, technical, or material support: Doubeni, Corley, Jensen, Levin, Ghai, Cannavale, Schottinger.

Supervision: Doubeni, Jensen, Weiss.

Conflict of Interest Disclosures: Dr Doubeni reported receiving royalties from UpToDate outside the submitted work. Dr Levin reported receiving grants from Freenome and the Patient-Centered Research Outcomes Institute outside the submitted work. Dr Selby reported receiving grants from Swiss Cancer Research and the Leenaards Foundation outside the submitted work. Dr Schottinger reported receiving grants from the National Institutes of Health outside the submitted work. No other disclosures were reported.

Funding/Support: Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health (Award No. R01CA213645).

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Data Sharing Statement: See Supplement 2.
