
How to conduct effective usability testing in 5 steps with a sample script

 

When using a web or mobile app, first impressions are critical to users’ perceptions of how easy their experience will be. According to a user eye-tracking study conducted by the Missouri University of Science and Technology, it takes only a few seconds for users to form an opinion about a website, and based on that first impression they quickly decide whether the site is worth staying on. That’s why we need to research what matters most to users.

Last time, we shared how to conduct an effective user interview for your minimum viable product (MVP). In this article, I will go deeper into how to conduct usability testing in five steps that help you validate your design with real users and create an efficient, enjoyable customer experience. We'll also look at how usability testing differs from user testing, when each comes into play, and how they add value to your design sprints.

Let’s roll.

 

An overview of usability testing

What makes an MVP usable?

Imagine a customer trying to purchase something from an eCommerce website. The inner dialogue they might have with the site sounds like this:
“I can't find what I'm looking for.”
“I've found what I'm looking for, but I'm not sure how much it costs.”
“Is this item in stock?”
“Is it possible to have it shipped to where I need it?”
“Is delivery included in the price if I pay this much?”

Many of us often encounter these issues, which lead to bad customer experiences and frustration with the site. To deliver a good user experience, you need to build a functional and usable MVP version first. In other words, your MVP should satisfy at least one core user need.

Usability is a quality attribute that assesses how easy and pleasant user interfaces are to use. Jakob Nielsen, a User Advocate and principal of the Nielsen Norman Group, divided usability into five attributes that can be measured and used to specify usability objectives: learnability, efficiency, memorability, errors, and satisfaction.

 

 

Learnability: How quickly and easily users can learn to use the system the first time they encounter the design.

Efficiency: How fast users can perform tasks once they have learned the design. Users also tend to care about whether your MVP provides the features they need.

Memorability: How well users remember the system's functions after they have learned them.

Errors: The user interface (UI) should be clear enough that users make as few errors as possible. You need to know how severe these errors are and include clear guidance so users can recover quickly.

Satisfaction: How pleasant the design is to use. This attribute covers the emotional aspects of the user experience (UX), such as visual design, brand image, trends, and feelings.

 

Focusing on these attributes helps make your MVP usable and ensures it fulfills users’ needs.

 

What is usability testing?

Usability testing is a common research methodology for testing the functionality and interface of your app, website, or other digital product with early adopters. It usually involves observing participants' behavior and collecting their feedback while they complete each task.

Usability testing sessions can be conducted repeatedly throughout the product development lifecycle, from concept to release, so that you eventually build a usable product. Even if essential product flaws or deficiencies are missed during one test, another testing cycle offers the opportunity to identify them.

 

Benefits of usability testing

From my point of view, wherever your MVP is in the development process, usability testing can benefit you considerably as follows:

Eliminate design problems and frustration with your product
Conducting usability testing the right way allows your development team to minimize the users' frustration with using your product. By remedying flaws in the design and UX before releasing, you can accomplish these goals:

  • Set the stage for building a close rapport between your business and your customers.
  • Establish the expectation that the products you ship are high quality and easy to use.
  • Show how your business focuses on a user-centered design process and prioritizes customers' needs and their feedback.
  • Ship a full-fledged product that customers find helpful, effective, learnable, and satisfying.

Improve profitability

What's more, one of the key benefits of usability testing for your business is increased profitability. A few examples illustrate this:

  • Minimize the cost of service and support calls. If you discover difficult-to-complete tasks during a usability test, you can refine the product and prepare useful supporting materials before launch. Reducing the risk of releasing a product with serious usability problems saves time and resources later.
  • Increase sales and the probability of repeat sales. Running usability tests lets you use the findings to refine your product so it aligns more closely with users' needs. This paves the way for a competitive edge that helps you stand out from the crowd and boost sales. Your team can also keep a historical record of usability benchmarks for future releases.

 

Types of usability testing

Before deciding on a proper user research method for your MVP, you need to think over two common ways of categorizing usability tests.

 

 

Moderated vs. unmoderated: In moderated tests, the facilitator guides, interacts with, and observes users during the testing session, while in unmoderated tests participants complete their tasks on their own, without help or guidance.

In-person vs. remote: In-person tests take place in a testing lab or office with a trained facilitator who observes participants as they complete the test. Meanwhile, participants can complete remote tests from anywhere, either online or over the phone.

 


Depending on your specific research purpose, usability testing can also be broken down into qualitative or quantitative usability testing. Both aim to produce useful insights, but their approaches differ, and each shapes how you design your testing script.

Qualitative usability testing involves observing users to understand how they experience your product and why they perform specific actions. It enables you to discover the root causes of problems in the UX and gain in-depth insights from participants' comments and feedback.

Quantitative usability testing measures users' performance on a given task, such as success rate and time on task. It produces numerical data, statistics, and percentages that allow you to make data-driven design decisions.
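
As a minimal sketch of what those numbers look like in practice (the data and record layout below are illustrative assumptions, not from the original article), you might aggregate raw observations into the two metrics mentioned above:

```python
# Illustrative sketch: turn raw quantitative observations into summary metrics.
from statistics import mean

# Each record: (participant_id, completed_task, time_on_task_seconds) -- assumed format
results = [
    ("P1", True, 42.0),
    ("P2", True, 55.5),
    ("P3", False, 90.0),
    ("P4", True, 38.2),
    ("P5", True, 61.3),
]

completion_rate = sum(1 for _, ok, _ in results if ok) / len(results)
avg_time_success = mean(t for _, ok, t in results if ok)

print(f"Task completion rate: {completion_rate:.0%}")                     # 80%
print(f"Average time on task (successful attempts): {avg_time_success:.1f}s")
```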

 

The differences between usability testing and user testing

Most software developers, and even UX designers, often confuse usability testing with user testing. Below is a comparison characterizing the distinctions between the two approaches. Once you can differentiate these terms, you will be able to use each method at the right time and obtain actionable feedback for improving the user experience.

 

 

 

 

What is it?
  • Usability testing: tests the UI and UX of your solution.
  • User testing: tests whether users actually need the solution.

When should you do it?
  • Usability testing: as soon as you have hand-drawn sketches on paper or a software-based prototype.
  • User testing: right after you have the idea.

What to expect from it?
  • Usability testing: Can people easily use the app (for a given task)?
  • User testing: Do people need this solution?

What to ask during it?
  • Usability testing:
    Can you try doing this <new way of solution>?
    How would you like to log in to this solution?
    Can you get <a small task in this solution> done in 10 seconds?
  • User testing:
    How do you currently do <problem you are solving>?
    Have you ever thought of a better way of doing this?
    Would you like to do this task <the way your solution works>?
    Would you pay money (for transaction apps) or share content for this solution?

 

 

How to conduct an effective usability test in 5 steps

Having a solid usability testing workflow saves time and resources, so you can focus on doing what you do best: designing. The process of conducting effective usability testing can be broken down into five key steps.

Planning

  • Establish test goals

User experience has come into its own, shaping how companies do business. For this reason, you need to lay out a strategic plan to conduct usability testing effectively. Begin by establishing your testing goals with project stakeholders. Test objectives focus on what you want to learn about your users' experiences at the product's current stage of development.

If you're unsure how to define your usability testing goals, consider using criteria such as Whitney Quesenbery’s 5Es — Efficient, Effective, Engaging, Error tolerant, Easy to learn — to shape your testing session. Using the 5Es, your team can decide how to set goals for your study and measure whether those goals are met.

 

 

 

 

Effective
  Definition: How accurately and completely users achieve their goals.
  Examples: Can users purchase a service successfully?

Efficient
  Definition: How quickly users accomplish their tasks without wasting time or resources.
  Examples: Can users find the information they need to perform tasks without assistance? Can users complete a procedure within a predetermined time frame?

Engaging
  Definition: How pleasant and engaging the product and its interface are to use.
  Examples: Do users rate their experience as fulfilling or enjoyable? Do their comments and body language imply that they have a positive experience?

Error tolerant
  Definition: How well the product prevents errors and enables users to recover from mistakes that happen.
  Examples: Do users encounter errors? If so, how many? When they make mistakes, do they recover successfully? If they receive error messages, do they understand them?

Easy to learn
  Definition: How effectively the product facilitates initial orientation as well as further learning.
  Examples: Can customers get started right away? Does their ability to accomplish tasks improve as they become familiar with the system? Does the system architecture align with their mental model of how the system should work?

 

  • Determine how to test the product

After defining testing goals, you should determine how to test the product and its scope. The plan can include:

  • What to test is based on the stage of the product's development, such as your existing site, other competitors' sites, wireframes, page designs, working prototypes, etc.
  • What type of usability testing is most suitable: in-person vs. remote, moderated vs. unmoderated, or a combination of more than one type?
  • Where to conduct the test: in a lab, in the field, remotely, or some combination of these.

What's more, you need to map out the specific tasks you want the participants to complete, delegate roles (such as moderator, observers, and note-takers), and prepare equipment (such as cameras and recorders) to run usability testing effectively. Once you know what to test and how to test it, you can set clear criteria to determine how to measure the success of each task. Below is a sample checklist of success criteria for your reference, followed by a small sketch of how you might record them.

 

 

  • Success rates
  • Time on task
  • Errors made in performing the task
  • Confusion (unexpected user actions)
  • System features used / not used
  • System bugs or failures
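
As a minimal sketch (the field names and record layout are illustrative assumptions, not a prescribed format), these criteria can be captured as a simple per-task observation record that note-takers fill in during each session:

```python
# Illustrative per-task observation record based on the checklist above;
# the field names are assumptions, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskObservation:
    participant_id: str
    task: str
    success: bool                  # feeds the success rate
    time_on_task_s: float          # time on task
    errors: int = 0                # errors made in performing the task
    confusion_events: int = 0      # unexpected user actions
    features_used: List[str] = field(default_factory=list)
    bugs: List[str] = field(default_factory=list)

obs = TaskObservation("P1", "Purchase a cell phone", success=True,
                      time_on_task_s=73.4, errors=1, confusion_events=2,
                      features_used=["search filters"])
```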

 

 

  • Write a testing script

During the preparation process, it’s vital to put together a usability testing script so that you can get the most out of a usability study. The script outlines the issues, covers the testing questions that need to be addressed, and keeps your research on track. It also covers the activities associated with planning, designing, and conducting the test. Here are some tips for preparing your usability test efficiently:

  • Have clear goals, and create user tasks that test those objectives.
  • Pose research questions that are specific, precise, clear, and measurable.
  • Keep the session well-timed, ideally around 30 minutes.
  • Run pilot tests with your own team before giving the tasks to real usability testers.

Later in this article, we'll share a sample usability testing script that illustrates how to collect valuable insights from real users, build a good rapport with test participants, and validate your hypotheses.

 

Recruiting the right participants

The second step is to recruit the right participants. When it comes to recruitment for usability testing, it almost always boils down to a few questions:

  • What kind of people do you test with?
  • How many participants do you need?
  • How do you reach them? (recruit yourself or go through an agency)
  • How will you pay them for their time? (money, gift card, discount, tickets, lifetime access to your product, etc.)

After all, your test findings will only be valid if participants are targeted customers of the product. Otherwise, your final results will be limited and questionable. To screen and recruit the ideal participants for your study, create the most detailed and specific persona you possibly can.

Most usability experts suggest testing with only 5 participants per study to discover the bulk of usability problems, rather than recruiting many more test participants. To arrive at the proposed number of 5 users per design iteration, the Nielsen Norman Group analyzed 83 of its usability consulting studies across multiple clients, with sample sizes ranging from 2 to 28 participants. The results showed that findings increasingly repeated across those 83 studies as more users were added, while with 5 users the ratio of problems found was already close to its maximum.

 


Source: The chart summarizes 83 of Nielsen Norman Group's usability consulting projects, www.nngroup.com.

 

In another study, according to Jeff Sauro of MeasuringU, if the probability of a single user encountering a given problem during testing is 31%, testing just five users will turn up about 85% of the problems in an interface.
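
That 85% figure follows from the standard problem-discovery formula 1 − (1 − p)^n, with p = 0.31 and n = 5; a quick check as a minimal sketch:

```python
# Problem-discovery likelihood: 1 - (1 - p)^n
p, n = 0.31, 5            # per-user detection probability and number of testers
print(1 - (1 - p) ** n)   # ~0.844, i.e. roughly 85% of problems uncovered
```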

 

Running a usability test

A usability test comprises the time participants spend performing tasks with the product and filling out questionnaires about it. Before running the test, you therefore need to tailor your questions to your testing goals. Most importantly, you'll need to prepare a proper script and know how to ask good questions to get the most out of participants.

Below is a sample of the usability testing script.

 

 


 

Introduction

 

Hi _________, how are you? I really appreciate you taking time out of your day to participate in this session today. My name’s _______ and I’m _______(your role).

Let me explain how this will work before we begin. I'd like to ask you some questions about yourself and your relevant experience. Then, I'll ask you to perform some tasks on our eCommerce platform. Once you've completed the tasks, I'd like to hear your honest feedback about your experience with our program.

You don't have to worry about making mistakes because we are only testing the product, not you. We're running this usability test to see how users interact with our software and collect their insights. It will be a big help to us.

Is there anything else you want to ask before we get going?

Finally, I’d like to ensure you’re comfortable with us recording today’s session. Is this okay with you?

[Give them a recording permission form to sign]

So I’ll now begin recording the audio and go over some background questions.

 

Build rapport with participants through background questions.

 

  • First, what’s your occupation? What do you do all day?
  • What types of websites do you often visit when you browse the web? / What kind of mobile device (or devices) do you use, such as smartphones or tablets?
  • What kinds of things do you spend time doing on your devices?
  • Do you have any favorite websites/mobile apps?

 

The home page/ first screen tour

 

That's great. We're done with the questions, and now the next step is to start looking at things.

First, I'm going to ask you to look at the home page/first screen and tell me what you think of it: what appeals to you, what you can do here, and what it's for.

Just look around and don't click on anything yet. You can scroll if you want.

 

The tasks

 

We're now ready to start the test. I'd like to remind you of a few things. Please use the software as naturally as possible, as if no one were watching.

Please think aloud as you're using our program. I'm going to ask you to do these tasks without using Search. We want to hear your opinions, like where you're navigating on the page, why you're clicking there, and what you expect to happen when you do click. If something doesn't make sense on the software or isn't working right, please let us know. We'll learn a lot about how well the software works that way. If you have any questions during the test, I'll try to answer them when we're done.

I’m going to read each task aloud and give you a printed copy.

  • Explore how to purchase a new cell phone.
  • Can you find where gaming merchandise is?
  • Take a look at this page and see what this company offers.
  • How to submit a contact form?
  • Where can you subscribe to a newsletter?

 

Wrap-up questions and feedback

 

  • How was your experience completing this task?
  • What do you think of the location of features and information?
  • How satisfied are you with the purchase feature? / How easy was this feature to use, on a scale from 1 to 10?
  • What did you like the most/least about this product? Why?
  • Would you use such a product to go shopping in real life?
  • What factors would make you likely to use this product more?

 

Closing

 

Well, once again, thank you so much for taking the time out of your day to participate in this study with us. Your contribution today will be beneficial to us. Take care, and I hope to speak to you soon.

[Provide participants with their incentive or remind them that it will be sent to them.
Stop the screen recorder and save the file.]

 

 

Analyzing data

Once you've completed your testing sessions, it's time to summarize and organize usability testing results. You can structure and organize the data without clutter based on the following elements:

  • Use an issue identification (ID) system
  • Take note of where it occurred (screen, module, UI widget, flow, etc.)
  • Identify the task the user was engaging in
  • Provide a concise description of the issue

After structuring the list of issues, you can categorize the problems along the following dimensions (a small scoring sketch follows the list):

  • Task criticality: The impact on the business or the user if the task isn't accomplished, rated on a simple numeric scale (e.g., 1, 2, 3, 4, 5).
  • Issue impact: How much the issue affected the user's attempt to complete the task. You can assign a value on the following scale:
    5: (blocker) The issue prevents the user from completing the task.
    3: (major) It leads to frustration or delay.
    2: (minor) It has a minor effect on task performance.
    1: (suggestion) It's a participant's recommendation.
  • Issue frequency: How many participants encountered the issue. Divide the number of participants who hit the issue by the total number of participants to calculate the frequency (as a percentage).
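
As a minimal sketch (the sample issues mirror the table below, but the priority formula, impact multiplied by frequency, is an illustrative assumption rather than a prescribed standard), you could compute frequency and rank issues like this:

```python
# Illustrative sketch: compute issue frequency and a simple priority score
# (impact x frequency); the scoring scheme is an assumption, not a standard.
issues = [
    # (id, description, impact, participants_affected)
    (1, "Took too long to find the 'game merchandise' option", 2, 1),
    (2, "Struggled to make payments by credit card",           3, 2),
]
total_participants = 2

ranked = sorted(issues, key=lambda i: i[2] * (i[3] / total_participants), reverse=True)
for issue_id, desc, impact, affected in ranked:
    frequency = affected / total_participants
    print(f"#{issue_id} {desc}: impact={impact}, "
          f"frequency={frequency:.0%}, priority={impact * frequency:.1f}")
```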

James R. Lewis and Jeff Sauro describe a common method for organizing usability issues in Quantifying the User Experience. The data can be laid out as in the table below, with individual participants in the P1 and P2 columns.

 

 

| ID | Task | Task criticality | Where | Description | Impact | P1 | P2 | Frequency |
| 1 | Find game merchandise | 3 | Home screen | Took more than 3 seconds to find the “game merchandise” option | 2 | 1 |  | 0.5 |
| 2 | Purchase process | 5 | Module | Struggled to make payments by credit card | 3 | 1 | 1 | 1 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |

 


You need to prioritize which usability issues should be worked on and resolved first, especially when your resources are limited in the initial phases.

 

Reporting your findings

After extracting insights from your data, produce a final report covering the main takeaways and the recommendations for a better user experience during the next round of testing.

A good final report should:

  • Showcase the highest priority issues.
  • Define the specific area of design, interaction, or flow that caused the problem.
  • Include evidence such as video snippets, screenshots, or transcripts from actual tests.
  • Provide solutions to the highest priority issues.
  • Include any positive discoveries and feedback you received.

You can use spreadsheets to plan and document the findings. Here’s a template for a usability testing report created by Carlos Rosemberg. The template covers a test plan, issues and feedback, task performance, and solutions. It’s downloadable, and you can freely adapt it to your needs.

 


Source: Usability testing report.
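
If you prefer to keep findings alongside your analysis scripts rather than in a spreadsheet, a minimal sketch (the file name, columns, and sample recommendations are illustrative assumptions) could export the prioritized issue list for the report:

```python
# Illustrative sketch: export prioritized usability findings to a CSV that can
# accompany the report; file name and columns are assumptions, not a standard.
import csv

findings = [
    {"id": 2, "task": "Purchase process", "impact": 3, "frequency": 1.0,
     "recommendation": "Simplify the credit-card payment step"},
    {"id": 1, "task": "Find game merchandise", "impact": 2, "frequency": 0.5,
     "recommendation": "Surface the category on the home screen"},
]

findings.sort(key=lambda r: r["impact"] * r["frequency"], reverse=True)

with open("usability_findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(findings[0].keys()))
    writer.writeheader()
    writer.writerows(findings)
```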

 

 


 

Final thoughts
You have now seen how to conduct effective usability testing to validate your MVP. Steve Jobs once said, “Design is not just what it looks like and feels like. Design is how it works.” Usability testing is one of the most practical methods to validate the success of a design. Ensuring customers can find, understand, and use a solution contributes to the overall success of a product and business. We hope this article helps keep you on the right track to giving users functionality they want and can use, while avoiding costly mid- and post-development changes.

 



About the author

Hien Dang

As an extrovert, I love creating value-added activities and taking on challenges. Passionate about connecting people and businesses worldwide at the intersection of marketing and technology, I invest my time in upskilling, researching, and producing high-quality content for the tech industry.
Frequently Asked Questions (FAQs)
What is the importance of usability testing in product development?

Usability testing is crucial in product development because it helps identify and address user experience issues early on. By conducting usability tests, you can ensure that your product is user-friendly, efficient, and meets the needs of your target audience. This early feedback saves time and resources in the long run and can lead to higher user satisfaction and retention.

How can I plan an effective usability testing session for my product?

Planning a usability testing session involves setting clear goals, defining the scope of testing, recruiting the right participants, and creating a testing script. It’s essential to establish what you want to learn from your users’ experiences and design tasks that align with your objectives. Additionally, recruiting participants who represent your target audience ensures valuable insights.

What are some common usability testing methods, and when should I use them?

Usability testing methods can vary, including moderated vs. unmoderated and in-person vs. remote testing. The choice depends on your project’s specific goals and constraints. For instance, in-person moderated testing may be ideal for observing participants’ behaviors and interactions, while unmoderated remote testing can provide valuable insights from a broader audience.

How do I analyze data from usability testing to improve my product?

Analyzing usability testing data involves identifying issues, categorizing them by severity, and prioritizing improvements. Using a systematic approach, such as an issue identification system, can help you organize and address usability problems effectively. Prioritizing issues ensures that critical problems are tackled first, leading to a better user experience.

What should I include in a usability testing report to communicate findings?

A comprehensive usability testing report should highlight the most critical issues, provide evidence (e.g., videos, screenshots), and offer solutions for improvement. Including positive discoveries and feedback received during testing can also provide a balanced view of the product’s usability. A well-structured report guides your team in making informed design decisions for future iterations.
