How to analyse usability testing results

Strategist

Usability testing involves participants completing set tasks using a digital product and providing feedback. It can be conducted moderated (with a facilitator present to support the process) or unmoderated (participants complete the test independently), and either in person or remotely. Researchers or UX designers then review and analyse the results to identify opportunities to improve the usability of the product. In this article, we outline the process of analysing your usability testing results.

Understanding usability test results

Usability testing is an important research method within the UX design process. It helps to identify usability issues and gather real insights from users, supporting enhancements to the user experience of a digital product. Results from usability testing can include quantitative metrics and qualitative feedback, depending on the type of testing and the questions asked. Following analysis, usability testing results should be used to identify recommendations for improving the usability of the digital product.

 

Methods for analysing usability data

When analysing qualitative results at Make it Clear, we first pull the results from all participants into a digital whiteboard tool such as Miro. This means we can see all of our results in one place, making it easier to compare and contrast them and enabling a collaborative approach to analysis. We use a set format for this board and use the tagging functionality to support the categorisation of insights. The grouping of insights and identification of themes is conducted by a multidisciplinary team of research and UX roles who have been involved in the testing process; this helps to ensure an unbiased approach and well-founded recommendations. Once we have grouped the findings, we review them in more detail, breaking them down into further themes where required and discussing what they mean, i.e. turning a finding into an insight.
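To illustrate the grouping step described above, tagged findings can be collected by theme so that patterns recurring across participants stand out. The tags, participants and notes below are hypothetical examples, not real study data:

```python
from collections import defaultdict

# Hypothetical findings from a usability test, each tagged during review.
# All tags and notes are illustrative only.
findings = [
    {"participant": "P1", "tag": "navigation", "note": "Could not find the search bar"},
    {"participant": "P2", "tag": "navigation", "note": "Expected the menu on the left"},
    {"participant": "P3", "tag": "checkout", "note": "Unsure which button confirmed payment"},
    {"participant": "P1", "tag": "checkout", "note": "Missed the delivery options step"},
]

# Group findings by tag so recurring themes across participants are visible.
themes = defaultdict(list)
for finding in findings:
    themes[finding["tag"]].append(finding["note"])

for tag, notes in sorted(themes.items()):
    print(f"{tag}: {len(notes)} findings")
```

In practice this grouping happens visually on the whiteboard; the sketch simply shows how tags turn scattered observations into countable themes.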

Quantitative results most typically come from unmoderated usability testing. Unmoderated testing is usually conducted via a research platform such as UserTesting, which allows researchers to set up a usability testing study and participants to complete it at a time that suits them. These platforms will often perform some analysis of the quantitative data on your behalf, such as calculating time spent and creating averages.

 

Key metrics in usability testing

As previously mentioned, quantitative metrics are most commonly captured within unmoderated usability testing. This is partly because unmoderated testing is often deployed at a much larger scale than moderated testing, i.e. more participants are usually involved, which means more reliable averages. Metrics often captured in this type of research include:

  • Success rate
  • Time on task
  • User satisfaction
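These three metrics are straightforward to compute once task results are exported from a testing platform. As a minimal sketch, with entirely hypothetical task data (completion flag, time in seconds, satisfaction on a 1-5 scale):

```python
# Illustrative task results from an unmoderated study (hypothetical data).
results = [
    {"completed": True,  "seconds": 42,  "satisfaction": 4},
    {"completed": True,  "seconds": 65,  "satisfaction": 5},
    {"completed": False, "seconds": 120, "satisfaction": 2},
    {"completed": True,  "seconds": 58,  "satisfaction": 4},
]

# Success rate: share of participants who completed the task.
success_rate = sum(r["completed"] for r in results) / len(results)
# Time on task: average seconds spent, across all attempts.
avg_time = sum(r["seconds"] for r in results) / len(results)
# User satisfaction: average self-reported rating.
avg_satisfaction = sum(r["satisfaction"] for r in results) / len(results)

print(f"Success rate: {success_rate:.0%}")
print(f"Average time on task: {avg_time:.1f}s")
print(f"Average satisfaction: {avg_satisfaction:.2f}/5")
```

With more participants, it is also worth reporting the spread (e.g. median and range for time on task), since averages alone can hide outliers.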

 

Creating a usability test report

Your usability testing report should have four sections: background, methodology, key findings, and recommendations. The background introduces the context of the testing, and the methodology provides an overview of how the research was conducted. The key focus of the report should be the insights and recommendations. It helps to structure the insights by playing them back in easy-to-understand groupings, such as per task or per page. Providing a visual reference, such as a screenshot of the page in question or a short video clip from the prototype or testing session, helps the reader understand exactly which part of the interface a theme is referencing.

It can be helpful to reference each insight alongside the related recommendation. Grouping recommendations into a roadmap or set of next steps then makes it clear what the follow-up actions should be as a result of the testing. Prioritising recommendations based on factors such as their impact on the user experience, the scale of the change and the time required can also support decision-making.
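The prioritisation described above can be sketched as a simple impact-over-effort score; the recommendations and 1-5 ratings below are hypothetical, and real prioritisation would weigh more factors:

```python
# Hypothetical recommendations scored for prioritisation.
# "impact" = expected improvement to the user experience,
# "effort" = rough implementation cost; both assumed on a 1-5 scale.
recommendations = [
    {"name": "Relabel payment button", "impact": 5, "effort": 1},
    {"name": "Redesign navigation menu", "impact": 4, "effort": 4},
    {"name": "Add search autocomplete", "impact": 3, "effort": 2},
]

# Higher impact-over-effort ratios indicate quicker wins.
ranked = sorted(
    recommendations,
    key=lambda r: r["impact"] / r["effort"],
    reverse=True,
)

for rec in ranked:
    print(f'{rec["name"]}: score {rec["impact"] / rec["effort"]:.2f}')
```

A scored list like this maps naturally onto a roadmap: quick wins first, larger redesigns scheduled as later phases.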

 

Common mistakes in analysing usability test results

Two common mistakes in analysing usability test results are misinterpreting data and overlooking user feedback. Those analysing the data should be mindful of the difference between what people say and what they do. In some instances, users may report a task as easy to complete; however, the task may have been completed incorrectly, taken a long time, or the user may have described a number of pain points while completing it. The converse is also a risk: a user may have completed a task quickly, but this does not necessarily mean they would describe it as easy or that it was completed correctly. It is important to take body language, feedback and interactions into account when analysing test results.

 

Make It Clear’s expertise in usability test analysis

Our team is well-versed in conducting usability testing for apps, platforms and websites across a wide range of functionality and subject matter. Our researchers, strategists and UX designers work together to plan usability testing tasks, create prototypes and analyse results, ensuring actionable recommendations that are tailored to your digital product.
