Chapter 5

Evaluative research: Key methods, types, and examples

In the last chapter, we learned what generative research means and how it prepares you to build an informed solution for users. Now, let’s look at evaluative research for design and user experience (UX).


What is evaluative research?

Evaluative research is a method for assessing a product or concept and collecting data that helps improve your solution. It offers many benefits, including confirming whether a product works as intended and uncovering areas for improvement.

Also known as evaluation research or program evaluation, this kind of research is typically introduced in the early phases of the design process to test existing or new solutions. It continues to be employed iteratively until the product is considered ‘final’. “With evaluation research, we’re making sure the value is there so that effort and resources aren’t wasted,” explains Nannearl LeKesia Brown, Product Researcher at Figma.

According to Mithila Fox, Senior UX Researcher at Stack Overflow, the evaluation research process includes various activities, like content testing and assessing accessibility or desirability. During UX research, evaluation can also be conducted on competitor products to understand what solutions work well in the current market before you start building your own.

“Even before you have your own mockups, you can start by testing competitors or similar products,” says Mithila. “There’s a lot we can learn from what is and isn't working about other products in the market.”

However, evaluation research doesn’t stop when a new product is launched. For the best user experience, solutions need to be monitored after release and improved based on customer feedback.

Turn insights into impact with Maze

Create better product experiences with evaluative research powered by actionable insights from your users.

Why is evaluative research important?

Evaluative research is crucial in UX design and research, providing insights to enhance user experiences, identify usability issues, and inform iterative design improvements. It helps you:

  • Refine and improve UX: Evaluative research allows you to test a solution and collect valuable feedback to refine and improve the user experience. For example, you can A/B test the copy on your site to maximize engagement with users.
  • Identify areas of improvement: Findings from evaluative research are key to assessing what works and what doesn't. You might, for instance, run usability testing to observe how users navigate your website and identify pain points or areas of confusion.
  • Align your ideas with users: Research should always be a part of the design and product development process. By allowing users to evaluate your product early and often, you'll know whether you're building the right solution for your audience.
  • Get buy-in: The insights you get from this type of research can demonstrate the effectiveness and impact of your project. Show this information to stakeholders to get buy-in for future projects.

Evaluative vs. generative research

The difference between generative research and evaluative research lies in their focus: generative methods investigate user needs for new solutions, while evaluative research assesses and validates existing designs for improvements.

Generative and evaluative research are both valuable decision-making tools in a researcher’s arsenal. Both should be employed throughout the product development process, as each helps you gather the evidence you need.

When creating the research plan, study the competitive landscape, target audience, needs of the people you’re building for, and any existing solutions. Depending on what you need to find out, you’ll be able to determine if you should run generative or evaluative research.

Mithila explains the benefits of using both research methodologies: “Generative research helps us deeply understand our users and learn their needs, wants, and challenges. On the other hand, evaluative research helps us test whether the solutions we've come up with address those needs, wants, and challenges.”

Tip ✨

Use generative research to bring forth new ideas during the discovery phase. And use evaluation research to test and monitor the product before and after launch.

The two types of evaluative research

There are two types of evaluative studies you can tap into: summative and formative research. Although summative evaluations are often quantitative, they can also be part of qualitative research.

Summative evaluation research

A summative evaluation helps you understand how a design performs overall. It’s usually done at the end of the design process to evaluate its usability or detect overlooked issues. You can also use a summative evaluation to benchmark your new solution against a previous version, or a competitor’s product, and determine whether the final design needs further refinement. Summative evaluation can also be outcome-focused, assessing impact and effectiveness against specific outcomes, such as how a design influences conversion.

Formative evaluation research

On the other hand, formative research is conducted early and often during the design process to test and improve a solution before arriving at the final design. Running a formative evaluation allows you to test and identify issues in the solutions as you’re creating them, and improve them based on user feedback.

TL;DR: Run formative research to test and evaluate solutions during the design process, and conduct a summative evaluation at the end to evaluate the final product.

Tip ✨

Looking to conduct UX research? Check out our list of the top UX research tools to run an effective research study.

5 Key evaluative research methods

“Evaluation research can start as soon as you understand your user’s needs,” says Mithila. Here are five typical UX research methods to include in your evaluation research process:



User surveys

User surveys can provide valuable quantitative insights into user preferences, satisfaction levels, and attitudes toward a design or product. By gathering a large amount of data efficiently, surveys can identify trends, patterns, and user demographics, helping you make informed decisions and prioritize design improvements.
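To make survey analysis concrete, here’s a minimal sketch of two common summary metrics for a 5-point satisfaction question: the mean score and the “top-2-box” share (respondents answering 4 or 5). The ratings below are invented for illustration.

```python
# Hypothetical 5-point satisfaction ratings from a product survey
# (1 = very dissatisfied ... 5 = very satisfied).
ratings = [5, 4, 2, 5, 3, 4, 4, 1, 5, 4]

mean_score = sum(ratings) / len(ratings)
# "Top-2-box": share of respondents answering 4 or 5.
top2 = sum(1 for r in ratings if r >= 4) / len(ratings)

print(f"Mean satisfaction: {mean_score:.1f}/5")  # Mean satisfaction: 3.7/5
print(f"Top-2-box: {top2:.0%}")                  # Top-2-box: 70%
```

In practice, survey tools report these figures for you; the point is that a single dataset supports several cuts, and top-2-box is often more stable against scale-use differences than the raw mean.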

Closed card sorting

Closed card sorting helps evaluate the effectiveness and intuitiveness of an existing or proposed navigation structure. By analyzing how participants group and categorize information, researchers can identify potential issues, inconsistencies, or gaps in the design's information architecture, leading to improved navigation and findability.
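One common way to analyze card-sort results is a co-occurrence (agreement) matrix: for each pair of cards, count how often participants placed them in the same category. The sketch below uses invented card names, categories, and participant data purely for illustration.

```python
from itertools import combinations
from collections import Counter

# Hypothetical closed-sort results: each participant assigns cards
# to one of the predefined categories (card -> category).
sorts = [
    {"Pricing": "Plans", "Invoices": "Billing", "Receipts": "Billing"},
    {"Pricing": "Billing", "Invoices": "Billing", "Receipts": "Billing"},
    {"Pricing": "Plans", "Invoices": "Billing", "Receipts": "Plans"},
]

# Count how often each pair of cards lands in the same category.
pair_counts = Counter()
for sort in sorts:
    for a, b in combinations(sorted(sort), 2):
        if sort[a] == sort[b]:
            pair_counts[(a, b)] += 1

# Agreement = fraction of participants who grouped the pair together.
for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: {n / len(sorts):.0%} agreement")
```

Pairs with low agreement flag cards whose “home” category is unclear to users, which is exactly the kind of inconsistency in the information architecture that closed sorting is meant to surface.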

Tree testing

Tree testing, also known as reverse card sorting, is a research method used to evaluate the findability and effectiveness of information architecture. Participants are given a text-based representation of the website's navigation structure (without visual design elements) and are asked to locate specific items or perform specific tasks by navigating through the tree structure. This method helps identify potential issues such as confusing labels, unclear hierarchy, or navigation paths that hinder users' ability to find information.

Usability testing

Usability testing involves observing and collecting qualitative and/or quantitative data on how users interact with a design or product. Participants are given specific tasks to perform while their interactions, feedback, and difficulties are recorded. This approach helps identify usability issues, areas of confusion, or pain points in the user experience.
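The quantitative side of a usability test often reduces to a few per-task metrics, such as completion rate and time on task. The sketch below shows the arithmetic on invented task results (a completed flag and seconds taken per participant).

```python
# Hypothetical results for one task: (completed?, seconds taken).
results = [(True, 42.0), (True, 55.5), (False, 120.0), (True, 38.2), (False, 95.0)]

success_times = [t for ok, t in results if ok]
completion_rate = len(success_times) / len(results)
# Time on task is usually reported for successful attempts only.
avg_time = sum(success_times) / len(success_times)

print(f"Completion rate: {completion_rate:.0%}")  # Completion rate: 60%
print(f"Avg. time on task (successes): {avg_time:.1f}s")
```

These numbers are most useful for comparison, e.g. the same task across two design iterations, rather than as absolute thresholds.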

A/B testing

A/B testing, also known as split testing, is an evaluative research approach that involves comparing two or more versions of a design or feature to determine which one performs better in achieving a specific objective. Users are randomly assigned to different variants, and their interactions, behavior, or conversion rates are measured and analyzed. A/B testing allows researchers to make data-driven decisions by quantitatively assessing the impact of design changes on user behavior, engagement, or conversion metrics.
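To make “measured and analyzed” concrete, here’s a minimal sketch of a two-proportion z-test comparing conversion rates between two variants. The sample sizes and conversion counts are invented; in practice an experimentation platform or a statistics library typically does this for you.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 5.8% vs. A's 4.6%.
z, p = two_proportion_z(conv_a=46, n_a=1000, conv_b=58, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these invented numbers the difference would not clear a conventional 0.05 significance threshold, which is a useful reminder that A/B tests need adequate sample sizes, not just a winner by raw percentage.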

This is the value of having a UX research plan before diving into the research approach itself. If we were able to answer the evaluative questions we had, in addition to figuring out if our hypotheses were valid (or not), I’d count that as a successful evaluation study. Ultimately, research is about learning in order to make more informed decisions—if we learned, we were successful.

Nannearl LeKesia Brown, Product Researcher at Figma

Evaluative research question examples

To gather valuable data and make better design decisions, you need to ask the right research questions. Here are some examples of evaluative research questions:

Usability questions

  • How would you go about performing [task]?
  • How was your experience completing [task]?
  • How did you find navigating to [X] page?
  • Based on the previous task, how would you have preferred to complete that action?

Get inspired by real-life usability test examples and discover more usability testing questions in our guide to usability testing.

Product survey questions

  • How often do you use the product/feature?
  • How satisfied are you with the product/feature?
  • Does the product/feature help you achieve your goals?
  • How easy is the product/feature to use?

Discover more examples of product survey questions in our article on product surveys.

Closed card sorting questions

  • Were there any categories you were unsure about?
  • Which categories were you unsure about?
  • Why were you unsure about the [X] category?

Find out more in our complete card sorting guide.

Evaluation research examples

Across UX design, research, and product testing, evaluative research can take several forms. Here are some ways you can conduct evaluative research:

Comparative usability testing

This example of evaluative research involves conducting usability tests with participants to compare the performance and user satisfaction of two or more competing design variations or prototypes.

You’ll gather qualitative and quantitative data on task completion rates, errors, user preferences, and feedback to identify the most effective design option. You can then use the insights gained from comparative usability testing to inform design decisions and prioritize improvements based on user-centered feedback.

Cognitive walkthroughs

Cognitive walkthroughs assess the usability and effectiveness of a design from a user's perspective.

You’ll ask evaluators to step through a set of tasks, checking at each step whether a user would know what to do, to identify potential points of confusion, decision-making challenges, or errors. You can then gather insights on user expectations, mental models, and information processing to improve the clarity and intuitiveness of the design.

Diary studies

Conducting diary studies gives you insights into users' experiences and behaviors over an extended period of time.

You provide participants with diaries or digital tools to record their interactions, thoughts, frustrations, and successes related to a product or service. You can then analyze the collected data to identify usage patterns, uncover pain points, and understand the factors influencing the user experience.

In the next chapters, we'll learn more about quantitative and qualitative research, as well as the most common UX research methods. We’ll also share some practical applications of how UX researchers use these methods to conduct effective research.


Frequently asked questions

What is evaluative research?

Evaluative research, also known as evaluation research or program evaluation, is a type of research you can use to evaluate a product or concept and collect data that helps improve your solution.