UX Blocks

In today's digital age, where attention spans are decreasing and users are quick to judge a website or app, it's essential to capture users' attention and engage them quickly. This is where the 5-second test comes into play. In this article, we'll discuss what the 5-second test is and how it's used in UX research.

What is the 5-second test?

The 5-second test is a type of UX research that involves showing users a screenshot or a design for 5 seconds and then asking them questions about what they remember or what they think the design is about. The idea is to simulate a user's first impression of a website or app and capture their immediate reactions.

How does it work?

The 5-second test typically involves the following steps:

  • Prepare the design: Prepare a screenshot or a design of your website or app that you want to test. Make sure it's representative of the overall design and messaging of the website or app.
  • Show the design: Show the design to the user for 5 seconds. During this time, the user can't interact with the design or scroll through the page.
  • Ask questions: After the 5 seconds are up, ask the user a series of questions about what they remember from the design or what they think the design is about. These questions can range from general impressions to specific details about the design.
  • Analyze the results: Analyze the data collected from the user responses and identify any common themes or issues that users may have with the design. Use this information to make changes or improvements to the design.
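The analyze step above is essentially a tally of recurring themes across open-ended answers. Here is a minimal sketch in Python; the responses and the keyword-to-theme mapping are invented for illustration, not drawn from a real study:

```python
from collections import Counter

# Hypothetical recall answers collected after the 5-second exposure
responses = [
    "a travel site with a big photo and a search bar",
    "some kind of booking page, I remember the search bar",
    "a photo of a beach, maybe a travel agency",
]

# Themes to check recall against (invented for this sketch)
themes = {"search bar": "search", "photo": "imagery",
          "travel": "travel", "booking": "travel"}

counts = Counter()
for answer in responses:
    for keyword, theme in themes.items():
        if keyword in answer.lower():
            counts[theme] += 1

print(dict(counts))  # which themes respondents recalled most often
```

A tally like this quickly surfaces which parts of the design survived the 5-second exposure and which went unnoticed.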

When to use the 5-second test?

The 5-second test is often used in the early stages of design or redesign to capture users' initial reactions and make quick improvements. It's a valuable tool for testing new designs, landing pages, and marketing campaigns and can be used to identify potential issues before launching a website or app.

Benefits of the 5-second test

The 5-second test has many benefits, including:

  • Quick and easy: The 5-second test is a quick and easy way to get user feedback and identify potential issues or improvements in a design.
  • Cost-effective: Compared to other UX research methods, the 5-second test is relatively inexpensive and can be conducted with a small group of users.
  • Real-world insights: By capturing users' initial reactions, the 5-second test provides real-world insights into how users perceive and interact with a website or app.
  • Early detection of issues: The 5-second test can help identify potential issues or improvements early in the design process, saving time and money in the long run.

Best Practices

  • Be clear on your research goals: Before conducting a 5-second test, it’s important to identify what specific aspect of your website or design you want to test. This will help you create focused questions and gather meaningful data.
  • Keep it simple: The purpose of a 5-second test is to quickly capture a user's first impression, so keep the test simple. Avoid cluttering the test with too many questions or visuals that might distract from the main focus of the test.
  • Use clear visuals: Use clear and high-quality visuals in your 5-second test. Avoid using visuals that are too complex or hard to interpret, as they may skew the results.
  • Test with the right audience: It’s important to test with the right audience to ensure that the results are relevant. Consider your target audience and recruit participants who match the demographics of your user base.
  • Test iteratively: Don’t rely on a single 5-second test to make decisions. Conduct multiple tests iteratively to ensure that the changes you make are effective and result in a better user experience.

Use cases

  • Landing pages: Landing pages are crucial for user engagement and conversions. A 5-second test can help identify whether the landing page is effective in grabbing the user’s attention and conveying the message clearly.
  • Branding: A 5-second test can help test the effectiveness of branding elements such as logos, colour schemes, and fonts. It can help determine if the brand message is being conveyed in the first few seconds of user engagement.
  • Call to action: Call-to-action (CTA) buttons are crucial for user engagement and conversions. A 5-second test can help determine if the CTA button is placed prominently and if the messaging is clear.
  • Product pages: A 5-second test can help test the effectiveness of product pages by identifying whether the key product features and benefits are being conveyed effectively.

Qatalyst offers a test block feature that lets you run 5-second tests: respondents see a screenshot or design for five seconds and are then asked what they remember or what they think it is about, capturing their first impressions and immediate reactions.


Create a 5-second Test

To create a 5-second test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "5-second test".


Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Properties

  • Mandatory Test: Makes the test compulsory; the respondent cannot move on to the next question without taking it.
  • Image Dimension:
  • Fit to Screen: The image is scaled so that it fits entirely within the screen, regardless of its original dimensions. Users can view the whole image without scrolling vertically or horizontally.
  • Fit to Width: The image is scaled to cover the full width of the screen while maintaining its original aspect ratio. If the scaled height exceeds the screen height, users can scroll vertically to view the rest of the image.
  • Fit to Height: The image is scaled to fit the full height of the screen while maintaining its original aspect ratio. If the scaled width exceeds the screen width, users can scroll horizontally to explore the entire image.
  • Time Limit: You can change the time limit of the test to 10, 15 or 20 seconds using this option. 
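The three image-dimension modes boil down to a choice of scale factor. The sketch below models that behaviour for illustration only; it is not Qatalyst's actual rendering code:

```python
def display_scale(img_w, img_h, screen_w, screen_h, mode):
    """Return the scale factor applied to an image under each fit mode.

    An illustrative model of the behaviour described above,
    not the product's actual implementation.
    """
    if mode == "fit_to_screen":
        # Entire image visible: constrain by the tighter dimension.
        return min(screen_w / img_w, screen_h / img_h)
    if mode == "fit_to_width":
        # Fill the width; vertical scrolling handles any overflow.
        return screen_w / img_w
    if mode == "fit_to_height":
        # Fill the height; horizontal scrolling handles any overflow.
        return screen_h / img_h
    raise ValueError(f"unknown mode: {mode}")

# A tall 2000x3000 image on a 1000x800 screen:
print(display_scale(2000, 3000, 1000, 800, "fit_to_screen"))  # height-bound
print(display_scale(2000, 3000, 1000, 800, "fit_to_width"))   # 0.5, scrolls vertically
```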

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select a technology, click on its checkbox. You can select more than one tracking technology at once.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.
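The Bounce Rate formula above can be checked with a few lines of Python (the respondent counts here are invented):

```python
def bounce_rate(drop_off, skips, responses):
    """Bounce Rate = ((Drop-off + Skip) / Number of Responses) * 100."""
    if responses == 0:
        return 0.0
    return (drop_off + skips) / responses * 100

# e.g. 100 responses, 5 drop-offs, 10 skips -> 15.0 (percent)
print(bounce_rate(drop_off=5, skips=10, responses=100))  # 15.0
```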

Based on the technology selected, you will find the following metrics : 

  • All clicks: The All Clicks metric shows every click respondents made on the image, giving a complete view of how users interact with it. The size of each click marker reflects how many times respondents clicked in that area, which helps identify the parts of the image that draw the most attention.

  • AOI (Area of Interest): You can draw AOIs on the image. Within each AOI, you can view metrics such as time spent, average time to first fixation, and average fixation duration, providing a deeper understanding of user engagement.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the image. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.
  • Mouse Scroll Data: This metric provides valuable insights into how users navigate a scrollable page by revealing the extent of the page they have visited. This metric helps us understand the user's engagement and attention as they scroll through the content, offering valuable information about which areas of the page are being explored.
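As a rough illustration of how click maps and heatmaps like these are built, the sketch below bins (x, y) click or gaze coordinates into a coarse grid and reports the hottest cell. The sample points are invented, and real eye-tracking data is far denser:

```python
from collections import Counter

def heatmap(points, cell=50):
    """Bin (x, y) points into cell-by-cell squares and count hits per square."""
    grid = Counter()
    for x, y in points:
        grid[(x // cell, y // cell)] += 1
    return grid

# Invented gaze samples clustered around a headline at roughly (120, 60)
samples = [(118, 55), (125, 62), (130, 58), (400, 300), (122, 61)]
hot = heatmap(samples)
print(hot.most_common(1))  # the hottest cell, i.e. the most-fixated region
```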

Example

 Here is an example of how you can use Qatalyst for 5-second testing:

Suppose you are a designer working on a new landing page for a website. You want to use 5-second testing to get feedback on the visual appeal and memorability of the landing page design.

You decide to run a five-second test and ask participants whether they can tell what the company does from the landing page and whether its message speaks clearly to the audience.

Step 1: Upload the Image 

After determining the focus of your test, you can proceed to configure your test within Qatalyst. This can be done by uploading an image of the specific screen you wish to test.

Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.

Step 2: Create Questions around the Test

Now, add questions to understand users' comprehension of the landing page and gauge their overall perception of the website. For this, use the survey blocks. Keep the questions concise and focused to gather quick, instinctive responses within the limited 5-second timeframe. 

Step 3: Publish the test and share 

Now that your test is ready, it’s time to share the test with the participants.

Step 4: Analyze the Result

After all participants have completed the test, it is time to delve into the analytics and examine their responses to assess the success of your test.

In Qatalyst, results are shown separately for the question blocks and the research blocks. If you have enabled any tracking technology, you will also see the corresponding metrics.

Five-second tests are a quick and effective way to measure the clarity of your design and how well it communicates its message, which can later help you improve the user experience of your design.


A/B testing is a popular method used in UX research to evaluate the effectiveness of different design options. In this article, we will explore the importance of A/B testing in UX research, how to conduct it, use cases and some best practices to keep in mind.

What is A/B testing?

A/B testing is a technique used to compare two versions of a design to determine which one performs better. In UX research, this technique is used to compare two different designs or variations of the same design to understand which one performs better in terms of user behaviour, engagement, and conversion.

How does A/B testing work?

A/B testing involves creating two different versions of a design element, such as a button or a page layout, and showing them to users. User behaviour, such as clicks, eye gaze, or conversions, is then measured and compared between the two versions to determine which one performs better.

For example, let's say a company wants to test two different versions of the website to see which one performs better. The first version of the website has a dark colour scheme, while the second version has a light colour scheme. The company decides to run an A/B test to see which version of the website has a higher conversion rate. 

The company analyzes the results of the A/B test, and they find that the second version of the website has a higher conversion rate. The company decides to implement the second version of the website, and they see an increase in sales.
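Before acting on a result like this, it is worth checking that the difference between the two conversion rates is statistically meaningful rather than noise. Here is a minimal two-proportion z-test using only the Python standard library; the visitor and conversion counts are invented:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Dark scheme: 200 conversions from 5000 visitors; light scheme: 260 from 5000
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # |z| > 1.96 suggests significance at the 5% level
```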

When to conduct A/B Testing?

A/B testing can be conducted at any stage of website design, but the most effective time to conduct A/B testing is during the design and development phase. This is because A/B testing can help you identify the most effective design elements, user interface, and user experience, which can save time and resources in the long run.

When conducting A/B testing during website designing, it's important to test different variations of your website design, such as layout, colour scheme, font, and images. This can help you identify the most effective design elements that resonate with your audience.

It's also important to conduct A/B testing on different devices, such as desktops, laptops, tablets, and mobile phones, as user behaviour can vary significantly depending on the device. By testing on different devices, you can ensure that your website is optimized for all types of users.

Furthermore, it's essential to conduct A/B testing on different segments of your audience to ensure that your website design is effective for all user groups. This can include testing different versions of your website on different demographics, such as age, gender, location, and interests.

Best Practices

Here are some best practices to keep in mind when conducting A/B testing:

  • Clearly define your goals: Before conducting A/B testing, it's important to clearly define your goals and what you want to achieve. This will help you choose the right metrics to measure and ensure that your A/B testing is aligned with your business objectives.
  • Choose the right metrics: When conducting A/B testing, it's important to choose the right metrics to measure. This will depend on your business goals and what you want to achieve. Common metrics include click-through rates, conversion rates, bounce rates, and time on the page.
  • Monitor results regularly: A/B testing should be an ongoing process, and it's important to monitor your results regularly to identify any trends or changes in user behaviour. This will help you make informed decisions and optimize your online presence over time.
  • Right Audience: It's essential to test on a representative sample of your target audience to ensure that your results are meaningful and relevant. This means selecting participants who match your target demographic, interests, and behaviour patterns.

Use Cases

Here are some use cases for A/B testing:

  • Landing page optimization: A/B testing can be used to optimize landing pages for better conversion rates. By testing different variations of design elements such as headlines, images, call-to-action (CTA) buttons, and forms, businesses can determine which design leads to the highest conversion rate.
  • Email marketing: A/B testing can be used to optimize email marketing campaigns for better open and click-through rates. By testing different variations of subject lines, email copy, and CTA buttons, businesses can determine which version of the email performs the best.
  • Pricing strategy: A/B testing can be used to determine the most effective pricing strategy for a product or service. By testing different pricing models, such as tiered pricing or a flat rate, businesses can determine which pricing strategy leads to the highest revenue.
  • Product features: A/B testing can be used to determine which product features are most appealing to users. By testing different variations of product features, such as the placement of a search bar or the size of product images, businesses can determine which version leads to the highest engagement and conversion rates.
  • Ad campaigns: A/B testing can be used to optimize ad campaigns for better performance. By testing different variations of ad copy, images, and targeting options, businesses can determine which version of the ad leads to the highest click-through and conversion rates.

In all of these cases, A/B testing allows businesses to make data-driven decisions about their marketing and product strategies. By testing different variations of design elements and features, they can identify the most effective approach and improve their overall performance.

In Qatalyst, you can conduct A/B testing on images to determine which one users prefer. Additionally, we offer you the ability to integrate various technologies, such as mouse tracking, facial coding, and eye tracking, to gather additional data and insights about user behaviour and preferences.

Create an A/B Test

To create an A/B test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "A/B test".

Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Properties

  • Required: Selecting one answer from the list is mandatory; the respondent will not be able to move to the next question without answering.
  • Randomize: The images will appear in random order.

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select a technology, click on its checkbox. You can select more than one tracking technology at once.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.

On the next screen, based on the technology selected, you will find the following metrics: 

  • All clicks: The All Clicks metric shows every click respondents made on the image, giving a complete view of how users interact with it. The size of each click marker reflects how many times respondents clicked in that area, which helps identify the parts of the image that draw the most attention.

  • AOI (Area of Interest): You can draw AOIs on the image. Within each AOI, you can view metrics such as time spent, average time to first fixation, and average fixation duration, providing a deeper understanding of user engagement.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the image. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.

Example

Suppose you are an e-commerce business looking to optimize your product page layout for better conversion rates. Specifically, you want to compare two different variations of the "Add to Cart" button to determine which design yields higher user engagement and click-through rates.

Step 1: Set Up the Test

In Qatalyst, set up the A/B test by uploading the two versions of the product page, each with a different design for the "Add to Cart" button. Ensure that only this specific element is changed while keeping the rest of the page consistent. This will help isolate the impact of the button design on user behaviour.

Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.

Step 2: Create Questions around the Test

Now, add questions based on the information you want to gather from respondents. Consider using open-ended questions to gather qualitative feedback that can provide deeper insights.

Step 3: Publish the test and share 

Now that your test is ready, it’s time to share the test with the participants.

Step 4: Analyze the Result

After all participants have completed the test, it is time to delve into the analytics and examine their responses to assess the success of your test.

In Qatalyst, results are shown separately for the question blocks and the research blocks. If you have enabled any tracking technology, you will also see the corresponding metrics.

A/B testing with Qatalyst empowers you to make data-driven decisions about design changes, enabling continuous optimization and improvement of your product pages to maximize conversions and enhance user experience.

In UX research, it is crucial to understand the preferences of your users. This is where preference testing comes in. Preference testing is a technique that allows you to test multiple design options to determine which one is preferred by users. In this article, we will discuss preference testing in UX research, how it works, and its benefits. 

What is Preference Testing?

Preference testing is a type of research that helps businesses understand what their customers like and prefer. It involves showing people different design variants and asking them which one they like the most. By doing this, businesses can learn their customers' preferences and make decisions about how to improve their product to better meet their customers' needs.


Benefits of preference testing

Preference testing has many benefits in UX research, some of which include:

  • User-centred design: By testing multiple design variations with users, preference testing ensures that your design decisions are based on user preferences and needs rather than assumptions or personal preferences.
  • Improved user experience: Preference testing allows you to identify design elements that users prefer, which can be incorporated into your final design to create a more enjoyable and engaging user experience.
  • Time and cost-effectiveness: Preference testing is relatively quick and inexpensive compared to other UX research methods, such as usability testing. This makes it a great option for small or medium-sized businesses with limited resources.
  • Competitive advantage: Preference testing can help you gain a competitive advantage by creating a design that is optimized for user preferences and needs, leading to increased user engagement and customer satisfaction.

How does Preference Testing work?

To conduct preference testing, you first need to identify the design elements that you want to test. These could include anything from different colour schemes to variations in layout, content, or navigation. Once you have identified the design elements, you can create multiple variations of each element and then present them to users in randomized order.

Participants in the study are typically shown each variation for a few seconds and then asked to choose which one they prefer. This process is repeated for each design element being tested. Once all the data is collected, you can analyze the results to determine which design elements are most preferred by your target audience.
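The present-in-random-order, then-tally process described above can be sketched as follows (the variant names and participant choices are invented):

```python
import random
from collections import Counter

variants = ["design_a", "design_b", "design_c"]

def run_session(participant_choice):
    """Show the variants in a random order, then record the participant's pick."""
    order = variants[:]
    random.shuffle(order)  # randomized order avoids position bias
    # ...each variant is displayed for a few seconds, then a preference is asked...
    return participant_choice

# Invented choices from five participants
votes = Counter(run_session(c) for c in
                ["design_b", "design_a", "design_b", "design_b", "design_c"])
print(votes.most_common(1))  # the most-preferred variant and its vote count
```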

When to perform preference testing?

Well, ideally, you should conduct preference testing whenever you're trying to improve the user experience of a website or application. More specifically, preference testing can be particularly useful when you're trying to make decisions about design, content, navigation, or user flows.

For example, let's say you're designing a new website, and you're trying to decide which colour scheme to use. You could conduct a preference test to see which colour scheme is more appealing to your target audience. Or, let's say you're redesigning your e-commerce site, and you're trying to decide where to place the "add to cart" button. You could conduct a preference test to see which placement is more intuitive and leads to more conversions.

In short, preference testing can be a valuable tool whenever you're trying to make decisions about the user experience of a website or application. It allows you to get feedback from users and make data-driven decisions that can improve the overall user experience.

Best Practices

  • Define clear goals: Before conducting preference testing, it's important to define clear goals and objectives. This can help ensure that the study is focused and that the data collected is relevant and useful.
  • Use a representative sample: To ensure that the results of preference testing are accurate and reliable, it's important to use a representative sample of participants. This means selecting participants who are similar to your target audience in terms of demographics, behaviour, and preferences.
  • Choose appropriate stimuli: The stimuli used in preference testing should be appropriate and relevant to the research goals. This might include different product designs, packaging options, or marketing messages, depending on what is being tested.
  • Use a randomized design: To avoid bias, it's important to use a randomized design when presenting stimuli to participants. This can help ensure that each option is given an equal chance of being chosen.

Use Cases

  • User interface (UI) design: Preference testing can be used to test different UI designs, such as the placement of buttons, the layout of menus, and the use of colour schemes. This can help businesses determine which design elements are most intuitive and user-friendly.
  • Information architecture: Information architecture refers to the organization and structure of content on a website or application. Preference testing can be used to test different information architectures to determine which ones are most effective in helping users find the information they need.
  • Content: Content plays an important role in shaping user experience. Preference testing can be used to test different types of content, such as headlines, descriptions, and calls to action, to determine which ones are most engaging and persuasive.
  • Navigation: Navigation is a critical aspect of UX design. Preference testing can be used to test different navigation structures, such as menus and navigation bars, to determine which ones are most effective in helping users find their way around a website or application.
  • User flows: User flows refer to the series of actions a user takes to accomplish a task. Preference testing can be used to test different user flows to determine which ones are most efficient and user-friendly.
  • Prototypes: Prototyping is an important part of the UX design process. Preference testing can be used to test different prototypes to determine which ones are most effective in meeting users' needs and preferences.

Qatalyst offers a test block feature that allows users to conduct preference testing on various elements of the product. Users can add different versions of an element, such as two different designs, and ask users which one they prefer. This data can be used to inform product development decisions and optimize the product's design and features.


Properties

  • Required: Makes the test compulsory; the respondent cannot move on to the next question without taking it.
  • Randomize: The image options will appear in random order.

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select a technology, click on its checkbox. You can select more than one tracking technology at once.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.


In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.

On the next screen, based on the technology selected, you will find the following metrics: 

  • All clicks: The All Clicks metric provides insights into the clicks respondents made on the image, giving a complete view of how users interact with it. The size of each click marker depends on how many times respondents clicked in that area, which helps you understand which parts of the image attract the most attention.
  • AOI (Area of Interest): On the image, you can create AOIs. Within AOIs, you can glean insights into metrics such as time spent, average time to first fixation, and average fixation duration, providing a deeper understanding of user engagement.
  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the image. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.


In UX research, it is important to test your prototype before you start building your product. In this article, we will explore the importance of prototype testing in UX research, how to conduct it, its use cases, and some best practices to keep in mind.

What is a Prototype?

A prototype is an early version or a design mock-up of a product or feature that is used to test its design and functionality before it is produced or released. It is a simplified representation of the final product created to illustrate key features and identify design flaws. 

What is Prototype Testing?

Prototype testing is a type of testing that involves evaluating a preliminary version of a product to identify design flaws and gather feedback from users or stakeholders. The goal of prototype testing is to improve the product's design, functionality, and user experience before it is released to the market. Prototype testing helps teams refine their ideas and concepts before investing time and resources in the final product, which saves time and money and ensures the product meets the needs and expectations of users.

Why is it Important?

  • Identifying design flaws early: By testing a prototype, you can identify design flaws and usability issues early in the product development process when it is easier and less costly to address them. This can save time and money in the long run and result in a more successful product launch.
  • Reducing development costs: Prototype testing can help identify design flaws and usability issues early in the product development process. By addressing these issues early, you can avoid costly redesigns or reworks later in the development process.
  • Saving time: By testing the product early and often, you can identify and address issues in a timely manner. This can save time by avoiding lengthy and costly delays caused by major design flaws or usability issues.
  • Improving user experience: Prototype testing can help you identify and address usability issues that could negatively impact the user experience. By refining the product's design and functionality based on user feedback, you can improve the overall user experience and increase the likelihood of user adoption.
  • Mitigating risk: Prototype testing can help mitigate the risk of launching a product that does not meet the needs of its intended audience or has significant design flaws. By testing the product early and often, you can identify and address issues before they become major problems.

How to Conduct Prototype Testing?

The process of prototype testing typically involves the following steps:

  • Design and create a prototype: A physical or digital prototype is designed and created, representing a simplified version of the final product. To use prototype testing effectively, have a clear objective in mind: defining what you want to validate will guide the type of prototype you create. Prototypes range in complexity from simple sketches to fully interactive versions. Low-fidelity prototypes are ideal for concept testing, while high-fidelity prototypes are valuable for assessing usability and identifying workflow issues. Aligning your prototype with the specific goals of your testing maximizes its effectiveness in gathering meaningful feedback and insights.
  • Recruit participants: Recruit participants who represent the target audience or user group and who are willing to provide feedback on the prototype.
  • Conduct prototype testing: Participants are asked to perform specific tasks or scenarios using the prototype while the test facilitator observes and collects feedback. The feedback may be collected through questionnaires or interviews.
  • Analyze feedback: The feedback collected during prototype testing is analyzed to identify usability issues, design flaws, and areas for improvement.
  • Refine the prototype: The feedback is incorporated into the prototype's design, resulting in a more refined and user-friendly product.
  • Repeat the testing process: The prototype testing process may be repeated several times, with each iteration improving upon the previous one until the final product is deemed satisfactory for release.

Prototype testing can help ensure that the final product meets the needs and expectations of users and is free of design flaws or usability issues. It can save time and resources by identifying and addressing design flaws early in the development process, resulting in a more successful product launch.

When to Conduct Prototype Testing?

Prototype testing should be conducted during the product development process, ideally after a preliminary version of the product has been created. The timing of prototype testing will depend on the specific product being developed and the stage of the development process.

In general, prototype testing should be conducted when:

  • Design concepts are being developed: Prototype testing can be used to test and refine initial design concepts and ideas before investing significant time and resources in the final product.
  • Major design changes are made: Prototype testing can be used to test the impact of major design changes on the product's functionality and usability.
  • New features or functionalities are added: Prototype testing can be used to test new features or functionalities and gather feedback on their usefulness and effectiveness.
  • Usability issues are identified: Prototype testing can be used to identify and address usability issues before they become major problems.
  • User feedback is needed: Prototype testing can be used to gather feedback from users or stakeholders to ensure that the product meets their needs and expectations.

Overall, prototype testing should be conducted early and often during the product development process to ensure that the final product is user-friendly, effective, and meets the needs of its intended audience.

Best Practices

Here are some best practices to consider when conducting prototype testing:

  • Define clear testing objectives: Clearly define the objectives of the prototype testing, including what features or functionalities will be tested and who the target audience or user group is.
  • Use realistic scenarios: Create realistic scenarios for users to perform using the prototype to simulate how they would use the final product in real-life situations.
  • Recruit representative participants: Recruit participants who represent the target audience or user group and have the knowledge, experience, and skills necessary to provide meaningful feedback.
  • Use multiple testing methods: Use a variety of testing methods, including surveys, interviews, and observation, to collect data from participants and get a complete picture of their experience with the prototype.
  • Create a comfortable testing environment: Create a comfortable testing environment for participants where they feel at ease and can focus on the tasks at hand.
  • Document and analyze feedback: Document the feedback collected during prototype testing and analyze it to identify patterns and themes, as well as specific design flaws or usability issues.
  • Refine and iterate: Incorporate the feedback into the prototype's design and refine it, conducting additional rounds of prototype testing as needed until the final product meets the desired level of usability and functionality.

Qatalyst offers a test block feature that allows users to conduct prototype testing. It is a type of testing that involves evaluating a preliminary version of a product to identify design flaws and gather feedback from users or stakeholders. You can upload a prototype of your website or mobile app, define the flow of the design and test it on respondents and gather responses.

Prerequisite for prototype link

  • The file has to be publicly accessible.
  • The file has to be from a service we support: "Figma" and "Sketch" for now. (We will be adding more in the future.)
  • The file should have at least two screens. 
  • Ensure all prototype nodes are connected, with a clear starting node and no orphaned nodes.

Steps for adding prototypes:

  • Access your recent Figma projects by visiting https://www.figma.com/files/recent. This page will display a list of your Figma projects.
  • Choose the specific project you want to test with Qatalyst.
  • To open the prototype, simply click on the play button ▶️ located in the top menu.
  • In the top menu, locate and click the "Share Prototype" option. This will generate a shared URL for your prototype.
  • Qatalyst is compatible with all types of prototypes, including those designed for desktop, mobile, and tablet devices.
  • Ensure the Link Sharing Settings are set to "Anyone with the link can view." It is important for your Figma prototype to be publicly accessible in order to import it successfully.
  • Once you have set the sharing settings, click on the "copy link" button. Then, simply paste the copied link into Qatalyst for seamless integration.

Journey Paths

  • Defined Path: A defined path is a predetermined sequence of steps or actions that a user can follow to complete a specific task or goal within the prototype. Qatalyst allows you to define multiple paths across the screens: the start screen stays the same, you can change the end screen of the test, and between the start and end screens you can define multiple paths.
  • Exploratory Path: In this path type, you define the start and end screens while creating the research, and respondents navigate freely between screens to reach the endpoint while taking the test. Using this technique, you can identify whether participants can finish an activity effectively, measure the time needed to accomplish a task, determine the adjustments necessary to increase user performance and satisfaction, and examine whether performance satisfies your usability goal.

When to Use Which Journey Path?

Defined Path: If you have pre-determined navigation paths for your prototype, using a defined path allows you to assess which path is most convenient or preferred by users. This helps you understand which specific path users tend to choose among the available options.

Exploratory Path: Choose an exploratory path when you want to test whether the respondents are able to navigate between the screens and are able to complete the given task and gather information about users' natural behaviour and preferences. This approach encourages users to freely explore the prototype and interact with it based on their own instincts and preferences. It can reveal unexpected insights and usage patterns that may not have been accounted for in predefined paths.

Properties

  • Required: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
  • Randomize: The image options will appear in random order.
  • Screen Recording: Using this option, the whole session of taking the test will be recorded along with the audio.

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

You can select more than one tracking technology at once too.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

1. Blocks Summary


In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.

2. Task Summary

On the result dashboard, you will find a summary of the test along with the following information:

  • Average Duration: The average time respondents have spent on the block.
  • Bounce Rate: The percentage of drop off and skips relative to the total number of responses.
  • Misclick rate: Percentage of clicks made outside the actionable CTAs.
  • Success Rate: Percentage of respondents who successfully completed the task (neither skipped nor dropped off).
  • Alternate Success Rate: This metric is available only for the defined path and shows the percentage of users who have reached the goal screen i.e. completed the task but have used an alternate path instead of using the defined path.

Overall Usability Score: This score represents the overall performance of your prototype. It is calculated by harnessing various metrics such as success rate, alternate success, average time, bounce rate, and misclick rate.

Overall Usability Score = Direct Success Rate + (Indirect Success Rate / 2) - avg(Misclick%) - avg(Duration%)
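As a rough sketch, the formula above can be expressed in Python. How the averages are taken (simple means over per-page percentages here) is an assumption, as is the example data:

```python
def overall_usability_score(direct_success_rate: float,
                            indirect_success_rate: float,
                            misclick_pcts: list,
                            duration_pcts: list) -> float:
    """Overall Usability Score = Direct Success Rate
    + Indirect Success Rate / 2 - avg(Misclick%) - avg(Duration%)."""
    def avg(values):
        return sum(values) / len(values) if values else 0.0
    return (direct_success_rate
            + indirect_success_rate / 2
            - avg(misclick_pcts)
            - avg(duration_pcts))

# Hypothetical study: 70% direct success, 20% indirect success,
# per-page misclick percentages [5, 15] and duration percentages [4, 6].
print(overall_usability_score(70, 20, [5, 15], [4, 6]))  # 65.0
```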

3. User Journey Tree

Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.

In the tree, coloured lines convey the following information about the journeys.

In the defined journey, the following information is shown:

  • Green line: The respondents who have navigated through the defined path and landed on the goal screen.
  • Purple Line: The respondents who took a path other than the defined path yet landed on the goal screen.
  • Red Line: The respondents who did not reach the goal screen when travelling this path.
  • Red down arrow 🔽: This icon displays the number of users who closed/skipped the journey after visiting a particular screen.

For the exploratory journey, there is no alternate path. The journey can be either a success or a failure.

Success: When the respondents reach the goal screen.

Failure: When respondents do not reach the goal screen.
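The colour coding above amounts to a simple classification rule, sketched below in Python. This is illustrative only; the function and screen names are hypothetical, not part of Qatalyst:

```python
def classify_journey(path, defined_path, goal_screen):
    """Classify a respondent's journey as in the tree's colour coding:
    'defined' (green), 'alternate' (purple), or 'failure' (red)."""
    if not path or path[-1] != goal_screen:
        return "failure"
    return "defined" if path == defined_path else "alternate"

# Hypothetical screens; the defined path is Home -> Search -> Checkout.
print(classify_journey(["Home", "Search", "Checkout"],
                       ["Home", "Search", "Checkout"], "Checkout"))  # defined
print(classify_journey(["Home", "Cart", "Checkout"],
                       ["Home", "Search", "Checkout"], "Checkout"))  # alternate
```

For exploratory journeys there is no alternate path, so only the success ("defined") and "failure" outcomes apply.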

Insights from the User Journey

  • Popular paths: By examining the user journey tree, you can identify the most common paths taken by respondents. This insight helps you understand the preferred navigation patterns and the pages or features that attract the most attention. You can leverage this information to optimize and enhance the user experience on these popular paths.
  • Abandoned paths: The user journey tree can also reveal paths that are frequently abandoned or not followed by respondents. These abandoned paths may indicate where users encounter difficulties, confusion, or disinterest. 
  • Navigation patterns: Analyzing the user journey tree allows you to observe the navigation patterns of respondents. You can identify if users follow a linear path, explore different branches, or backtrack to previous pages. This insight helps you understand how users interact with your prototype and adapt the navigation flow accordingly to ensure a seamless and intuitive user experience.
  • Bottlenecks or roadblocks: The user journey tree can highlight specific pages or interactions where users frequently get stuck or face challenges. These bottlenecks or roadblocks in the user journey can provide valuable insights into areas that may require improvements, such as unclear instructions, confusing interface elements, or complex tasks. By addressing these issues, you can smoothen the user journey and enhance usability.
  • Deviations from expected paths: The user journey tree might reveal unexpected paths taken by respondents that differ from the intended user flow. These deviations can indicate opportunities to optimize the prototype by aligning user behaviour with the desired user journey. Understanding why users deviate from the expected paths can provide insights into their needs, preferences, and potential design or content improvements.

4. Graph metrics

The performance metrics provide a clear picture of the average time spent on each page in the prototype. This information is presented alongside the total time taken to complete the task and the number of respondents who have visited each page. By mapping these metrics together, we gain insights into how users interact with each page and how it contributes to the overall task completion.

Insights from Performance metrics

  • Column height: A drop in column height indicates a significant decline in engagement with the corresponding page, suggesting that users are leaving or losing interest at that point. It could be a sign that the content or design of the page needs improvement to retain user attention.
  • Column width: If any particular column is wider than the others, it suggests that respondents have spent a considerable amount of time on the page. It may indicate that the page is either providing valuable information or engaging users in some way. However, it's important to note that spending too much time on a page can also indicate confusion or difficulty in finding the desired information.

5. Performance Breakdown


This chart showcases the comprehensive performance analysis of each page within the prototype. It presents valuable insights such as the average time spent by respondents on each page, the misclick rate, and the drop-off rate.

By harnessing these metrics, we derive a usability score for every page, offering users a clear understanding of how each page performed so that they can focus on areas that require improvement.

Usability Score = MAX(0, 100 - Drop-off - (Misclick Rate * Misclick Weight) - MIN(10, MAX(0, Average Duration in sec - 5) / 2))

The misclick weight equals 0.5 points for every misclick.
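Reading the formula as code can make the penalty terms clearer. Below is a Python sketch under the stated 0.5 misclick weight; it is illustrative, not Qatalyst's implementation:

```python
def page_usability_score(drop_off: float, misclick_rate: float,
                         avg_duration_sec: float,
                         misclick_weight: float = 0.5) -> float:
    """Usability Score = MAX(0, 100 - DropOff - MisclickRate * weight
    - MIN(10, MAX(0, AvgDurationSec - 5) / 2))."""
    # Duration penalty kicks in after 5 seconds and is capped at 10 points.
    duration_penalty = min(10, max(0, avg_duration_sec - 5) / 2)
    return max(0, 100 - drop_off - misclick_rate * misclick_weight
               - duration_penalty)

# Hypothetical page: 10% drop-off, 20% misclick rate, 9 s average duration.
print(page_usability_score(drop_off=10, misclick_rate=20,
                           avg_duration_sec=9))  # 78.0
```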

Insights from Performance Breakdown

  • Identify high-performing pages: Pages with a shorter average time spent, lower misclick rate, and lower drop-off rate can be considered well-designed and well-performing. These pages likely provide intuitive interactions and a smooth user experience.
  • Identify low-performing pages: Pages with a higher average time spent, higher misclick rate, and higher drop-off rate may require further investigation and improvement. These pages may have usability issues, unclear navigation, confusing elements, or uninteresting content.
  • Prioritize improvements: By analyzing the metrics, you can prioritize your efforts based on the insights obtained. Focus on optimizing pages with high drop-off rates and high misclick rates to improve user experience, reduce abandonment, and increase engagement.

Pages with a usability score below 80 call for attention. Researchers can check eye tracking, mouse tracking, and facial coding data to determine whether the behaviour is expected or an anomaly.

6. Emotion AI Metrics

When you click on any page using the performance metrics, you will be seamlessly transported to the detailed Metrics page, where you can delve into insights gathered from eye tracking, facial coding, and mouse clicks.

Here, you will discover information such as the average time spent on the page, the number of respondents who have visited the page, and intricate details regarding the misclick rate.

In the Analytics section, you'll have access to a wealth of metrics, including:

  • All Clicks: This encompasses all clicks made within the prototype, offering a holistic view of user interaction.

  • Misclick: This specific metric isolates clicks made outside the designated clickable areas within the prototype, shedding light on user behaviour in unintended interactions.

  • Mouse Scroll Data: This metric provides valuable insights into how users navigate a scrollable page by revealing how far down the page they have scrolled, offering information about which areas of the page are being explored and how engaged users are with the content.

  • AOI (Area of Interest): On the prototype page, you can create AOIs. Within AOIs, you can glean insights into metrics such as time spent, average time to first fixation, average fixation duration, the number of clicks, and the number of misclicks, providing a deeper understanding of user engagement.

For example, an AOI could be used to track the time that users spend looking at a call to action button, or the number of times they click on a link. This information can be used to improve the usability of the website or app by making sure that the most important elements are easy to find and interact with.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the prototype. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.

By exploring these meticulously curated metrics, you can gain a comprehensive understanding of user engagement and behaviour, empowering you to make data-driven decisions to enhance your project's performance and user experience.

7. Screen Recording

Under this section, you will find screen recordings of respondents' test sessions. You can use the dropdown at the top to select a tester.
Along with the video recording, you will get the following functionality: 

  • Eye Tracking Metrics: Shows where users look on the screen with a heatmap.
  • Facial Coding Metrics: Tracks how users feel using facial expressions, displayed as positive and negative emotion charts.
  • AOI (Area of Interest): Lets you choose specific parts of the video to study closely.
  • Transcript: Writes down everything users say in the video.
  • Highlighting: Helps you point out important parts in the transcript.
  • Notes: Allows you to jot down thoughts or comments at specific times in the video.

Highlight Creation


What is card sorting?

Card sorting is a valuable user research technique used to understand how individuals organize information mentally. By leveraging card sorting, analysts can gain valuable insights into how users perceive relationships between concepts and how they expect information to be organized. These insights, in turn, inform the design and structure of websites, applications, and other information systems, leading to enhanced usability and an improved user experience.

Card sorting can be conducted using physical cards, where participants physically manipulate and group the cards, or it can be done digitally using online platforms like Qatalyst. The technique allows researchers to gain insights into users' mental models, understand their organizational preferences, and inform the design and structure of information architecture, navigation systems, menus, and labelling within a product or website.

Why do we use Card Sorting?

Card sorting is a valuable technique in UX research for several reasons:

  • Understand how users think: Card sorting helps us understand how users naturally organize and categorize information in their minds. This insight helps us design interfaces that match their mental models, making it easier for them to find what they're looking for.
  • User-friendly design: By involving users in organizing information, we ensure our designs are user-friendly and intuitive. Card sorting helps us create interfaces that feel familiar and make sense to users, resulting in a better user experience.
  • Find common patterns: Card sorting helps us identify common patterns and groupings in how users categorize items. This knowledge guides us in organizing information in a way that makes sense to the majority of users.
  • Improve findability: Effective information organization improves how easily users can find what they're looking for. Card sorting helps us identify logical groupings and labelling conventions that enhance the findability of content.

Types of Card Sorting?

There are three main types of card sorting:

  • Open card sorting: Users are given a set of cards with labels on them and asked to sort them into groups that make sense to them. The researcher does not provide any guidance on how to sort the cards.
  • Closed card sorting: Users are given a set of cards with labels on them and asked to sort the cards into pre-determined categories. The researcher provides a list of categories to choose from.
  • Hybrid card sorting: This is a combination of open and closed card sorting. Users are given a set of cards with labels on them and asked to sort the cards into pre-determined categories. They are also allowed to create their own categories if they do not see any that fit their needs.

How to conduct Card Sorting?

1. Choose the correct type of card sorting. There are three main types of card sorting; choose the one that fits your research goal.

  • Open card sorting: When you want to understand how users naturally group information.
  • Closed card sorting: When you already have a good idea of what your categories should be.
  • Hybrid card sorting: When you want to get feedback on both your initial ideas and how users naturally group information.

2. Prepare the cards. The cards should be clear and concise, and they should represent the information that you want users to sort. Use index cards, sticky notes, or a digital card sorting tool like Qatalyst.

3. Recruit participants. You should recruit participants who are representative of your target audience. 

4. Conduct the card sort. You can conduct the card sorting in person or online. If you conduct the card sorting in person, you must provide a quiet space and a comfortable place for participants to work. If you are conducting the card sorting online, you will need to use a digital card sorting tool.

5. Analyze the results. Once you have collected the results of the card sort, you will need to analyze them. You can use various methods to analyze the results, such as frequency analysis and category analysis.

6. Use the results to improve your information architecture. Once you have analyzed the results of the card sort, you can use them to improve your information architecture. You can use the results to identify the most essential categories for users, determine the best way to label categories and validate or invalidate initial assumptions about information architecture.
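The frequency analysis mentioned in step 5 can be sketched in Python. The results data below is hypothetical; a real study would export participants' groupings from your card-sorting tool:

```python
from collections import Counter, defaultdict

# Hypothetical card-sort results: one dict per participant,
# mapping card -> the category that participant placed it in.
results = [
    {"Pricing": "Shop", "FAQ": "Help", "Returns": "Help"},
    {"Pricing": "Shop", "FAQ": "Help", "Returns": "Shop"},
    {"Pricing": "Shop", "FAQ": "Shop", "Returns": "Help"},
]

# Frequency analysis: count how often each card lands in each category.
frequencies = defaultdict(Counter)
for participant in results:
    for card, category in participant.items():
        frequencies[card][category] += 1

for card, counts in sorted(frequencies.items()):
    print(card, dict(counts.most_common()))
```

Cards whose counts are spread across several categories (like "FAQ" and "Returns" here) are the ambiguous ones worth a closer look.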

Best Practices

  • Choose the correct type of card sorting for your needs. As mentioned earlier, there are three main types of card sorting: open, closed, and hybrid. The type of card sorting you choose will depend on your specific needs and goals.
  • Use clear and concise labels on the cards. The labels on the cards should be clear and concise, and they should represent the information that you want users to sort. Avoid using jargon or technical terms that users may not understand.
  • Have a clear goal for the study. What do you want to learn from the card sort? Once you know your goal, you can tailor the study to collect the necessary data.
  • Collect enough data from enough participants. The number of participants you need will depend on the complexity of your information architecture. However, as a general rule of thumb, you should aim to collect data from at least 15 participants.
  • Be patient and let participants think aloud. This will help you to understand why they are making the decisions they are making. Ask them to explain their thought process as they are sorting the cards.


Create a Card Sort in Qatalyst

To create a card sort, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Card Sorting" option.


Step 3: Here, you can add the task, and add multiple cards and categories by clicking on the "+" button available. There are also multiple properties and options available to enhance the experience.

Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Properties

1. Card Property

a. Image: An image will also appear on the card along with the text.

b. Hide Title: The card will appear without the text; this option can be enabled if you have an image added to the cards.

c. Randomize: The cards will appear in random order.

2. Card Category

a. Limit cards in category: Using this property, only the given number of cards can be added to a particular category.

b. Randomize: The categories will appear in random order.

3. Required: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.

Result View

In the result section of a card sort, you will find the quantitative data about the selection made by different respondents.

In the Categories and Cards section, you will find the following two views for the result data:

Card View: It shows each card and the number of categories it was added to, along with the agreement percentage. By clicking on the plus icon, you can see which categories these cards were added to.

How to read this data?

From the first column, users can infer that the "DBZ" card was added to two categories, and the agreement percentage is 50%, meaning half of the respondents agreed on the card's placement.

You can also expand the cards and view the percentage of users who have added the card in a particular category. 

Category View

In the category view, the user can view the category names and the number of cards added to each category, along with the agreement matrix.

After expanding the card, users can view the cards added in that category and the percentage of users who have added them.

Agreement Matrix

An agreement matrix is a visual representation of how often users agree that a card belongs in each category in a card sort. It is a table with rows representing the cards and columns representing the categories. Each cell in the table indicates the agreement rate for a card in a category. The agreement rate is the percentage of users who placed the card in that category.

The agreement matrix can be used to identify which categories are most agreed upon by users, as well as which cards are most ambiguous or difficult to categorize. It can also be used to identify clusters of cards that are often grouped together.
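The calculation behind an agreement matrix can be sketched in a few lines of Python. This is an illustrative sketch only; the data structure and function names are invented here, not Qatalyst's API:

```python
from collections import defaultdict

def agreement_matrix(sorts):
    """Compute per-card agreement rates from individual card sorts.

    `sorts` is a list with one dict per respondent, mapping each card
    name to the category that respondent placed it in.
    Returns {card: {category: percentage of respondents}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for sort in sorts:
        for card, category in sort.items():
            counts[card][category] += 1
    n = len(sorts)
    return {
        card: {cat: round(100 * c / n) for cat, c in cats.items()}
        for card, cats in counts.items()
    }

# Four respondents sorting the "DBZ" card: two place it in "Anime",
# two in "TV Shows" -> 50% agreement for each category.
sorts = [
    {"DBZ": "Anime"}, {"DBZ": "Anime"},
    {"DBZ": "TV Shows"}, {"DBZ": "TV Shows"},
]
print(agreement_matrix(sorts))  # {'DBZ': {'Anime': 50, 'TV Shows': 50}}
```

A high percentage in a single cell means strong consensus on that placement; percentages spread evenly across a row flag an ambiguous card.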

What is Tree Testing?

Tree testing is a UX research method used to evaluate the findability and effectiveness of a website or app's information architecture. It involves testing the navigational structure of a product without the influence of visual design, navigation aids, or other elements that may distract or bias users.

By conducting tree testing, we aim to address the fundamental question, "Can users find what they are looking for?" This research technique allows us to evaluate the effectiveness of our information architecture and assess whether users can navigate through the content intuitively, locate specific topics, and comprehend the overall structure of our product. It provides valuable insights into the findability and clarity of our content hierarchy, enabling us to refine and optimize the user experience.

In tree testing, participants are presented with a simplified representation of the product's information hierarchy in the form of a text-based tree structure. This structure typically consists of labels representing different sections, categories, or pages of the website or app. The participants are then given specific tasks or scenarios and asked to locate specific information within the tree.

What is Information Architecture (IA)?

Information architecture (IA) refers to the structural design and organization of information within a system, such as a website, application, or other digital product. It involves arranging and categorizing information in a logical and coherent manner to facilitate effective navigation, retrieval, and understanding by users.

Why do we use Tree Testing?

Here are some of the things that tree testing can be used for:

  • Evaluate Information Architecture: Tree testing allows researchers to evaluate the effectiveness of the information architecture (IA) of a website or app. By testing the navigational structure in isolation, without the influence of visual design or other distractions, it provides a focused assessment of how well users can find and understand information within the product. For example, you might want to test whether users can easily find the page that describes your company's history.
  • Assess Findability: Tree testing helps determine the findability of specific topics or pieces of information. By presenting users with tasks or scenarios and observing their navigation through the tree structure, researchers can identify any difficulties or inefficiencies in locating desired information. This insight helps refine the IA and improve the overall findability for users. For example, you might want to test whether users find the main categories of your website to be straightforward and easy to understand.
  • Identify Navigation Issues: Tree testing allows for the identification of navigation problems, such as incorrect paths, dead ends, or confusing labelling. By analyzing user interactions and collecting feedback, researchers can uncover areas where the IA may be causing confusion or hindering users' ability to navigate efficiently.  For example, you might find that users are having difficulty finding the page that describes your company's products.

Here are some questions that tree testing can answer:

  • What are the most critical topics for users? This question can be used to prioritize the content on your website. By asking users to rank the topics in order of importance, you can get feedback on which topics are most important to them.
  • What are the most common paths that users take through my website? This question can be used to identify the most popular areas of your website. By tracking the paths that users take through your website, you can get feedback on which areas are most popular and which areas need improvement.
  • What are the most common problems that users have finding information on my website? This question can be used to identify the areas of your website that are most difficult to use. By asking users to describe the problems they have found, you can get feedback on how to improve the usability of your website.
  • Do my labels make sense? This question can be used to validate ideas before designing. By asking users to find topics based on their labels, you can get feedback on whether the labels are clear and easy to understand.
  • Is my content grouped logically? This question can be used to test the usability of your navigation. By asking users to find topics based on their location in the hierarchy, you can get feedback on whether the content is grouped logically and easy to navigate.
  • Can users find the information they want easily and quickly? This question can be used to build a foundation for the design that will lay on top of your product structure. By asking users to find topics within a specific amount of time, you can get feedback on whether the information is easy to find and navigate.

How to conduct Tree Testing?

  • Define your goals. What do you want to achieve with the tree test? Do you want to evaluate the findability of specific topics or subtopics? Do you want to get feedback on the overall hierarchy of your website? Once you know your goals, you can start to develop your tree test.
  • Create a tree diagram. This is a visual representation of your website's hierarchy. It should show the top-level categories, as well as the subcategories and sub-subcategories. You can use a spreadsheet or a tree-testing tool to create your tree diagram.
  • Write tasks. The tasks that you give to users will help you to evaluate the findability of topics on your website. The tasks should be specific and measurable. For example, you might ask users to "Find the page that describes our company's history."
  • Recruit participants. You will need to recruit a group of users who represent your target audience. The participants should be familiar with the type of website that they are testing.
  • Conduct the tree test. You can conduct the tree test remotely or in person. If you are conducting the test remotely, you will need to use a tree testing tool. The tool will allow you to present the tree diagram to the participants and track their progress.
  • Analyze the results. The results of the tree test will show you how well users were able to find the topics that you asked them to find. You can use the results to identify areas of the hierarchy that are difficult to understand or navigate.
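A tree diagram for a tree test is just a labeled hierarchy with parent-child relationships. The steps above can be sketched in Python; the labels and structure here are invented for illustration, not taken from any real site:

```python
# A minimal tree: each key is a label, each value its children.
tree = {
    "Home": {
        "About Us": {"Company History": {}, "Team": {}},
        "Products": {"Pricing": {}, "Features": {}},
        "Support": {"Contact": {}, "FAQ": {}},
    }
}

def find_path(node, target, path=()):
    """Return the tuple of labels leading to `target`, or None."""
    for label, children in node.items():
        current = path + (label,)
        if label == target:
            return current
        found = find_path(children, target, current)
        if found:
            return found
    return None

# The task "Find the page that describes our company's history"
# has exactly one correct path through this tree:
print(find_path(tree, "Company History"))
# ('Home', 'About Us', 'Company History')
```

During analysis, each participant's navigated path can be compared against the correct path like this one to score task success directly.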

Best Practices

  • Keep the tasks short and simple.
  • Use clear and concise language.
  • Give the participants enough time to complete the tasks.
  • Ask the participants to think aloud as they are completing the tasks.

Create a Tree Test in Qatalyst

To create a tree test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Tree Testing" option.

Step 3: Once you have added the block, design your question and information architecture by simply adding the labels and defining the parent-child relationship in a tree-like structure.

Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Property

Required - Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.

Result View

In the result section of the Tree Test, you will find the following two sections:

End Screen: This section shows the labels submitted by users as answers to the task and the percentage of users who selected each label.

The screenshot below shows that 4 respondents have taken the test and submitted two labels (Preference Testing, Qualitative Study) as answers to the task; 75% of the respondents selected Preference Testing and 25% selected Qualitative Study.
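The End Screen percentages are simple proportions over the submitted answers. A quick sketch with the numbers from this example (the label names are from the example above; the code is illustrative, not Qatalyst's API):

```python
from collections import Counter

# Labels submitted by the 4 respondents: 3 chose one label, 1 the other.
answers = ["Preference Testing", "Preference Testing",
           "Preference Testing", "Qualitative Study"]

# Share of respondents per submitted label, as a percentage.
shares = {label: 100 * n / len(answers)
          for label, n in Counter(answers).items()}
print(shares)  # {'Preference Testing': 75.0, 'Qualitative Study': 25.0}
```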

[Screenshot: End Screen result view]

Common Path: In this section, you will find the actual path navigated by the respondents, starting from the parent label to the child label they submitted.

What is Live Website Testing?

In UX research, live website testing refers to the practice of conducting usability testing or user testing on a live and functioning website. This type of testing is done to gather insights and feedback from users as they interact with the website in its actual environment.

Website testing aims to ensure that the website is user-friendly, efficient, engaging, and aligned with the needs and expectations of its target audience.

Live Website Testing Using Qatalyst

The Live Website Testing block enables researchers to create and present a specific task to be performed on a live website to participants. This task is designed to simulate a real user interaction on a website or digital platform, allowing researchers to observe how participants engage with the task in a controlled environment. This provides a focused way to gather insights into user behaviour, decision-making, and preferences during a research session.

  • Path tracking: Path tracking in live website testing involves monitoring and analyzing the specific journeys or paths that users take while navigating through the website. This method allows researchers to understand the sequence of actions users perform, the pages they visit, and the interactions they engage in during their browsing sessions.
  • Average time and drop-off: Understanding the average time users spend on the website and the drop-off points (where users exit the website) is crucial in live website testing. This data provides essential metrics to evaluate user engagement and identify areas that may need improvement.
  • Leveraging Facial Coding and Eye Tracking
Website testing can be further enriched through advanced technologies such as facial coding and eye tracking. These technologies enable Qatalyst to capture respondents' nonverbal cues, shedding light on facial expressions and eye movements. This nuanced analysis unveils unspoken emotions, cognitive processes, and areas of focus, providing a holistic perspective on participant engagement.
  • Unveiling Insights Through Audio Transcripts
The recorded audio holds immense potential for researchers. Audio transcripts offer a deep dive into participants' verbal responses, enabling researchers to analyze language nuances, sentiments, and communication patterns. These insights contribute to a comprehensive understanding of participant attitudes and viewpoints, enriching the research findings.

Create a Live Website Test

To create a Live Website test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. 

From the list of options that appear, select "Live Website Testing" under the Task-based research section.

Step 3:  Place the test instructions or scenarios in the top bar and enter the website URL where you want respondents to perform the task in the URL field. Click the "Add Task" icon to include multiple tasks in your testing scenario.

Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Properties

  • Mandatory Test: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
  • Time Limit: You can set the time limit of the test to 30, 60, 90, or 120 seconds using this option, depending on the length and complexity of the task. Please note that once the time is up, the task is automatically submitted, and the recording will be available for that duration only.
  • Show Timer: By enabling this option, a timer will be displayed during the website testing session.
  • Screen Recording: This option is mandatorily enabled. Using this option, the screens of the respondents will be recorded for the entire duration of the test. 

Technology

  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

You can select more than one tracking technology at once, too.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the recording of the session along with the transcript and other insights.

Note: When a single task is assigned, the insights provided include a video recording, user path visualizer, journey tree, and performance breakdown. For multiple tasks, the insights will feature only the video recording accompanied by supporting metrics.

Single Task Results

1. Summary Section

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) × 100, expressed as a percentage.
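The bounce rate formula above, as a worked example (the numbers are invented for illustration):

```python
def bounce_rate(dropoff, skips, responses):
    """Bounce Rate = ((Drop-off + Skip) / Number of Responses) * 100."""
    return (dropoff + skips) / responses * 100

# E.g. 20 responses with 3 drop-offs and 1 skip -> 20% bounce rate.
print(bounce_rate(dropoff=3, skips=1, responses=20))  # 20.0
```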

2. Recording and Transcript

In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents. 

Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.

If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking users' eye movements as they interact with the website. The heatmap shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

You can adjust the parameters like blur, radius, and shadow as per your preference.

  • Area of Interest (AOI): AOI is an extension of the eye-tracking metrics that lets you analyse the performance of particular elements on the screen. Select the AOI option and draw a box over the element; a pop-up will appear for time selection. Choose the time frame, and the insights will appear in a few seconds.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your website.

3. User Path Visualizer

User Path Visualizer provides an insightful representation of user journeys via the Sankey chart. This visualizer presents the paths taken by users through the website, offering a comprehensive view of the navigation patterns.

 Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.
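The data behind a Sankey chart like this is just counted page-to-page transitions. A sketch of how session paths could be aggregated (the page names and data layout are invented for illustration; this is not Qatalyst's internal format):

```python
from collections import Counter

# One page path per respondent session (illustrative data).
sessions = [
    ["Home", "Products", "Pricing"],
    ["Home", "Products", "Checkout"],
    ["Home", "Support"],
]

# Count each page-to-page transition; these (source, target, count)
# triples are what a Sankey chart renders as link widths.
links = Counter()
for path in sessions:
    for source, target in zip(path, path[1:]):
        links[(source, target)] += 1

for (source, target), count in links.items():
    print(f"{source} -> {target}: {count}")
# Home -> Products: 2, Products -> Pricing: 1, ...
```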

4. Journey Tree

Journey Tree provides a hierarchical representation of user journeys, presenting a structured view of the paths users take as they navigate through the website, along with the dropped-user count at a page level.

5. Performance Breakdown

Average Time: This metric offers a glimpse into user engagement by tracking the average duration visitors spend on each page of the website. It serves as an indicator of user interest, interaction, and satisfaction with the provided content and usability. 

Dropoff Rate: The drop-off rate provides a page-by-page analysis of the percentage of users who leave or abandon the website. These points highlight areas where users disengage or encounter difficulties. Analyzing drop-off points helps pinpoint potential issues such as confusing navigation, uninteresting content, or technical problems that prompt users to leave prematurely.
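Both metrics can be derived from per-page visit data. A minimal sketch, with invented page names and numbers (not Qatalyst's data format):

```python
# Per-page visit data (illustrative): seconds spent by each visitor,
# and how many visitors left the site from that page.
pages = {
    "Home":     {"durations": [12, 8, 20, 10], "exits": 1},
    "Products": {"durations": [30, 45, 15],    "exits": 2},
}

for name, data in pages.items():
    visits = len(data["durations"])
    avg_time = sum(data["durations"]) / visits       # average time on page
    dropoff_rate = 100 * data["exits"] / visits      # % who left from here
    print(f"{name}: avg {avg_time:.1f}s, drop-off {dropoff_rate:.0f}%")
# Home: avg 12.5s, drop-off 25%
# Products: avg 30.0s, drop-off 67%
```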

Multiple Task Results

1. Summary Section

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) × 100, expressed as a percentage.

2. Recording and Transcript

In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents. 

Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.

If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking users' eye movements as they interact with the website. The heatmap shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

You can adjust the parameters like blur, radius, and shadow as per your preference.

  • Area of Interest (AOI): AOI is an extension of the eye-tracking metrics that lets you analyse the performance of particular elements on the screen. Select the AOI option and draw a box over the element; a pop-up will appear for time selection. Choose the time frame, and the insights will appear in a few seconds.
  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your website.

Moderated research in UX (User Experience) involves a controlled testing environment where a researcher interacts directly with participants. This method employs a skilled moderator to guide participants through tasks, observe their interactions, and gather qualitative insights. This approach offers an in-depth understanding of user behaviour, preferences, and challenges, enabling researchers to make informed design decisions and enhance the overall user experience. The researcher's active involvement allows for real-time adjustments, targeted questioning, and nuanced observations, making moderated research a valuable tool for evaluation and improvement.

Moderated testing can be conducted in person or remotely. In person, the moderator and participant will be in the same room. Remotely, the moderator and participant will use video conferencing to communicate.

Moderated testing is a more time-consuming and expensive type of usability testing than unmoderated testing. However, it can provide more in-depth feedback that can be used to improve the user experience.

Here are some of the benefits of moderated testing:

  • Deeper Insights: Moderated research allows researchers to probe deeper into participants' thoughts, feelings, and behaviours. The real-time interaction provides an opportunity to uncover underlying motivations and reasons behind user actions, leading to richer insights.
  • Clarification: Moderators can clarify any misunderstandings participants might have about tasks, questions, or the product itself. This ensures that the data collected is accurate and reflects participants' true experiences.
  • Real-time Feedback: Researchers can gather immediate feedback from participants about their experiences with a product or interface. This feedback can lead to quick adjustments and improvements in the design process.
  • Behaviour Observation: Moderators can observe participants' facial expressions, body language, and verbal cues. These non-verbal signals can provide additional context to participants' actions and help in understanding emotional responses.

Here are some of the drawbacks of moderated testing:

  • It can be more time-consuming and expensive than unmoderated testing.
  • It can be difficult to find qualified moderators.
  • It can be disruptive to the user's experience.

Overall, moderated testing is a valuable tool for usability testing. It can provide valuable insights into the user experience that can be used to improve the product or service.

Here are some of the situations where moderated testing is a good choice:

Complex User Flows and Interactions: If your product involves intricate user flows, complex interactions, or multi-step tasks, moderated testing can help guide participants through these processes and ensure they understand and complete tasks correctly.

In-Depth Understanding: If you aim to gather in-depth qualitative insights into participants' experiences, thoughts, emotions, and motivations, moderated testing allows you to ask follow-up questions, probe deeper, and gain a richer understanding of user behaviour.

Usability Issue Identification: If your primary goal is to identify usability issues, pain points, and obstacles users face while interacting with your product, moderated testing is recommended. Moderators can observe participant struggles in real-time and gather detailed context around the issues.

Customized Probing: When you want to tailor your research approach to each participant's unique responses and behaviours, moderated testing provides the flexibility to delve deeper into areas of interest based on individual participant feedback.

Real-Time Feedback: If you need immediate feedback on design changes, feature iterations, or prototypes, moderated testing can offer instant insights that can be acted upon quickly.

Small Sample Sizes: For studies with a small sample size, moderated testing can provide a more nuanced understanding of individual participant experiences and preferences.

Early Design Iterations: During the early stages of design or development, moderated testing can be valuable. A moderator can quickly adapt to changes and provide real-time feedback, enabling iterative improvements before the product reaches advanced stages.

In the realm of user-centric research, we have introduced a vital block known as the "Session Block." This feature enables researchers to delve deep into user experiences, fostering a holistic comprehension of behaviours and insights.

Understanding the Essence of the Session Block

At its core, the Session Block is a component within Qatalyst that empowers researchers to schedule and conduct insightful sessions with participants. This capability plays a pivotal role in what is known as moderated research—a method that hinges on guiding participants through specific tasks while encouraging them to vocalize their thoughts and actions. The result is an invaluable stream of real-time insights that provide a comprehensive view of user behaviour, opinions, and reactions.

Unveiling the Dynamics of Moderated Research

Moderated research, made possible through the Session Block, stands as a dynamic approach to engaging with participants.  It works by asking users to talk about what they're doing and thinking while they use a product or service. This helps us understand how they make decisions, what frustrates them, and when they feel happy about their experience.

As participants navigate through tasks, the moderator assumes a guiding role, ensuring that the user's journey is well-structured and aligns with the study's objectives. Moderators also wield the power of follow-up questions, enabling them to probe deeper into participants' responses and elicit more nuanced insights.

Leveraging Session Blocks for Comprehensive Insights

The Session Block not only empowers researchers to facilitate these insightful interactions but also provides a structured framework for conducting them. Here's how it works:

Scheduling and Set-Up: Researchers can seamlessly schedule sessions by defining crucial details such as the session's name, participant's name and email, moderator's information, language preferences, date and time, and even the option to incorporate facial coding technology if desired.

Real-Time Interaction: During the session, participants engage in tasks while verbally sharing their thought processes. Moderators actively guide the discussion, prompting participants to elaborate on their actions and decisions.

Deeper Exploration: Moderators leverage follow-up questions to delve deeper into participants' viewpoints. This enables them to uncover underlying motivations, preferences, and pain points that might otherwise remain hidden.

Rich Insights: The real-time nature of the interaction, combined with follow-up queries, yields a wealth of qualitative data. These insights provide a nuanced understanding of user behaviours, allowing researchers to make informed decisions and improvements. The following insights can be drawn from the session.

  • Transcript of conversation
  • Tags and highlights
  • Emotion Analytics
  • Talk time and Max. Monologue
  • Filler words

In essence, the Session Block in Qatalyst transforms moderated research into a fluid and structured process. It empowers researchers to not only guide participants through tasks but also to extract profound insights that fuel informed decisions, leading to enhanced user experiences and product refinements. As the bridge between participants and researchers, the Session Block exemplifies Qatalyst's commitment to enabling in-depth, user-focused research in the digital age.

In Qatalyst, you can conduct moderated research by scheduling a meeting online using a session block. Moderated research hinges on the concept of guiding participants through tasks and prompting them to articulate their thoughts aloud as they navigate a product or service. This dynamic approach facilitates a comprehensive understanding of user behaviour as participants verbalize their actions and reactions in real-time. Additionally, moderators have the opportunity to pose follow-up questions, delving deeper into participants' perspectives and extracting valuable insights.

Here are the steps for setting up a session in Qatalyst: 

Steps

Step 1:  Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Session Block".


Step 3: After selecting the block, a form will appear on the screen, where you need to fill in the following details: 

  • Session Name
  • Participant Name and Email
  • Moderator Name and Email
  • Observer: If you add any observers to the meeting, they will not be visible to the participants. Only the moderator can see them.
  • Language: Specify the language to be used during the session. This is required for accurate transcript generation.
  • Date and time
  • Technology: If facial coding technology is selected, the participant's facial expression will be captured using the webcam, and insights will be shown in the result.

Step 4: After you have added the details, you can continue adding other blocks if required. Once done, publish the study by clicking the publish button at the top right corner of the page.

Note that session time follows UTC.
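Since session times follow UTC, it can help to convert your local time before filling in the date and time fields. A quick sketch in Python (the date and timezone here are just examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# A session you want to run at 2:30 PM New York time on 10 May 2024
local_time = datetime(2024, 5, 10, 14, 30, tzinfo=ZoneInfo("America/New_York"))

# Convert to UTC before entering it in the session form
utc_time = local_time.astimezone(ZoneInfo("UTC"))
print(utc_time.strftime("%Y-%m-%d %H:%M %Z"))  # 2024-05-10 18:30 UTC
```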


Step 5: Once you publish the study, you will be directed to the share page with the session joining link. The attendees of the session will also receive a joining link, which they can use to join the session.


Insights

Once the session is conducted, you can see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.
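The bounce-rate formula above can be expressed as a small helper, which makes the arithmetic easy to check (the function and argument names are illustrative, not part of Qatalyst):

```python
def bounce_rate(drop_off: int, skips: int, responses: int) -> float:
    """Bounce Rate = ((Drop-off + Skip) / Number of Responses) * 100."""
    if responses == 0:
        return 0.0  # avoid division by zero when nobody responded
    return (drop_off + skips) * 100 / responses

# 3 drop-offs and 2 skips out of 50 responses
print(bounce_rate(drop_off=3, skips=2, responses=50))  # 10.0
```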

The session results consist of the following four sections: 

  • Video
  • Transcript
  • Highlights
  • Notes

Video

 This section allows you to revisit the session whenever needed, preserving the content and context for your convenience.

If you have enabled facial coding tech, you can see the participant's emotional responses during the session. 

Transcript

The next section is the Transcript. Here you will find the auto-generated transcript of the session, where you can create tags and highlight the important parts.

The transcript is generated in the same language that was selected while creating the session. 

Additionally, you can translate the transcript into the 100+ languages available on the platform.

To create a tag, select the text; a prompt will appear where you can give the highlighted section a title, which becomes its tag.

Highlight

All the highlights created on the transcript will appear in this section. You can also play the specific video portion.


Notes

This section displays the notes created on the recorded video. 


How to create notes?

  • On the recorded video, click on Notes.
  • Click on the specific time in the video player seeker where you want to add the note.
  • A prompt will appear; add your note.

Live mobile app testing in UX research is the practice of observing and collecting feedback from users as they interact with a mobile app in real time. This approach allows researchers to gather feedback and observe user behaviour, providing invaluable data for optimizing app usability and functionality.

Live Mobile App Testing Using Qatalyst

The Live Mobile App Testing block in Qatalyst allows the users to add their mobile application Play Store URL and the task to be performed in the application. By presenting these tasks to participants, researchers observe and analyze how users engage with the app, providing a controlled yet authentic environment for comprehensive user behaviour analysis.

  • Leveraging Facial Coding and Eye Tracking
Mobile app testing can be further enriched through advanced technologies such as facial coding and eye tracking. These technologies enable Qatalyst to capture respondents' nonverbal cues, shedding light on facial expressions and eye movements. This nuanced analysis unveils unspoken emotions, cognitive processes, and areas of focus, providing a holistic perspective on participant engagement.
  • Unveiling Insights Through Audio Transcripts
The recorded audio holds immense potential for researchers. Audio transcripts offer a deep dive into participants' verbal responses, enabling researchers to analyze language nuances, sentiments, and communication patterns. These insights contribute to a comprehensive understanding of participant attitudes and viewpoints, enriching the research findings.

Mobile application testing involves assessing and evaluating the usability, functionality, design, and overall user experience of a mobile application. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.

Create a Live Mobile App Test

To create a Live Mobile App test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. 

From the list of options that appear, select "Mobile App Testing" under the Task-based research section.

Step 3: Add the task title and the description. In the URL field, add the Play Store URL of the app on which you want the respondents to perform the task.

How to get the Play Store app URL: 

  • Open Play Store on Mobile or web
  • Search for the application. 
  • On the web: Click on the share button and copy the URL.
  • On mobile: Click on the three dots and then "Share"


Properties

  • Mandatory Test: If enabled, respondents cannot move to the next question without completing this test.
  • Time Limit: You can set the time limit of the test to 30, 60, 90, or 120 seconds, depending on the length and complexity of the task. Please note that once the time is up, the task is automatically submitted, and the recording will be available for that duration only.
  • Show Timer: If enabled, a timer will be displayed during the test.
  • Screen Recording: This option is always enabled; the respondents' screens will be recorded for the entire duration of the test. 
  • Picture in Picture mode: This option will show the tester's video in the recording as well.

Technology

  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

Mobile application testing involves assessing and evaluating a mobile application's usability, functionality, design, and overall user experience. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.

In this article, we will provide insights into the different metrics you will get for mobile app testing in Qatalyst, along with their definitions. 

Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.

The result of the mobile app testing consists of following 4 sections: 

  • Video
  • Transcript
  • Highlights
  • Notes

Video

In the video section, you will find the screen recording of the test for every respondent. Use the dropdown at the top of the screen to select a respondent. 

If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the prototype. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

You can adjust parameters like blur, radius, and shadow as per your preference.

Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your app.

Area of Interest (AOI): AOI is an extension of the eye-tracking metrics. It allows you to analyse the performance of particular elements on the screen. Once you select the AOI option, draw a box over the element; a pop-up will appear for time selection. Select the time frame, and insights will appear in a few seconds.
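To make the AOI idea concrete, here is a rough sketch of the underlying computation: given gaze samples, a drawn box, and a selected time frame, count the samples that fall inside the box. The data layout and field names are invented for illustration; Qatalyst's internals are not documented here.

```python
# Hypothetical gaze samples: (timestamp_sec, x, y) in screen pixels
gaze = [
    (0.2, 100, 80), (0.4, 310, 220), (0.6, 320, 240),
    (0.8, 500, 400), (1.0, 305, 230), (1.2, 700, 90),
]

# The Area of Interest box drawn over an element, plus the selected time frame
aoi = {"x": 300, "y": 200, "width": 50, "height": 60}
t_start, t_end = 0.0, 1.1

in_window = [s for s in gaze if t_start <= s[0] <= t_end]
hits = [
    (t, x, y) for t, x, y in in_window
    if aoi["x"] <= x <= aoi["x"] + aoi["width"]
    and aoi["y"] <= y <= aoi["y"] + aoi["height"]
]

# Share of sampled gaze points that fell inside the AOI during the time frame
attention_share = len(hits) * 100 / len(in_window)
print(len(hits), f"{attention_share:.0f}%")  # 3 60%
```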

Transcript

The next section is Transcript. If the recording has audio, you will find the auto-generated transcript for it, where you can create tags and highlight the important parts.

To create a tag, select the text; a prompt will appear where you can give the highlighted section a title, which becomes its tag.

Highlight

All the highlights created on the transcript will appear in this section. You can also play the specific video portion.

Notes

This section displays the notes created on the recorded video. 

How to create notes?

  • On the recorded video, click on Notes.
  • Click on the specific time in the video player seeker where you want to add the note.
  • A prompt will appear; add your note.

User Path Visualizer

User Path Visualizer provides an insightful representation of user journeys via a Sankey chart. This visualizer presents the paths taken by users through the app, offering a comprehensive view of the navigation patterns.

 Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.

User Journey Tree

Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.

For each journey, the information is conveyed by the connecting lines and the name of each page in the tree.

The following information is shown:

  • Green line: The path navigated by the respondents to complete the task.
  • Red down arrow 🔽: This icon displays the number of users who closed/skipped the journey after visiting a particular screen.

Performance Breakdown

This chart showcases the comprehensive performance analysis of each page the respondents have navigated. It presents valuable insights such as the average time spent by respondents on each page, and the drop-off rate.

What is the Video Screener Block?

The Video Screener Block within the Qatalyst app serves as a dedicated tool for testers to record and submit videos as part of their testing process. This block allows users to integrate video-based insights into their tests effortlessly. Whether it's capturing user interactions, feedback, or suggestions, this feature adds a rich layer to the testing process, providing a holistic understanding of user experiences.

How to Utilize the Video Screener Block

Utilizing the Video Screener Block in the Qatalyst app is straightforward. Testers can easily add this block to their testing sequences, prompting users to record and submit videos based on specified criteria. The intuitive interface ensures a user-friendly experience, allowing for easy navigation and management of video submissions.

Here are the steps for adding a video screener block to the Qatalyst test: 

Steps

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Video Response Block".

Step 3: Once you have added the block, you can add the title of the task and the description. In the property section, you can define the following: 

  • Time Limit: The maximum time allowed for recording or uploading the video.
  • Mandatory Test: If enabled, respondents cannot move to the next question without completing this test.
  • Upload Media: Testers can also upload the media if they do not wish to record it live.
  • Preferred Language: Language spoken in the live recording or uploaded media; this selection is important for accurate transcript generation.

Technology

  • Facial Coding: Facial coding is a technology used to analyze users' facial expressions. If the user's face is detected in the video, this technology will give you insights into the expressions portrayed by the user.

Step 4: To further enhance your study, continue adding additional survey blocks. Use the same process described in Step 2, clicking the "+" button and selecting different block types to ask a variety of questions related to the test.

Result

Users can access a concise summary of the Video Screener block, providing an overview of the responses within. Users can view the overall summary, view individual testers' responses and seamlessly navigate to a specific tester's view with just a click. We've also added transcripts and analytics, along with the ability to create and manage highlights.

In the summary section on the right hand side of the page, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.

In the video response summary dashboard, you will get the following information:

  • Total number of testers: The total number of participants in a study.
  • Completed Testers: Total number of participants who submitted a response.
  • Drop off: Number of users who dropped the test.
  • Emotion metrics: An overall percentage distribution of the emotions in the video response submitted. 
  • Word cloud: A visual representation of the most common words used by participants in their video responses.

Below on the screen, you will find the video responses submitted by the users; you can open and expand them to view the detailed analytics of a particular response. Here, you will find the following insights:

  • Media Player: The media player is where you can play the video. 
  • Total Talk Time: The overall duration of the respondent's spoken content.
  • Longest Monologue: The length of the respondent's most extended uninterrupted speech.
  • Filler Words: Analysis of filler words (e.g., "um," "uh") used in the response.
  • Emotion Distribution: Insights into the emotional expressions conveyed during the response.
  • Transcript: A written text version of the spoken content.
  • Highlights: You can select specific parts of the transcript and create highlights, which can be used to reference important topics, keep track of action items, or share specific sections with other team members.
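A rough sketch of how metrics like total talk time, longest monologue, and filler-word count could be derived from timestamped transcript segments (the segment format and filler list are assumptions for illustration, not Qatalyst's actual pipeline):

```python
import re

# Hypothetical transcript segments for one respondent: (start_sec, end_sec, text)
segments = [
    (0.0, 6.5, "Um, so I opened the app and, uh, looked for the login button."),
    (9.0, 21.0, "Then I tried the search bar because the menu was confusing."),
    (24.0, 27.5, "Uh, that's basically it."),
]

FILLERS = {"um", "uh", "erm"}  # illustrative filler-word list

total_talk_time = sum(end - start for start, end, _ in segments)
longest_monologue = max(end - start for start, end, _ in segments)
filler_count = sum(
    1
    for _, _, text in segments
    for word in re.findall(r"[a-z']+", text.lower())
    if word in FILLERS
)

print(total_talk_time)    # 22.0 seconds of speech
print(longest_monologue)  # 12.0 seconds
print(filler_count)       # 3
```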

You can use the video response block as a screener question and add the testers directly to your panel from the video screening question. In this article, we will guide you on the process of adding testers to the native panel from the video screener block.

Step 1: After conducting the video screening test, navigate to the results section where you'll find all the submitted responses neatly organized.

Step 2: Hover over the testers' video thumbnails, and a convenient checkbox will appear. Use this checkbox to select the testers you want to add to your panel.

Step 3: Click on the "export tester" button located in the bottom bar. A pop-up form will appear, providing options to add testers to the default panel, create a new tag for them, or add them to an existing tag.

Why tagging?

Think of tagging as a super helpful tool for keeping things organized in your native panel and while sharing the test. When you tag respondents' videos, you're basically creating a neat and easy-to-search system.

Picture this: you've done lots of tests, and you've got a bunch of different testers. When you tag them based on things like their traits, likes, or other important attributes, it's like creating distinct groups of testers. This makes it a breeze to find and group specific testers quickly.

Step 4: Once you've chosen the desired videos, click the "add" button, and voila! The selected testers have now been successfully added to your panel.

Step 5: Your newly added testers will be seamlessly integrated into your native panel. When sharing a test with the panel, you'll find the added testers listed, making it effortless to include them in your testing initiatives.

Consent Block in Qatalyst

Ensuring transparency and user consent is paramount in any testing process. In Qatalyst, you have the ability to integrate a Consent Block into your studies seamlessly. This block allows you to add titles and descriptions, or upload files as consent materials. During the test, testers can conveniently access and review the contents of the Consent Block, affirming their understanding and agreement with the terms and conditions through a simple checkbox. This ensures that testers are fully informed and compliant, contributing to an ethical and transparent testing environment.

How to Add a Consent Block: Step-by-Step Guide

Step 1: Log in to your Qatalyst Account

Upon logging into your Qatalyst account, you will be directed to the dashboard, where you can manage and create studies.

Step 2: Create a New Study

Click the "Create Study" button on the dashboard to initiate a new study. Choose to start from scratch or use an existing template to streamline the process.

Step 3: Add a Consent Block

Once in the study creation interface, click on the "Add New Block" button. From the list of block options, select "Consent Block" to add this feature to your study.

Step 4: Customize the Consent Block

In the Consent Block, you have the flexibility to add a title and description. Alternatively, you can upload a PDF file containing your consent materials for thorough documentation.

Preview of text and PDF consent:

As shown above, the Consent Block provides a preview of both text and PDF-based consent materials. This ensures that your testers have a clear understanding of the terms and conditions before proceeding with the test.

Step 5: Publish your study

Once you've finished creating your study by adding other blocks, you can go ahead and publish it.

Test Execution

After the welcome block, the consent block appears, and respondents will be prompted to either accept or reject the terms and conditions. If they agree, the test will proceed. If they decline, the study will conclude for that tester, ensuring a respectful and consensual testing experience.

A first-click test is a usability research method used to evaluate how easy it is for users to complete a specific task on a website, app, or any interface with a visual component. It essentially gauges how intuitive the design is by analyzing a user's initial click towards achieving a goal.

Create a First Click Test

To create a first-click test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "First Click Test".

Properties

  • Number of Clicks Allowed: You can set the maximum number of clicks a user can make on the interface; only their last click will be taken into account. 
  • Image Dimension:
  1. Fit to Screen: In this setting, the image is displayed in such a way that it completely fits within the boundaries of the screen, regardless of its original dimensions. Users can view the entire image without the need to scroll vertically or horizontally.
  2. Fit to Width: The image is scaled to cover the full width of the screen while maintaining its original aspect ratio. If the dimensions of the image exceed the width of the screen, users can scroll vertically to view the portions of the image that extend beyond the visible area.
  3. Fit to Height: In this configuration, the height of the image is adjusted to fit the screen while preserving its aspect ratio. If the width of the image exceeds the screen width, users can scroll horizontally to explore the entire image.
  • Mandatory Test: If enabled, respondents cannot move to the next question without completing this test.
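Assuming each fit mode preserves the image's aspect ratio, the displayed size can be sketched as follows (the function name and mode strings are illustrative, not part of Qatalyst):

```python
def fit_image(img_w, img_h, screen_w, screen_h, mode):
    """Return the displayed (width, height) for a given fit mode."""
    if mode == "fit_to_screen":
        # Whole image fits inside the screen, no scrolling
        scale = min(screen_w / img_w, screen_h / img_h)
    elif mode == "fit_to_width":
        # Fill the full width; taller images scroll vertically
        scale = screen_w / img_w
    elif mode == "fit_to_height":
        # Fill the full height; wider images scroll horizontally
        scale = screen_h / img_h
    else:
        raise ValueError(f"unknown mode: {mode}")
    return round(img_w * scale), round(img_h * scale)

# A 2000x3000 screenshot on a 1000x800 viewport:
print(fit_image(2000, 3000, 1000, 800, "fit_to_screen"))  # (533, 800)
print(fit_image(2000, 3000, 1000, 800, "fit_to_width"))   # (1000, 1500) -> vertical scroll
```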

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the clicks of the users on the screen as they interact with the design. 

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) * 100, expressed as a percentage.

Based on the technology selected, you will find the following metrics : 

  • All clicks: The All Clicks metric provides insights into the clicks made by respondents on the image, giving a complete view of how users interact with it. The size of each click marker depends on how many times respondents clicked in that area, which helps show which parts of the image receive the most attention.

Eye Tracking

Eye-tracking is the process of measuring the point of the human gaze (where one is looking) on the screen.

Qatalyst Eye Tracking uses the standard webcam embedded in the laptop, desktop, or mobile device to measure eye positions and movements. The webcam identifies the position of both eyes and records eye movement as the viewer looks at a stimulus presented in front of them on a laptop, desktop, or mobile screen.

Insights from Eye Tracking

For the UX blocks, i.e. Prototype Testing, 5-second Testing, A/B Testing and Preference Testing, you will find the heatmap for the point of gaze.  It uses different colours to show which areas were looked at the most by the respondents. This visual representation helps us understand where their attention was focused, making it easier to make informed decisions.

Properties of Heatmap

  • Radius: The radius refers to the size of the individual data points or "hotspots" in the heatmap. Increasing the radius will make the hotspots larger and more prominent, while decreasing it will make them smaller and more concentrated.
  • Shadow: The shadow parameter controls the presence and intensity of shadows around the hotspots in the heatmap. Adding a shadow effect can enhance the visual depth and make the hotspots stand out, while reducing or removing it will create a flatter and more minimalistic heatmap.
  • Blur: The blur parameter determines the level of blurriness or fuzziness applied to the heatmap. Increasing the blur will result in a smoother and more diffused appearance, while reducing it will make the heatmap sharper and more defined.
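The effect of the radius and blur parameters can be illustrated with a toy grid heatmap. This is a simplified sketch, not Qatalyst's actual rendering code:

```python
# Each gaze point "stamps" a hotspot of the given radius onto a grid;
# blur is approximated here by one pass of 3x3 neighbour averaging.
W, H, RADIUS = 12, 8, 2
points = [(3, 3), (4, 3), (8, 5)]  # hypothetical gaze fixations (x, y)

grid = [[0.0] * W for _ in range(H)]
for px, py in points:
    for y in range(H):
        for x in range(W):
            if (x - px) ** 2 + (y - py) ** 2 <= RADIUS ** 2:
                grid[y][x] += 1.0  # larger RADIUS -> bigger, more prominent hotspots

def blur(g):
    # Simple box blur: each cell becomes the mean of its 3x3 neighbourhood
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            cells = [g[j][i]
                     for j in range(max(0, y - 1), min(H, y + 2))
                     for i in range(max(0, x - 1), min(W, x + 2))]
            out[y][x] = sum(cells) / len(cells)
    return out

blurred = blur(grid)  # additional blur passes -> smoother, more diffused heatmap
```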

Experience Eye Tracking

To experience how Eye Tracking works, click here: https://eye.affectlab.io

Read the instructions and start Eye Tracking on your Laptop or Desktop. Eye Tracking begins with a calibration, post which the system will identify your point of gaze on prompted pictures in real-time. 

Please enable the camera while accessing the application. Study results are made available to users in the Affect Labs Insights Dashboards.

Eye trackers are also used as input devices for human-computer interaction, and in marketing, product design, and many other areas.

Mouse Tracking

Mouse click tracking is a technique used to capture and analyze user interactions with a digital interface, specifically focusing on the clicks made by the user using their mouse cursor. It involves tracking the position of mouse clicks and recording this data for analysis and insights.

Insights from Mouse Tracking


A Mouse click-tracking heatmap is a visual representation that showcases the distribution of user clicks on a design or prototype. It provides users with a comprehensive overview of respondents' engagement by highlighting areas that attract the most attention and receive the highest number of clicks. This information can reveal valuable insights into user preferences, pain points, and overall usability, aiding in creating more intuitive and user-friendly interfaces.

For the prototype, you will get the following insights:

  • All Clicks: This feature allows users to access a comprehensive record of all the clicks made on a design or prototype. By selecting this option, you can view and analyze each interaction point initiated by the respondents, providing valuable insights into user behaviour and preferences.
  • Misclicks: With this option, you can specifically focus on the clicks made in areas of the prototype that are not designated as hotspots. It enables you to identify and analyze instances where users unintentionally click on non-interactive regions, offering valuable feedback on the clarity and intuitiveness of the design.
  • Scroll: The scroll option provides information about the specific areas visited by respondents within the prototype. By analyzing this data, you can gain insights into how users navigate and interact with the content, allowing you to optimize the layout, placement, and prominence of key elements within the design.

Facial Coding

Facial coding is the process of interpreting human emotions through facial expressions. Facial expressions are captured using a web camera and decoded into their respective emotions.

Qatalyst helps to capture the emotions of any respondent when they are exposed to any design or prototype. Their expressions are captured using a web camera. Facial movements, such as changes in the position of eyebrows, jawline, mouth, cheeks, etc., are identified. The system can track even minute movements of facial muscles and give data about emotions such as happiness, sadness, surprise, anger, etc.

Insights From Facial Coding

By analyzing facial expressions captured during user interactions, facial coding systems can detect and quantify various emotional metrics, allowing researchers to understand users' emotional states and their impact on design experiences. Let's explore the specific metrics commonly derived from facial coding:

  • Positive Emotion Metrics - Positive emotions play a crucial role in user satisfaction and engagement. Facial coding can measure several positive emotion metrics, including:
  1. Happiness: This metric indicates the level of joy and contentment expressed by users. By detecting smiles and other facial expressions associated with pleasure, researchers can assess the extent to which a design elicits positive emotional responses.
  2. Surprise: The surprise metric captures the degree of astonishment or amazement shown by users. It helps identify moments when users encounter unexpected or novel elements within a design, highlighting aspects that capture their attention and evoke positive emotional responses.

  • Negative Emotion Metrics - Understanding negative emotions is equally important to address pain points and enhance user experiences. Facial coding provides insights into various negative emotion metrics, such as:
  1. Sadness: This metric reveals the extent of sadness or disappointment exhibited by users. Detecting facial expressions associated with sadness helps researchers identify design elements that may evoke negative emotional responses and require improvement or adjustment.
  2. Disgust: The disgust metric gauges the level of revulsion or aversion expressed by users. It helps uncover design aspects that users find unpleasant or repulsive, leading to negative emotional experiences. By identifying and rectifying these elements, designers can create more appealing and user-friendly interfaces.
  3. Anger: The anger metric measures the intensity of anger or frustration displayed by users. It indicates moments when users experience irritation or dissatisfaction with a design, highlighting areas that need refinement. Addressing these sources of anger can significantly enhance the user experience and reduce user frustration.

  • Neutral Attention - While positive and negative emotions are essential, neutral attention provides insight into user engagement without explicit emotional responses. This metric reveals the level of focus and concentration users exhibit when their facial expressions do not convey any specific emotional cues. By analyzing neutral attention, researchers can gauge overall user engagement and measure the effectiveness of design elements in capturing users' interest.
 

Experience Facial Coding

To experience how Facial Coding works, visit https://facial.affectlab.io/.

Once you are on the website, start Facial Coding by playing the media on the webpage. Ensure your face stays within the outline provided next to the media.

Please enable your web camera while accessing the application.

Feature Guide

Understanding the 5-Second Test

In today's digital age, where attention spans are decreasing and users are quick to judge a website or app, it's essential to capture users' attention and engage them quickly. This is where the 5-second test comes into play. In this article, we'll discuss what the 5-second test is and how it's used in UX research.

What is the 5-second test?

The 5-second test is a type of UX research that involves showing users a screenshot or a design for 5 seconds and then asking them questions about what they remember or what they think the design is about. The idea is to simulate a user's first impression of a website or app and capture their immediate reactions.

How does it work?

The 5-second test typically involves the following steps:

  • Prepare the design: Prepare a screenshot or a design of your website or app that you want to test. Make sure it's representative of the overall design and messaging of the website or app.
  • Show the design: Show the design to the user for 5 seconds. During this time, the user can't interact with the design or scroll through the page.
  • Ask questions: After the 5 seconds are up, ask the user a series of questions about what they remember from the design or what they think the design is about. These questions can range from general impressions to specific details about the design.
  • Analyze the results: Analyze the data collected from the user responses and identify any common themes or issues that users may have with the design. Use this information to make changes or improvements to the design.
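The analysis step often starts with a simple tally of what participants recalled. As a rough illustration (the answers below are hypothetical and not tied to any particular tool), recurring themes can be counted like this:

```python
from collections import Counter

# Hypothetical one-word answers to "What does this company do?"
answers = ["shoes", "sneakers", "shoes", "clothing", "shoes", "unsure"]

# Tally recurring themes to see whether the core message landed
themes = Counter(answers)
print(themes.most_common(1))  # [('shoes', 3)]
```

If most participants converge on the same theme, the design is communicating clearly; a scattered tally suggests the message is not landing within 5 seconds.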

When to use the 5-second test?

The 5-second test is often used in the early stages of design or redesign to capture users' initial reactions and make quick improvements. It's a valuable tool for testing new designs, landing pages, and marketing campaigns and can be used to identify potential issues before launching a website or app.

Benefits of the 5-second test

The 5-second test has many benefits, including:

  • Quick and easy: The 5-second test is a quick and easy way to get user feedback and identify potential issues or improvements in a design.
  • Cost-effective: Compared to other UX research methods, the 5-second test is relatively inexpensive and can be conducted with a small group of users.
  • Real-world insights: By capturing users' initial reactions, the 5-second test provides real-world insights into how users perceive and interact with a website or app.
  • Early detection of issues: The 5-second test can help identify potential issues or improvements early in the design process, saving time and money in the long run.

Best Practices

  • Be clear on your research goals: Before conducting a 5-second test, it’s important to identify what specific aspect of your website or design you want to test. This will help you create focused questions and gather meaningful data.
  • Keep it simple: The purpose of a 5-second test is to quickly capture a user's first impression, so keep the test simple. Avoid cluttering the test with too many questions or visuals that might distract from the main focus of the test.
  • Use clear visuals: Use clear and high-quality visuals in your 5-second test. Avoid using visuals that are too complex or hard to interpret, as they may skew the results.
  • Test with the right audience: It’s important to test with the right audience to ensure that the results are relevant. Consider your target audience and recruit participants who match the demographics of your user base.
  • Test iteratively: Don’t rely on a single 5-second test to make decisions. Conduct multiple tests iteratively to ensure that the changes you make are effective and result in a better user experience.

Use cases

  • Landing pages: Landing pages are crucial for user engagement and conversions. A 5-second test can help identify whether the landing page is effective in grabbing the user’s attention and conveying the message clearly.
  • Branding: A 5-second test can help test the effectiveness of branding elements such as logos, colour schemes, and fonts. It can help determine if the brand message is being conveyed in the first few seconds of user engagement.
  • Call to action: Call-to-action (CTA) buttons are crucial for user engagement and conversions. A 5-second test can help determine if the CTA button is placed prominently and if the messaging is clear.
  • Product pages: A 5-second test can help test the effectiveness of product pages by identifying whether the key product features and benefits are being conveyed effectively.

5-Second Testing in Qatalyst

Qatalyst offers a test block feature that allows users to conduct 5-second testing.


Create a 5-second Test

To create a 5-second test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "5-second test".


Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Properties

  • Mandatory Test: When enabled, taking this test is mandatory; the respondent will not be able to move to the next question without taking it.
  • Image Dimension: Controls how the uploaded image is displayed:
  1. Fit to Screen: In this setting, the image is displayed so that it fits completely within the boundaries of the screen, regardless of its original dimensions. Users can view the entire image without the need to scroll vertically or horizontally.
  2. Fit to Width: The image is scaled to cover the full width of the screen while maintaining its original aspect ratio. If the image's height exceeds the height of the screen, users can scroll vertically to view the portions of the image that extend beyond the visible area.
  3. Fit to Height: In this configuration, the height of the image is adjusted to fit the screen while preserving its original aspect ratio. If the width of the image exceeds the screen width, users can scroll horizontally to explore the entire image.
  • Time Limit: You can change the time limit of the test to 10, 15, or 20 seconds using this option.

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

You can also select more than one tracking technology at once.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Respondents) × 100, expressed as a percentage.
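The summary metrics above reduce to simple arithmetic. As a quick illustration (the function and field names here are illustrative, not Qatalyst's API):

```python
def bounce_rate(respondents: int, skips: int, drop_offs: int) -> float:
    """Bounce rate as a percentage: ((drop-off + skip) / respondents) * 100."""
    if respondents == 0:
        return 0.0  # avoid division by zero when no one started the block
    return (drop_offs + skips) * 100 / respondents

# e.g. 50 respondents, 5 skipped the block, 10 dropped off
print(bounce_rate(50, 5, 10))  # 30.0
```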

Based on the technology selected, you will find the following metrics:

  • All clicks: All Clicks metrics provide insights on the clicks made by the respondents on the image. It gives a complete view of how users interact with the image. The size of a click depends on how many times respondents have clicked in that area. This helps us understand which parts of the picture are getting more attention from users.

  • AOI (Area of Interest): On the image, you can create AOIs. Within AOIs, you can glean insights into metrics such as time spent, average time to first fixation, and average fixation duration providing a deeper understanding of user engagement.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the image. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.
  • Mouse Scroll Data: This metric provides valuable insights into how users navigate a scrollable page by revealing the extent of the page they have visited. This metric helps us understand the user's engagement and attention as they scroll through the content, offering valuable information about which areas of the page are being explored.
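To make the AOI metrics above concrete, here is a minimal sketch of how time to first fixation and total time spent inside a rectangular AOI could be derived from timestamped gaze fixations. This illustrates the concept only; it is not Qatalyst's implementation, and the data structures are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float        # gaze coordinates in image pixels
    y: float
    start_ms: int   # fixation onset relative to stimulus onset
    duration_ms: int

def aoi_metrics(fixations, left, top, right, bottom):
    """Time to first fixation and total dwell time for a rectangular AOI."""
    inside = [f for f in fixations
              if left <= f.x <= right and top <= f.y <= bottom]
    if not inside:
        return {"time_to_first_fixation_ms": None, "time_spent_ms": 0}
    first = min(inside, key=lambda f: f.start_ms)
    return {
        "time_to_first_fixation_ms": first.start_ms,
        "time_spent_ms": sum(f.duration_ms for f in inside),
    }

gaze = [Fixation(40, 60, 0, 180), Fixation(320, 200, 180, 250),
        Fixation(330, 210, 430, 300)]
# AOI covering x in [300, 400], y in [180, 260]
print(aoi_metrics(gaze, 300, 180, 400, 260))
# first fixation inside the AOI starts at 180 ms; dwell = 250 + 300 ms
```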

Example

 Here is an example of how you can use Qatalyst for 5-second testing:

Suppose you are a designer working on a new landing page for a website. You want to use 5-second testing to get feedback on the visual appeal and memorability of the landing page design.

You decide to run a 5-second test and ask participants whether they can tell what the company does from the landing page and whether the message speaks clearly to the intended audience.

Step 1: Upload the Image 

After determining the focus of your test, you can proceed to configure your test within Qatalyst. This can be done by uploading an image of the specific screen you wish to test.

Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.

Step 2: Create Questions around the Test

Now, add questions to understand users' comprehension of the landing page and gauge their overall perception of the website. For this, use the survey blocks. Keep the questions concise and focused to gather quick, instinctive responses within the limited 5-second timeframe. 

Step 3: Publish the test and share 

Now that your test is ready, it’s time to share the test with the participants.

Step 4: Analyze the Result

After all participants have completed the test, it is time to delve into the analytics and examine their responses to assess the success of your test.

In Qatalyst, you will see separate results for the question blocks and the research blocks. If you have enabled any tracking technologies, you will also see the corresponding metrics.

Five-second tests are a quick and effective way to measure the clarity of your design and how well it communicates its message, helping you improve the user experience of your design.


Understanding A/B Testing

A/B testing is a popular method used in UX research to evaluate the effectiveness of different design options. In this article, we will explore the importance of A/B testing in UX research, how to conduct it, use cases and some best practices to keep in mind.

What is A/B testing?

A/B testing is a technique used to compare two versions of a design to determine which one performs better. In UX research, this technique is used to compare two different designs or variations of the same design to understand which one performs better in terms of user behaviour, engagement, and conversion.

How does A/B testing work?

A/B testing involves creating two different versions of a design element, such as a button or a page layout and showing it to the users. User behaviour, such as clicks, eye gaze, or conversions, is then measured and compared between the two versions to determine which one performs better.

For example, let's say a company wants to test two different versions of the website to see which one performs better. The first version of the website has a dark colour scheme, while the second version has a light colour scheme. The company decides to run an A/B test to see which version of the website has a higher conversion rate. 

The company analyzes the results of the A/B test, and they find that the second version of the website has a higher conversion rate. The company decides to implement the second version of the website, and they see an increase in sales.
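Before acting on a result like this, it is worth checking that the difference is not just noise. A common approach is a two-proportion z-test on the conversion counts; this is standard statistical practice rather than a Qatalyst feature, and the numbers below are made up:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: dark theme converted 40/1000, light theme 65/1000
z = two_proportion_z(40, 1000, 65, 1000)
print(round(z, 2))  # ≈ 2.51; |z| > 1.96 suggests significance at the 5% level
```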

When to conduct A/B Testing?

A/B testing can be conducted at any stage of website design, but the most effective time to conduct A/B testing is during the design and development phase. This is because A/B testing can help you identify the most effective design elements, user interface, and user experience, which can save time and resources in the long run.

When conducting A/B testing during website designing, it's important to test different variations of your website design, such as layout, colour scheme, font, and images. This can help you identify the most effective design elements that resonate with your audience.

It's also important to conduct A/B testing on different devices, such as desktops, laptops, tablets, and mobile phones, as user behaviour can vary significantly depending on the device. By testing on different devices, you can ensure that your website is optimized for all types of users.

Furthermore, it's essential to conduct A/B testing on different segments of your audience to ensure that your website design is effective for all user groups. This can include testing different versions of your website on different demographics, such as age, gender, location, and interests.

Best Practices

Here are some best practices to keep in mind when conducting A/B testing:

  • Clearly define your goals: Before conducting A/B testing, it's important to clearly define your goals and what you want to achieve. This will help you choose the right metrics to measure and ensure that your A/B testing is aligned with your business objectives.
  • Choose the right metrics: When conducting A/B testing, it's important to choose the right metrics to measure. This will depend on your business goals and what you want to achieve. Common metrics include click-through rates, conversion rates, bounce rates, and time on the page.
  • Monitor results regularly: A/B testing should be an ongoing process, and it's important to monitor your results regularly to identify any trends or changes in user behaviour. This will help you make informed decisions and optimize your online presence over time.
  • Right Audience: It's essential to test on a representative sample of your target audience to ensure that your results are meaningful and relevant. This means selecting participants who match your target demographic, interests, and behaviour patterns.

Use Cases

Here are some use cases for A/B testing:

  • Landing page optimization: A/B testing can be used to optimize landing pages for better conversion rates. By testing different variations of design elements such as headlines, images, call-to-action (CTA) buttons, and forms, businesses can determine which design leads to the highest conversion rate.
  • Email marketing: A/B testing can be used to optimize email marketing campaigns for better open and click-through rates. By testing different variations of subject lines, email copy, and CTA buttons, businesses can determine which version of the email performs the best.
  • Pricing strategy: A/B testing can be used to determine the most effective pricing strategy for a product or service. By testing different pricing models, such as tiered pricing or a flat rate, businesses can determine which pricing strategy leads to the highest revenue.
  • Product features: A/B testing can be used to determine which product features are most appealing to users. By testing different variations of product features, such as the placement of a search bar or the size of product images, businesses can determine which version leads to the highest engagement and conversion rates.
  • Ad campaigns: A/B testing can be used to optimize ad campaigns for better performance. By testing different variations of ad copy, images, and targeting options, businesses can determine which version of the ad leads to the highest click-through and conversion rates.

In all of these cases, A/B testing allows businesses to make data-driven decisions about their marketing and product strategies. By testing different variations of design elements and features, they can identify the most effective approach and improve their overall performance.

A/B testing in Qatalyst

In Qatalyst, you can conduct A/B testing on images to determine which one users prefer. Additionally, we offer you the ability to integrate various technologies, such as mouse tracking, facial coding, and eye tracking, to gather additional data and insights about user behaviour and preferences.

Create an A/B Test

To create an A/B test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "A/B test".

Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Properties

  • Required: Selecting one answer from the list is mandatory; the respondent will not be able to move to the next question without answering.
  • Randomize: The images will appear in random order.

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

You can also select more than one tracking technology at once.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Respondents) × 100, expressed as a percentage.

On the next screen, based on the technology selected, you will find the following metrics: 

  • All clicks: All Clicks metrics provide insights on the clicks made by the respondents on the image. It gives a complete view of how users interact with the image. The size of a click depends on how many times respondents have clicked in that area. This helps us understand which parts of the picture are getting more attention from users.

  • AOI (Area of Interest): On the image, you can create AOIs. Within AOIs, you can glean insights into metrics such as time spent, average time to first fixation, and average fixation duration providing a deeper understanding of user engagement.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the image. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.
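Conceptually, an eye-tracking heatmap like the one described above is an accumulation of gaze points into a grid, with denser cells rendered hotter (production pipelines also apply Gaussian smoothing and colour mapping). A toy sketch of just the binning step:

```python
def gaze_heatmap(points, width, height, cell=50):
    """Bin (x, y) gaze points into a coarse grid; higher counts = hotter."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:
            grid[int(y) // cell][int(x) // cell] += 1
    return grid

# Hypothetical gaze samples: three clustered near (125, 85), one outlier
points = [(120, 80), (130, 90), (125, 85), (400, 300)]
heat = gaze_heatmap(points, 500, 400)
print(heat[1][2])  # cell covering x in [100,150), y in [50,100) holds 3 points
```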

Example

Suppose you are an e-commerce business looking to optimize your product page layout for better conversion rates. Specifically, you want to compare two different variations of the "Add to Cart" button to determine which design yields higher user engagement and click-through rates.

Step 1: Set Up the Test

In Qatalyst, set up the A/B test by uploading the two versions of the product page, each with a different design for the "Add to Cart" button. Ensure that only this specific element is changed while keeping the rest of the page consistent. This will help isolate the impact of the button design on user behaviour.

Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.

Step 2: Create Questions around the Test

Now, add questions based on the information you want to gather from respondents. Consider using open-ended questions to gather qualitative feedback that can provide deeper insights.

Step 3: Publish the test and share 

Now that your test is ready, it’s time to share the test with the participants.

Step 4: Analyze the Result

After all participants have completed the test, it is time to delve into the analytics and examine their responses to assess the success of your test.

In Qatalyst, you will see separate results for the question blocks and the research blocks. If you have enabled any tracking technologies, you will also see the corresponding metrics.

A/B testing with Qatalyst empowers you to make data-driven decisions about design changes, enabling continuous optimization and improvement of your product pages to maximize conversions and enhance user experience.

Understanding Preference Testing

In UX research, it is crucial to understand the preferences of your users. This is where preference testing comes in. Preference testing is a technique that allows you to test multiple design options to determine which one is preferred by users. In this article, we will discuss preference testing in UX research, how it works, and its benefits. 

What is Preference Testing?

Preference testing is a type of research that helps businesses understand what their customers like and prefer. It involves showing different design variants to people and asking them which one they like the most. By doing this, businesses can learn their customers' preferences and make decisions about how to improve their product to better meet their needs.


Benefits of preference testing

Preference testing has many benefits in UX research, some of which include:

  • User-centred design: By testing multiple design variations with users, preference testing ensures that your design decisions are based on user preferences and needs rather than assumptions or personal preferences.
  • Improved user experience: Preference testing allows you to identify design elements that users prefer, which can be incorporated into your final design to create a more enjoyable and engaging user experience.
  • Time and cost-effectiveness: Preference testing is relatively quick and inexpensive compared to other UX research methods, such as usability testing. This makes it a great option for small or medium-sized businesses with limited resources.
  • Competitive advantage: Preference testing can help you gain a competitive advantage by creating a design that is optimized for user preferences and needs, leading to increased user engagement and customer satisfaction.

How does Preference Testing work?

To conduct preference testing, you first need to identify the design elements that you want to test. These could include anything from different colour schemes to variations in layout, content, or navigation. Once you have identified the design elements, you can create multiple variations of each element and then present them to users in randomized order.

Participants in the study are typically shown each variation for a few seconds and then asked to choose which one they prefer. This process is repeated for each design element being tested. Once all the data is collected, you can analyze the results to determine which design elements are most preferred by your target audience.

When to perform preference testing?

Well, ideally, you should conduct preference testing whenever you're trying to improve the user experience of a website or application. More specifically, preference testing can be particularly useful when you're trying to make decisions about design, content, navigation, or user flows.

For example, let's say you're designing a new website, and you're trying to decide which colour scheme to use. You could conduct a preference test to see which colour scheme is more appealing to your target audience. Or, let's say you're redesigning your e-commerce site, and you're trying to decide where to place the "add to cart" button. You could conduct a preference test to see which placement is more intuitive and leads to more conversions.

In short, preference testing can be a valuable tool whenever you're trying to make decisions about the user experience of a website or application. It allows you to get feedback from users and make data-driven decisions that can improve the overall user experience.

Best Practices

  • Define clear goals: Before conducting preference testing, it's important to define clear goals and objectives. This can help ensure that the study is focused and that the data collected is relevant and useful.
  • Use a representative sample: To ensure that the results of preference testing are accurate and reliable, it's important to use a representative sample of participants. This means selecting participants who are similar to your target audience in terms of demographics, behaviour, and preferences.
  • Choose appropriate stimuli: The stimuli used in preference testing should be appropriate and relevant to the research goals. This might include different product designs, packaging options, or marketing messages, depending on what is being tested.
  • Use a randomized design: To avoid bias, it's important to use a randomized design when presenting stimuli to participants. This can help ensure that each option is given an equal chance of being chosen.
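Randomized presentation order is straightforward to implement. A sketch of per-participant shuffling (illustrative only; the seeding keeps each participant's order reproducible across sessions):

```python
import random

def presentation_order(options, seed):
    """Return a per-participant shuffled copy of the stimulus list."""
    order = list(options)                 # copy so the master list is untouched
    random.Random(seed).shuffle(order)    # independent RNG per participant
    return order

variants = ["design_a", "design_b", "design_c"]
# Keying the seed by participant ID gives each person their own fixed order
print(presentation_order(variants, seed=42))
```

Because each option is equally likely to appear in any position, no single variant benefits from always being seen first.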

Use Cases

  • User interface (UI) design: Preference testing can be used to test different UI designs, such as the placement of buttons, the layout of menus, and the use of colour schemes. This can help businesses determine which design elements are most intuitive and user-friendly.
  • Information architecture: Information architecture refers to the organization and structure of content on a website or application. Preference testing can be used to test different information architectures to determine which ones are most effective in helping users find the information they need.
  • Content: Content plays an important role in shaping user experience. Preference testing can be used to test different types of content, such as headlines, descriptions, and calls to action, to determine which ones are most engaging and persuasive.
  • Navigation: Navigation is a critical aspect of UX design. Preference testing can be used to test different navigation structures, such as menus and navigation bars, to determine which ones are most effective in helping users find their way around a website or application.
  • User flows: User flows refer to the series of actions a user takes to accomplish a task. Preference testing can be used to test different user flows to determine which ones are most efficient and user-friendly.
  • Prototypes: Prototyping is an important part of the UX design process. Preference testing can be used to test different prototypes to determine which ones are most effective in meeting users' needs and preferences.

Preference Testing in Qatalyst

Qatalyst offers a test block feature that allows users to conduct preference testing on various elements of the product. Users can add different versions of an element, such as two different designs, and ask users which one they prefer. This data can be used to inform product development decisions and optimize the product's design and features.


Properties

  • Required: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
  • Randomize: The image options will appear in random order.

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

You can select more than one tracking technology at once too.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.


In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off:  Number of people who have not moved on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Respondents) × 100, expressed as a percentage.
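As an illustration, the summary metrics above can be computed as follows. The function and field names are illustrative, not part of Qatalyst's API:

```python
def block_summary(respondents, skips, drop_offs):
    """Summarize a test block.

    respondents: number of people who initiated the block
    skips: number who chose to skip the block
    drop_offs: number who did not move on to the next block
    """
    bounce_rate = (drop_offs + skips) / respondents * 100
    return {
        "respondents": respondents,
        "skip": skips,
        "drop_off": drop_offs,
        "bounce_rate": round(bounce_rate, 1),
    }

# Example: 40 respondents, 4 skipped, 6 dropped off
print(block_summary(40, 4, 6))  # bounce_rate = 25.0
```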

On the next screen, based on the technology selected, you will find the following metrics: 

  • All clicks: The All Clicks metric provides insights into the clicks respondents made on the image, giving a complete view of how users interact with it. The size of each click marker reflects how many times respondents clicked in that area, helping you understand which parts of the image attract the most attention.
  • AOI (Area of Interest): On the image, you can create AOIs. Within AOIs, you can glean insights into metrics such as time spent, average time to first fixation, and average fixation duration, providing a deeper understanding of user engagement.
  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the image. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.


Understanding Prototype Testing

In UX research, it is important to test your prototype before you start building your product. In this article, we will explore the importance of prototype testing in UX research, how to conduct it, its use cases, and some best practices to keep in mind.

What is a Prototype?

A prototype is an early version or a design mock-up of a product or feature that is used to test its design and functionality before it is produced or released. It is a simplified representation of the final product created to illustrate key features and identify design flaws. 

What is Prototype Testing?

Prototype testing is a type of testing that involves evaluating a preliminary version of a product to identify design flaws and gather feedback from users or stakeholders. The goal of prototype testing is to improve the product's design, functionality, and user experience before it is released to the market. Prototype testing helps users refine their ideas and concepts before investing time and resources in the final product, saving time and money and ensuring the product meets the needs and expectations of users.

Why is it Important?

  • Identifying design flaws early: By testing a prototype, you can identify design flaws and usability issues early in the product development process when it is easier and less costly to address them. This can save time and money in the long run and result in a more successful product launch.
  • Reducing development costs: Prototype testing can help identify design flaws and usability issues early in the product development process. By addressing these issues early, you can avoid costly redesigns or reworks later in the development process.
  • Saving time: By testing the product early and often, you can identify and address issues in a timely manner. This can save time by avoiding lengthy and costly delays caused by major design flaws or usability issues.
  • Improving user experience: Prototype testing can help you identify and address usability issues that could negatively impact the user experience. By refining the product's design and functionality based on user feedback, you can improve the overall user experience and increase the likelihood of user adoption.
  • Mitigating risk: Prototype testing can help mitigate the risk of launching a product that does not meet the needs of its intended audience or has significant design flaws. By testing the product early and often, you can identify and address issues before they become major problems.

How to conduct?

The process of prototype testing typically involves the following steps:

  • Design and create a prototype: A physical or digital prototype is designed and created, representing a simplified version of the final product. 
  To use prototype testing effectively, have a clear objective in mind: what you want to validate will guide the type of prototype you create. Prototypes range in complexity from simple sketches to fully interactive versions. Low-fidelity prototypes are ideal for concept testing, while high-fidelity prototypes are better suited for assessing usability and identifying workflow issues. Aligning the prototype with your specific testing goals maximizes its effectiveness in gathering meaningful feedback and insights.
  • Recruit participants: Recruit participants who represent the target audience or user group and who are willing to provide feedback on the prototype.
  • Conduct prototype testing: Participants are asked to perform specific tasks or scenarios using the prototype while the test facilitator observes and collects feedback. The feedback may be collected through questionnaires or interviews.
  • Analyze feedback: The feedback collected during prototype testing is analyzed to identify usability issues, design flaws, and areas for improvement.
  • Refine the prototype: The feedback is incorporated into the prototype's design, resulting in a more refined and user-friendly product.
  • Repeat the testing process: The prototype testing process may be repeated several times, with each iteration improving upon the previous one until the final product is deemed satisfactory for release.

Prototype testing can help ensure that the final product meets the needs and expectations of users and is free of design flaws or usability issues. It can save time and resources by identifying and addressing design flaws early in the development process, resulting in a more successful product launch.

When to Conduct Prototype Testing?

Prototype testing should be conducted during the product development process, ideally after a preliminary version of the product has been created. The timing of prototype testing will depend on the specific product being developed and the stage of the development process.

In general, prototype testing should be conducted when:

  • Design concepts are being developed: Prototype testing can be used to test and refine initial design concepts and ideas before investing significant time and resources in the final product.
  • Major design changes are made: Prototype testing can be used to test the impact of major design changes on the product's functionality and usability.
  • New features or functionalities are added: Prototype testing can be used to test new features or functionalities and gather feedback on their usefulness and effectiveness.
  • Usability issues are identified: Prototype testing can be used to identify and address usability issues before they become major problems.
  • User feedback is needed: Prototype testing can be used to gather feedback from users or stakeholders to ensure that the product meets their needs and expectations.

Overall, prototype testing should be conducted early and often during the product development process to ensure that the final product is user-friendly, effective, and meets the needs of its intended audience.

Best Practices

Here are some best practices to consider when conducting prototype testing:

  • Define clear testing objectives: Clearly define the objectives of the prototype testing, including what features or functionalities will be tested and who the target audience or user group is.
  • Use realistic scenarios: Create realistic scenarios for users to perform using the prototype to simulate how they would use the final product in real-life situations.
  • Recruit representative participants: Recruit participants who represent the target audience or user group and have the knowledge, experience, and skills necessary to provide meaningful feedback.
  • Use multiple testing methods: Use a variety of testing methods, including surveys, interviews, and observation, to collect data from participants and get a complete picture of their experience with the prototype.
  • Create a comfortable testing environment: Create a comfortable testing environment for participants where they feel at ease and can focus on the tasks at hand.
  • Document and analyze feedback: Document the feedback collected during prototype testing and analyze it to identify patterns and themes, as well as specific design flaws or usability issues.
  • Refine and iterate: Incorporate the feedback into the prototype's design and refine it, conducting additional rounds of prototype testing as needed until the final product meets the desired level of usability and functionality.

Prototype Testing in Qatalyst

Qatalyst offers a test block feature that allows users to conduct prototype testing. It is a type of testing that involves evaluating a preliminary version of a product to identify design flaws and gather feedback from users or stakeholders. You can upload a prototype of your website or mobile app, define the flow of the design and test it on respondents and gather responses.

Prerequisite for prototype link

  • The file has to be publicly accessible.
  • The file has to come from a service we currently support: Figma or Sketch. (More will be added in the future.)
  • The file should have at least two screens. 
  • Ensure all prototype nodes are connected, with a clear starting node and no isolated nodes.

Steps for adding prototypes:

  • Access your recent Figma projects by visiting https://www.figma.com/files/recent. This page will display a list of your Figma projects.
  • Choose the specific project you want to test with Qatalyst.
  • To open the prototype, simply click on the play button ▶️ located in the top menu.
  • In the top menu, locate and click the "Share Prototype" option. This will generate a shared URL for your prototype.
  • Qatalyst is compatible with all types of prototypes, including those designed for desktop, mobile, and tablet devices.
  • Ensure the Link Sharing Settings are set to "Anyone with the link can view." It is important for your Figma prototype to be publicly accessible in order to import it successfully.
  • Once you have set the sharing settings, click on the "copy link" button. Then, simply paste the copied link into Qatalyst for seamless integration.

Journey Paths

  • Defined Path: A defined path is a predetermined sequence of steps or actions that a user follows to complete a specific task or goal within the prototype. Qatalyst allows you to define multiple paths across different screens: the start screen remains the same, you can change the end screen of the test, and between the start and end screens you can define multiple paths.
  • Exploratory Path: In this path type, you define the start and end screens while creating the research, and respondents navigate freely between screens to reach the endpoint while taking the test. This technique helps you identify whether participants can finish an activity effectively, how long a task takes, and what adjustments are needed to improve user performance and satisfaction, and lets you examine performance against your usability goals.

When to use which journey Path?

Defined Path: If you have pre-determined navigation paths for your prototype, using a defined path allows you to assess which path is most convenient or preferred by users. This helps you understand which specific path users tend to choose among the available options.

Exploratory Path: Choose an exploratory path when you want to test whether the respondents are able to navigate between the screens and are able to complete the given task and gather information about users' natural behaviour and preferences. This approach encourages users to freely explore the prototype and interact with it based on their own instincts and preferences. It can reveal unexpected insights and usage patterns that may not have been accounted for in predefined paths.

Properties

  • Required: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
  • Randomize: The image options will appear in random order.
  • Screen Recording: Using this option, the whole session of taking the test will be recorded along with the audio.

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the movement of the user's cursor on the screen as they interact with the design. This technology can provide insights into how users navigate through the design.
  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. It can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

You can select more than one tracking technology at once too.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

1. Blocks Summary


In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off:  Number of people who have not moved on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Respondents) × 100, expressed as a percentage.

2. Task Summary

In the dashboard of the result, you will find the summary of the test and the following information:

  • Average Duration: The average time respondents have spent on the block.
  • Bounce Rate: The percentage of drop off and skips relative to the total number of responses.
  • Misclick rate: Percentage of clicks made other than the actionable CTAs.
  • Success Rate: Percentage of respondents who successfully completed the task (neither skipped nor dropped off).
  • Alternate Success Rate: Available only for defined paths; the percentage of users who reached the goal screen (i.e. completed the task) via a path other than the defined one.

Overall Usability Score: This score represents the overall performance of your prototype. It is calculated by harnessing various metrics such as success rate, alternate success, average time, bounce rate, and misclick rate.

Overall Usability Score = Direct Success Rate + (Indirect Success Rate/ 2) - avg(Misclick%) - avg(Duration%)
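A sketch of this calculation in code, with all inputs expressed as percentages; the function and parameter names are illustrative, not Qatalyst's API:

```python
def overall_usability_score(direct_success_rate, indirect_success_rate,
                            avg_misclick_pct, avg_duration_pct):
    """Overall Usability Score per the formula above.

    direct_success_rate: % who completed the task via the defined path
    indirect_success_rate: % who reached the goal via an alternate path
    avg_misclick_pct / avg_duration_pct: averages across pages
    """
    return (direct_success_rate
            + indirect_success_rate / 2
            - avg_misclick_pct
            - avg_duration_pct)

# Example: 60% direct success, 20% alternate, 8% avg misclicks, 5% avg duration
print(overall_usability_score(60, 20, 8, 5))  # → 57.0
```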

3. User Journey Tree

Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.

For the journeys, you will find the information by the coloured lines in the tree.

In the defined journey, the following information is shown:

  • Green line: The respondents who have navigated through the defined path and landed on the goal screen.
  • Purple Line: When the respondents have taken some path other than the defined path yet landed on the goal screen.
  • Red Line:  When travelling this path, the respondents have not reached the goal screen.
  • Red down arrow 🔽: This icon displays the number of users who closed/skipped the journey after visiting a particular screen.

For the exploratory journey, there is no alternate path. The journey can be either a success or a failure.

Success: When respondents reach the goal screen.

Failure: When respondents do not reach the goal screen.
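The journey classifications above can be sketched as a simple function. The screen ids and the matching rule (an alternate path is any goal-reaching path that differs from the defined one) are illustrative assumptions:

```python
def classify_journey(path, defined_path, goal_screen):
    """Classify a respondent's navigation path for a defined journey.

    path: ordered list of screen ids the respondent visited
    defined_path: ordered list of screens in the defined path
    goal_screen: id of the goal screen
    Returns "direct" (green), "alternate" (purple) or "failed" (red).
    """
    if goal_screen not in path:
        return "failed"       # red line: never reached the goal screen
    if path == defined_path:
        return "direct"       # green line: followed the defined path
    return "alternate"        # purple line: reached the goal another way

print(classify_journey(["A", "B", "C"], ["A", "B", "C"], "C"))  # direct
print(classify_journey(["A", "D", "C"], ["A", "B", "C"], "C"))  # alternate
print(classify_journey(["A", "D"],      ["A", "B", "C"], "C"))  # failed
```

For an exploratory journey, only the first check applies: the journey is a success if the goal screen appears in the path, and a failure otherwise.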

Insights from the User Journey

  • Popular paths: By examining the user journey tree, you can identify the most common paths taken by respondents. This insight helps you understand the preferred navigation patterns and the pages or features that attract the most attention. You can leverage this information to optimize and enhance the user experience on these popular paths.
  • Abandoned paths: The user journey tree can also reveal paths that are frequently abandoned or not followed by respondents. These abandoned paths may indicate where users encounter difficulties, confusion, or disinterest. 
  • Navigation patterns: Analyzing the user journey tree allows you to observe the navigation patterns of respondents. You can identify if users follow a linear path, explore different branches, or backtrack to previous pages. This insight helps you understand how users interact with your prototype and adapt the navigation flow accordingly to ensure a seamless and intuitive user experience.
  • Bottlenecks or roadblocks: The user journey tree can highlight specific pages or interactions where users frequently get stuck or face challenges. These bottlenecks or roadblocks in the user journey can provide valuable insights into areas that may require improvements, such as unclear instructions, confusing interface elements, or complex tasks. By addressing these issues, you can smoothen the user journey and enhance usability.
  • Deviations from expected paths: The user journey tree might reveal unexpected paths taken by respondents that differ from the intended user flow. These deviations can indicate opportunities to optimize the prototype by aligning user behaviour with the desired user journey. Understanding why users deviate from the expected paths can provide insights into their needs, preferences, and potential design or content improvements.

4. Graph metrics

The performance metrics provide a clear picture of the average time spent on each page in the prototype. This information is presented alongside the total time taken to complete the task and the number of respondents who have visited each page. By mapping these metrics together, we gain insights into how users interact with each page and how it contributes to the overall task completion.

Insights from Performance metrics

  • Column height: A decrease in column height indicates a drop in engagement on the corresponding page, suggesting users are leaving or losing interest there. It could be a sign that the page's content or design needs improvement to retain user attention.
  • Column width: If a particular column is wider than the others, it suggests that respondents spent a considerable amount of time on that page. This may indicate that the page provides valuable information or engages users in some way. However, it's important to note that spending too much time on a page can also indicate confusion or difficulty in finding the desired information.

5. Performance Breakdown


This chart showcases the comprehensive performance analysis of each page within the prototype. It presents valuable insights such as the average time spent by respondents on each page, the misclick rate, and the drop-off rate.

By harnessing these metrics, we derive a usability score for every page, offering users a clear understanding of how each page performed so that they can focus on areas that require improvement.

Usability Score = MAX(0, 100 - Drop Off - (Misclick Rate × Misclick Weight) - MIN(10, MAX(0, Average Duration in sec - 5) / 2))

The Misclick weight equals 0.5 points for every misclick.
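Putting the formula together, here is a sketch in code. Interpreting "Misclick Rate" as the number of misclicks (each costing 0.5 points, per the note above) is an assumption; the function name is illustrative:

```python
def page_usability_score(drop_off_pct, misclicks, avg_duration_sec,
                         misclick_weight=0.5):
    """Per-page usability score, per the formula above.

    Each misclick costs `misclick_weight` points; every second spent
    beyond 5s costs 0.5 points, capped at a 10-point penalty.
    """
    duration_penalty = min(10, max(0, avg_duration_sec - 5) / 2)
    score = 100 - drop_off_pct - misclicks * misclick_weight - duration_penalty
    return max(0, score)

# Example: 10% drop-off, 6 misclicks, 9s average duration
print(page_usability_score(10, 6, 9))  # → 85.0
```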

Insights from Performance Breakdown

  • Identify high-performing pages: Pages with a shorter average time spent, lower misclick rate, and lower drop-off rate can be considered well-designed and well-performing. These pages likely provide intuitive interactions and a smooth user experience.
  • Identify low-performing pages: Pages with a higher average time spent, higher misclick rate, and higher drop-off rate may require further investigation and improvement. These pages may have usability issues, unclear navigation, confusing elements, or uninteresting content.
  • Prioritize improvements: By analyzing the metrics, you can prioritize your efforts based on the insights obtained. Focus on optimizing pages with high drop-off rates and high misclick rates to improve user experience, reduce abandonment, and increase engagement.

A page with a usability score below 80 calls for attention. Researchers can check the eye tracking, mouse tracking, and facial coding data to determine whether the behaviour is expected or an anomaly.

6. Emotion AI Metrics

When you click on any page using the performance metrics, you will be seamlessly transported to the detailed Metrics page, where you can delve into insights gathered from eye tracking, facial coding, and mouse clicks.

Here, you will discover information such as the average time spent on the page, the number of respondents who have visited the page, and intricate details regarding the misclick rate.

In the Analytics section, you'll have access to a wealth of metrics, including:

  • All Clicks: This encompasses all clicks made within the prototype, offering a holistic view of user interaction.

  • Misclick: This specific metric isolates clicks made outside the designated clickable areas within the prototype, shedding light on user behaviour in unintended interactions.

  • Mouse Scroll Data: This metric provides valuable insights into how users navigate a scrollable page by revealing the extent of the page they have visited. This metric helps us understand the user's engagement as they scroll through the content, offering valuable information about which areas of the page are being explored.

  • AOI (Area of Interest): On the prototype page, you can create AOIs. Within AOIs, you can glean insights into metrics such as time spent, average time to first fixation, average fixation duration, the number of clicks, and the number of misclicks, providing a deeper understanding of user engagement.

For example, an AOI could be used to track the time that users spend looking at a call to action button, or the number of times they click on a link. This information can be used to improve the usability of the website or app by making sure that the most important elements are easy to find and interact with.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the prototype. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as neutral, positive, or negative, allowing you to gauge the emotional impact of your design or content.

By exploring these meticulously curated metrics, you can gain a comprehensive understanding of user engagement and behaviour, empowering you to make data-driven decisions to enhance your project's performance and user experience.

7. Screen Recording

Under this section, you will find the screen recordings of respondents' test sessions. Use the top dropdown to select a tester.
Along with the video recording, you will get the following functionality: 

  • Eye Tracking Metrics: Shows where users look on the screen with a heatmap.
  • Facial Coding Metrics: Tracks how users feel using facial expressions, displayed as positive and negative emotion charts.
  • AOI (Area of Interest): Lets you choose specific parts of the video to study closely.
  • Transcript: Transcribes everything users say in the video.
  • Highlighting: Helps you point out important parts in the transcript.
  • Notes: Allows you to jot down thoughts or comments at specific times in the video.

Highlight Creation


Understanding Card Sorting Test

What is card sorting?

Card sorting is a valuable user research technique used to understand how individuals organize information mentally. By leveraging card sorting, analysts can gain valuable insights into how users perceive relationships between concepts and how they expect information to be organized. These insights, in turn, inform the design and structure of websites, applications, and other information systems, leading to enhanced usability and an improved user experience.

Card sorting can be conducted using physical cards, where participants physically manipulate and group the cards, or it can be done digitally using online platforms like Qatalyst. The technique allows researchers to gain insights into users' mental models, understand their organizational preferences, and inform the design and structure of information architecture, navigation systems, menus, and labelling within a product or website.

Why do we use Card Sorting?

Card sorting is a valuable technique in UX research for several reasons:

  • Understand how users think: Card sorting helps us understand how users naturally organize and categorize information in their minds. This insight helps us design interfaces that match their mental models, making it easier for them to find what they're looking for.
  • User-friendly design: By involving users in organizing information, we ensure our designs are user-friendly and intuitive. Card sorting helps us create interfaces that feel familiar and make sense to users, resulting in a better user experience.
  • Find common patterns: Card sorting helps us identify common patterns and groupings in how users categorize items. This knowledge guides us in organizing information in a way that makes sense to the majority of users.
  • Improve findability: Effective information organization improves how easily users can find what they're looking for. Card sorting helps us identify logical groupings and labelling conventions that enhance the findability of content.

Types of Card Sorting?

There are three main types of card sorting:

  • Open card sorting: Users are given a set of cards with labels on them and asked to sort them into groups that make sense to them. The researcher does not provide any guidance on how to sort the cards.
  • Closed card sorting: Users are given a set of cards with labels on them and asked to sort the cards into pre-determined categories. The researcher provides a list of categories to choose from.
  • Hybrid card sorting: This is a combination of open and closed card sorting. Users are given a set of cards with labels on them and asked to sort the cards into pre-determined categories. They are also allowed to create their own categories if they do not see any that fit their needs.

How to conduct Card Sorting?

1. Choose the correct type of card sorting. There are three main types of card sorting; choose the method that fits your goal.

  • Open card sorting: When you want to understand how users naturally group information.
  • Closed card sorting: When you already have a good idea of what your categories should be.
  • Hybrid card sorting: When you want feedback on both your initial ideas and how users naturally group information.

2. Prepare the cards. The cards should be clear and concise, and they should represent the information that you want users to sort. Use index cards, sticky notes, or a digital card sorting tool like Qatalyst.

3. Recruit participants. You should recruit participants who are representative of your target audience. 

4. Conduct the card sort. You can conduct the card sorting in person or online. If you conduct the card sorting in person, you must provide a quiet space and a comfortable place for participants to work. If you are conducting the card sorting online, you will need to use a digital card sorting tool.

5. Analyze the results. Once you have collected the results of the card sort, you will need to analyze them. You can use various methods to analyze the results, such as frequency analysis and category analysis.
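For instance, a minimal frequency analysis (counting how often each card landed in each category) could look like this; the data format is illustrative:

```python
from collections import Counter

def frequency_analysis(results):
    """Count how often each card was placed in each category.

    results: list of card sorts, each a dict mapping card -> category.
    Returns {card: Counter({category: count, ...})}.
    """
    freq = {}
    for sort in results:
        for card, category in sort.items():
            freq.setdefault(card, Counter())[category] += 1
    return freq

sorts = [
    {"DBZ": "Anime", "Pasta": "Food"},
    {"DBZ": "Anime", "Pasta": "Food"},
    {"DBZ": "TV", "Pasta": "Food"},
]
print(frequency_analysis(sorts)["DBZ"])  # Counter({'Anime': 2, 'TV': 1})
```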

6. Use the results to improve your information architecture. Once you have analyzed the results of the card sort, you can use them to improve your information architecture. You can use the results to identify the most essential categories for users, determine the best way to label categories and validate or invalidate initial assumptions about information architecture.

Best Practices

  • Choose the correct type of card sorting for your needs. As mentioned earlier, there are three main types of card sorting: open, closed, and hybrid. The type of card sorting you choose will depend on your specific needs and goals.
  • Use clear and concise labels on the cards. The labels on the cards should be clear and concise, and they should represent the information that you want users to sort. Avoid using jargon or technical terms that users may not understand.
  • Have a clear goal for the study. What do you want to learn from the card sort? Once you know your goal, you can tailor the study to collect the necessary data.
  • Collect enough data from enough participants. The number of participants you need will depend on the complexity of your information architecture. However, as a general rule of thumb, you should aim to collect data from at least 15 participants.
  • Be patient and let participants think aloud. This will help you to understand why they are making the decisions they are making. Ask them to explain their thought process as they are sorting the cards.


Create a Card Sort in Qatalyst

To create a card sort, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Card Sorting" option.


Step 3: Here, you can add the task, and add multiple cards and categories by clicking the "+" button. Multiple properties and options are also available to enhance the experience.

Step 4: To further enhance your study, continue adding survey blocks. Use the same process described in Step 2: click the "+" button and select different block types to ask a variety of questions related to the test.

Properties

1. Card Property

a. Image: An image will appear on the card along with the text.

b. Hide Title: The card will appear without the text; enable this option if you have added an image to the card.

c. Randomize: The cards will appear in random order.

2. Card Category

a. Limit cards in category: Only the given number of cards can be added to a particular category.

b. Randomize: The categories will appear in random order.

c. Required: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.

Result View

In the result section of a card sort, you will find the quantitative data about the selection made by different respondents.

In the Categories and Cards section, you will find the following two views for the result data:

Card View: It shows each card, the number of categories it was added to, and the agreement percentage. By clicking on the plus icon, you can see the categories to which these cards are added.


How to read this data?

From the first column, users can infer that the "DBZ" card has been added to two categories, with an agreement percentage of 50%, meaning respondents were evenly split between the two categories.

You can also expand the cards and view the percentage of users who have added the card in a particular category. 


Category View

In the category view, the user can view the category names and the number of cards added in that category, along with the agreement matrix.


After expanding the card, users can view the cards added in that category and the percentage of users who have added them.


Agreement Matrix

An agreement matrix is a visual representation of how often users agree that a card belongs in each category in a card sort. It is a table with rows representing the cards and columns representing the categories. Each cell in the table indicates the agreement rate for a card in a category. The agreement rate is the percentage of users who placed the card in that category.

The agreement matrix can be used to identify which categories are most agreed upon by users, as well as which cards are most ambiguous or difficult to categorize. It can also be used to identify clusters of cards that are often grouped together.
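The computation behind such a matrix is straightforward. As a rough illustration (the function name, data format, and card labels here are hypothetical, not Qatalyst's internal format), the agreement rate for each card-category cell can be derived from raw sort results like this:

```python
from collections import defaultdict

def agreement_matrix(placements):
    """Compute per-(card, category) agreement rates from card-sort results.

    `placements` maps each respondent to their card -> category assignment.
    The agreement rate for a cell is the share of respondents who placed
    that card in that category.
    """
    counts = defaultdict(lambda: defaultdict(int))
    n = len(placements)
    for sort in placements.values():
        for card, category in sort.items():
            counts[card][category] += 1
    return {card: {cat: c / n for cat, c in cats.items()}
            for card, cats in counts.items()}

# Hypothetical results from four respondents
results = {
    "r1": {"DBZ": "Anime", "Friends": "Sitcom"},
    "r2": {"DBZ": "Anime", "Friends": "Sitcom"},
    "r3": {"DBZ": "Cartoon", "Friends": "Sitcom"},
    "r4": {"DBZ": "Cartoon", "Friends": "Sitcom"},
}
matrix = agreement_matrix(results)
# "DBZ" splits 50/50 across two categories; "Friends" has full agreement
```

A card with rates spread thinly across many columns is ambiguous; a card with one high-rate cell is well understood.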

Understanding Tree Testing

What is Tree Testing?

Tree testing is a UX research method used to evaluate the findability and effectiveness of a website or app's information architecture. It involves testing the navigational structure of a product without the influence of visual design, navigation aids, or other elements that may distract or bias users.

By conducting tree testing, we aim to address the fundamental question, "Can users find what they are looking for?" This research technique allows us to evaluate the effectiveness of our information architecture and assess whether users can navigate through the content intuitively, locate specific topics, and comprehend the overall structure of our product. It provides valuable insights into the findability and clarity of our content hierarchy, enabling us to refine and optimize the user experience.


In tree testing, participants are presented with a simplified representation of the product's information hierarchy in the form of a text-based tree structure. This structure typically consists of labels representing different sections, categories, or pages of the website or app. The participants are then given specific tasks or scenarios and asked to locate specific information within the tree.

What is Information Architecture(IA)?

Information architecture (IA) refers to the structural design and organization of information within a system, such as a website, application, or other digital product. It involves arranging and categorizing information in a logical and coherent manner to facilitate effective navigation, retrieval, and understanding by users.

Why do we use Tree Testing?

Here are some of the things that tree testing can be used for:

  • Evaluate Information Architecture: Tree testing allows researchers to evaluate the effectiveness of the information architecture (IA) of a website or app. By testing the navigational structure in isolation, without the influence of visual design or other distractions, it provides a focused assessment of how well users can find and understand information within the product. For example, you might want to test whether users can easily find the page that describes your company's history.
  • Assess Findability: Tree testing helps determine the findability of specific topics or pieces of information. By presenting users with tasks or scenarios and observing their navigation through the tree structure, researchers can identify any difficulties or inefficiencies in locating desired information. This insight helps refine the IA and improve the overall findability for users. For example, you might want to test whether users find the main categories of your website to be straightforward and easy to understand.
  • Identify Navigation Issues: Tree testing allows for the identification of navigation problems, such as incorrect paths, dead ends, or confusing labelling. By analyzing user interactions and collecting feedback, researchers can uncover areas where the IA may be causing confusion or hindering users' ability to navigate efficiently.  For example, you might find that users are having difficulty finding the page that describes your company's products.

Here are some questions that tree testing can answer:

  • What are the most critical topics for users? This question can be used to prioritize the content on your website. By asking users to rank the topics in order of importance, you can get feedback on which topics are most important to them.
  • What are the most common paths that users take through my website? This question can be used to identify the most popular areas of your website. By tracking the paths that users take through your website, you can get feedback on which areas are most popular and which areas need improvement.
  • What are the most common problems that users have finding information on my website? This question can be used to identify the areas of your website that are most difficult to use. By asking users to describe the problems they have found, you can get feedback on how to improve the usability of your website.
  • Do my labels make sense? This question can be used to validate ideas before designing. By asking users to find topics based on their labels, you can get feedback on whether the labels are clear and easy to understand.
  • Is my content grouped logically? This question can be used to test the usability of your navigation. By asking users to find topics based on their location in the hierarchy, you can get feedback on whether the content is grouped logically and easy to navigate.
  • Can users find the information they want easily and quickly? This question can be used to build a foundation for the design that will lay on top of your product structure. By asking users to find topics within a specific amount of time, you can get feedback on whether the information is easy to find and navigate.

How to conduct Tree Testing?

  • Define your goals. What do you want to achieve with the tree test? Do you want to evaluate the findability of specific topics or subtopics? Do you want to get feedback on the overall hierarchy of your website? Once you know your goals, you can start to develop your tree test.
  • Create a tree diagram. This is a visual representation of your website's hierarchy. It should show the top-level categories, as well as the subcategories and sub-subcategories. You can use a spreadsheet or a tree-testing tool to create your tree diagram.
  • Write tasks. The tasks that you give to users will help you to evaluate the findability of topics on your website. The tasks should be specific and measurable. For example, you might ask users to "Find the page that describes our company's history."
  • Recruit participants. You will need to recruit a group of users who represent your target audience. The participants should be familiar with the type of website that they are testing.
  • Conduct the tree test. You can conduct the tree test remotely or in person. If you are conducting the test remotely, you will need to use a tree testing tool. The tool will allow you to present the tree diagram to the participants and track their progress.
  • Analyze the results. The results of the tree test will show you how well users were able to find the topics that you asked them to find. You can use the results to identify areas of the hierarchy that are difficult to understand or navigate.
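To make the analysis step concrete, here is a minimal sketch (the helper and sample data are hypothetical, not part of any tree-testing tool) that computes a task's success rate and the share of each submitted label:

```python
from collections import Counter

def tree_test_summary(submissions, correct_label):
    """Summarize one tree-test task.

    `submissions` lists the final label each participant chose;
    `correct_label` is the label that answers the task.
    """
    n = len(submissions)
    tally = Counter(submissions)
    success_rate = tally[correct_label] / n if n else 0.0
    return {
        "success_rate": success_rate,
        "label_shares": {label: c / n for label, c in tally.items()},
    }

# Hypothetical submissions: three correct answers, one incorrect
submissions = ["Preference Testing"] * 3 + ["Qualitative Study"]
summary = tree_test_summary(submissions, "Preference Testing")
# success_rate -> 0.75; label shares -> 75% / 25%
```

A low success rate, or answers spread across many labels, signals that the corresponding branch of the hierarchy needs rework.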

Best Practices

  • Keep the tasks short and simple.
  • Use clear and concise language.
  • Give the participants enough time to complete the tasks.
  • Ask the participants to think aloud as they are completing the tasks.

Tree Testing in Qatalyst

What is Tree Testing?

Tree testing is a UX research method used to evaluate the findability and effectiveness of a website or app's information architecture. It involves testing the navigational structure of a product without the influence of visual design, navigation aids, or other elements that may distract or bias users.

In tree testing, participants are presented with a simplified representation of the product's information hierarchy in the form of a text-based tree structure. This structure typically consists of labels representing different sections, categories, or pages of the website or app. The participants are then given specific tasks or scenarios and asked to locate specific information within the tree.


Create a Tree Test in Qatalyst

To create a tree test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.


Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Tree Testing" option.


Step 3: Once you have added the block, design your question and information architecture by simply adding the labels and defining the parent-child relationship in a tree-like structure.
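As an illustration of the parent-child structure defined in this step, a small hypothetical information architecture can be modeled as nested labels, with each root-to-label path representing a route a participant might take (the labels below are invented examples, not a Qatalyst format):

```python
# A hypothetical information architecture expressed as nested labels,
# mirroring the parent-child relationships you define in the block.
tree = {
    "Home": {
        "Research Methods": {
            "Preference Testing": {},
            "Card Sorting": {},
        },
        "Pricing": {},
    }
}

def all_paths(node, prefix=()):
    """Yield every root-to-label path in the tree."""
    for label, children in node.items():
        path = prefix + (label,)
        yield path
        yield from all_paths(children, path)

paths = list(all_paths(tree))
# e.g. ("Home", "Research Methods", "Card Sorting") is one findable path
```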


Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Property

Required - Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.

Result View

In the result section of the Tree Test, you will find the following two sections:

End Screen: This section will show the label submitted by the users as the answer to the task and the percentage of users who selected that label.

The screenshot below shows that 4 respondents took the test and submitted two labels (Preference Testing, Qualitative Study) as answers to the task: 75% of the respondents selected Preference Testing and 25% selected Qualitative Study.


Common Path: In this section, you will find the actual path navigated by the respondents, starting from the parent label to the child label they have submitted.

Live Website Testing in Qatalyst

What is Live Website Testing?

In UX research, live website testing refers to the practice of conducting usability testing or user testing on a live and functioning website. This type of testing is done to gather insights and feedback from users as they interact with the website in its actual environment.

Website testing aims to ensure that the website is user-friendly, efficient, engaging, and aligned with the needs and expectations of its target audience.

Live Website Testing Using Qatalyst

The Live Website Testing block enables researchers to create and present a specific task to be performed on a live website to participants. This task is designed to simulate a real user interaction on a website or digital platform, allowing researchers to observe how participants engage with the task in a controlled environment. This provides a focused way to gather insights into user behaviour, decision-making, and preferences during a research session.

  • Path tracking: Path tracking in live website testing involves monitoring and analyzing the specific journeys or paths that users take while navigating through the website. This method allows researchers to understand the sequence of actions users perform, the pages they visit, and the interactions they engage in during their browsing sessions.
  • Average time and drop-off: Understanding the average time users spend on the website and the drop-off points (where users exit the website) is crucial in live website testing. This data provides essential metrics to evaluate user engagement and identify areas that may need improvement.
  • Leveraging Facial Coding and Eye Tracking
Website testing can be further enriched through advanced technologies such as facial coding and eye tracking. These technologies enable Qatalyst to capture respondents' nonverbal cues, shedding light on facial expressions and eye movements. This nuanced analysis unveils unspoken emotions, cognitive processes, and areas of focus, providing a holistic perspective on participant engagement.
  • Unveiling Insights Through Audio Transcripts
The recorded audio holds immense potential for researchers. Audio transcripts offer a deep dive into participants' verbal responses, enabling researchers to analyze language nuances, sentiments, and communication patterns. These insights contribute to a comprehensive understanding of participant attitudes and viewpoints, enriching the research findings.

Conduct Live Website testing in Qatalyst

The Live Website Testing block enables researchers to create and present a specific task to be performed on a live website to participants. This task is designed to simulate a real user interaction on a website, allowing researchers to observe how participants engage with the task in a controlled environment. 

Create a Live Website Test

To create a Live Website test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. 

From the list of options that appear, select "Live Website Testing" under the Task-based research section.

Step 3:  Place the test instructions or scenarios in the top bar and enter the website URL where you want respondents to perform the task in the URL field. Click the "Add Task" icon to include multiple tasks in your testing scenario.

Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Properties

  • Mandatory Test: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
  • Time Limit: You can change the time limit of the test to 30, 60, 90, or 120 seconds using this option. This will depend on the length and the complexity of the task. Please note that once the time is up, the task is automatically submitted, and the recording will be available for that duration only.
  • Show Timer: By enabling this option, a timer will be displayed during the website testing session.
  • Screen Recording: This option is mandatorily enabled. Using this option, the screens of the respondents will be recorded for the entire duration of the test. 

Technology

  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking for, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

You can select more than one tracking technology at once, too.

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the recording of the session along with the transcript and other insights.

Note: When a single task is assigned, the insights provided include a video recording, user path visualizer, journey tree, and performance breakdown. For multiple tasks, the insights will feature only the video recording accompanied by supporting metrics.

Single Task Results

1. Summary Section

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skips) / Respondents) × 100, expressed as a percentage.
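The bounce-rate formula above can be expressed as a small helper (the function and the sample numbers are illustrative, not Qatalyst's implementation):

```python
def bounce_rate(respondents, skips, drop_offs):
    """Bounce rate as defined above: (drop-offs + skips) / respondents * 100."""
    if respondents == 0:
        return 0.0
    return (drop_offs + skips) / respondents * 100

# Hypothetical block summary: 20 respondents, 2 skipped, 3 dropped off
rate = bounce_rate(respondents=20, skips=2, drop_offs=3)
# rate -> 25.0 (percent)
```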

2. Recording and Transcript

In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents. 

Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.

If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the website. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

You can adjust the parameters like blur, radius, and shadow as per your preference.

  • Area of Interest (AOI): AOI is an extension of the eye-tracking metrics. It allows you to analyse the performance of particular elements on the screen. Once you select the AOI option, draw a box over the element; a pop-up will then appear on the screen for time selection. Select the time frame, and insights will appear in a few seconds.

  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your website.

3. User Path Visualizer

User Path Visualizer provides an insightful representation of user journeys via the Sankey chart. This visualizer presents the paths taken by users through the website, offering a comprehensive view of the navigation patterns.

 Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.
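Under the hood, a Sankey-style view is built from page-to-page transition counts. A minimal sketch of that aggregation (the session data and function are hypothetical):

```python
from collections import Counter

def page_transitions(sessions):
    """Count page-to-page transitions across recorded sessions, the raw
    data a Sankey-style path visualization is built from."""
    transitions = Counter()
    for pages in sessions:
        for src, dst in zip(pages, pages[1:]):
            transitions[(src, dst)] += 1
    return transitions

# Hypothetical navigation paths from three respondents
sessions = [
    ["Home", "Products", "Checkout"],
    ["Home", "Products", "Support"],
    ["Home", "Checkout"],
]
flows = page_transitions(sessions)
# ("Home", "Products") occurs twice; the other transitions once each
```

Each (source, destination) pair with its count corresponds to one band in the Sankey chart, so wider bands mean more heavily travelled routes.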

4. Journey Tree

Journey Tree provides a hierarchical representation of user journeys, presenting a structured view of the paths users take as they navigate through the website, along with the dropped-user count at page level.

5. Performance Breakdown

Average Time: This metric offers a glimpse into user engagement by tracking the average duration visitors spend on each page of the website. It serves as an indicator of user interest, interaction, and satisfaction with the provided content and usability. 

Dropoff Rate: The drop-off rate provides a page-by-page analysis of the percentage of users who leave or abandon the website. These points highlight areas where users disengage or encounter difficulties. Analyzing drop-off points helps pinpoint potential issues such as confusing navigation, uninteresting content, or technical problems that prompt users to leave prematurely.
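Both metrics can be illustrated with a short sketch (the visit data and function are hypothetical; Qatalyst computes these for you):

```python
def performance_breakdown(page_visits):
    """Per-page average time and drop-off rate.

    `page_visits` maps each page to a list of (seconds_spent, exited)
    tuples, one per visit; `exited` is True when the visit ended the
    session on that page.
    """
    breakdown = {}
    for page, visits in page_visits.items():
        n = len(visits)
        avg_time = sum(t for t, _ in visits) / n
        drop_off = sum(1 for _, exited in visits if exited) / n * 100
        breakdown[page] = {"avg_time_s": avg_time, "drop_off_pct": drop_off}
    return breakdown

# Hypothetical visit data for two pages
visits = {
    "Home": [(12, False), (8, False), (20, True), (10, False)],
    "Pricing": [(30, True), (25, False)],
}
stats = performance_breakdown(visits)
# Home: avg 12.5 s, 25% drop-off; Pricing: avg 27.5 s, 50% drop-off
```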

Multiple Task Results

1. Summary Section

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off: Number of people who did not move on to the next block.
  • Bounce Rate: ((Drop-off + Skips) / Respondents) × 100, expressed as a percentage.

2. Recording and Transcript

In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents. 

Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.

If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the website. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

You can adjust the parameters like blur, radius, and shadow as per your preference.

  • Area of Interest (AOI): AOI is an extension of the eye-tracking metrics. It allows you to analyse the performance of particular elements on the screen. Once you select the AOI option, draw a box over the element; a pop-up will then appear on the screen for time selection. Select the time frame, and insights will appear in a few seconds.
  • Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your website.


Understanding Moderated Research

Moderated research in UX (User Experience) involves a controlled testing environment where a researcher interacts directly with participants. This method employs a skilled moderator to guide participants through tasks, observe their interactions, and gather qualitative insights. This approach offers an in-depth understanding of user behaviour, preferences, and challenges, enabling researchers to make informed design decisions and enhance the overall user experience. The researcher's active involvement allows for real-time adjustments, targeted questioning, and nuanced observations, making moderated research a valuable tool for evaluation and improvement.

Moderated testing can be conducted in person or remotely. In person, the moderator and participant will be in the same room. Remotely, the moderator and participant will use video conferencing to communicate.

Moderated testing is a more time-consuming and expensive type of usability testing than unmoderated testing. However, it can provide more in-depth feedback that can be used to improve the user experience.

Here are some of the benefits of moderated testing:

  • Deeper Insights: Moderated research allows researchers to probe deeper into participants' thoughts, feelings, and behaviours. The real-time interaction provides an opportunity to uncover underlying motivations and reasons behind user actions, leading to richer insights.
  • Clarification: Moderators can clarify any misunderstandings participants might have about tasks, questions, or the product itself. This ensures that the data collected is accurate and reflects participants' true experiences.
  • Real-time Feedback: Researchers can gather immediate feedback from participants about their experiences with a product or interface. This feedback can lead to quick adjustments and improvements in the design process.
  • Behaviour Observation: Moderators can observe participants' facial expressions, body language, and verbal cues. These non-verbal signals can provide additional context to participants' actions and help in understanding emotional responses.

Here are some of the drawbacks of moderated testing:

  • It can be more time-consuming and expensive than unmoderated testing.
  • It can be difficult to find qualified moderators.
  • It can be disruptive to the user's experience.

Overall, moderated testing is a valuable tool for usability testing. It can provide valuable insights into the user experience that can be used to improve the product or service.

Here are some of the situations where moderated testing is a good choice:

Complex User Flows and Interactions: If your product involves intricate user flows, complex interactions, or multi-step tasks, moderated testing can help guide participants through these processes and ensure they understand and complete tasks correctly.

In-Depth Understanding: If you aim to gather in-depth qualitative insights into participants' experiences, thoughts, emotions, and motivations, moderated testing allows you to ask follow-up questions, probe deeper, and gain a richer understanding of user behaviour.

Usability Issue Identification: If your primary goal is to identify usability issues, pain points, and obstacles users face while interacting with your product, moderated testing is recommended. Moderators can observe participant struggles in real-time and gather detailed context around the issues.

Customized Probing: When you want to tailor your research approach to each participant's unique responses and behaviours, moderated testing provides the flexibility to delve deeper into areas of interest based on individual participant feedback.

Real-Time Feedback: If you need immediate feedback on design changes, feature iterations, or prototypes, moderated testing can offer instant insights that can be acted upon quickly.

Small Sample Sizes: For studies with a small sample size, moderated testing can provide a more nuanced understanding of individual participant experiences and preferences.

Early Design Iterations: During the early stages of design or development, moderated testing can be valuable. A moderator can quickly adapt to changes and provide real-time feedback, enabling iterative improvements before the product reaches advanced stages.

Session Block in Qatalyst

In the realm of user-centric research, we have introduced a vital block known as the "Session Block." This feature enables researchers to delve deep into user experiences, fostering a holistic comprehension of behaviours and insights.


Understanding the Essence of the Session Block

At its core, the Session Block is a component within Qatalyst that empowers researchers to schedule and conduct insightful sessions with participants. This capability plays a pivotal role in what is known as moderated research—a method that hinges on guiding participants through specific tasks while encouraging them to vocalize their thoughts and actions. The result is an invaluable stream of real-time insights that provide a comprehensive view of user behaviour, opinions, and reactions.

Unveiling the Dynamics of Moderated Research

Moderated research, made possible through the Session Block, stands as a dynamic approach to engaging with participants.  It works by asking users to talk about what they're doing and thinking while they use a product or service. This helps us understand how they make decisions, what frustrates them, and when they feel happy about their experience.

As participants navigate through tasks, the moderator assumes a guiding role, ensuring that the user's journey is well-structured and aligns with the study's objectives. Moderators also wield the power of follow-up questions, enabling them to probe deeper into participants' responses and elicit more nuanced insights.

Leveraging Session Blocks for Comprehensive Insights

The Session Block not only empowers researchers to facilitate these insightful interactions but also provides a structured framework for conducting them. Here's how it works:


Scheduling and Set-Up: Researchers can seamlessly schedule sessions by defining crucial details such as the session's name, participant's name and email, moderator's information, language preferences, date and time, and even the option to incorporate facial coding technology if desired.

Real-Time Interaction: During the session, participants engage in tasks while verbally sharing their thought processes. Moderators actively guide the discussion, prompting participants to elaborate on their actions and decisions.

Deeper Exploration: Moderators leverage follow-up questions to delve deeper into participants' viewpoints. This enables them to uncover underlying motivations, preferences, and pain points that might otherwise remain hidden.

Rich Insights: The real-time nature of the interaction, combined with follow-up queries, yields a wealth of qualitative data. These insights provide a nuanced understanding of user behaviours, allowing researchers to make informed decisions and improvements. The following insights can be drawn from the session.

  • Transcript of conversation
  • Tags and highlights
  • Emotion Analytics
  • Talk time and Max. Monologue
  • Filler words

In essence, the Session Block in Qatalyst transforms moderated research into a fluid and structured process. It empowers researchers to not only guide participants through tasks but also to extract profound insights that fuel informed decisions, leading to enhanced user experiences and product refinements. As the bridge between participants and researchers, the Session Block exemplifies Qatalyst's commitment to enabling in-depth, user-focused research in the digital age.

Conduct Moderated Testing using Session Block In Qatalyst

In Qatalyst, you can conduct moderated research by scheduling a meeting online using a session block. Moderated research hinges on the concept of guiding participants through tasks and prompting them to articulate their thoughts aloud as they navigate a product or service. This dynamic approach facilitates a comprehensive understanding of user behaviour as participants verbalize their actions and reactions in real-time. Additionally, moderators have the opportunity to pose follow-up questions, delving deeper into participants' perspectives and extracting valuable insights.

Here are the steps for setting up a session in Qatalyst: 

Steps

Step 1:  Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Session Block".


Step 3: After selecting the block, a form will appear on the screen, where you need to fill in the following details: 

  • Session Name
  • Participant Name and Email
  • Moderator Name and Email
  • Observer: If you add any observers to the meeting, they will not be visible to the participants. Only the moderator can see them.
  • Language: Specify the language to be used during the session. This is required for accurate transcript generation.
  • Date and time
  • Technology: If facial coding technology is selected, the participant's facial expression will be captured using the webcam, and insights will be shown in the result.

Step 4: After you have added the details, you can keep adding other blocks if required. Once done, you can publish the study by clicking the publish button at the top right corner of the page.

Note that session times follow UTC (Coordinated Universal Time).
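Since session times follow UTC, you may need to convert a participant's local time when scheduling. A minimal sketch using Python's standard `zoneinfo` module (the function name and example times are ours, not part of Qatalyst):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc(local_dt: datetime, tz_name: str) -> datetime:
    """Attach the participant's time zone and convert to UTC."""
    return local_dt.replace(tzinfo=ZoneInfo(tz_name)).astimezone(ZoneInfo("UTC"))

# A session scheduled for 3:00 PM in New York in January falls at 20:00 UTC
# (EST is UTC-5); during daylight saving time the offset would differ.
session = to_utc(datetime(2024, 1, 15, 15, 0), "America/New_York")
```

Double-checking the offset this way avoids sending participants a joining link for the wrong hour.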


Step 5: Once you publish the study, you will be directed to the share page containing the session joining link. The attendees of the session will also receive a joining link, which they can use to join the session.


Insights

Once the session is conducted, you can see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Drop-off:  Number of people who have not moved on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) × 100, expressed as a percentage.
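The bounce-rate formula above can be expressed as a small helper (a sketch; the function name is ours):

```python
def bounce_rate(drop_off: int, skips: int, responses: int) -> float:
    """Bounce rate as a percentage: ((drop-off + skips) / responses) * 100."""
    if responses == 0:
        return 0.0  # avoid division by zero when no one has responded yet
    return (drop_off + skips) / responses * 100

# 3 drop-offs and 2 skips out of 50 responses -> 10.0 (percent)
rate = bounce_rate(3, 2, 50)
```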

The session result consists of the following 4 sections: 

  • Video
  • Transcript
  • Highlights
  • Notes

Video

This section allows you to revisit the session whenever needed, preserving the content and context for your convenience.

If you have enabled facial coding tech, you can see the participant's emotional responses during the session. 

Transcript

The next section is Transcript. You will find the auto-generated transcript for it, where you can create tags and highlight the important parts.

The transcript will be generated in the same language which was selected while creating the session. 

Additionally, you can translate the transcript into the 100+ languages available in the platform.

To create a tag, select the text; a prompt will appear where you can give the highlight a title, which becomes the tag.

Highlight

All the highlights created on the transcript will appear in this section. You can also play the specific video portion.


Notes

This section displays the notes created on the recorded video. 


How to create notes?

  • On the recorded video, click on the notes option.
  • Click on the specific time in the video player seeker where you want to add the note.
  • A prompt will appear; add your note there.

Live Mobile App Testing

Live mobile app testing in UX research is the practice of observing and collecting feedback from users as they interact with a mobile app in real time. This approach allows researchers to gather feedback and observe user behaviour, providing invaluable data for optimizing app usability and functionality.

Live Mobile App Testing Using Qatalyst

The Live Mobile App Testing block in Qatalyst allows the users to add their mobile application Play Store URL and the task to be performed in the application. By presenting these tasks to participants, researchers observe and analyze how users engage with the app, providing a controlled yet authentic environment for comprehensive user behaviour analysis.

  • Leveraging Facial Coding and Eye Tracking
Mobile app testing can be further enriched through advanced technologies such as facial coding and eye tracking. These technologies give Qatalyst the capability to capture respondents' nonverbal cues, shedding light on facial expressions and eye movements. This nuanced analysis unveils unspoken emotions, cognitive processes, and areas of focus, providing a holistic perspective on participant engagement.
  • Unveiling Insights Through Audio Transcripts
The recorded audio holds immense potential for researchers. Audio transcripts offer a deep dive into participants' verbal responses, enabling researchers to analyze language nuances, sentiments, and communication patterns. These insights contribute to a comprehensive understanding of participant attitudes and viewpoints, enriching the research findings.

How to conduct mobile application testing in Qatalyst?

Mobile application testing involves assessing and evaluating the usability, functionality, design, and overall user experience of a mobile application. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.

Create a Live Mobile App Test

To create a Live Mobile App test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. 

From the list of options that appear, select "Mobile App Testing" under the Task-based research section.

Step 3: Add the task title and the description. In the URL field, add the Play Store URL of the app on which you want the respondents to perform the task.

How to get the Play Store app URL: 

  • Open Play Store on Mobile or web
  • Search for the application. 
  • On the web: Click on the share button and copy the URL.
  • On mobile: Click on the three dots and then "Share"


Properties

  • Mandatory Test: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
  • Time Limit: You can change the time limit of the test to 30, 60, 90, or 120 seconds using this option. This will depend on the length and complexity of the task. Please note that once the time is over, the task will be submitted automatically, and the recording will be available for that duration only.
  • Show Timer: If enabled, a timer will be displayed during the test.
  • Screen Recording: This option is mandatorily enabled. Using this option, the screens of the respondents will be recorded for the entire duration of the test. 
  • Picture in Picture mode: This option will show the tester's video in the recording as well.

Technology

  • Eye tracking: Eye tracking is a technology that records the movement of the user's eyes as they interact with the design. This technology can provide insights into which elements of the design users are looking at, which areas are most engaging, and which areas may need improvement.
  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions as they interact with the design. This technology can provide insights into users' emotional responses to the product. It can be used to optimize the product's design and messaging to elicit more positive emotional responses from users.

To select the technologies, click on the boxes.

Mobile App Testing Analytics

Mobile application testing involves assessing and evaluating a mobile application's usability, functionality, design, and overall user experience. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.

In this article, we will provide you insights on different metrics you will get for mobile app testing in Qatalyst and their definitions. 

Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Drop-off:  Number of people who have not moved on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) × 100, expressed as a percentage.

The result of the mobile app testing consists of the following 4 sections: 

  • Video
  • Transcript
  • Highlights
  • Notes

Video

In the video section, you will find the screen recording of the test for every respondent. Use the dropdown at the top of the screen to select a respondent.

If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.

  • ET Heatmap: An eye-tracking heatmap is a visual representation of where people look on a page. It is created by tracking the eye movements of users as they interact with the prototype. The heatmap then shows the areas of the screen that received the most attention, with the hottest areas being those that were looked at the most.

You can adjust the parameters like blur, radius and shadow per your preference.

Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your app.

Area of Interest(AOI): AOI is an extension to the eye tracking metrics. It allows you to analyse the performance of particular elements on the screen. Once you select the AOI option, just draw a box over the element, and then a pop-up will appear on the screen for time selection, you can select the time frame, and insights will appear in a few seconds.
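The AOI insight described above boils down to counting how many gaze samples fall inside the drawn box during the selected time frame. A minimal sketch of that calculation (the data layout and function name are our assumptions, not Qatalyst's internals):

```python
def aoi_hit_rate(gaze, box, t_start, t_end):
    """Share of gaze samples inside the AOI box during [t_start, t_end].

    gaze: list of (t, x, y) samples; box: (x_min, y_min, x_max, y_max).
    """
    x0, y0, x1, y1 = box
    # keep only the samples that fall inside the selected time frame
    window = [(x, y) for t, x, y in gaze if t_start <= t <= t_end]
    if not window:
        return 0.0
    hits = sum(1 for x, y in window if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(window)

samples = [(0.0, 10, 10), (0.5, 50, 50), (1.0, 55, 52), (2.0, 90, 90)]
rate = aoi_hit_rate(samples, (40, 40, 60, 60), 0.0, 1.0)  # 2 of 3 samples inside
```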

Transcript

The next section is Transcript. If the recording has audio, you will find the auto-generated transcript for it, where you can create tags and highlight the important parts.

To create a tag, select the text; a prompt will appear where you can give the highlight a title, which becomes the tag.

Highlight

All the highlights created on the transcript will appear in this section. You can also play the specific video portion.

Notes

This section displays the notes created on the recorded video. 

How to create notes?

  • On the recorded video, click on the notes option.
  • Click on the specific time in the video player seeker where you want to add the note.
  • A prompt will appear; add your note there.

User Path Visualizer

User Path Visualizer provides an insightful representation of user journeys via the Sankey chart. This visualizer presents the paths taken by users through the website, offering a comprehensive view of the navigation patterns.

 Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.
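The data behind a Sankey chart like this is essentially a count of page-to-page transitions across all user journeys. A minimal sketch of how such counts could be derived (the data layout is our assumption):

```python
from collections import Counter

def transition_counts(paths):
    """Count page-to-page transitions across all user journeys."""
    counts = Counter()
    for path in paths:
        # pair each page with the one visited next
        for src, dst in zip(path, path[1:]):
            counts[(src, dst)] += 1
    return counts

journeys = [
    ["Home", "Search", "Product"],
    ["Home", "Search", "Cart"],
    ["Home", "Product"],
]
flows = transition_counts(journeys)  # ("Home", "Search") occurs twice
```

Each (source, destination) count maps directly onto the width of a band in the Sankey chart.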

User Journey Tree

Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.

For the journeys, you will find the information by the lines and the name of each page in the tree.

The following information is shown:

  • Green line: The path navigated by the respondents to complete the task.
  • Red down arrow 🔽: This icon displays the number of users who closed/skipped the journey after visiting a particular screen.

Performance Breakdown

This chart showcases the comprehensive performance analysis of each page the respondents have navigated. It presents valuable insights such as the average time spent by respondents on each page, and the drop-off rate.
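The per-page metrics described here, average time spent and drop-off rate, can be sketched as a simple aggregation (the input layout is our assumption, not Qatalyst's internal format):

```python
from collections import defaultdict

def page_performance(visits):
    """visits: list of (page, seconds_spent, exited) tuples, one per page view.

    Returns {page: (average_time_seconds, drop_off_rate_percent)}.
    """
    totals = defaultdict(lambda: [0.0, 0, 0])  # time_sum, views, exits
    for page, seconds, exited in visits:
        agg = totals[page]
        agg[0] += seconds
        agg[1] += 1
        agg[2] += int(exited)
    return {p: (t / n, e / n * 100) for p, (t, n, e) in totals.items()}

views = [("Home", 10, False), ("Home", 20, True), ("Search", 30, False)]
perf = page_performance(views)  # Home: 15.0 s average, 50.0% drop-off
```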

Video Recording Block

What is the Video Screener Block?

The Video Screener Block within the Qatalyst app serves as a dedicated tool for testers to record and submit videos as part of their testing process. This block allows users to integrate video-based insights into their tests effortlessly. Whether it's capturing user interactions, feedback, or suggestions, this feature adds a rich layer to the testing process, providing a holistic understanding of user experiences.

How to Utilize the Video Screener Block

Utilizing the Video Screener Block in the Qatalyst app is straightforward. Testers can easily add this block to their testing sequences, prompting users to record and submit videos based on specified criteria. The intuitive interface ensures a user-friendly experience, allowing for easy navigation and management of video submissions.

Here are the steps for adding a video screener block to the Qatalyst test: 

Steps

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.

Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Video Response Block".

Step 3: Once you have added the block, you can add the title of the task and the description. In the property section, you can define the following: 

  • Time Limit: Maximum time limit for recording or uploading the video.
  • Mandatory Test: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
  • Upload Media: Testers can also upload the media if they do not wish to record it live.
  • Preferred Language: Language spoken in the live recording or uploaded media; this selection is important for accurate transcript generation.

Technology

  • Facial Coding: Facial coding is a technology that is used to analyze users' facial expressions. If the user's face is detected in the video, this technology will give you insights into the expressions portrayed by the user.

Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.

Result

Users can access a concise summary of the Video Screener block, providing an overview of the responses within. Users can view the overall summary, view individual testers' responses and seamlessly navigate to a specific tester's view with just a click. We've also added transcripts and analytics, along with the ability to create and manage highlights.

In the summary section on the right hand side of the page, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off:  Number of people who have not moved on to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) × 100, expressed as a percentage.

In the video response summary dashboard, you will get the following information:

  • Total number of testers: The total number of participants in a study.
  • Completed Testers: Total number of participants who submitted a response.
  • Drop off: Number of users who dropped the test.
  • Emotion metrics: An overall percentage distribution of the emotions in the video response submitted. 
  • Word cloud: A visual representation of the most common words used by participants in their video responses.
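A word cloud like the one above is driven by simple word frequencies. A minimal sketch (the stop-word list is illustrative only, not the platform's actual list):

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "and", "i", "it", "to", "was"}  # trimmed for illustration

def word_frequencies(transcripts):
    """Tokenize transcripts and count non-stop-word occurrences."""
    words = []
    for text in transcripts:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]
    return Counter(words)

freq = word_frequencies(["The checkout was confusing", "I found checkout slow"])
# "checkout" appears twice and would render largest in the cloud
```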

Below on the screen, you will find the video responses submitted by the users; you can open and expand each one to view the detailed analytics of that particular response. Here, you will find the following insights:

  • Media Player: The media player is where you can play the video. 
  • Total Talk Time: The overall duration of the respondent's spoken content.
  • Longest Monologue: The length of the respondent's most extended uninterrupted speech.
  • Filler Words: Analysis of filler words (e.g., "um," "uh") used in the response.
  • Emotion Distribution: Insights into the emotional expressions conveyed during the response.
  • Transcript: A written text version of the spoken content.
  • Highlights: You can select specific parts of the transcript and create highlights, which can be used to reference important topics, keep track of action items, or share specific sections with other team members.
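Several of these metrics, total talk time, longest monologue, and filler-word count, can be approximated from timed transcript segments. A sketch under the assumption that each segment is one uninterrupted stretch of speech (the data layout and filler list are ours):

```python
FILLERS = {"um", "uh", "like"}  # illustrative filler-word list

def speech_metrics(segments):
    """segments: list of (start, end, text) tuples spoken by the respondent.

    Returns (total_talk_time, longest_monologue, filler_count).
    """
    talk_time = sum(end - start for start, end, _ in segments)
    longest = max((end - start for start, end, _ in segments), default=0.0)
    fillers = sum(
        1 for _, _, text in segments for w in text.lower().split() if w in FILLERS
    )
    return talk_time, longest, fillers

segs = [(0.0, 4.0, "um I liked the layout"), (6.0, 16.0, "uh the menu um confused me")]
talk, mono, fill = speech_metrics(segs)  # 14.0 s talk time, 10.0 s monologue, 3 fillers
```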

Using Video Response Block as Screening Block

You can use the video response block as a screener question and add the testers directly to your panel from the video screening question. In this article, we will guide you on the process of adding testers to the native panel from the video screener block.

Step 1: After conducting the video screening test, navigate to the results section where you'll find all the submitted responses neatly organized.

Step 2: Hover over the testers' video thumbnails, and a convenient checkbox will appear. Use this checkbox to select the testers you want to add to your panel.

Step 3: Click on the "export tester" button located in the bottom bar. A pop-up form will appear, providing options to add testers to the default panel, create a new tag for them, or add them to an existing tag.

Why tagging?

Think of tagging as a super helpful tool for keeping things organized in your native panel and while sharing the test. When you tag respondents' videos, you're basically creating a neat and easy-to-search system.

Picture this: you've done lots of tests, and you've got a bunch of different testers. When you tag them based on things like their traits, likes, or other important stuff, it's like creating distinct groups of testers. This makes it a breeze to find and group specific testers quickly.

Step 4: Once you've chosen the desired videos, click the "add" button, and voila! The selected testers have now been successfully added to your panel.

Step 5: Your newly added testers will be seamlessly integrated into your native panel. When sharing a test with the panel, you'll find the added testers listed, making it effortless to include them in your testing initiatives.

Consent Block in Qatalyst


Ensuring transparency and user consent is paramount in any testing process. In Qatalyst, you have the ability to integrate a Consent Block into your studies seamlessly. This block allows you to add titles and descriptions, or upload files as consent materials. During the test, testers can conveniently access and review the contents of the Consent Block, affirming their understanding and agreement with the terms and conditions through a simple checkbox. This ensures that testers are fully informed and compliant, contributing to an ethical and transparent testing environment.

How to Add a Consent Block: Step-by-Step Guide

Step 1: Log in to your Qatalyst Account

Upon logging into your Qatalyst account, you will be directed to the dashboard, where you can manage and create studies.

Step 2: Create a New Study

Click the "Create Study" button on the dashboard to initiate a new study. Choose to start from scratch or use an existing template to streamline the process.

Step 3: Add a Consent Block

Once in the study creation interface, click on the "Add New Block" button. From the list of block options, select "Consent Block" to add this feature to your study.

Step 4: Customize the Consent Block

In the Consent Block, you have the flexibility to add a title and description. Alternatively, you can upload a PDF file containing your consent materials for thorough documentation.

Preview of text and PDF consent:

As shown above, the Consent Block provides a preview of both text and PDF-based consent materials. This ensures that your testers have a clear understanding of the terms and conditions before proceeding with the test.

Step 5: Publish your study

Once you've finished creating your study by adding other blocks, you can go ahead and publish it.

Test Execution

After the welcome block, the consent block appears, and respondents will be prompted to either accept or reject the terms and conditions. If they choose to agree, the test will proceed. In the event of a decline, the study will conclude for the respective tester, ensuring a respectful and consensual testing experience.

First Click Test in Qatalyst

A first-click test is a usability research method used to evaluate how easy it is for users to complete a specific task on a website, app, or any interface with a visual component. It essentially gauges how intuitive the design is by analyzing a user's initial click towards achieving a goal.

Create a FIRST CLICK Test

To create a First Click test, follow these simple steps: 

Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.

Step 2: Once in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "First Click Test".

Properties

  • Number of Clicks Allowed: Set the maximum number of clicks a user can make on the interface; only their last click will be taken into account. 
  • Image Dimension:
  1. Fit to Screen: In this setting, the image is displayed in such a way that it completely fits within the boundaries of the screen, regardless of its original dimensions. Users can view the entire image without the need to scroll vertically or horizontally.
  2. Fit to Width: The image is scaled to cover the full width of the screen while maintaining its original aspect ratio. If the dimensions of the image exceed the width of the screen, users can scroll vertically to view the portions of the image that extend beyond the visible area.
  3. Fit to Height: In this configuration, the height of the image is adjusted to fit the screen while preserving its original width. If the width of the image exceeds the screen width, users can scroll horizontally to explore the entire image.
  • Mandatory Test: Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
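Per the properties above, only the respondent's last click (within the allowed number) is evaluated. A minimal sketch of how a click could be checked against a target region (the function name and data layout are ours, for illustration only):

```python
def first_click_success(clicks, target, max_clicks):
    """Only the last of up to max_clicks clicks counts, per the block's rules.

    clicks: list of (x, y) positions; target: (x_min, y_min, x_max, y_max).
    """
    if not clicks:
        return False
    last_x, last_y = clicks[:max_clicks][-1]  # last click within the allowance
    x0, y0, x1, y1 = target
    return x0 <= last_x <= x1 and y0 <= last_y <= y1

# second click lands inside the target region, so the attempt counts as a success
hit = first_click_success([(5, 5), (120, 40)], (100, 30, 200, 60), max_clicks=3)
```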

Technology

  • Mouse Tracking: Mouse tracking is a technology that records the clicks of the users on the screen as they interact with the design. 

Result View

Once the respondents have taken the test, you will be able to see the analytics in the result section. 

In the summary section, you will find the following information:

  • Respondents: Number of people who initiated the test block.
  • Skip: Number of people who chose to skip the block.
  • Drop-off:  Number of people who have not moved to the next block.
  • Bounce Rate: ((Drop-off + Skip) / Number of Responses) × 100, expressed as a percentage.

Based on the technology selected, you will find the following metrics : 

  • All clicks: The All Clicks metric provides insights on the clicks made by the respondents on the image, giving a complete view of how users interact with it. The size of a click marker depends on how many times respondents have clicked in that area. This helps us understand which parts of the picture are getting more attention from users.

Tracking Technology

Eye Tracking


Eye-tracking is the process of measuring the point of the human gaze (where one is looking) on the screen.

Qatalyst Eye Tracking uses the standard webcam embedded in the laptop/desktop/mobile to measure eye positions and movements. The webcam identifies the position of both eyes and records eye movement as the viewer looks at a stimulus presented in front of them, either on a laptop, desktop, or mobile screen.

Insights from Eye Tracking

For the UX blocks, i.e. Prototype Testing, 5-second Testing, A/B Testing and Preference Testing, you will find the heatmap for the point of gaze.  It uses different colours to show which areas were looked at the most by the respondents. This visual representation helps us understand where their attention was focused, making it easier to make informed decisions.

Properties of Heatmap

  • Radius: The radius refers to the size of the individual data points or "hotspots" in the heatmap. Increasing the radius will make the hotspots larger and more prominent, while decreasing it will make them smaller and more concentrated.
  • Shadow: The shadow parameter controls the presence and intensity of shadows around the hotspots in the heatmap. Adding a shadow effect can enhance the visual depth and make the hotspots stand out, while reducing or removing it will create a flatter and more minimalistic heatmap.
  • Blur: The blur parameter determines the level of blurriness or fuzziness applied to the heatmap. Increasing the blur will result in a smoother and more diffused appearance, while reducing it will make the heatmap sharper and more defined.
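To illustrate the radius control, here is a toy sketch of how gaze points can be accumulated into a heatmap grid, with each point warming all cells within a given radius (purely illustrative, not Qatalyst's rendering code; blur and shadow would be applied at the drawing stage):

```python
def heatmap_grid(points, width, height, radius=2):
    """Accumulate gaze points into a grid; each point warms cells within `radius`.

    A larger radius spreads each point over more cells, making hotspots
    larger and more prominent, as described above.
    """
    grid = [[0] * width for _ in range(height)]
    for px, py in points:
        for y in range(max(0, py - radius), min(height, py + radius + 1)):
            for x in range(max(0, px - radius), min(width, px + radius + 1)):
                grid[y][x] += 1
    return grid

g = heatmap_grid([(5, 5), (5, 5), (9, 2)], width=12, height=8, radius=1)
# cell (5, 5) is the hottest because two gaze points landed there
```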

Experience Eye Tracking

To experience how Eye Tracking works, click here: https://eye.affectlab.io

Read the instructions and start Eye Tracking on your laptop or desktop. Eye Tracking begins with a calibration, after which the system will identify your point of gaze on prompted pictures in real time. 

Please enable the camera while accessing the application.

Study results are made available to users in Affect Labs Insights Dashboards, where you can view the results and insights available for eye-tracking studies.

Eye trackers are also used in marketing, as input devices for human-computer interaction, in product design, and in many other areas.

Mouse Tracking


Mouse click tracking is a technique used to capture and analyze user interactions with a digital interface, specifically focusing on the clicks made by the user using their mouse cursor. It involves tracking the position of mouse clicks and recording this data for analysis and insights.

Insights from Mouse Tracking


A Mouse click-tracking heatmap is a visual representation that showcases the distribution of user clicks on a design or prototype. It provides users with a comprehensive overview of respondents' engagement by highlighting areas that attract the most attention and receive the highest number of clicks. This information can reveal valuable insights into user preferences, pain points, and overall usability, aiding in creating more intuitive and user-friendly interfaces.

For the prototype, you will get the following insights:

  • All Clicks: This feature allows users to access a comprehensive record of all the clicks made on a design or prototype. By selecting this option, you can view and analyze each interaction point initiated by the respondents, providing valuable insights into user behaviour and preferences.
A screenshot of a computerDescription automatically generated
  • Misclicks: With this option, you can specifically focus on the clicks made in areas of the prototype that are not designated as hotspots. It enables you to identify and analyze instances where users unintentionally click on non-interactive regions, offering valuable feedback on the clarity and intuitiveness of the design.
A screenshot of a computerDescription automatically generated

  • Scroll: The scroll option provides information about the specific areas visited by respondents within the prototype. By analyzing this data, you can gain insights into how users navigate and interact with the content, allowing you to optimize the layout, placement, and prominence of key elements within the design.
A screenshot of a computerDescription automatically generated

Facial Coding


Facial coding is the process of interpreting human emotions through facial expressions. Facial expressions are captured using a web camera and decoded into their respective emotions.

Qatalyst helps to capture the emotions of any respondent when they are exposed to any design or prototype. Their expressions are captured using a web camera. Facial movements, such as changes in the position of eyebrows, jawline, mouth, cheeks, etc., are identified. The system can track even minute movements of facial muscles and give data about emotions such as happiness, sadness, surprise, anger, etc.

Insights From Facial Coding

By analyzing facial expressions captured during user interactions, facial coding systems can detect and quantify various emotional metrics, allowing researchers to understand users' emotional states and their impact on design experiences. Let's explore the specific metrics commonly derived from facial coding:

  • Positive Emotion Metrics - Positive emotions play a crucial role in user satisfaction and engagement. Facial coding can measure several positive emotion metrics, including:
  1. Happiness: This metric indicates the level of joy and contentment expressed by users. By detecting smiles and other facial expressions associated with pleasure, researchers can assess the extent to which a design elicits positive emotional responses.
  2. Surprise: The surprise metric captures the degree of astonishment or amazement shown by users. It helps identify moments when users encounter unexpected or novel elements within a design, highlighting aspects that capture their attention and evoke positive emotional responses.

  • Negative Emotion Metrics - Understanding negative emotions is equally important to address pain points and enhance user experiences. Facial coding provides insights into various negative emotion metrics, such as:
  1. Sadness: This metric reveals the extent of sadness or disappointment exhibited by users. Detecting facial expressions associated with sadness helps researchers identify design elements that may evoke negative emotional responses and require improvement or adjustment.
  2. Disgust: The disgust metric gauges the level of revulsion or aversion expressed by users. It helps uncover design aspects that users find unpleasant or repulsive, leading to negative emotional experiences. By identifying and rectifying these elements, designers can create more appealing and user-friendly interfaces.
  3. Anger: The anger metric measures the intensity of anger or frustration displayed by users. It indicates moments when users experience irritation or dissatisfaction with a design, highlighting areas that need refinement. Addressing these sources of anger can significantly enhance the user experience and reduce user frustration.

  • Neutral Attention - While positive and negative emotions are essential, neutral attention provides insight into user engagement without explicit emotional responses. This metric reveals the level of focus and concentration users exhibit when their facial expressions do not convey any specific emotional cues. By analyzing neutral attention, researchers can gauge overall user engagement and measure the effectiveness of design elements in capturing users' interest.
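As a rough illustration of how these metrics might be rolled up for a session, the sketch below averages per-frame emotion scores into the positive, negative, and neutral groups described above. The frame data, score ranges, and grouping are assumptions for illustration, not Qatalyst's actual model output:

```python
from statistics import mean

# Hypothetical per-frame emotion scores (0..1), one dict per captured
# video frame, as a facial-coding model might emit them.
frames = [
    {"happiness": 0.70, "surprise": 0.10, "sadness": 0.05,
     "disgust": 0.02, "anger": 0.03, "neutral": 0.10},
    {"happiness": 0.20, "surprise": 0.55, "sadness": 0.05,
     "disgust": 0.05, "anger": 0.05, "neutral": 0.10},
    {"happiness": 0.05, "surprise": 0.05, "sadness": 0.10,
     "disgust": 0.05, "anger": 0.05, "neutral": 0.70},
]

POSITIVE = ("happiness", "surprise")
NEGATIVE = ("sadness", "disgust", "anger")

def summarize(frames):
    """Average per-frame scores into the three metric groups described above."""
    return {
        "positive": mean(sum(f[e] for e in POSITIVE) for f in frames),
        "negative": mean(sum(f[e] for e in NEGATIVE) for f in frames),
        "neutral_attention": mean(f["neutral"] for f in frames),
    }

summary = summarize(frames)
print({k: round(v, 2) for k, v in summary.items()})
```

A spike in the negative group over a particular stretch of frames would point the researcher at the part of the design shown during that window.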
 

Experience Facial Coding

To experience how Facial Coding works, click here: https://facial.affectlab.io/.

Once you are on the website, start Facial Coding by playing the media on the webpage. Ensure your face stays within the outline provided next to the media.

Please enable your web camera while accessing the application.