In today's digital age, where attention spans are decreasing and users are quick to judge a website or app, it's essential to capture users' attention and engage them quickly. This is where the 5-second test comes into play. In this article, we'll discuss what the 5-second test is and how it's used in UX research.
The 5-second test is a type of UX research that involves showing users a screenshot or a design for 5 seconds and then asking them questions about what they remember or what they think the design is about. The idea is to simulate a user's first impression of a website or app and capture their immediate reactions.
A typical 5-second test involves showing participants the design for 5 seconds, removing it, and then asking follow-up questions about what they noticed and remember.
The 5-second test is often used in the early stages of design or redesign to capture users' initial reactions and make quick improvements. It's a valuable tool for testing new designs, landing pages, and marketing campaigns and can be used to identify potential issues before launching a website or app.
Benefits of the 5-second test
The 5-second test has many benefits, including:
Best Practices
Use cases
Qatalyst offers a test block feature that allows users to conduct 5-second testing.
To create a 5-second test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "5-second test".
Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
Technology
To select the technologies, click on the boxes.
You can select more than one tracking technology at once too.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
In the summary section, you will find the following information:
Based on the technology selected, you will find the following metrics :
Example
Here is an example of how you can use Qatalyst for 5-second testing:
Suppose you are a designer working on a new landing page for a website. You want to use 5-second testing to get feedback on the visual appeal and memorability of the landing page design.
You decide to run a five-second test and ask participants whether they can understand what the company does by looking at the landing page, and whether the message speaks clearly to the audience.
Step 1: Upload the Image
After determining the focus of your test, you can proceed to configure your test within Qatalyst. This can be done by uploading an image of the specific screen you wish to test.
Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.
Step 2: Create Questions around the Test
Now, add questions to understand users' comprehension of the landing page and gauge their overall perception of the website. For this, use the survey blocks. Keep the questions concise and focused so you can capture the quick, instinctive responses respondents form during their brief 5-second exposure.
Step 3: Publish the test and share
Now that your test is ready, it’s time to share the test with the participants.
Step 4: Analyze the Result
After all participants have completed the testing process, it is time to delve into the analytics and examine their responses to assess the success of your test.
In Qatalyst, the results are shown separately for the question blocks and the research blocks. If you have enabled any tracking technologies, you will see the corresponding metrics as well.
Five-second tests are a quick and effective way to measure the clarity of your design and how well it communicates a message, which can help you improve the user experience of your design.
A/B testing is a popular method used in UX research to evaluate the effectiveness of different design options. In this article, we will explore the importance of A/B testing in UX research, how to conduct it, use cases and some best practices to keep in mind.
A/B testing is a technique used to compare two versions of a design to determine which one performs better. In UX research, this technique is used to compare two different designs or variations of the same design to understand which one performs better in terms of user behaviour, engagement, and conversion.
A/B testing involves creating two different versions of a design element, such as a button or a page layout and showing it to the users. User behaviour, such as clicks, eye gaze, or conversions, is then measured and compared between the two versions to determine which one performs better.
For example, let's say a company wants to test two different versions of the website to see which one performs better. The first version of the website has a dark colour scheme, while the second version has a light colour scheme. The company decides to run an A/B test to see which version of the website has a higher conversion rate.
The company analyzes the results of the A/B test, and they find that the second version of the website has a higher conversion rate. The company decides to implement the second version of the website, and they see an increase in sales.
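To judge whether a difference in conversion rates like this is real rather than random noise, the two versions can be compared with a standard two-proportion z-test. The sketch below is an illustration in plain Python (it is not a Qatalyst feature, and the visitor and conversion counts are hypothetical):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates.

    conv_a/conv_b: number of conversions; n_a/n_b: number of visitors.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis that both rates are equal
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 1 - erf(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical data: version A (dark scheme) vs version B (light scheme)
z, p = two_proportion_z_test(conv_a=40, n_a=1000, conv_b=62, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.24, p ≈ 0.025 for these numbers
```

A p-value below the conventional 0.05 threshold suggests the difference in conversion rates is unlikely to be due to chance alone.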
A/B testing can be conducted at any stage of website design, but the most effective time to conduct A/B testing is during the design and development phase. This is because A/B testing can help you identify the most effective design elements, user interface, and user experience, which can save time and resources in the long run.
When conducting A/B testing during website designing, it's important to test different variations of your website design, such as layout, colour scheme, font, and images. This can help you identify the most effective design elements that resonate with your audience.
It's also important to conduct A/B testing on different devices, such as desktops, laptops, tablets, and mobile phones, as user behaviour can vary significantly depending on the device. By testing on different devices, you can ensure that your website is optimized for all types of users.
Furthermore, it's essential to conduct A/B testing on different segments of your audience to ensure that your website design is effective for all user groups. This can include testing different versions of your website on different demographics, such as age, gender, location, and interests.
Best Practices
Here are some best practices to keep in mind when conducting A/B testing:
Use Cases
Here are some use cases for A/B testing:
In all of these cases, A/B testing allows businesses to make data-driven decisions about their marketing and product strategies. By testing different variations of design elements and features, they can identify the most effective approach and improve their overall performance.
In Qatalyst, you can conduct A/B testing on images to determine which one users prefer. Additionally, we offer you the ability to integrate various technologies, such as mouse tracking, facial coding, and eye tracking, to gather additional data and insights about user behaviour and preferences.
To create an A/B test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "A/B test".
Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
Technology
To select the technologies, click on the boxes.
You can select more than one tracking technology at once too.
Once the respondents have taken the test, you will be able to see the analytics in the result section.
On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.
In the summary section, you will find the following information:
On the next screen, based on the technology selected, you will find the following metrics:
Example
Suppose you are an e-commerce business looking to optimize your product page layout for better conversion rates. Specifically, you want to compare two different variations of the "Add to Cart" button to determine which design yields higher user engagement and click-through rates.
Step 1: Set Up the Test
In Qatalyst, set up the A/B test by uploading the two versions of the product page, each with a different design for the "Add to Cart" button. Ensure that only this specific element is changed while keeping the rest of the page consistent. This will help isolate the impact of the button design on user behaviour.
Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.
Step 2: Create Questions around the Test
Now, add questions based on the information you want to gather from respondents. Consider using open-ended questions to gather qualitative feedback that can provide deeper insights.
Step 3: Publish the test and share
Now that your test is ready, it’s time to share the test with the participants.
Step 4: Analyze the Result
After all participants have completed the testing process, it is time to delve into the analytics and examine their responses to assess the success of your test.
In Qatalyst, the results are shown separately for the question blocks and the research blocks. If you have enabled any tracking technologies, you will see the corresponding metrics as well.
A/B testing with Qatalyst empowers you to make data-driven decisions about design changes, enabling continuous optimization and improvement of your product pages to maximize conversions and enhance user experience.
In UX research, it is crucial to understand the preferences of your users. This is where preference testing comes in. Preference testing is a technique that allows you to test multiple design options to determine which one is preferred by users. In this article, we will discuss preference testing in UX research, how it works, and its benefits.
Preference testing is a type of research that helps businesses understand what their customers like and prefer. It involves showing different design variants to people and asking them which one they like the most. By doing this, businesses can learn their customers' preferences and make decisions about how to improve their product to better meet their customers' needs.
Benefits of preference testing
Preference testing has many benefits in UX research, some of which include:
To conduct preference testing, you first need to identify the design elements that you want to test. These could include anything from different colour schemes to variations in layout, content, or navigation. Once you have identified the design elements, you can create multiple variations of each element and then present them to users in randomized order.
Participants in the study are typically shown each variation for a few seconds and then asked to choose which one they prefer. This process is repeated for each design element being tested. Once all the data is collected, you can analyze the results to determine which design elements are most preferred by your target audience.
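Analyzing the collected picks is largely a matter of tallying. As a minimal sketch in plain Python (not part of any tool; the variant names are made up):

```python
from collections import Counter

def preference_results(choices):
    """Tally preference-test picks into a percentage per variant.

    choices: one entry per participant naming the variant they preferred.
    """
    counts = Counter(choices)
    n = len(choices)
    return {variant: 100 * c / n for variant, c in counts.items()}

# Hypothetical picks from 8 participants choosing between two colour schemes
choices = ["Blue scheme"] * 6 + ["Green scheme"] * 2
print(preference_results(choices))  # {'Blue scheme': 75.0, 'Green scheme': 25.0}
```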
Well, ideally, you should conduct preference testing whenever you're trying to improve the user experience of a website or application. More specifically, preference testing can be particularly useful when you're trying to make decisions about design, content, navigation, or user flows.
For example, let's say you're designing a new website, and you're trying to decide which colour scheme to use. You could conduct a preference test to see which colour scheme is more appealing to your target audience. Or, let's say you're redesigning your e-commerce site, and you're trying to decide where to place the "add to cart" button. You could conduct a preference test to see which placement is more intuitive and leads to more conversions.
In short, preference testing can be a valuable tool whenever you're trying to make decisions about the user experience of a website or application. It allows you to get feedback from users and make data-driven decisions that can improve the overall user experience.
Best Practices
Use Cases
Qatalyst offers a test block feature that allows users to conduct preference testing on various elements of the product. Users can add different versions of an element, such as two different designs, and ask users which one they prefer. This data can be used to inform product development decisions and optimize the product's design and features.
Properties
Technology
To select the technologies, click on the boxes.
You can select more than one tracking technology at once too.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.
In the summary section, you will find the following information:
On the next screen, based on the technology selected, you will find the following metrics:
In UX research, it is important to test your prototype before you start building your product. In this article, we will explore the importance of prototype testing in UX research, how to conduct it, use cases, and some best practices to keep in mind.
A prototype is an early version or a design mock-up of a product or feature that is used to test its design and functionality before it is produced or released. It is a simplified representation of the final product created to illustrate key features and identify design flaws.
Prototype testing is a type of testing that involves evaluating a preliminary version of a product to identify design flaws and gather feedback from users or stakeholders. The goal of prototype testing is to improve the product's design, functionality, and user experience before it is released to the market. Prototype testing helps teams refine their ideas and concepts before investing time and resources in the final product, saving time and money and ensuring the product meets the needs and expectations of users.
The process of prototype testing typically involves the following steps:
Prototype testing can help ensure that the final product meets the needs and expectations of users and is free of design flaws or usability issues. It can save time and resources by identifying and addressing design flaws early in the development process, resulting in a more successful product launch.
Prototype testing should be conducted during the product development process, ideally after a preliminary version of the product has been created. The timing of prototype testing will depend on the specific product being developed and the stage of the development process.
In general, prototype testing should be conducted when:
Overall, prototype testing should be conducted early and often during the product development process to ensure that the final product is user-friendly, effective, and meets the needs of its intended audience.
Best Practices
Here are some best practices to consider when conducting prototype testing:
Qatalyst offers a test block feature that allows users to conduct prototype testing. You can upload a prototype of your website or mobile app, define the flow of the design, test it with respondents, and gather their responses.
Prerequisite for prototype link
Steps for adding prototypes:
Journey Paths
Defined Path: If you have pre-determined navigation paths for your prototype, using a defined path allows you to assess which path is most convenient or preferred by users. This helps you understand which specific path users tend to choose among the available options.
Exploratory Path: Choose an exploratory path when you want to test whether respondents can navigate between the screens and complete the given task, and to gather information about users' natural behaviour and preferences. This approach encourages users to freely explore the prototype and interact with it based on their own instincts and preferences. It can reveal unexpected insights and usage patterns that may not have been accounted for in predefined paths.
Properties
Technology
To select the technologies, click on the boxes.
You can select more than one tracking technology at once too.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
1. Blocks Summary
In the summary section, you will find the following information:
2. Task Summary
In the dashboard of the result, you will find the summary of the test and the following information:
Overall Usability Score: This score represents the overall performance of your prototype. It is calculated by harnessing various metrics such as success rate, alternate success, average time, bounce rate, and misclick rate.
Overall Usability Score = Direct Success Rate + (Indirect Success Rate/ 2) - avg(Misclick%) - avg(Duration%)
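Read as code, the formula above could be computed as follows. This is only a sketch of the stated formula; Qatalyst's exact implementation may differ, and the example percentages are hypothetical:

```python
def overall_usability_score(direct_success, indirect_success,
                            misclick_pcts, duration_pcts):
    """Overall Usability Score = Direct Success Rate
    + Indirect Success Rate / 2 - avg(Misclick%) - avg(Duration%).

    All inputs are percentages (0-100); the misclick and duration
    values are per-page lists that get averaged.
    """
    avg = lambda xs: sum(xs) / len(xs)
    return (direct_success + indirect_success / 2
            - avg(misclick_pcts) - avg(duration_pcts))

# Hypothetical task: 60% direct success, 20% indirect success,
# plus per-page misclick and duration percentages
print(overall_usability_score(60, 20, [5, 10, 15], [8, 12, 10]))  # 50.0
```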
3. User Journey Tree
Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.
For each journey, the coloured lines in the tree indicate the outcome of the path taken.
In the defined journey, the following information is shown:
For the exploratory journey, there is no alternate path. The journey can be either a success or a failure.
Success: When the respondents can reach the goal screen.
Failure: When respondents do not reach the goal screen.
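The success/failure rule for exploratory journeys boils down to checking whether the goal screen appears in a respondent's path. A minimal sketch (the screen names are hypothetical):

```python
def journey_outcomes(paths, goal_screen):
    """For exploratory journeys, a path is a success when it reaches the
    goal screen and a failure otherwise. Returns the success rate (%)."""
    successes = sum(1 for path in paths if goal_screen in path)
    return 100 * successes / len(paths)

# Hypothetical paths navigated by four respondents
paths = [
    ["Home", "Pricing", "Checkout"],
    ["Home", "Blog"],                  # never reached the goal: failure
    ["Home", "Pricing", "Checkout"],
    ["Home", "Search", "Checkout"],
]
print(journey_outcomes(paths, "Checkout"))  # 75.0
```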
Insights from the User Journey
4. Graph metrics
The performance metrics provide a clear picture of the average time spent on each page in the prototype. This information is presented alongside the total time taken to complete the task and the number of respondents who have visited each page. By mapping these metrics together, we gain insights into how users interact with each page and how it contributes to the overall task completion.
Insights from Performance metrics
5. Performance Breakdown
This chart showcases the comprehensive performance analysis of each page within the prototype. It presents valuable insights such as the average time spent by respondents on each page, the misclick rate, and the drop-off rate.
By harnessing these metrics, we derive a usability score for every page, offering users a clear understanding of how each page performed so that they can focus on areas that require improvement.
Usability Score = MAX(0, 100 - Drop Off - (Misclick Rate * misclick weight) - MIN(10, MAX(0, Average duration in sec - 5) / 2))
The misclick weight equals 0.5 points for every misclick.
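Put into code, the per-page score could be computed as below. This is a sketch of the stated formula, not Qatalyst's internal implementation, and the example figures are hypothetical:

```python
def page_usability_score(drop_off_pct, misclick_rate_pct, avg_duration_sec,
                         misclick_weight=0.5):
    """Per-page usability score per the formula above.

    Rates are percentages (0-100); duration is in seconds. Time beyond
    5 seconds is penalized at 0.5 points per second, capped at 10.
    """
    duration_penalty = min(10, max(0, avg_duration_sec - 5) / 2)
    score = (100 - drop_off_pct
             - misclick_rate_pct * misclick_weight
             - duration_penalty)
    return max(0, score)  # the score never goes below zero

# Hypothetical page: 10% drop-off, 8% misclick rate, 9 s average duration
print(page_usability_score(10, 8, 9))  # 100 - 10 - 4 - 2 = 84.0
```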
Insights from Performance Breakdown
A page with a usability score below 80 calls for attention. Researchers can check the eye tracking, mouse tracking, and facial coding data to figure out whether the behaviour is expected or an anomaly.
6. Emotion AI Metrics
When you click on any page in the performance metrics, you will be taken to the detailed Metrics page, where you can delve into insights gathered from eye tracking, facial coding, and mouse clicks.
Here, you will discover information such as the average time spent on the page, the number of respondents who have visited the page, and intricate details regarding the misclick rate.
In the Analytics section, you'll have access to a wealth of metrics, including:
For example, an AOI (Area of Interest) could be used to track the time that users spend looking at a call-to-action button, or the number of times they click on a link. This information can be used to improve the usability of the website or app by making sure that the most important elements are easy to find and interact with.
By exploring these meticulously curated metrics, you can gain a comprehensive understanding of user engagement and behaviour, empowering you to make data-driven decisions to enhance your project's performance and user experience.
7. Screen Recording
Under this section, you will find screen recordings of the respondents' sessions while taking the test. You can use the dropdown at the top to select a tester.
Along with the video recording, you will get the following functionality:
Highlight Creation
Card sorting is a valuable user research technique used to understand how individuals organize information mentally. By leveraging card sorting, analysts can gain valuable insights into how users perceive relationships between concepts and how they expect information to be organized. These insights, in turn, inform the design and structure of websites, applications, and other information systems, leading to enhanced usability and an improved user experience.
Card sorting can be conducted using physical cards, where participants physically manipulate and group the cards, or it can be done digitally using online platforms like Qatalyst. The technique allows researchers to gain insights into users' mental models, understand their organizational preferences, and inform the design and structure of information architecture, navigation systems, menus, and labelling within a product or website.
Card sorting is a valuable technique in UX research for several reasons:
Types of Card Sorting
There are three main types of card sorting:
How to conduct Card Sorting?
1. Choose the correct type of card sorting. There are three main types, each suited to a different goal:
Open card sorting: when you want to understand how users naturally group information.
Closed card sorting: when you already have a good idea of what your categories should be.
Hybrid card sorting: when you want feedback on both your initial ideas and how users naturally group information.
2. Prepare the cards. The cards should be clear and concise, and they should represent the information that you want users to sort. Use index cards, sticky notes, or a digital card sorting tool like Qatalyst.
3. Recruit participants. You should recruit participants who are representative of your target audience.
4. Conduct the card sort. You can conduct the card sorting in person or online. If you conduct the card sorting in person, you must provide a quiet space and a comfortable place for participants to work. If you are conducting the card sorting online, you will need to use a digital card sorting tool.
5. Analyze the results. Once you have collected the results of the card sort, you will need to analyze them. You can use various methods to analyze the results, such as frequency analysis and category analysis.
6. Use the results to improve your information architecture. Once you have analyzed the results of the card sort, you can use them to improve your information architecture. You can use the results to identify the most essential categories for users, determine the best way to label categories and validate or invalidate initial assumptions about information architecture.
Best Practices
Qatalyst lets you conduct card sorting digitally, so participants can group and label cards online instead of handling physical cards.
To create a card sort, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Card Sorting" option.
Step 3: Here, you can add the task and add multiple cards and Categories by clicking on the "+" button available. There are multiple properties and options also available to enhance the experience.
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
1. Card Property
a. Image: An image will also appear on the card along with the text.
b. Hide Title: The card will appear without the text; enable this option if you have added an image to the card.
c. Randomize: The cards will appear in random order.
2. Card Category
a. Limit card in category: Using this property, only the given number of cards can be added to a particular category.
b. Randomize: The categories will appear in random order.
Required - Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
Result View
In the result section of a card sort, you will find the quantitative data about the selection made by different respondents.
In the Categories and Cards section, you will find the following two views for the result data:
Card View: It shows each card and the number of categories it was added to, along with the agreement percentage. By clicking on the plus icon, you can see the categories to which each card was added.
How to read this data?
From the first column, users can infer that the "DBZ" card was added to two categories with an agreement percentage of 50%, meaning respondents were evenly split between the two categories.
You can also expand the cards and view the percentage of users who have added the card in a particular category.
Category View
In the category view, the user can view the category names and the number of cards added in that category, along with the agreement matrix.
After expanding a category, users can view the cards added to that category and the percentage of users who added them.
Agreement Matrix
An agreement matrix is a visual representation of how often users agree that a card belongs in each category in a card sort. It is a table with rows representing the cards and columns representing the categories. Each cell in the table indicates the agreement rate for a card in a category. The agreement rate is the percentage of users who placed the card in that category.
The agreement matrix can be used to identify which categories are most agreed upon by users, as well as which cards are most ambiguous or difficult to categorize. It can also be used to identify clusters of cards that are often grouped together.
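Computing such a matrix from raw card-sort data is straightforward: count, per card, how many respondents placed it in each category. A minimal sketch in plain Python (not Qatalyst's own code; the cards, categories, and respondents are hypothetical):

```python
from collections import defaultdict

def agreement_matrix(sorts):
    """Build a card-by-category agreement matrix from card-sort results.

    `sorts` holds one dict per respondent mapping card -> chosen category.
    Each cell of the result is the percentage of respondents who placed
    that card in that category.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for respondent in sorts:
        for card, category in respondent.items():
            counts[card][category] += 1
    n = len(sorts)
    return {card: {cat: 100 * c / n for cat, c in cats.items()}
            for card, cats in counts.items()}

# Hypothetical results from four respondents
sorts = [
    {"DBZ": "Anime", "Batman": "Comics"},
    {"DBZ": "Anime", "Batman": "Comics"},
    {"DBZ": "Comics", "Batman": "Comics"},
    {"DBZ": "Comics", "Batman": "Movies"},
]
print(agreement_matrix(sorts))
# DBZ splits 50/50 between Anime and Comics; Batman is 75% Comics
```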
Tree testing is a UX research method used to evaluate the findability and effectiveness of a website or app's information architecture. It involves testing the navigational structure of a product without the influence of visual design, navigation aids, or other elements that may distract or bias users.
By conducting tree testing, we aim to address the fundamental question, "Can users find what they are looking for?" This research technique allows us to evaluate the effectiveness of our information architecture and assess whether users can navigate through the content intuitively, locate specific topics, and comprehend the overall structure of our product. It provides valuable insights into the findability and clarity of our content hierarchy, enabling us to refine and optimize the user experience.
In tree testing, participants are presented with a simplified representation of the product's information hierarchy in the form of a text-based tree structure. This structure typically consists of labels representing different sections, categories, or pages of the website or app. The participants are then given specific tasks or scenarios and asked to locate specific information within the tree.
Information architecture (IA) refers to the structural design and organization of information within a system, such as a website, application, or other digital product. It involves arranging and categorizing information in a logical and coherent manner to facilitate effective navigation, retrieval, and understanding by users.
Here are some of the things that tree testing can be used for:
Here are some questions that tree testing can answer:
How to conduct Tree Testing?
Best Practices
Qatalyst lets you run tree tests on a text-based representation of your product's information hierarchy to evaluate how easily users can find content.
To create a tree test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Tree Testing" option.
Step 3: Once you have added the block, design your question and information architecture by simply adding the labels and defining the parent-child relationship in a tree-like structure.
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Property
Required - Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
Result View
In the result section of the Tree Test, you will find the following two sections:
End Screen: This section will show the label submitted by the users as the answer to the task and the percentage of users who selected that label.
The screenshot below shows that 4 respondents took the test and submitted two labels (Preference Testing, Qualitative Study) as answers to the task; 75% of the respondents selected Preference Testing and 25% selected Qualitative Study.
Common Path: In this section, you will find the actual path navigated by the respondents, starting from the parent label down to the child label they submitted.
In UX research, live website testing refers to the practice of conducting usability testing or user testing on a live and functioning website. This type of testing is done to gather insights and feedback from users as they interact with the website in its actual environment.
Website testing aims to ensure that the website is user-friendly, efficient, engaging, and aligned with the needs and expectations of its target audience.
The Live Website Testing block enables researchers to create and present a specific task to be performed on a live website to participants. This task is designed to simulate a real user interaction on a website or digital platform, allowing researchers to observe how participants engage with the task in a controlled environment. This provides a focused way to gather insights into user behaviour, decision-making, and preferences during a research session.
To create a Live Website test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block.
From the list of options that appear, select "Live Website Testing" under the Task-based research section.
Step 3: Place the test instructions or scenarios in the top bar and enter the website URL where you want respondents to perform the task in the URL field. Click the "Add Task" icon to include multiple tasks in your testing scenario.
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
Technology
To select the technologies, click on the boxes.
You can select more than one tracking technology at once, too.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the recording of the session along with the transcript and other insights.
Note: When a single task is assigned, the insights provided include a video recording, user path visualizer, journey tree, and performance breakdown. For multiple tasks, the insights will feature only the video recording accompanied by supporting metrics.
Single Task Results
1. Summary Section
In the summary section, you will find the following information:
2. Recording and Transcript
In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents.
Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.
If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.
You can adjust the parameters like blur, radius, and shadow as per your preference.
3. User Path Visualizer
User Path Visualizer provides an insightful representation of user journeys via the Sankey chart. This visualizer presents the paths taken by users through the website, offering a comprehensive view of the navigation patterns.
Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.
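The edges of a Sankey chart like this are essentially counts of page-to-page transitions across all user paths. A simplified sketch of how such transition counts can be derived (the page names and paths below are illustrative, not Qatalyst output):

```python
from collections import Counter

def transition_counts(paths):
    """Count page-to-page transitions across all user paths (Sankey edges)."""
    edges = Counter()
    for path in paths:
        for src, dst in zip(path, path[1:]):
            edges[(src, dst)] += 1
    return edges

paths = [
    ["Home", "Pricing", "Signup"],
    ["Home", "Pricing", "Docs"],
    ["Home", "Docs"],
]
# transition_counts(paths)[("Home", "Pricing")] -> 2
```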
4. Journey Tree
Journey Tree provides a hierarchical representation of user journeys, presenting a structured view of the paths users take as they navigate through the website, along with the dropped user count at a page level.
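Conceptually, a journey tree nests each visited page under its predecessor and counts visits per node; the drop in visit counts between a parent and its children is the dropped user count. A minimal sketch with hypothetical paths:

```python
def journey_tree(paths):
    """Build a nested tree of navigated paths with visit counts per node;
    dropped-user counts can be read off as the difference in visits
    between a node and its children."""
    root = {}
    for path in paths:
        node = root
        for label in path:
            entry = node.setdefault(label, {"visits": 0, "children": {}})
            entry["visits"] += 1
            node = entry["children"]
    return root

paths = [["Home", "Products"], ["Home", "Products", "Checkout"], ["Home"]]
tree = journey_tree(paths)
# Home seen 3 times, Products 2, Checkout 1: one user dropped at Home,
# one more dropped at Products.
```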
5. Performance Breakdown
Average Time: This metric offers a glimpse into user engagement by tracking the average duration visitors spend on each page of the website. It serves as an indicator of user interest, interaction, and satisfaction with the provided content and usability.
Dropoff Rate: The drop-off rate provides a page-by-page analysis of the percentage of users who leave or abandon the website. These points highlight areas where users disengage or encounter difficulties. Analyzing drop-off points helps pinpoint potential issues such as confusing navigation, uninteresting content, or technical problems that prompt users to leave prematurely.
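Both metrics follow directly from per-page visit data. A sketch of the arithmetic, assuming a simple funnel ordering of pages (the data and function below are illustrative, not Qatalyst's implementation):

```python
def page_metrics(visits):
    """visits: {page: [seconds spent by each visitor]}, ordered by funnel step.
    Returns average time per page and the drop-off rate between steps."""
    pages = list(visits)
    avg_time = {p: sum(t) / len(t) for p, t in visits.items()}
    dropoff = {}
    for prev, cur in zip(pages, pages[1:]):
        entered, continued = len(visits[prev]), len(visits[cur])
        dropoff[cur] = round(100 * (entered - continued) / entered)
    return avg_time, dropoff

visits = {"Home": [10, 20, 30, 40], "Pricing": [15, 25], "Signup": [5]}
avg_time, dropoff = page_metrics(visits)
# Half of the Home visitors never reached Pricing: dropoff["Pricing"] -> 50
```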
Multiple Task Results
1. Summary Section
In the summary section, you will find the following information:
2. Recording and Transcript
In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents.
Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.
If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.
You can adjust the parameters like blur, radius, and shadow as per your preference.
Moderated research in UX (User Experience) involves a controlled testing environment where a researcher interacts directly with participants. This method employs a skilled moderator to guide participants through tasks, observe their interactions, and gather qualitative insights. This approach offers an in-depth understanding of user behaviour, preferences, and challenges, enabling researchers to make informed design decisions and enhance the overall user experience. The researcher's active involvement allows for real-time adjustments, targeted questioning, and nuanced observations, making moderated research a valuable tool for evaluation and improvement.
Moderated testing can be conducted in person or remotely. In person, the moderator and participant will be in the same room. Remotely, the moderator and participant will use video conferencing to communicate.
Moderated testing is a more time-consuming and expensive type of usability testing than unmoderated testing. However, it can provide more in-depth feedback that can be used to improve the user experience.
Here are some of the benefits of moderated testing:
Here are some of the drawbacks of moderated testing:
Overall, moderated testing is a valuable tool for usability testing. It can provide valuable insights into the user experience that can be used to improve the product or service.
Here are some of the situations where moderated testing is a good choice:
Complex User Flows and Interactions: If your product involves intricate user flows, complex interactions, or multi-step tasks, moderated testing can help guide participants through these processes and ensure they understand and complete tasks correctly.
In-Depth Understanding: If you aim to gather in-depth qualitative insights into participants' experiences, thoughts, emotions, and motivations, moderated testing allows you to ask follow-up questions, probe deeper, and gain a richer understanding of user behaviour.
Usability Issue Identification: If your primary goal is to identify usability issues, pain points, and obstacles users face while interacting with your product, moderated testing is recommended. Moderators can observe participant struggles in real-time and gather detailed context around the issues.
Customized Probing: When you want to tailor your research approach to each participant's unique responses and behaviours, moderated testing provides the flexibility to delve deeper into areas of interest based on individual participant feedback.
Real-Time Feedback: If you need immediate feedback on design changes, feature iterations, or prototypes, moderated testing can offer instant insights that can be acted upon quickly.
Small Sample Sizes: For studies with a small sample size, moderated testing can provide a more nuanced understanding of individual participant experiences and preferences.
Early Design Iterations: During the early stages of design or development, moderated testing can be valuable. A moderator can quickly adapt to changes and provide real-time feedback, enabling iterative improvements before the product reaches advanced stages.
In the realm of user-centric research, we have introduced a vital block known as the "Session Block." This feature enables researchers to delve deep into user experiences, fostering a holistic comprehension of behaviours and insights.
Understanding the Essence of the Session Block
At its core, the Session Block is a component within Qatalyst that empowers researchers to schedule and conduct insightful sessions with participants. This capability plays a pivotal role in what is known as moderated research—a method that hinges on guiding participants through specific tasks while encouraging them to vocalize their thoughts and actions. The result is an invaluable stream of real-time insights that provide a comprehensive view of user behaviour, opinions, and reactions.
Unveiling the Dynamics of Moderated Research
Moderated research, made possible through the Session Block, stands as a dynamic approach to engaging with participants. It works by asking users to talk about what they're doing and thinking while they use a product or service. This helps us understand how they make decisions, what frustrates them, and when they feel happy about their experience.
As participants navigate through tasks, the moderator assumes a guiding role, ensuring that the user's journey is well-structured and aligns with the study's objectives. Moderators also wield the power of follow-up questions, enabling them to probe deeper into participants' responses and elicit more nuanced insights.
Leveraging Session Blocks for Comprehensive Insights
The Session Block not only empowers researchers to facilitate these insightful interactions but also provides a structured framework for conducting them. Here's how it works:
Scheduling and Set-Up: Researchers can seamlessly schedule sessions by defining crucial details such as the session's name, participant's name and email, moderator's information, language preferences, date and time, and even the option to incorporate facial coding technology if desired.
Real-Time Interaction: During the session, participants engage in tasks while verbally sharing their thought processes. Moderators actively guide the discussion, prompting participants to elaborate on their actions and decisions.
Deeper Exploration: Moderators leverage follow-up questions to delve deeper into participants' viewpoints. This enables them to uncover underlying motivations, preferences, and pain points that might otherwise remain hidden.
Rich Insights: The real-time nature of the interaction, combined with follow-up queries, yields a wealth of qualitative data. These insights provide a nuanced understanding of user behaviours, allowing researchers to make informed decisions and improvements. The following insights can be drawn from the session.
In essence, the Session Block in Qatalyst transforms moderated research into a fluid and structured process. It empowers researchers to not only guide participants through tasks but also to extract profound insights that fuel informed decisions, leading to enhanced user experiences and product refinements. As the bridge between participants and researchers, the Session Block exemplifies Qatalyst's commitment to enabling in-depth, user-focused research in the digital age.
In Qatalyst, you can conduct moderated research by scheduling a meeting online using a session block. Moderated research hinges on the concept of guiding participants through tasks and prompting them to articulate their thoughts aloud as they navigate a product or service. This dynamic approach facilitates a comprehensive understanding of user behaviour as participants verbalize their actions and reactions in real-time. Additionally, moderators have the opportunity to pose follow-up questions, delving deeper into participants' perspectives and extracting valuable insights.
Here are the steps for setting up a session in Qatalyst:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Session Block".
Step 3: After selecting the block, a form will appear on the screen, where you need to fill in the following details:
Step 4: After you have added the details, you can keep adding other blocks if required. Once done, you can publish the study by clicking on the publish button available at the top right corner of the page.
Note that session time follows UTC.
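Since session times follow UTC, it can help to convert the participant's local slot before entering it. A quick sketch using Python's standard library (the IST offset and times below are a hypothetical example):

```python
from datetime import datetime, timezone, timedelta

# Hypothetical example: a session scheduled for 14:30 IST (UTC+5:30).
ist = timezone(timedelta(hours=5, minutes=30))
local_slot = datetime(2024, 5, 20, 14, 30, tzinfo=ist)

# Convert to UTC before entering it in the session form.
utc_slot = local_slot.astimezone(timezone.utc)
# 14:30 IST is 09:00 UTC on the same date.
```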
Step 5: Once you publish the study, you will be directed to the share page for the session joining link, and the attendees of the session will receive a joining link as well, using which they can join the session.
Insights
Once the session is conducted, you can see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.
In the summary section, you will find the following information:
The session results consist of the following four sections:
Video
This section allows you to revisit the session whenever needed, preserving the content and context for your convenience.
If you have enabled facial coding tech, you can see the participant's emotional responses during the session.
Transcript
The next section is Transcript. You will find the auto-generated transcript for it, where you can create tags and highlight the important parts.
The transcript will be generated in the same language that was selected while creating the session.
Additionally, you can translate the transcript into any of the 100+ languages available in the platform.
To create a tag, select the text, and a prompt will appear where you can give the highlight a title, known as a "Tag".
Highlight
All the highlights created on the transcript will appear in this section. You can also play the specific video portion.
Notes
This section displays the notes created on the recorded video.
Live mobile app testing in UX research is the practice of observing and collecting feedback from users as they interact with a mobile app in real time. This approach allows researchers to gather feedback and observe user behaviour, providing invaluable data for optimizing app usability and functionality.
The Live Mobile App Testing block in Qatalyst allows the users to add their mobile application Play Store URL and the task to be performed in the application. By presenting these tasks to participants, researchers observe and analyze how users engage with the app, providing a controlled yet authentic environment for comprehensive user behaviour analysis.
Mobile application testing involves assessing and evaluating the usability, functionality, design, and overall user experience of a mobile application. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.
To create a Live Mobile App test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block.
From the list of options that appear, select "Mobile App Testing" under the Task-based research section.
Step 3: Add the task title and the description. In the URL field, add the Play Store URL of the app on which you want the respondents to perform the task.
How to get the Play Store app URL:
Properties
Technology
To select the technologies, click on the boxes.
Mobile application testing involves assessing and evaluating a mobile application's usability, functionality, design, and overall user experience. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.
In this article, we will provide you insights on different metrics you will get for mobile app testing in Qatalyst and their definitions.
Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.
In the summary section, you will find the following information:
The result of the mobile app testing consists of the following four sections:
Video
In the video section, you will find the screen recording of the test for every respondent; use the dropdown at the top of the screen to select a respondent.
If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.
You can adjust parameters like blur, radius, and shadow as per your preference.
Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your app.
Area of Interest (AOI): AOI is an extension of the eye-tracking metrics that lets you analyse the performance of particular elements on the screen. Once you select the AOI option, draw a box over the element; a pop-up will then appear on the screen for time selection. Select the time frame, and insights will appear in a few seconds.
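Under the hood, an AOI analysis amounts to filtering gaze samples by the chosen time frame and checking which fall inside the drawn box. A simplified sketch of that idea; the data format and function are illustrative assumptions, not Qatalyst's API:

```python
def aoi_fixation_share(gaze_points, box, t_start, t_end):
    """Fraction of gaze samples in [t_start, t_end] that fall inside the AOI box.
    gaze_points: list of (t, x, y); box: (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    window = [(x, y) for t, x, y in gaze_points if t_start <= t <= t_end]
    if not window:
        return 0.0
    inside = sum(1 for x, y in window
                 if x_min <= x <= x_max and y_min <= y <= y_max)
    return inside / len(window)

gaze = [(0.0, 10, 10), (0.5, 120, 80), (1.0, 130, 90), (1.5, 400, 300)]
# Of the three samples in the 0-1s window, two fall inside the box:
# aoi_fixation_share(gaze, (100, 50, 200, 150), 0.0, 1.0) -> 2/3
```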
Transcript
The next section is Transcript. If the recording has audio, you will find the auto-generated transcript for it, where you can create tags and highlight the important parts.
To create a tag, select the text, and a prompt will appear where you can give the highlight a title, known as a "Tag".
Highlight
All the highlights created on the transcript will appear in this section. You can also play the specific video portion.
Notes
This section displays the notes created on the recorded video.
How to create notes?
User Path Visualizer
User Path Visualizer provides an insightful representation of user journeys via the Sankey chart. This visualizer presents the paths taken by users through the app, offering a comprehensive view of the navigation patterns.
Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.
User Journey Tree
Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.
For each journey, the information is conveyed by the connecting lines and the name of each page in the tree.
The following information is shown:
Performance Breakdown
This chart showcases the comprehensive performance analysis of each page the respondents have navigated. It presents valuable insights such as the average time spent by respondents on each page, and the drop-off rate.
The Video Screener Block within the Qatalyst app serves as a dedicated tool for testers to record and submit videos as part of their testing process. This block allows users to integrate video-based insights into their tests effortlessly. Whether it's capturing user interactions, feedback, or suggestions, this feature adds a rich layer to the testing process, providing a holistic understanding of user experiences.
How to Utilize the Video Screener Block
Utilizing the Video Screener Block in the Qatalyst app is straightforward. Testers can easily add this block to their testing sequences, prompting users to record and submit videos based on specified criteria. The intuitive interface ensures a user-friendly experience, allowing for easy navigation and management of video submissions.
Here are the steps for adding a video screener block to the Qatalyst test:
Steps
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Video Response Block".
Step 3: Once you have added the block, you can add the title of the task and the description. In the property section, you can define the following:
Technology
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Result
Users can access a concise summary of the Video Screener block, providing an overview of the responses within. Users can view the overall summary, view individual testers' responses and seamlessly navigate to a specific tester's view with just a click. We've also added transcripts and analytics, along with the ability to create and manage highlights.
In the summary section on the right-hand side of the page, you will find the following information:
In the video response summary dashboard, you will get the following information:
Below on the screen, you will find the video responses submitted by the users; you can open and expand each response to view its detailed analytics. Here, you will find the following insights:
You can use the video response block as a screener question and add the testers directly to your panel from the video screening question. In this article, we will guide you on the process of adding testers to the native panel from the video screener block.
Step 1: After conducting the video screening test, navigate to the results section where you'll find all the submitted responses neatly organized.
Step 2: Hover over the testers' video thumbnails, and a convenient checkbox will appear. Use this checkbox to select the testers you want to add to your panel.
Step 3: Click on the "export tester" button located in the bottom bar. A pop-up form will appear, providing options to add testers to the default panel, create a new tag for them, or add them to an existing tag.
Think of tagging as a super helpful tool for keeping things organized in your native panel and while sharing the test. When you tag respondents' videos, you're basically creating a neat and easy-to-search system.
Picture this: you've done lots of tests, and you've got a bunch of different testers. When you tag them based on things like their traits, likes, or other important attributes, it's like making distinct groups of testers. This makes it a breeze to find and group specific testers quickly.
Step 4: Once you've chosen the desired videos, click the "add" button, and voila! The selected testers have now been successfully added to your panel.
Step 5: Your newly added testers will be seamlessly integrated into your native panel. When sharing a test with the panel, you'll find the added testers listed, making it effortless to include them in your testing initiatives.
Ensuring transparency and user consent is paramount in any testing process. In Qatalyst, you have the ability to integrate a Consent Block into your studies seamlessly. This block allows you to add titles and descriptions, or upload files as consent materials. During the test, testers can conveniently access and review the contents of the Consent Block, affirming their understanding and agreement with the terms and conditions through a simple checkbox. This ensures that testers are fully informed and compliant, contributing to an ethical and transparent testing environment.
Step 1: Log in to your Qatalyst Account
Upon logging into your Qatalyst account, you will be directed to the dashboard, where you can manage and create studies.
Step 2: Create a New Study
Click the "Create Study" button on the dashboard to initiate a new study. Choose to start from scratch or use an existing template to streamline the process.
Step 3: Add a Consent Block
Once in the study creation interface, click on the "Add New Block" button. From the list of block options, select "Consent Block" to add this feature to your study.
Step 4: Customize the Consent Block
In the Consent Block, you have the flexibility to add a title and description. Alternatively, you can upload a PDF file containing your consent materials for thorough documentation.
Preview of text and PDF consent:
As shown above, the Consent Block provides a preview of both text and PDF-based consent materials. This ensures that your testers have a clear understanding of the terms and conditions before proceeding with the test.
Step 5: Publish your study
Once you've finished creating your study by adding other blocks, you can go ahead and publish it.
Test Execution
After the welcome block, the consent block appears, and respondents will be prompted to either accept or reject the terms and conditions. If they choose to agree, the test will proceed. If they decline, the study will conclude for that tester, ensuring a respectful and consensual testing experience.
A first-click test is a usability research method used to evaluate how easy it is for users to complete a specific task on a website, app, or any interface with a visual component. It essentially gauges how intuitive the design is by analyzing a user's initial click towards achieving a goal.
To create a first-click test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "First Click Test".
Properties
Technology
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
In the summary section, you will find the following information:
Based on the technology selected, you will find the following metrics:
Eye-tracking is the process of measuring the point of the human gaze (where one is looking) on the screen.
Qatalyst Eye Tracking uses the standard webcam embedded in a laptop, desktop, or mobile device to measure eye positions and movements. The webcam identifies the position of both eyes and records eye movement as the viewer looks at a stimulus presented in front of them on the screen.
Insights from Eye Tracking
For the UX blocks, i.e. Prototype Testing, 5-second Testing, A/B Testing and Preference Testing, you will find the heatmap for the point of gaze. It uses different colours to show which areas were looked at the most by the respondents. This visual representation helps us understand where their attention was focused, making it easier to make informed decisions.
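A gaze heatmap is, at its simplest, a binning of gaze points into screen regions, where regions with more samples render "hotter". A minimal sketch of that aggregation; the grid size and data are illustrative assumptions, and production heatmaps additionally apply smoothing such as the blur and radius parameters mentioned elsewhere in this article:

```python
from collections import Counter

def gaze_heatmap(points, cell=50):
    """Bin gaze points (x, y) into a coarse grid of `cell`-pixel squares;
    cells with more samples correspond to hotter heatmap regions."""
    grid = Counter()
    for x, y in points:
        grid[(x // cell, y // cell)] += 1
    return grid

points = [(10, 10), (20, 15), (30, 40), (260, 310)]
hottest, hits = max(gaze_heatmap(points).items(), key=lambda kv: kv[1])
# The top-left cell (0, 0) attracted 3 of the 4 gaze samples.
```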
Properties of Heatmap
Experience Eye Tracking
To experience how Eye Tracking works, click here: https://eye.affectlab.io
Read the instructions and start Eye Tracking on your laptop or desktop. Eye Tracking begins with a calibration, after which the system will identify your point of gaze on prompted pictures in real time.
Please enable the camera while accessing the application. Study results are made available to users in Affect Lab's Insights Dashboards; to view the results and insights available for eye-tracking studies, please see the Eye Tracking Insights article.
Eye trackers are used in marketing, as input devices for human-computer interaction, in product design, and in many other areas.
Mouse click tracking is a technique used to capture and analyze user interactions with a digital interface, specifically focusing on the clicks made by the user using their mouse cursor. It involves tracking the position of mouse clicks and recording this data for analysis and insights.
Insights from Mouse Tracking
A Mouse click-tracking heatmap is a visual representation that showcases the distribution of user clicks on a design or prototype. It provides users with a comprehensive overview of respondents' engagement by highlighting areas that attract the most attention and receive the highest number of clicks. This information can reveal valuable insights into user preferences, pain points, and overall usability, aiding in creating more intuitive and user-friendly interfaces.
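One common way to read a click heatmap is to relate clicks to named interface regions and see which attracted the most attention. A sketch of that mapping; the region names and coordinates below are hypothetical:

```python
def clicks_per_region(clicks, regions):
    """Assign each click (x, y) to the first named region containing it.
    regions: {name: (x_min, y_min, x_max, y_max)}; unmatched clicks -> 'other'."""
    counts = {name: 0 for name in regions}
    counts["other"] = 0
    for x, y in clicks:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break
        else:
            counts["other"] += 1
    return counts

regions = {"cta_button": (100, 500, 300, 560), "nav_bar": (0, 0, 800, 60)}
clicks = [(150, 530), (200, 540), (400, 30), (700, 700)]
# -> {"cta_button": 2, "nav_bar": 1, "other": 1}
```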
For the prototype, you will get the following insights:
Facial coding is the process of interpreting human emotions through facial expressions. Facial expressions are captured using a web camera and decoded into their respective emotions.
Qatalyst helps to capture the emotions of any respondent when they are exposed to any design or prototype. Their expressions are captured using a web camera. Facial movements, such as changes in the position of eyebrows, jawline, mouth, cheeks, etc., are identified. The system can track even minute movements of facial muscles and give data about emotions such as happiness, sadness, surprise, anger, etc.
Insights From Facial Coding
By analyzing facial expressions captured during user interactions, facial coding systems can detect and quantify various emotional metrics, allowing researchers to understand users' emotional states and their impact on design experiences. Let's explore the specific metrics commonly derived from facial coding:
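A simplified way to picture this aggregation: per-frame emotion scores are grouped into positive and negative families and summed over the session. The grouping and data below are illustrative assumptions; real facial coding systems use considerably richer models:

```python
def emotion_summary(frames):
    """frames: per-frame emotion scores, e.g. {"happiness": 0.8, "anger": 0.1}.
    Aggregates scores into positive and negative shares of total emotion."""
    positive = {"happiness", "surprise"}
    negative = {"sadness", "anger", "disgust", "fear"}
    pos = neg = 0.0
    for frame in frames:
        pos += sum(v for k, v in frame.items() if k in positive)
        neg += sum(v for k, v in frame.items() if k in negative)
    total = pos + neg
    return {"positive": pos / total, "negative": neg / total} if total else {}

frames = [{"happiness": 0.6, "anger": 0.2}, {"happiness": 0.4, "sadness": 0.3}]
summary = emotion_summary(frames)
# Positive emotions account for 1.0 of 1.5 total, roughly a 67% share.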
Experience Facial Coding
To experience how Facial Recognition works, click here: https://facial.affectlab.io/.
Once you are on the website, start Facial Coding by playing the media on the webpage. Ensure your face is within the outline provided next to the media.
Please enable your web camera while accessing the application.
In today's digital age, where attention spans are decreasing and users are quick to judge a website or app, it's essential to capture users' attention and engage them quickly. This is where the 5-second test comes into play. In this article, we'll discuss what the 5-second test is and how it's used in UX research.
The 5-second test is a type of UX research that involves showing users a screenshot or a design for 5 seconds and then asking them questions about what they remember or what they think the design is about. The idea is to simulate a user's first impression of a website or app and capture their immediate reactions.
The 5-second test typically involves the following steps:
The 5-second test is often used in the early stages of design or redesign to capture users' initial reactions and make quick improvements. It's a valuable tool for testing new designs, landing pages, and marketing campaigns and can be used to identify potential issues before launching a website or app.
Benefits of the 5-second test
The 5-second test has many benefits, including:
Best Practices
Use cases
Qatalyst offers a test block feature that allows users to conduct 5-second testing, simulating a user's first impression of a website or app and capturing their immediate reactions.
To create a 5-second test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "5-second test".
Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
Technology
To select the technologies, click on the boxes.
You can also select more than one tracking technology at once.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
In the summary section, you will find the following information:
Based on the technology selected, you will find the following metrics:
Example
Here is an example of how you can use Qatalyst for 5-second testing:
Suppose you are a designer working on a new landing page for a website. You want to use 5-second testing to get feedback on the visual appeal and memorability of the landing page design.
You decide to run a five-second test and ask participants whether they can understand what the company does by looking at the landing page, and whether the message speaks clearly to the audience.
Step 1: Upload the Image
After determining the focus of your test, you can proceed to configure your test within Qatalyst. This can be done by uploading an image of the specific screen you wish to test.
Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.
Step 2: Create Questions around the Test
Now, add questions to understand users' comprehension of the landing page and gauge their overall perception of the website. For this, use the survey blocks. Keep the questions concise and focused to gather quick, instinctive responses based on the brief 5-second exposure.
Step 3: Publish the test and share
Now that your test is ready, it’s time to share the test with the participants.
Step 4: Analyze the Result
After all participants have completed the testing process, it is time to delve into the analytics and examine their responses to assess the success of your test.
In Qatalyst, you will see separate results for the question blocks and the research blocks you added. If you have enabled tracking technologies, you will get the corresponding metrics.
Five-second tests are a quick and effective exercise to measure the clarity of your design and how well it communicates a message, which can later help you improve the user experience of your design.
A/B testing is a popular method used in UX research to evaluate the effectiveness of different design options. In this article, we will explore the importance of A/B testing in UX research, how to conduct it, use cases and some best practices to keep in mind.
A/B testing is a technique used to compare two versions of a design to determine which one performs better. In UX research, this technique is used to compare two different designs or variations of the same design to understand which one performs better in terms of user behaviour, engagement, and conversion.
A/B testing involves creating two different versions of a design element, such as a button or a page layout and showing it to the users. User behaviour, such as clicks, eye gaze, or conversions, is then measured and compared between the two versions to determine which one performs better.
For example, let's say a company wants to test two different versions of the website to see which one performs better. The first version of the website has a dark colour scheme, while the second version has a light colour scheme. The company decides to run an A/B test to see which version of the website has a higher conversion rate.
After running the test, the company analyzes the results and finds that the second version of the website has a higher conversion rate. The company implements the second version and sees an increase in sales.
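A quick way to check whether such a conversion-rate difference is meaningful rather than noise is a two-proportion z-test. The numbers below are made up for illustration, and Qatalyst's analytics may report significance differently.

```python
# Sketch: comparing conversion rates of two variants with a
# two-proportion z-test. Visitor and conversion counts are illustrative.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2000, conv_b=165, n_b=2000)
# A p-value below 0.05 would suggest variant B's lift is unlikely
# to be due to chance alone.
```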
A/B testing can be conducted at any stage of website design, but the most effective time to conduct A/B testing is during the design and development phase. This is because A/B testing can help you identify the most effective design elements, user interface, and user experience, which can save time and resources in the long run.
When conducting A/B testing during website designing, it's important to test different variations of your website design, such as layout, colour scheme, font, and images. This can help you identify the most effective design elements that resonate with your audience.
It's also important to conduct A/B testing on different devices, such as desktops, laptops, tablets, and mobile phones, as user behaviour can vary significantly depending on the device. By testing on different devices, you can ensure that your website is optimized for all types of users.
Furthermore, it's essential to conduct A/B testing on different segments of your audience to ensure that your website design is effective for all user groups. This can include testing different versions of your website on different demographics, such as age, gender, location, and interests.
Best Practices
Here are some best practices to keep in mind when conducting A/B testing:
Use Cases
Here are some use cases for A/B testing:
In all of these cases, A/B testing allows businesses to make data-driven decisions about their marketing and product strategies. By testing different variations of design elements and features, they can identify the most effective approach and improve their overall performance.
In Qatalyst, you can conduct A/B testing on images to determine which one users prefer. Additionally, we offer you the ability to integrate various technologies, such as mouse tracking, facial coding, and eye tracking, to gather additional data and insights about user behaviour and preferences.
To create an A/B test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "A/B test".
Step 3: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
Technology
To select the technologies, click on the boxes.
You can also select more than one tracking technology at once.
Once the respondents have taken the test, you will be able to see the analytics in the result section.
On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.
In the summary section, you will find the following information:
On the next screen, based on the technology selected, you will find the following metrics:
Example
Suppose you are an e-commerce business looking to optimize your product page layout for better conversion rates. Specifically, you want to compare two different variations of the "Add to Cart" button to determine which design yields higher user engagement and click-through rates.
Step 1: Set Up the Test
In Qatalyst, set up the A/B test by uploading the two versions of the product page, each with a different design for the "Add to Cart" button. Ensure that only this specific element is changed while keeping the rest of the page consistent. This will help isolate the impact of the button design on user behaviour.
Qatalyst provides you with the ability to enable technologies like eye tracking, facial coding, and mouse tracking. These technologies can be used to collect valuable data about how users interact with your website or design.
Step 2: Create Questions around the Test
Now, add questions based on the information you want to gather from respondents. Consider using open-ended questions to gather qualitative feedback that can provide deeper insights.
Step 3: Publish the test and share
Now that your test is ready, it’s time to share the test with the participants.
Step 4: Analyze the Result
After all participants have completed the testing process, it is time to delve into the analytics and examine their responses to assess the success of your test.
In Qatalyst, you will see separate results for the question blocks and the research blocks you added. If you have enabled tracking technologies, you will get the corresponding metrics.
A/B testing with Qatalyst empowers you to make data-driven decisions about design changes, enabling continuous optimization and improvement of your product pages to maximize conversions and enhance user experience.
In UX research, it is crucial to understand the preferences of your users. This is where preference testing comes in. Preference testing is a technique that allows you to test multiple design options to determine which one is preferred by users. In this article, we will discuss preference testing in UX research, how it works, and its benefits.
Preference testing is a type of research that helps businesses understand what their customers like and prefer. It involves showing different design variants to people and asking them which one they like the most. By doing this, businesses can learn their customers' preferences and make decisions about how to improve their product to better meet their customers' needs.
Benefits of preference testing
Preference testing has many benefits in UX research, some of which include:
To conduct preference testing, you first need to identify the design elements that you want to test. These could include anything from different colour schemes to variations in layout, content, or navigation. Once you have identified the design elements, you can create multiple variations of each element and then present them to users in randomized order.
Participants in the study are typically shown each variation for a few seconds and then asked to choose which one they prefer. This process is repeated for each design element being tested. Once all the data is collected, you can analyze the results to determine which design elements are most preferred by your target audience.
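Analyzing the collected choices often starts with a simple tally: each variant's share of total votes. The response format below is an assumption for illustration.

```python
# Sketch: tallying preference-test choices into a share per variant.
from collections import Counter

def preference_share(choices):
    """Return each variant's share of total votes, as a percentage."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {variant: 100 * n / total for variant, n in counts.items()}

choices = ["A", "B", "A", "A", "B", "A", "A", "B"]
shares = preference_share(choices)
# shares == {"A": 62.5, "B": 37.5}
```

A lopsided share points to a clear winner; near-even shares suggest the variants are interchangeable to users or that a larger sample is needed.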
Well, ideally, you should conduct preference testing whenever you're trying to improve the user experience of a website or application. More specifically, preference testing can be particularly useful when you're trying to make decisions about design, content, navigation, or user flows.
For example, let's say you're designing a new website, and you're trying to decide which colour scheme to use. You could conduct a preference test to see which colour scheme is more appealing to your target audience. Or, let's say you're redesigning your e-commerce site, and you're trying to decide where to place the "add to cart" button. You could conduct a preference test to see which placement is more intuitive and leads to more conversions.
In short, preference testing can be a valuable tool whenever you're trying to make decisions about the user experience of a website or application. It allows you to get feedback from users and make data-driven decisions that can improve the overall user experience.
Best Practices
Use Cases
Qatalyst offers a test block feature that allows users to conduct preference testing on various elements of the product. Users can add different versions of an element, such as two different designs, and ask users which one they prefer. This data can be used to inform product development decisions and optimize the product's design and features.
Properties
Technology
To select the technologies, click on the boxes.
You can also select more than one tracking technology at once.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
On the first dashboard of the result, you will be presented with valuable quantitative data showcasing the percentage of respondents who have chosen each respective image.
In the summary section, you will find the following information:
On the next screen, based on the technology selected, you will find the following metrics:
In UX research, it is important to test your prototype before you start building your product. In this article, we will explore the importance of prototype testing in UX research, how to conduct it, use cases, and some best practices to keep in mind.
A prototype is an early version or a design mock-up of a product or feature that is used to test its design and functionality before it is produced or released. It is a simplified representation of the final product created to illustrate key features and identify design flaws.
Prototype testing is a type of testing that involves evaluating a preliminary version of a product to identify design flaws and gather feedback from users or stakeholders. The goal of prototype testing is to improve the product's design, functionality, and user experience before it is released to the market. Prototype testing helps users refine their ideas and concepts before investing time and resources in the final product, saving time and money and ensuring the product meets the needs and expectations of users.
The process of prototype testing typically involves the following steps:
Prototype testing can help ensure that the final product meets the needs and expectations of users and is free of design flaws or usability issues. It can save time and resources by identifying and addressing design flaws early in the development process, resulting in a more successful product launch.
Prototype testing should be conducted during the product development process, ideally after a preliminary version of the product has been created. The timing of prototype testing will depend on the specific product being developed and the stage of the development process.
In general, prototype testing should be conducted when:
Overall, prototype testing should be conducted early and often during the product development process to ensure that the final product is user-friendly, effective, and meets the needs of its intended audience.
Best Practices
Here are some best practices to consider when conducting prototype testing:
Qatalyst offers a test block feature that allows users to conduct prototype testing. It is a type of testing that involves evaluating a preliminary version of a product to identify design flaws and gather feedback from users or stakeholders. You can upload a prototype of your website or mobile app, define the flow of the design and test it on respondents and gather responses.
Prerequisite for prototype link
Steps for adding prototypes:
Journey Paths
Defined Path: If you have pre-determined navigation paths for your prototype, using a defined path allows you to assess which path is most convenient or preferred by users. This helps you understand which specific path users tend to choose among the available options.
Exploratory Path: Choose an exploratory path when you want to test whether the respondents are able to navigate between the screens and are able to complete the given task and gather information about users' natural behaviour and preferences. This approach encourages users to freely explore the prototype and interact with it based on their own instincts and preferences. It can reveal unexpected insights and usage patterns that may not have been accounted for in predefined paths.
Properties
Technology
To select the technologies, click on the boxes.
You can also select more than one tracking technology at once.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
1. Blocks Summary
In the summary section, you will find the following information:
2. Task Summary
In the dashboard of the result, you will find the summary of the test and the following information:
Overall Usability Score: This score represents the overall performance of your prototype. It is calculated by harnessing various metrics such as success rate, alternate success, average time, bounce rate, and misclick rate.
Overall Usability Score = Direct Success Rate + (Indirect Success Rate / 2) - avg(Misclick %) - avg(Duration %)
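The formula above can be read as a small function. How Qatalyst normalizes each input (success rates and the averaged misclick and duration percentages) is an assumption here, and the example values are illustrative only.

```python
# Sketch of the Overall Usability Score formula as stated above.
# Input normalization and example values are illustrative assumptions.

def overall_usability_score(direct_success, indirect_success,
                            avg_misclick_pct, avg_duration_pct):
    return (direct_success
            + indirect_success / 2
            - avg_misclick_pct
            - avg_duration_pct)

score = overall_usability_score(direct_success=60, indirect_success=20,
                                avg_misclick_pct=5, avg_duration_pct=10)
# 60 + 20/2 - 5 - 10 == 55
```

Note that indirect (alternate-path) successes count for half as much as direct ones, while misclicks and excess duration subtract from the score.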
3. User Journey Tree
Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.
For the journeys, you will find the information by the coloured lines in the tree.
In the defined journey, the following information is shown:
For the exploratory journey, there is no alternate path. The journey can be either a success or a failure.
Success: When the respondents can reach the goal screen.
Failure: When respondents do not reach the goal screen.
Insights from the User Journey
4. Graph metrics
The performance metrics provide a clear picture of the average time spent on each page in the prototype. This information is presented alongside the total time taken to complete the task and the number of respondents who have visited each page. By mapping these metrics together, we gain insights into how users interact with each page and how it contributes to the overall task completion.
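As a rough sketch of how average time per page could be derived from raw visit logs (the log format of (respondent, page, seconds) tuples is an assumption):

```python
# Sketch: computing average time spent per page from page-visit logs.
# The log record shape is an illustrative assumption.

def average_time_per_page(visits):
    totals, counts = {}, {}
    for _respondent, page, seconds in visits:
        totals[page] = totals.get(page, 0.0) + seconds
        counts[page] = counts.get(page, 0) + 1
    return {page: totals[page] / counts[page] for page in totals}

visits = [
    ("r1", "Home", 4.0), ("r1", "Checkout", 12.0),
    ("r2", "Home", 6.0), ("r2", "Checkout", 8.0),
]
averages = average_time_per_page(visits)
# Home averages 5.0 s; Checkout averages 10.0 s, flagging Checkout as
# the page where respondents spend the most time.
```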
Insights from Performance metrics
5. Performance Breakdown
This chart showcases the comprehensive performance analysis of each page within the prototype. It presents valuable insights such as the average time spent by respondents on each page, the misclick rate, and the drop-off rate.
By harnessing these metrics, we derive a usability score for every page, offering users a clear understanding of how each page performed so that they can focus on areas that require improvement.
Usability Score = MAX(0, 100 - Drop Off - (Misclick Rate * Misclick Weight) - MIN(10, MAX(0, Average Duration in sec - 5) / 2))
The Misclick weight equals 0.5 points for every misclick.
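The page-level formula, with its 0.5-point misclick weight and capped duration penalty, can be expressed as a function. The input values below are illustrative.

```python
# Sketch of the per-page Usability Score formula stated above.
# Example inputs are illustrative assumptions.

def page_usability_score(drop_off, misclick_rate, avg_duration_sec,
                         misclick_weight=0.5):
    # Duration penalty: nothing for the first 5 s, then 0.5 pt/s, capped at 10
    duration_penalty = min(10, max(0, avg_duration_sec - 5) / 2)
    return max(0, 100 - drop_off
               - misclick_rate * misclick_weight
               - duration_penalty)

score = page_usability_score(drop_off=10, misclick_rate=8, avg_duration_sec=13)
# 100 - 10 - 4 - 4 == 82
```

The outer MAX clamps the score at zero, so a badly performing page cannot go negative.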
Insights from Performance Breakdown
A page with a usability score below 80 calls for attention. Researchers can check eye tracking, mouse tracking, and facial coding data to figure out whether the behaviour is expected or an anomaly.
6. Emotion AI Metrics
When you click on any page using the performance metrics, you will be seamlessly transported to the detailed Metrics page, where you can delve into insights gathered from eye tracking, facial coding, and mouse clicks.
Here, you will discover information such as the average time spent on the page, the number of respondents who have visited the page, and intricate details regarding the misclick rate.
In the Analytics section, you'll have access to a wealth of metrics, including:
For example, an AOI could be used to track the time that users spend looking at a call to action button, or the number of times they click on a link. This information can be used to improve the usability of the website or app by making sure that the most important elements are easy to find and interact with.
By exploring these meticulously curated metrics, you can gain a comprehensive understanding of user engagement and behaviour, empowering you to make data-driven decisions to enhance your project's performance and user experience.
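One such metric, AOI dwell time, can be sketched as follows. The gaze-sample format and the 20 Hz sampling rate are assumptions for illustration, not how Qatalyst's eye tracker reports data.

```python
# Sketch: computing dwell time inside an Area of Interest (AOI) from
# gaze samples. Sample format and sampling rate are assumptions.

def aoi_dwell_time(samples, aoi, sample_interval=0.05):
    """Total seconds of gaze samples falling inside the AOI rectangle."""
    x0, y0, x1, y1 = aoi
    inside = sum(1 for x, y in samples if x0 <= x <= x1 and y0 <= y <= y1)
    return inside * sample_interval

samples = [(110, 210), (115, 215), (400, 50), (120, 220)]
dwell = aoi_dwell_time(samples, aoi=(100, 200, 200, 300))
# 3 of 4 samples fall inside the AOI -> 0.15 s at 20 Hz sampling
```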
7. Screen Recording
Under this section, you will find the screen recordings of respondents' sessions while taking the test. You can use the top dropdown to select the testers.
Along with the video recording, you will get the following functionality:
Highlight Creation
Card sorting is a valuable user research technique used to understand how individuals organize information mentally. By leveraging card sorting, analysts can gain valuable insights into how users perceive relationships between concepts and how they expect information to be organized. These insights, in turn, inform the design and structure of websites, applications, and other information systems, leading to enhanced usability and an improved user experience.
Card sorting can be conducted using physical cards, where participants physically manipulate and group the cards, or it can be done digitally using online platforms like Qatalyst. The technique allows researchers to gain insights into users' mental models, understand their organizational preferences, and inform the design and structure of information architecture, navigation systems, menus, and labelling within a product or website.
Card sorting is a valuable technique in UX research for several reasons:
Types of Card Sorting
There are three main types of card sorting:
How to conduct Card Sorting?
1. Choose the correct type of card sorting. There are three main types of card sorting; choose the method that best fits your research goals.
Open card sorting: When you want to understand how users naturally group information.
Closed card sorting: When you already have a good idea of what your categories should be.
Hybrid card sorting: When you want to get feedback on both your initial ideas and how users naturally group information.
2. Prepare the cards. The cards should be clear and concise, and they should represent the information that you want users to sort. Use index cards, sticky notes, or a digital card sorting tool like Qatalyst.
3. Recruit participants. You should recruit participants who are representative of your target audience.
4. Conduct the card sort. You can conduct the card sorting in person or online. If you conduct the card sorting in person, you must provide a quiet space and a comfortable place for participants to work. If you are conducting the card sorting online, you will need to use a digital card sorting tool.
5. Analyze the results. Once you have collected the results of the card sort, you will need to analyze them. You can use various methods to analyze the results, such as frequency analysis and category analysis.
6. Use the results to improve your information architecture. Once you have analyzed the results of the card sort, you can use them to improve your information architecture. You can use the results to identify the most essential categories for users, determine the best way to label categories and validate or invalidate initial assumptions about information architecture.
Best Practices
Card sorting is a valuable user research technique used to understand how individuals organize information mentally. By leveraging card sorting, analysts can gain valuable insights into how users perceive relationships between concepts and how they expect information to be organized. These insights, in turn, inform the design and structure of websites, applications, and other information systems, leading to enhanced usability and an improved user experience.
To create a card sort, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Card Sorting" option.
Step 3: Here, you can add the task and add multiple cards and Categories by clicking on the "+" button available. There are multiple properties and options also available to enhance the experience.
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
1. Card Property
a. Image: An Image will also appear on the card along with the text.
b. Hide Title: The card will appear without the text; this option can be enabled if you have an image added to the cards.
c. Randomize: The cards will appear in random order.
2. Card Category
a. Limit cards in category: Using this property, only the given number of cards can be added to a particular category.
b. Randomize: The categories will appear in random order.
Required - Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
Result View
In the result section of a card sort, you will find the quantitative data about the selection made by different respondents.
In the Categories and Cards section, you will find the following two views for the result data:
Card View: It shows each card and the number of categories it was added to, along with the agreement percentage. By clicking on the plus icon, you can see the categories to which the card was added.
How to read this data?
From the first column, users can infer that the "DBZ" card was added to two categories with an agreement percentage of 50%, meaning respondents were evenly split between those two categories for this card.
You can also expand the cards and view the percentage of users who have added the card in a particular category.
Category View
In the category view, the user can view the category names and the number of cards added in that category, along with the agreement matrix.
After expanding the card, users can view the cards added in that category and the percentage of users who have added them.
Agreement Matrix
An agreement matrix is a visual representation of how often users agree that a card belongs in each category in a card sort. It is a table with rows representing the cards and columns representing the categories. Each cell in the table indicates the agreement rate for a card in a category. The agreement rate is the percentage of users who placed the card in that category.
The agreement matrix can be used to identify which categories are most agreed upon by users, as well as which cards are most ambiguous or difficult to categorize. It can also be used to identify clusters of cards that are often grouped together.
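The matrix described above can be computed from raw card-sort results along these lines. The data shape, respondent-to-placement mapping, and example cards are illustrative assumptions.

```python
# Sketch: building an agreement matrix from card-sort results.
# Each result maps a card to the category a respondent placed it in;
# the data shape and example cards are assumptions.

def agreement_matrix(results):
    """Percentage of respondents placing each card in each category."""
    placements = {}  # card -> {category: count}
    for respondent in results:
        for card, category in respondent.items():
            placements.setdefault(card, {}).setdefault(category, 0)
            placements[card][category] += 1
    n = len(results)
    return {card: {cat: 100 * c / n for cat, c in cats.items()}
            for card, cats in placements.items()}

results = [
    {"DBZ": "Anime", "Friends": "Sitcom"},
    {"DBZ": "Anime", "Friends": "Sitcom"},
    {"DBZ": "Cartoon", "Friends": "Sitcom"},
    {"DBZ": "Cartoon", "Friends": "Sitcom"},
]
matrix = agreement_matrix(results)
# "DBZ" splits 50/50 across two categories (an ambiguous card), while
# "Friends" reaches 100% agreement in one category.
```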
Tree testing is a UX research method used to evaluate the findability and effectiveness of a website or app's information architecture. It involves testing the navigational structure of a product without the influence of visual design, navigation aids, or other elements that may distract or bias users.
By conducting tree testing, we aim to address the fundamental question, "Can users find what they are looking for?" This research technique allows us to evaluate the effectiveness of our information architecture and assess whether users can navigate through the content intuitively, locate specific topics, and comprehend the overall structure of our product. It provides valuable insights into the findability and clarity of our content hierarchy, enabling us to refine and optimize the user experience.
In tree testing, participants are presented with a simplified representation of the product's information hierarchy in the form of a text-based tree structure. This structure typically consists of labels representing different sections, categories, or pages of the website or app. The participants are then given specific tasks or scenarios and asked to locate specific information within the tree.
Information architecture (IA) refers to the structural design and organization of information within a system, such as a website, application, or other digital product. It involves arranging and categorizing information in a logical and coherent manner to facilitate effective navigation, retrieval, and understanding by users.
Here are some of the things that tree testing can be used for:
Here are some questions that tree testing can answer:
How to conduct Tree Testing?
Best Practices
Tree testing is a UX research method used to evaluate the findability and effectiveness of a website or app's information architecture. It involves testing the navigational structure of a product without the influence of visual design, navigation aids, or other elements that may distract or bias users.
In tree testing, participants are presented with a simplified representation of the product's information hierarchy in the form of a text-based tree structure. This structure typically consists of labels representing different sections, categories, or pages of the website or app. The participants are then given specific tasks or scenarios and asked to locate specific information within the tree.
To create a tree test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Tree Testing" option.
Step 3: Once you have added the block, design your question and information architecture by simply adding the labels and defining the parent-child relationship in a tree-like structure.
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Property
Required - Taking this test is mandatory; the respondent will not be able to move to another question without taking this test.
Result View
In the result section of the Tree Test, you will find the following two sections:
End Screen: This section will show the label submitted by the users as the answer to the task and the percentage of users who selected that label.
The below screenshot suggests that 4 respondents have taken the test and submitted two labels (Preference Testing, Qualitative Study) as the answer to the task; 75% of the respondents selected Preference Testing and 25% selected Qualitative Study.
Common Path: In this section, you will find the actual path navigated by the respondents, starting from the parent label down to the child label they submitted.
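Following the worked example above (4 respondents, two submitted labels), the percentage shown for each label is simply that label's share of all submissions; a minimal sketch:

```python
from collections import Counter

# Submitted labels from the example above: 3 respondents chose
# "Preference Testing" and 1 chose "Qualitative Study".
submissions = ["Preference Testing"] * 3 + ["Qualitative Study"]

counts = Counter(submissions)
percentages = {label: 100 * n / len(submissions) for label, n in counts.items()}
print(percentages)  # {'Preference Testing': 75.0, 'Qualitative Study': 25.0}
```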
In UX research, live website testing refers to the practice of conducting usability testing or user testing on a live and functioning website. This type of testing is done to gather insights and feedback from users as they interact with the website in its actual environment.
Website testing aims to ensure that the website is user-friendly, efficient, engaging, and aligned with the needs and expectations of its target audience.
The Live Website Testing block enables researchers to create and present a specific task to be performed on a live website to participants. This task is designed to simulate a real user interaction on a website or digital platform, allowing researchers to observe how participants engage with the task in a controlled environment. This provides a focused way to gather insights into user behaviour, decision-making, and preferences during a research session.
To create a Live Website test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block.
From the list of options that appear, select "Live Website Testing" under the Task-based research section.
Step 3: Place the test instructions or scenarios in the top bar and enter the website URL where you want respondents to perform the task in the URL field. Click the "Add Task" icon to include multiple tasks in your testing scenario.
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Properties
Technology
To select the technologies, click on the boxes. You can select more than one tracking technology at a time.
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the recording of the session along with the transcript and other insights.
Note: When a single task is assigned, the insights provided include a video recording, user path visualizer, journey tree, and performance breakdown. For multiple tasks, the insights will feature only the video recording accompanied by supporting metrics.
Single Task Results
1. Summary Section
In the summary section, you will find the following information:
2. Recording and Transcript
In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents.
Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.
If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.
You can adjust the parameters like blur, radius, and shadow as per your preference.
3. User Path Visualizer
User Path Visualizer provides an insightful representation of user journeys via the Sankey chart. This visualizer presents the paths taken by users through the website, offering a comprehensive view of the navigation patterns.
Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.
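Conceptually, a Sankey view like this can be built by counting page-to-page transitions across all recorded session paths. The sketch below uses hypothetical page names and only illustrates the aggregation, not how Qatalyst computes it internally:

```python
from collections import Counter

# Hypothetical recorded paths, one list of pages per respondent.
paths = [
    ["Home", "Pricing", "Checkout"],
    ["Home", "Pricing", "Home"],
    ["Home", "Blog"],
]

# Count each (source page -> destination page) transition.
transitions = Counter(
    (src, dst) for path in paths for src, dst in zip(path, path[1:])
)
print(transitions[("Home", "Pricing")])  # 2
```

Each counted transition would correspond to one band in the Sankey chart, with the count setting the band's width.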
4. Journey Tree
Journey Tree provides a hierarchical representation of user journeys, presenting a structured view of the paths users take as they navigate through the website, along with the dropped-user count at the page level.
5. Performance Breakdown
Average Time: This metric offers a glimpse into user engagement by tracking the average duration visitors spend on each page of the website. It serves as an indicator of user interest, interaction, and satisfaction with the provided content and usability.
Dropoff Rate: The drop-off rate provides a page-by-page analysis of the percentage of users who leave or abandon the website. These points highlight areas where users disengage or encounter difficulties. Analyzing drop-off points helps pinpoint potential issues such as confusing navigation, uninteresting content, or technical problems that prompt users to leave prematurely.
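To make the two metrics concrete, here is a small sketch (with made-up numbers) of how average time and drop-off rate could be derived page by page:

```python
# Hypothetical per-page data: seconds spent by each visitor who reached the page.
page_times = {
    "Home":     [12.0, 8.0, 10.0, 14.0],  # 4 visitors entered
    "Pricing":  [30.0, 25.0],             # 2 visitors continued
    "Checkout": [45.0],                   # 1 visitor continued
}
funnel = ["Home", "Pricing", "Checkout"]

breakdown = {}
for page, next_page in zip(funnel, funnel[1:]):
    entered, continued = len(page_times[page]), len(page_times[next_page])
    breakdown[page] = {
        "avg_time": sum(page_times[page]) / entered,
        "drop_off_pct": 100 * (entered - continued) / entered,
    }
print(breakdown["Home"])  # {'avg_time': 11.0, 'drop_off_pct': 50.0}
```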
Multiple Task Results
1. Summary Section
In the summary section, you will find the following information:
2. Recording and Transcript
In the result view, you will find the session recording for all the respondents; use the left and right view buttons to view the recordings of different respondents.
Along with the recording, if the recording has audio, you will find the auto-generated transcript for each screen, where you can create tags and highlight the important parts.
If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.
You can adjust the parameters like blur, radius, and shadow as per your preference.
Moderated research in UX (User Experience) involves a controlled testing environment where a researcher interacts directly with participants. This method employs a skilled moderator to guide participants through tasks, observe their interactions, and gather qualitative insights. This approach offers an in-depth understanding of user behaviour, preferences, and challenges, enabling researchers to make informed design decisions and enhance the overall user experience. The researcher's active involvement allows for real-time adjustments, targeted questioning, and nuanced observations, making moderated research a valuable tool for evaluation and improvement.
Moderated testing can be conducted in person or remotely. In person, the moderator and participant will be in the same room. Remotely, the moderator and participant will use video conferencing to communicate.
Moderated testing is a more time-consuming and expensive type of usability testing than unmoderated testing. However, it can provide more in-depth feedback that can be used to improve the user experience.
Here are some of the benefits of moderated testing:
Here are some of the drawbacks of moderated testing:
Overall, moderated testing is a valuable tool for usability testing. It can provide valuable insights into the user experience that can be used to improve the product or service.
Here are some of the situations where moderated testing is a good choice:
Complex User Flows and Interactions: If your product involves intricate user flows, complex interactions, or multi-step tasks, moderated testing can help guide participants through these processes and ensure they understand and complete tasks correctly.
In-Depth Understanding: If you aim to gather in-depth qualitative insights into participants' experiences, thoughts, emotions, and motivations, moderated testing allows you to ask follow-up questions, probe deeper, and gain a richer understanding of user behaviour.
Usability Issue Identification: If your primary goal is to identify usability issues, pain points, and obstacles users face while interacting with your product, moderated testing is recommended. Moderators can observe participant struggles in real-time and gather detailed context around the issues.
Customized Probing: When you want to tailor your research approach to each participant's unique responses and behaviours, moderated testing provides the flexibility to delve deeper into areas of interest based on individual participant feedback.
Real-Time Feedback: If you need immediate feedback on design changes, feature iterations, or prototypes, moderated testing can offer instant insights that can be acted upon quickly.
Small Sample Sizes: For studies with a small sample size, moderated testing can provide a more nuanced understanding of individual participant experiences and preferences.
Early Design Iterations: During the early stages of design or development, moderated testing can be valuable. A moderator can quickly adapt to changes and provide real-time feedback, enabling iterative improvements before the product reaches advanced stages.
In the realm of user-centric research, we have introduced a vital block known as the "Session Block." This feature enables researchers to delve deep into user experiences, fostering a holistic comprehension of behaviours and insights.
Understanding the Essence of the Session Block
At its core, the Session Block is a component within Qatalyst that empowers researchers to schedule and conduct insightful sessions with participants. This capability plays a pivotal role in what is known as moderated research—a method that hinges on guiding participants through specific tasks while encouraging them to vocalize their thoughts and actions. The result is an invaluable stream of real-time insights that provide a comprehensive view of user behaviour, opinions, and reactions.
Unveiling the Dynamics of Moderated Research
Moderated research, made possible through the Session Block, stands as a dynamic approach to engaging with participants. It works by asking users to talk about what they're doing and thinking while they use a product or service. This helps us understand how they make decisions, what frustrates them, and when they feel happy about their experience.
As participants navigate through tasks, the moderator assumes a guiding role, ensuring that the user's journey is well-structured and aligns with the study's objectives. Moderators also wield the power of follow-up questions, enabling them to probe deeper into participants' responses and elicit more nuanced insights.
Leveraging Session Blocks for Comprehensive Insights
The Session Block not only empowers researchers to facilitate these insightful interactions but also provides a structured framework for conducting them. Here's how it works:
Scheduling and Set-Up: Researchers can seamlessly schedule sessions by defining crucial details such as the session's name, participant's name and email, moderator's information, language preferences, date and time, and even the option to incorporate facial coding technology if desired.
Real-Time Interaction: During the session, participants engage in tasks while verbally sharing their thought processes. Moderators actively guide the discussion, prompting participants to elaborate on their actions and decisions.
Deeper Exploration: Moderators leverage follow-up questions to delve deeper into participants' viewpoints. This enables them to uncover underlying motivations, preferences, and pain points that might otherwise remain hidden.
Rich Insights: The real-time nature of the interaction, combined with follow-up queries, yields a wealth of qualitative data. These insights provide a nuanced understanding of user behaviours, allowing researchers to make informed decisions and improvements. The following insights can be drawn from the session.
In essence, the Session Block in Qatalyst transforms moderated research into a fluid and structured process. It empowers researchers to not only guide participants through tasks but also to extract profound insights that fuel informed decisions, leading to enhanced user experiences and product refinements. As the bridge between participants and researchers, the Session Block exemplifies Qatalyst's commitment to enabling in-depth, user-focused research in the digital age.
In Qatalyst, you can conduct moderated research by scheduling a meeting online using a session block. Moderated research hinges on the concept of guiding participants through tasks and prompting them to articulate their thoughts aloud as they navigate a product or service. This dynamic approach facilitates a comprehensive understanding of user behaviour as participants verbalize their actions and reactions in real-time. Additionally, moderators have the opportunity to pose follow-up questions, delving deeper into participants' perspectives and extracting valuable insights.
Here are the steps for setting up a session in Qatalyst:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Session Block".
Step 3: After selecting the block, a form will appear on the screen, where you need to fill in the following details:
Step 4: After you have added the details, you can keep adding other blocks if required. Once done, you can publish the study by clicking on the publish button available at the top right corner of the page.
Note that session time follows UTC.
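Because session times follow UTC, it can help to convert your intended local slot before filling in the form. A quick sketch using Python's standard library (the timezone and slot below are hypothetical):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A hypothetical session slot in the moderator's local timezone.
local_slot = datetime(2024, 6, 10, 15, 30, tzinfo=ZoneInfo("Asia/Kolkata"))

# Convert to UTC, the timezone the session form expects.
utc_slot = local_slot.astimezone(timezone.utc)
print(utc_slot.isoformat())  # 2024-06-10T10:00:00+00:00
```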
Step 5: Once you publish the study, you will be directed to the share page for the session joining link, and the attendees of the session will receive a joining link as well, using which they can join the session.
Insights
Once the session is conducted, you can see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.
In the summary section, you will find the following information:
The session result consists of the following four sections:
Video
This section allows you to revisit the session whenever needed, preserving the content and context for your convenience.
If you have enabled facial coding tech, you can see the participant's emotional responses during the session.
Transcript
The next section is Transcript. You will find the auto-generated transcript for it, where you can create tags and highlight the important parts.
The transcript will be generated in the language selected while creating the session.
Additionally, you can translate the transcript into 100+ languages available on the platform.
To create a tag, select the text, and a prompt will appear where you can enter a title for the highlight; this title is the "Tag".
Highlight
All the highlights created on the transcript will appear in this section. You can also play the specific video portion.
Notes
This section displays the notes created on the recorded video.
Live mobile app testing in UX research is the practice of observing and collecting feedback from users as they interact with a mobile app in real time. This approach allows researchers to gather feedback and observe user behaviour, providing invaluable data for optimizing app usability and functionality.
The Live Mobile App Testing block in Qatalyst allows the users to add their mobile application Play Store URL and the task to be performed in the application. By presenting these tasks to participants, researchers observe and analyze how users engage with the app, providing a controlled yet authentic environment for comprehensive user behaviour analysis.
Mobile application testing involves assessing and evaluating the usability, functionality, design, and overall user experience of a mobile application. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.
To create a Live Mobile App test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block.
From the list of options that appear, select "Mobile App Testing" under the Task-based research section.
Step 3: Add the task title and the description. In the URL field, add the Play Store URL of the app on which you want the respondents to perform the task.
How to get the Play Store app URL:
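Play Store app links generally follow the shape `https://play.google.com/store/apps/details?id=<package-name>`, where the `id` query parameter is the app's package name. The snippet below (with a hypothetical package name) shows how the package id can be picked out of such a URL:

```python
from urllib.parse import urlparse, parse_qs

# A Play Store app link; the package name here is illustrative.
url = "https://play.google.com/store/apps/details?id=com.example.myapp"

parsed = urlparse(url)
package = parse_qs(parsed.query).get("id", [None])[0]
assert parsed.netloc == "play.google.com"  # sanity-check the host
print(package)  # com.example.myapp
```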
Properties
Technology
To select the technologies, click on the boxes.
Mobile application testing involves assessing and evaluating a mobile application's usability, functionality, design, and overall user experience. This process aims to understand how users interact with the app, identify potential issues or pain points, and gather insights to enhance the app's usability and user satisfaction.
In this article, we will provide you insights on different metrics you will get for mobile app testing in Qatalyst and their definitions.
Once the respondents have taken the test, you will be able to see the analytics in the result section. In the result view, you will find the session recording, transcript, and other insights.
In the summary section, you will find the following information:
The result of the mobile app testing consists of following 4 sections:
Video
In the video section, you will find the screen recording of the test for every respondent; use the dropdown at the top of the screen to select a respondent.
If you have enabled eye tracking and facial coding in the task, you will get insights for both, and they can be viewed by clicking the icons respectively.
You can adjust parameters like blur, radius, and shadow as per your preference.
Emotion AI Metrics: Dive into the emotional resonance of your content with metrics that categorize user responses as positive and negative, allowing you to gauge the emotional impact of your app.
Area of Interest (AOI): AOI is an extension of the eye-tracking metrics. It allows you to analyse the performance of particular elements on the screen. Once you select the AOI option, draw a box over the element; a pop-up will then appear for time selection. Select the time frame, and insights will appear in a few seconds.
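Conceptually, an AOI insight boils down to keeping only the gaze samples that fall inside the drawn box during the selected time frame. The sketch below uses made-up gaze samples and box coordinates and is only an illustration, not Qatalyst's internal computation:

```python
# Hypothetical gaze samples: (timestamp_seconds, x, y) in screen pixels.
gaze = [(0.5, 100, 80), (1.2, 320, 240), (1.8, 330, 250), (3.0, 700, 90)]

# An AOI box (x0, y0, x1, y1) and a time window, as drawn/selected in the UI.
aoi = (300, 200, 400, 300)
t_start, t_end = 1.0, 2.0

# Keep only samples inside the box during the selected window.
hits = [
    (t, x, y) for t, x, y in gaze
    if t_start <= t <= t_end and aoi[0] <= x <= aoi[2] and aoi[1] <= y <= aoi[3]
]
print(len(hits))  # 2
```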
Transcript
The next section is Transcript. If the recording has audio, you will find the auto-generated transcript for it, where you can create tags and highlight the important parts.
For creating tags, select the text, and a prompt will appear where you can give the title of the highlight, which is " Tag".
Highlight
All the highlights created on the transcript will appear in this section. You can also play the specific video portion.
Notes
This section displays the notes created on the recorded video.
How to create notes?
User Path Visualizer
User Path Visualizer provides an insightful representation of user journeys via the Sankey chart. This visualizer presents the paths taken by users through the website, offering a comprehensive view of the navigation patterns.
Additionally, it encapsulates various essential details such as the time users spent on each page, the number of users who visited specific pages, and the transitions between these pages.
User Journey Tree
Below the summary dashboard of the task, you will find the user journey tree, which displays the path navigated by all the respondents while taking the test.
Each journey is represented by the connecting lines and the name of each page in the tree.
The following information is shown:
Performance Breakdown
This chart showcases the comprehensive performance analysis of each page the respondents have navigated. It presents valuable insights such as the average time spent by respondents on each page, and the drop-off rate.
The Video Screener Block within the Qatalyst app serves as a dedicated tool for testers to record and submit videos as part of their testing process. This block allows users to integrate video-based insights into their tests effortlessly. Whether it's capturing user interactions, feedback, or suggestions, this feature adds a rich layer to the testing process, providing a holistic understanding of user experiences.
How to Utilize the Video Screener Block
Utilizing the Video Screener Block in the Qatalyst app is straightforward. Testers can easily add this block to their testing sequences, prompting users to record and submit videos based on specified criteria. The intuitive interface ensures a user-friendly experience, allowing for easy navigation and management of video submissions.
Here are the steps for adding a video screener block to the Qatalyst test:
Steps
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create New Study" button to initiate the process of creating a new study.
Step 2: Once you're in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "Video Response Block".
Step 3: Once you have added the block, you can add the title of the task and the description. In the property section, you can define the following:
Technology
Step 4: To further enhance your study, continue adding additional survey blocks. Utilize the same process described in Step 2, clicking on the "+" button and selecting different block types to ask a variety of questions related to the test.
Result
Users can access a concise summary of the Video Screener block, providing an overview of the responses within. Users can view the overall summary, view individual testers' responses and seamlessly navigate to a specific tester's view with just a click. We've also added transcripts and analytics, along with the ability to create and manage highlights.
In the summary section on the right-hand side of the page, you will find the following information:
In the video response summary dashboard, you will get the following information:
Below on the screen, you will find the video responses submitted by the users; you can open and expand each one to view the detailed analytics of that particular response. Here, you will find the following insights:
You can use the video response block as a screener question and add the testers directly to your panel from the video screening question. In this article, we will guide you on the process of adding testers to the native panel from the video screener block.
Step 1: After conducting the video screening test, navigate to the results section where you'll find all the submitted responses neatly organized.
Step 2: Hover over the testers' video thumbnails, and a convenient checkbox will appear. Use this checkbox to select the testers you want to add to your panel.
Step 3: Click on the "export tester" button located in the bottom bar. A pop-up form will appear, providing options to add testers to the default panel, create a new tag for them, or add them to an existing tag.
Think of tagging as a super helpful tool for keeping things organized in your native panel and while sharing the test. When you tag respondents' videos, you're basically creating a neat and easy-to-search system.
Picture this: you've done lots of tests, and you've got a bunch of different testers. When you tag them based on things like their traits, likes, or other important attributes, you're effectively creating distinct groups of testers. This makes it a breeze to find and group specific testers quickly.
Step 4: Once you've chosen the desired videos, click the "add" button, and voila! The selected testers have now been successfully added to your panel.
Step 5: Your newly added testers will be seamlessly integrated into your native panel. When sharing a test with the panel, you'll find the added testers listed, making it effortless to include them in your testing initiatives.
Ensuring transparency and user consent is paramount in any testing process. In Qatalyst, you have the ability to integrate a Consent Block into your studies seamlessly. This block allows you to add titles and descriptions, or upload files as consent materials. During the test, testers can conveniently access and review the contents of the Consent Block, affirming their understanding and agreement with the terms and conditions through a simple checkbox. This ensures that testers are fully informed and compliant, contributing to an ethical and transparent testing environment.
Step 1: Log in to your Qatalyst Account
Upon logging into your Qatalyst account, you will be directed to the dashboard, where you can manage and create studies.
Step 2: Create a New Study
Click the "Create Study" button on the dashboard to initiate a new study. Choose to start from scratch or use an existing template to streamline the process.
Step 3: Add a Consent Block
Once in the study creation interface, click on the "Add New Block" button. From the list of block options, select "Consent Block" to add this feature to your study.
Step 4: Customize the Consent Block
In the Consent Block, you have the flexibility to add a title and description. Alternatively, you can upload a PDF file containing your consent materials for thorough documentation.
Preview of text and PDF consent:
As shown above, the Consent Block provides a preview of both text and PDF-based consent materials. This ensures that your testers have a clear understanding of the terms and conditions before proceeding with the test.
Step 5: Publish your study
Once you've finished creating your study by adding other blocks, you can go ahead and publish it.
Test Execution
After the welcome block, the consent block appears, and respondents will be prompted to either accept or reject the terms and conditions. If they agree, the test will proceed. If they decline, the study will conclude for that tester, ensuring a respectful and consensual testing experience.
A first-click test is a usability research method used to evaluate how easy it is for users to complete a specific task on a website, app, or any interface with a visual component. It essentially gauges how intuitive the design is by analyzing a user's initial click towards achieving a goal.
To create a first-click test, follow these simple steps:
Step 1: Log in to your Qatalyst account, which will direct you to the dashboard. From the dashboard, click on the "Create Study" button to initiate the process of creating a new study.
Step 2: Once in the study creation interface, locate the "+" button and click on it to add a new block. From the list of options that appear, select the "First Click Test".
Properties
Technology
Result View
Once the respondents have taken the test, you will be able to see the analytics in the result section.
In the summary section, you will find the following information:
Based on the technology selected, you will find the following metrics:
Eye-tracking is the process of measuring the point of the human gaze (where one is looking) on the screen.
Qatalyst Eye Tracking uses the standard webcam embedded in a laptop, desktop, or mobile device to measure eye positions and movements. The webcam identifies the position of both eyes and records eye movement as the viewer looks at a stimulus presented in front of them on a laptop, desktop, or mobile screen.
Insights from Eye Tracking
For the UX blocks, i.e. Prototype Testing, 5-second Testing, A/B Testing and Preference Testing, you will find the heatmap for the point of gaze. It uses different colours to show which areas were looked at the most by the respondents. This visual representation helps us understand where their attention was focused, making it easier to make informed decisions.
Properties of Heatmap
Experience Eye Tracking
To experience how Eye Tracking works, click here: https://eye.affectlab.io
Read the instructions and start Eye Tracking on your Laptop or Desktop. Eye Tracking begins with a calibration, post which the system will identify your point of gaze on prompted pictures in real-time.
Please enable the camera while accessing the application.
Study results are made available to users in the Insights Dashboards. To view the results and insights available for eye-tracking studies, please see the Eye Tracking Insights article.
Eye trackers are used in marketing, as input devices for human-computer interaction, in product design, and in many other areas.
Mouse click tracking is a technique used to capture and analyze user interactions with a digital interface, specifically focusing on the clicks made by the user using their mouse cursor. It involves tracking the position of mouse clicks and recording this data for analysis and insights.
Insights from Mouse Tracking
A Mouse click-tracking heatmap is a visual representation that showcases the distribution of user clicks on a design or prototype. It provides users with a comprehensive overview of respondents' engagement by highlighting areas that attract the most attention and receive the highest number of clicks. This information can reveal valuable insights into user preferences, pain points, and overall usability, aiding in creating more intuitive and user-friendly interfaces.
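The aggregation behind such a heatmap can be sketched as binning click coordinates into grid cells and counting clicks per cell (the coordinates and bin size below are made up for illustration):

```python
from collections import Counter

# Hypothetical click coordinates (x, y) in pixels from all respondents.
clicks = [(102, 48), (110, 52), (305, 210), (98, 55), (310, 215)]

CELL = 50  # bin size in pixels
# Map each click to a grid cell and count clicks per cell.
heatmap = Counter((x // CELL, y // CELL) for x, y in clicks)
print(heatmap.most_common(1))  # [((6, 4), 2)]
```

The cells with the highest counts correspond to the "hottest" regions of the rendered heatmap.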
For the prototype, you will get the following insights:
Facial coding is the process of interpreting human emotions through facial expressions. Facial expressions are captured using a web camera and decoded into their respective emotions.
Qatalyst helps to capture the emotions of any respondent when they are exposed to any design or prototype. Their expressions are captured using a web camera. Facial movements, such as changes in the position of eyebrows, jawline, mouth, cheeks, etc., are identified. The system can track even minute movements of facial muscles and give data about emotions such as happiness, sadness, surprise, anger, etc.
Insights From Facial Coding
By analyzing facial expressions captured during user interactions, facial coding systems can detect and quantify various emotional metrics, allowing researchers to understand users' emotional states and their impact on design experiences. Let's explore the specific metrics commonly derived from facial coding:
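As a rough illustration of the positive/negative split such systems report, the sketch below classifies a hypothetical sequence of per-frame emotion labels (the label sets are assumptions, not Qatalyst's exact taxonomy):

```python
# Hypothetical per-frame emotion labels from a facial-coding pass.
frames = ["happy", "neutral", "happy", "sad", "surprise", "happy", "angry"]

# Assumed valence grouping; real systems may categorize differently.
POSITIVE = {"happy", "surprise"}
NEGATIVE = {"sad", "angry", "disgust", "fear"}

pos = sum(f in POSITIVE for f in frames)
neg = sum(f in NEGATIVE for f in frames)
print(f"positive: {100 * pos / len(frames):.0f}%, "
      f"negative: {100 * neg / len(frames):.0f}%")
# positive: 57%, negative: 29%
```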
Experience Facial Coding
To experience how Facial Recognition works, click here: https://facial.affectlab.io/.
Once you are on the website, start Facial Coding by playing the media on the webpage. Ensure your face is within the outline provided next to the media.
Please enable your web camera while accessing the application.