Types of Cognitive Biases to Avoid in User Research

Cognitive biases are among the most insidious problems in user research. Here are the ones you must avoid to conduct unbiased research.

Author

Aishwarya N K

Date

May 3, 2023


As companies increasingly rely on user research to validate product ideas and uncover new opportunities, it is more important than ever to ensure that the data collected is unbiased. Unfortunately, as with all human interactions, cognitive bias can seep into user research and affect the quality of the insights obtained.

Cognitive bias can enter user research from both sides: researchers and respondents. Researchers may design a study under the influence of their own biases, leading to skewed results and conclusions. Respondents, in turn, are influenced by bias in how they interpret and answer the questions asked during a study. Either way, it leads to false or skewed results and must be avoided at all costs.

What are the cognitive biases you should avoid?

Confirmation bias

This is one of the most common cognitive biases and can impact anyone involved in user research, including researchers, designers and stakeholders. When user researchers have preconceived notions of what users want, they tend to look for evidence that supports a claim or belief they already have and reject any information that doesn’t fit in with their own perspective. For example, a designer who thinks users prefer a certain color scheme may only look for feedback that supports their idea, ignoring feedback that suggests otherwise.

Framing bias

This cognitive bias can also impact anyone involved in research and occurs when people make judgements or form opinions based on HOW information is presented. For example, a respondent may answer a question about a website's navigation differently if it's framed as "How easy is it to find what you're looking for?" versus "How difficult is it to navigate the website?". Similarly, a user researcher may frame that same question one way or the other depending on their own assumptions about how easy or difficult the navigation is.

Sunk cost fallacy

The sunk cost fallacy often impacts designers and stakeholders who have invested time and resources into a particular design or feature. It compels them to continue in a certain direction even when they realize it is not working. For example, a team may have spent months working on a feature that is too complex or confusing for users, but they may be hesitant to let it go because of the time and money already invested. This is especially dangerous for UX researchers, who may not know when to let go of an idea even when it is clearly not viable.

Analysis paralysis

Analysis paralysis typically impacts users who are presented with too many options to choose from. This leads them to feel overwhelmed, often hindering their ability to make a choice. For example, let’s say you are conducting usability testing for a new mobile app and the participant is expected to select an option from a drop-down menu. If the drop-down menu is overly complex, with too many options and sub-menus, the participant will spend too much time scrolling and trying to make sense of everything. As a result, they may be unable to complete the task in the designated time.

Read more: How to Handle the Choice Conundrum in OTT with UX Research

The Hawthorne effect

The Hawthorne effect typically impacts test participants and refers to people changing their behavior when they know they are being observed. For example, a user might feel free to express their opinions during an unmoderated test but feel differently during a moderated one. With a moderator present, they might feel self-conscious and worry about being judged for being too candid. This could lead them to say what they think the moderator wants to hear rather than the truth, producing biased results. Moderators can avoid this by creating a relaxed testing environment and building rapport with participants so that they feel free to express their opinions.

Serial position effect

This bias generally affects test participants and refers to the tendency to remember items based on their position in a list. For instance, participants might easily remember the items at the beginning (primacy effect) and end (recency effect) of a list but forget what's in between. This can cause a problem in UX studies when multiple stimuli or features are being tested, as users might not recall all of them accurately enough to give an opinion on them. Researchers can overcome this by randomizing the order of stimuli presented to each user so that the effect of this bias is reduced overall. You can also vary the types of questions you ask, so people take the time to pause and consider each question (and, therefore, each stimulus), reducing the impact of the effect.
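
For a sense of what this randomization can look like in practice, here is a minimal Python sketch. It assumes a hypothetical list of four stimuli and numeric participant IDs; all names are illustrative and not tied to any specific tool.

```python
import random

# Hypothetical stimuli being evaluated in the study
STIMULI = ["onboarding_flow", "search_results", "checkout_page", "settings_menu"]

def presentation_order(participant_id: int, seed: int = 42) -> list:
    """Return a per-participant shuffled copy of the stimuli.

    Seeding with the participant ID keeps each participant's order stable
    across sessions while still varying the order across the panel.
    """
    rng = random.Random(seed + participant_id)
    order = list(STIMULI)
    rng.shuffle(order)
    return order

# Example: each participant sees the same four stimuli in a different order
for pid in range(3):
    print(pid, presentation_order(pid))
```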

Clustering illusion

It’s human to look for meaning where there is none – that’s why we see faces in cars and shapes in clouds. Similarly, the clustering illusion occurs in UX research when researchers group data points together even when nothing actually connects them. For example, if a researcher notices that a few participants who gave similar answers to a question also have something in common, like their income level, they might assume that everyone at that income level feels the same way about the product, which is an incorrect leap. To avoid this, researchers need to look at all the data objectively and resist jumping to conclusions.
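
One way to sanity-check an apparent cluster before acting on it is a simple test of independence. The sketch below is a hypothetical example using SciPy's chi2_contingency on made-up counts of answers by income band; the numbers and labels are purely illustrative.

```python
from scipy.stats import chi2_contingency

# Made-up counts: rows are income bands, columns are "liked" / "disliked" answers
observed = [
    [12, 9],   # lower income band
    [15, 11],  # middle income band
    [8, 10],   # higher income band
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")

# A large p-value means the apparent pattern is consistent with chance,
# so income level and the answer should not be treated as linked.
if p_value >= 0.05:
    print("No evidence of a real association - likely a clustering illusion.")
```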

False consensus bias

This bias stems from the tendency to overestimate how much people agree with a belief or thought you hold. For example, if a UX researcher believes that users prefer minimalistic website designs, they might not even conduct user testing to get feedback on this because they are convinced they are right. The only way to combat this is to test every assumption, no matter how convinced you are that it is common knowledge and that people will agree with you.

Social desirability bias

This bias, as the name suggests, is the tendency of respondents to give answers that will put them in a positive light with other respondents, even if they do not feel the same. For example, if the topic being discussed is controversial, a respondent with an alternative view might not express it – instead, they might agree or express a view that is in the majority, because they want to feel ‘accepted’ by their peers. Doing this can harm your study as it could lead to skewed results. You can avoid this by building trust or using anonymous surveys so that respondents will feel comfortable giving their unfiltered opinions.

Hindsight bias

It is said that hindsight is always 20/20 and that we only get the full picture of an event once it has passed. However, while we might believe afterwards that we KNEW what was going to happen, that is simply not true; what happened was just one of the many ways the situation could have gone. For example, let’s say a researcher sees a user struggling to complete a task. This might lead them to believe that the product is difficult to use and that they should have seen it coming. However, this could lead them to overlook other factors that might have contributed to the user’s difficulty, such as a lack of prior experience with similar products or the wording of the task.

Sampling bias

One of the most important things you can do while conducting user research is to recruit participants who are representative of your target demographic. Having the wrong participants can lead to inaccurate results that do not represent that demographic. For example, let’s say a researcher is recruiting participants for usability testing of a mobile banking app. They recruit only city-based participants and begin testing. What they may have failed to consider is that it’s not just people from the city who will use the app; people from smaller towns and villages will use it too. By inadvertently excluding them from the testing panel, the results will not truly represent the needs and difficulties of the entire demographic. One way to reduce this is to thoroughly define the target demographic before the study so that no one is excluded.
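
To make "defining the target demographic" concrete, a recruitment quota can be written down up front and checked against the panel as it fills up. Below is a small Python sketch under assumed region labels and a 20-person panel; the quota split and recruits are invented for illustration.

```python
from collections import Counter

# Assumed target split, defined before the study (share of the panel by region)
TARGET_QUOTA = {"metro": 0.5, "small_town": 0.3, "rural": 0.2}
PANEL_SIZE = 20

# Hypothetical recruits so far
panel = ["metro"] * 18 + ["small_town"] * 2

def quota_gaps(panel, target_quota, total):
    """Compare recruited counts against the pre-defined quota."""
    counts = Counter(panel)
    gaps = {}
    for segment, share in target_quota.items():
        needed = round(share * total)
        gaps[segment] = needed - counts.get(segment, 0)
    return gaps

# Positive numbers = still to recruit; negative = over-represented segment
print(quota_gaps(panel, TARGET_QUOTA, PANEL_SIZE))
```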

Sponsor bias

Have you ever suddenly liked a brand more because they gave you some goodies? Brands often like to curry favor with customers through gifts or freebies, but doing the same in user research can have unintended negative effects. For example, if you’re conducting a study where you provide your company’s product or service as a freebie, and it has substantial value, participants may avoid giving any negative feedback about the product. This harms your study, as you will not get the unbiased, accurate feedback you need. One way to avoid this bias is to look at standard industry compensation for studies and use that as a benchmark.

Implicit bias

As the name suggests, this bias stems from the inherent beliefs each of us holds and how they can affect our judgments. It can affect both researchers and respondents. For example, if researchers hold a negative belief about a particular demographic group, they might subconsciously recruit fewer participants from it. Similarly, if users have a prior negative association with the brand running the study, it might affect their ability to give neutral, unbiased responses. Recognizing and understanding your inherent biases is the best way to overcome this.

Anchoring bias

This bias occurs when we rely too heavily on one piece of information (usually the first piece we receive on a topic) while making a decision. The problem is that there is no guarantee this information is the most accurate. For example, let’s say a user is asked to complete a task involving a certain feature. They were initially shown a high-fidelity prototype of the feature, but they are now struggling to find the right buttons to navigate. Here, they might assume that any issues they encounter are their own fault rather than a design flaw. Anchoring bias can be reduced by ensuring that users have all the relevant information before they make a decision and by varying the order in which information is presented to them.
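
One common way to vary what participants are anchored on is to counterbalance which version they see first. Here is a small sketch, assuming two hypothetical starting conditions; it simply alternates the assignment across the participant list.

```python
from itertools import cycle

# Hypothetical conditions: which artifact a participant encounters first
CONDITIONS = ["high_fidelity_first", "low_fidelity_first"]

def assign_conditions(participant_ids):
    """Alternate starting conditions so no single first impression anchors everyone."""
    rotation = cycle(CONDITIONS)
    return {pid: next(rotation) for pid in participant_ids}

# Example: half the panel starts with each condition
print(assign_conditions(["p01", "p02", "p03", "p04"]))
```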

How can you avoid cognitive biases?

As a UX researcher, it's essential to be aware of cognitive biases that can impact the quality of your research. While biases can be difficult to control, you can take measures to avoid or reduce them. Here’s how:

Test your concepts and ideas early: One of the most effective ways to minimize cognitive biases is to test your concepts and ideas early. Spending too much time ideating and polishing designs lets your own biases take hold, so it's important to get feedback early on. Early feedback on feasibility keeps you from sinking time and resources into ideas that will not work.

Avoid leading questions: When framing questions for your UX study, avoid leading questions. For example, ask, “How did you feel while executing the task?” instead of “How easy was the task to execute?”. The first question provides a more neutral ground for the respondent to answer, while the second assumes that the task was easy and could influence their response accordingly.

Take detailed notes: Taking notes helps you remain objective by recording all the available data, lessening the impact of your biases. Detailed notes also keep you from falling for memory biases, such as recall bias or hindsight bias, and help you capture data that truly represents your users’ experience.

To conclude

It’s human to have biases – but being aware of them and taking the right measures to curb their effects can help you conduct an unbiased study with accurate results. With Affect UX, you can conduct studies and get unbiased insights, thanks to the Facial Coding and Eye Tracking technologies that help you delve deep into your users’ psyche and understand the what and why behind their subconscious behavior!


Author Bio

Aishwarya tries to be a meticulous writer who dots her i’s and crosses her t’s. She brings the same diligence while curating the best restaurants in Bangalore. When she is not dreaming about her next scuba dive, she can be found evangelizing the Lord of the Rings to everyone in earshot.

Aishwarya N K

Senior Product Marketing Specialist
