Technologies in Decode

Eye Tracking

Eye tracking is the process of measuring the point of human gaze (where someone is looking) on the screen.

Decode CX Eye Tracking uses the standard webcam embedded in your laptop or desktop to measure eye positions and movements. The webcam identifies the position of both eyes and records eye movement as the viewer looks at a stimulus presented on a laptop, desktop, or mobile screen.

The data is processed in real-time, and the results are shown second by second in terms of affective parameters.

Experience Eye Tracking

To experience how Eye Tracking works, click here: Experience Eye Tracking.

Read the instructions and start Eye Tracking on your laptop or desktop. Eye Tracking begins with a calibration, after which the system will identify your point of gaze on prompted pictures in real-time.

Please enable the camera while accessing the application. Insights from Eye Tracking studies are made available to users in Affect Labs Insights Dashboards. To view the results and insights available for Eye Tracking studies, please click on Eye Tracking Insights.

Eye trackers are also used in marketing, as input devices for human-computer interaction, in product design, and in many other areas.

Insights from Eye Tracking

Eye Tracking results provide insight into the point of gaze and time spent discovering objects in the visual content shown to viewers by tracking and automatically producing heat maps, gaze maps, and transparency maps.

We have the following metrics for eye tracking:

  • Area of Interest (AOI): The AOI functionality allows users to define specific areas within their stimuli that need to be measured for noticeability, earned attention, etc. Examples include a brand logo on a pack, pack clusters on a shelf, or a character in a video.
    An AOI has coordinates (x_min, x_max, y_min, y_max) and a time segment (t_start, t_end) associated with it. For each AOI, we calculate the following metrics (a computation sketch follows this list):
    • Exposure Time Duration: The total time spent looking at a particular AOI.
    • Average Time to First Discovery: The time elapsed before a tester first looks at the AOI, averaged across all testers; used to measure noticeability. Desirable: the lower, the better.
    • Percentage of People Who Watched: The number of people who looked at the AOI divided by the total number of testers; used to measure the pack's relevance to respondents. Desirable: the higher, the better.
    • Earned Attention/Held Attention: The number of seconds of continuous viewing held by the AOI; used to measure efficacy. Desirable: the higher, the better.
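
As a rough illustration of how such metrics can be computed, here is a minimal Python sketch. It assumes gaze is sampled once per second as (t, x, y) tuples per tester; the class and function names are hypothetical, not Decode's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AOI:
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    t_start: float  # seconds into the stimulus
    t_end: float

def aoi_metrics(aoi, sessions, dt=1.0):
    """sessions: one list of (t, x, y) gaze samples per tester."""
    def hit_times(samples):
        return sorted(t for t, x, y in samples
                      if aoi.t_start <= t <= aoi.t_end
                      and aoi.x_min <= x <= aoi.x_max
                      and aoi.y_min <= y <= aoi.y_max)

    def longest_run(ts):
        # Longest stretch of consecutive samples = held attention, in seconds.
        best = cur = 1
        for a, b in zip(ts, ts[1:]):
            cur = cur + 1 if b - a <= dt else 1
            best = max(best, cur)
        return best * dt

    exposure, first, held = 0.0, [], []
    for samples in sessions:
        ts = hit_times(samples)
        if ts:
            exposure += len(ts) * dt
            first.append(ts[0] - aoi.t_start)
            held.append(longest_run(ts))
    return {
        "exposure_time": exposure,                          # total time on the AOI
        "avg_time_to_first_discovery":
            sum(first) / len(first) if first else None,     # lower is better
        "pct_watched": 100.0 * len(first) / len(sessions),  # higher is better
        "avg_held_attention":
            sum(held) / len(held) if held else 0.0,         # higher is better
    }
```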

Maps from Eye Tracking

  • Heat Maps: Eye Tracking heat maps use varying color intensities to highlight the areas that respondents viewed the most.
  • Transparency Maps: Like heat maps, transparency maps are plotted from participants' gaze points on top of the stimulus. Instead of varying color intensities, the stimulus becomes transparent where gaze points aggregate: a higher concentration of points results in clearer visibility, segments with fewer points become more transparent, and segments with no gaze points are hidden. A rendering sketch for both maps follows.
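
A minimal rendering sketch, assuming gaze points are already mapped to stimulus pixel coordinates; the smoothing radius and alpha-blending approach are illustrative choices, not Decode's rendering pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_density(points, width, height, sigma=25):
    """Accumulate (x, y) gaze points into a smoothed, normalized density map."""
    grid = np.zeros((height, width))
    for x, y in points:
        if 0 <= int(x) < width and 0 <= int(y) < height:
            grid[int(y), int(x)] += 1.0
    density = gaussian_filter(grid, sigma=sigma)
    peak = density.max()
    return density / peak if peak > 0 else density

def transparency_map(stimulus_rgb, density):
    """Use gaze density as the alpha channel: heavily viewed regions stay
    visible, regions with no gaze points disappear entirely."""
    alpha = (density * 255).astype(np.uint8)
    return np.dstack([stimulus_rgb, alpha])  # RGBA image

# A heat map is the same density mapped through a color scale (e.g. a
# matplotlib colormap) and alpha-blended over the original stimulus.
```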

Facial Coding

Facial coding is the process of interpreting human emotions through facial expressions. Facial expressions are captured using a webcam and decoded into their respective emotions.

Decode helps capture the emotions of any respondent when exposed to media or a product. Facial movements, such as changes in the position of eyebrows, jawline, mouth, and cheeks, are identified. The system can track even minute movements of facial muscles and provide data about emotions such as happiness, sadness, surprise, and anger.

Experience Facial Coding

To experience how Facial Coding works, click here: Experience Facial Coding.

Once you are on the website, start Facial Coding by playing the media on the webpage. Ensure your face is within the outline provided next to the media.

Please enable your webcam while accessing the application.

Insights from Facial Coding

Facial coding results provide insight into viewers’ spontaneous, unfiltered reactions to visual content by recording and analyzing facial expressions in real-time. The data collected yields metrics of the user’s genuine emotions while experiencing or viewing the content, as expressed through facial movements.

The results are made available to users in the Result Dashboards.

You will get the following metrics from facial coding:

  • Emotion AI Metrics: Emotion response refers to the collection and analysis of data on respondents' emotional reactions to various stimuli, such as advertisements, products, or brand experiences. Two composite metrics are derived (a computation sketch follows this list):
    • Positive Emotions: The platform uses a combination of two emotions—Happy and Surprise—to determine whether the user felt happy while watching a particular video.
    • Negative Emotions: The platform uses a combination of three emotions—Anger, Disgust, and Contempt—to determine whether the user felt negative while watching a particular video. Note: Fear and Sadness are not included in negative emotions, as they must be interpreted in the context of the media; they are shown as standalone emotions.
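
The grouping below follows the definitions above; the dictionary keys and the simple per-frame averaging are illustrative assumptions, since the document does not specify how scores are aggregated.

```python
POSITIVE = ("happy", "surprise")
NEGATIVE = ("anger", "disgust", "contempt")  # fear and sadness reported standalone

def composite_emotions(frames):
    """frames: per-frame emotion probabilities, e.g.
    {"happy": 0.41, "surprise": 0.12, "anger": 0.02, ...}"""
    n = len(frames)
    return {
        "positive": sum(sum(f.get(e, 0.0) for e in POSITIVE) for f in frames) / n,
        "negative": sum(sum(f.get(e, 0.0) for e in NEGATIVE) for f in frames) / n,
    }
```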

To provide a holistic view of emotional responses, the platform utilizes emotion metrics that incorporate voice, facial, and text sentiment analytics. By merging these three sources of data, the platform captures and analyzes positive, negative, and neutral emotions exhibited throughout a discussion or media content. This consolidation of data offers a more comprehensive and simplified overview of the overall emotions displayed.
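
A sketch of such a weighted merge. The document confirms the three modalities are combined but not how, so the weights and renormalization here are purely illustrative.

```python
# Hypothetical modality weights; Decode's actual weighting is not published.
WEIGHTS = {"face": 0.4, "voice": 0.4, "text": 0.2}

def fused_emotions(scores_by_modality):
    """scores_by_modality maps a modality name to
    {"positive": p, "negative": n, "neutral": u} scores."""
    fused = {"positive": 0.0, "negative": 0.0, "neutral": 0.0}
    total = sum(WEIGHTS[m] for m in scores_by_modality)
    for modality, scores in scores_by_modality.items():
        w = WEIGHTS[modality] / total  # renormalize if a modality is missing
        for key in fused:
            fused[key] += w * scores[key]
    return fused
```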

Translation in Decode

What is Translation?

Translation is the process of converting written text from one language into another. It involves taking the meaning of a piece of text in one language and accurately conveying that meaning in another language while also considering the grammar, syntax, and idiomatic expressions of the target language.

What is Translation in Decode?

In Decode, we provide the facility to convert your transcript from one language to another. You can translate your transcript into multiple languages and get analytics for the translated language as well.

How to Translate the Transcript

To translate a transcript into another language, follow these steps:

Step 1: Open the media from the media section, and you will arrive at the media detail page. Navigate to the transcript page from the right navigation bar.

Step 2: Click on the "Translate" button at the top of the transcript section and select the language for translation.

Step 3: The translated transcript will be populated automatically in the transcript section; this can take a little while. If the transcript has already been translated into that language earlier, the translation appears in real-time.

Voice Tonality

What is Voice Tonality?

Voice tonality refers to how a person sounds when speaking: the pitch, tone, and delivery of the voice, as distinct from the words themselves.

Voice Tonality in Decode

Voice tonality involves studying users' emotions and behavior through their voices. Speech carries both verbal and non-verbal information, making it a rich source of data about human behavior.

Using Decode, you can detect multiple cues from voice, such as:

  • Speech (verbal data)
  • Emotion
  • Confidence or Nervousness

At a fundamental level, we extract the following data points from speech for each second:

  • Happy
  • Sad
  • Fear
  • Surprise
  • Neutral
  • Angry
  • Disgust

In addition to emotions, confidence is also calculated for each second.

Decode Voice Tonality Metrics

We provide the following metrics using voice data:

  • Positive Emotion: This metric represents the percentage of time speakers felt positive emotions during the entire meeting.
  • Negative Emotion: This metric represents the percentage of time negative emotions were felt by speakers in the meeting.

Note: Silence is excluded from these calculations; the base is always total speaking time. In Decode, positive and negative emotions are also calculated from facial data when a face video is available, and a weighted sum of the two is provided as the final AI metric. A minimal sketch of the voice calculation follows.
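
This sketch assumes one emotion label per second, with None marking silence. Which of the seven voice emotions count as positive or negative is an assumption here, mirroring the facial-coding grouping; the document does not state the voice grouping explicitly.

```python
POSITIVE = {"happy", "surprise"}
NEGATIVE = {"angry", "disgust"}  # assumed grouping; see the facial-coding note

def tonality_percentages(labels):
    """labels: one emotion label per second; None marks silence."""
    speaking = [l for l in labels if l is not None]  # silence excluded
    base = len(speaking)                             # total speaking time
    if base == 0:
        return {"positive_pct": 0.0, "negative_pct": 0.0}
    return {
        "positive_pct": 100.0 * sum(l in POSITIVE for l in speaking) / base,
        "negative_pct": 100.0 * sum(l in NEGATIVE for l in speaking) / base,
    }

print(tonality_percentages(["happy", None, "neutral", "angry", "happy"]))
# {'positive_pct': 50.0, 'negative_pct': 25.0}
```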

Speaker Phase Metrics

This Gantt chart shows the sequence of speaker participation in a meeting as a horizontal timeline, with each speaker represented by a distinct bar. The highlighted portion of each bar indicates the duration of that speaker's contribution, so you can quickly grasp the varying levels of engagement and identify which speakers had longer speaking durations and which were comparatively less involved.

Speaker Metrics

This pie chart shows the percentage of involvement of each participant in the conversation, calculated by measuring each participant's total speaking time and comparing it to the overall duration of the conversation. This gives a relative representation of each participant's share of the discussion; the sketch below shows the calculation.
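
The involvement percentage can be reproduced from diarized speaking segments; the segment format and function name here are illustrative.

```python
def involvement(segments, conversation_s):
    """segments: (speaker, start_s, end_s) speaking intervals;
    conversation_s: overall duration of the conversation in seconds."""
    totals = {}
    for speaker, start, end in segments:
        totals[speaker] = totals.get(speaker, 0.0) + (end - start)
    return {s: round(100.0 * t / conversation_s, 1) for s, t in totals.items()}

segments = [("Ana", 0, 40), ("Ben", 40, 70), ("Ana", 70, 90)]
print(involvement(segments, conversation_s=90))  # {'Ana': 66.7, 'Ben': 33.3}
```

The same (speaker, start, end) intervals also drive the Gantt chart above: each interval becomes a highlighted span on that speaker's timeline row.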

Transcription in Decode

What is Transcription?

Transcription is the process of converting spoken words into written text. This can be done by a person or a computer program. Transcription is often used in media, medicine, and law to create written records of interviews, speeches, and other audio sources.

Transcription in Decode

Decode provides transcripts for every conversation on the platform, with separate sections for different speakers. You can also edit the transcripts, translate them into other languages, and generate analytics from them.

All transcripts can be downloaded as a CSV file with the following headers (a parsing sketch follows the list):

  • Media Name
  • Speaker Name
  • Start Time of the Transcript
  • Transcript Text
  • Analytics from the Transcript
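
A quick way to work with the download, assuming the column names match the header list above; verify them against your exported file.

```python
import csv

with open("decode_transcript.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["Speaker Name"],
              row["Start Time of the Transcript"],
              row["Transcript Text"][:60])
```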

Using the transcripts, the following analytics are generated in Decode:

  1. Emotion Metrics
    Sentiment analysis is a technique used in natural language processing to identify and extract subjective information from text. In Emotion Metrics, we incorporate sentiment analysis into the calculation of combined positive, negative, and neutral emotions. This ensures a more accurate representation of emotions by considering different modalities.
  2. Text Analysis
    This provides a visual representation of text data, depicting the keywords and most talked-about topics in your conversation. The most commonly used words are displayed in a larger font size, while less commonly used words are shown in a smaller font size.
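
The font-size scaling described above amounts to a word-frequency count. This sketch shows the idea; the stopword list and pixel range are illustrative choices, not Decode's actual text-analysis pipeline.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "that"}

def keyword_sizes(text, min_px=12, max_px=48, top_n=50):
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    top = Counter(words).most_common(top_n)
    if not top:
        return {}
    hi, lo = top[0][1], top[-1][1]
    span = max(hi - lo, 1)
    # Scale each keyword's font size linearly with how often it was spoken.
    return {w: round(min_px + (max_px - min_px) * (c - lo) / span)
            for w, c in top}
```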

Language Support for Transcription

For every conversation conducted or uploaded in Decode, we provide accurate transcripts. From widely spoken languages like English, Spanish, and Mandarin to lesser-known regional dialects, our language support is extensive:

English - US, Finnish, Slovenian, Kazakh, English - GN, Norwegian, Slovak, Macedonian, French - CA, English - IN, Lithuanian, Albanian, English - CA, Hindi, Simplified Chinese (SC), Romanian, Indonesian, Bengali, Mandarin, Uzbek, Japanese, Kannada, Russian, Venda, Korean, Tamil, Ukrainian, Thai, Telugu, Flemish, Armenian, Malay, Mandarin - Simplified, Amharic, Georgian, English - SG, Mandarin - Traditional, Sundanese, Azerbaijani, Tamil - SG, Cantonese, Croatian, Greek, Filipino, Arabic, isiXhosa, Sinhala, Vietnamese, Spanish, Serbian, Burmese, English - GB, Bashkir, Afrikaans, Hebrew, French, Basque, Bosnian, Persian, German, Belarusian, Bulgarian, Javanese, Portuguese, Esperanto, Lao, Southern Sotho, Estonian, Swati, Tsonga, Turkish, Galician, Czech, Malayalam, Italian, Interlingua, Hungarian, Zulu, Icelandic, Latvian, Gujarati, Nepali, Polish, Marathi, Kinyarwanda, Catalan, Danish, Mongolian, Setswana, Punjabi, Dutch, Uyghur, Swahili, Urdu, Swedish, Welsh, Khmer, Maltese, Irish, Somali, Pashto.

Language Support for AI Summaries, Highlights, and Topics

With Decode, embrace the power of multilingual capabilities for a truly global experience. We support the following languages for AI summaries, highlights, and topic generation:

French (fr-CA), French (fr-FR), German (de-DE), Spanish (es-ES), Italian (it-IT), Indonesian (id-ID), Japanese (ja-JP), Thai (th-TH), Malay (ms-MY), Hindi (hi-IN), Kannada (kn-IN), Tamil (ta-IN), Telugu (te-IN), Marathi (mr-IN), Arabic (ar-AE), Arabic (ar-IQ), Chinese, Malayalam, Korean, Romanian, Nepali, Punjabi, Greek, Hungarian, Vietnamese, African, and Armenian (Central Asia).