Eye tracking is the process of measuring the point of human gaze (where someone is looking) on the screen.
Decode CX Eye Tracking uses the standard webcam embedded in your laptop or desktop to measure eye positions and movements. The webcam identifies the position of both eyes and records eye movement as the viewer looks at a stimulus presented on a laptop, desktop, or mobile screen.
The data is processed in real-time, and the results are shown second by second in terms of affective parameters.
To experience how Eye Tracking works, click here: Experience Eye Tracking.
Read the instructions and start Eye Tracking on your laptop or desktop. Eye Tracking begins with a calibration, after which the system will identify your point of gaze on prompted pictures in real-time.
Please enable the camera while accessing the application. Insights from Eye Tracking studies are made available to users in Affect Labs Insights Dashboards. To view the results and insights available for Eye Tracking studies, please click on Eye Tracking Insights.
Beyond marketing research, eye trackers are also used as input devices for human-computer interaction, in product design, and in many other areas.
Eye Tracking results provide insight into the point of gaze and the time viewers spend discovering objects in the visual content, and the tracked data is automatically rendered as heat maps, gaze maps, and transparency maps.
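To illustrate how a heat map can be built from gaze data, the sketch below bins hypothetical (x, y) gaze samples into a coarse grid and normalizes the counts. The function name, grid size, and sample points are assumptions for the example, not part of Decode's actual pipeline:

```python
from collections import Counter

def gaze_heatmap(gaze_points, screen_w, screen_h, grid=10):
    """Aggregate (x, y) gaze samples into a grid x grid heat map.

    Each cell counts how many gaze samples fell inside it; higher
    values mark regions the viewer looked at longer. (Illustrative
    sketch, not Decode's implementation.)
    """
    counts = Counter()
    for x, y in gaze_points:
        col = min(int(x / screen_w * grid), grid - 1)
        row = min(int(y / screen_h * grid), grid - 1)
        counts[(row, col)] += 1
    total = len(gaze_points)
    # Normalize so each cell holds its share of total viewing time.
    return {cell: n / total for cell, n in counts.items()}

# Example: 4 samples on a 1920x1080 screen, 3 near the top-left corner.
points = [(100, 100), (120, 90), (60, 50), (1800, 1000)]
heat = gaze_heatmap(points, 1920, 1080)
# heat[(0, 0)] == 0.75, heat[(9, 9)] == 0.25
```

A real heat map would smooth these counts (e.g. with a Gaussian kernel) before overlaying them on the stimulus image; the grid version keeps the aggregation step visible.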
We have the following metrics for eye tracking:
Facial coding is the process of interpreting human emotions through facial expressions. Facial expressions are captured using a webcam and decoded into their respective emotions.
Decode helps capture the emotions of any respondent when exposed to media or a product. Facial movements, such as changes in the position of eyebrows, jawline, mouth, and cheeks, are identified. The system can track even minute movements of facial muscles and provide data about emotions such as happiness, sadness, surprise, and anger.
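A simplified view of how per-second facial data becomes an emotion timeline: assume a model emits a score per emotion for each second of video, and the dominant emotion is the one with the highest score. The data shapes and scores here are hypothetical, not Decode's internal format:

```python
def dominant_emotions(frame_scores):
    """Pick the highest-scoring emotion for each second of video.

    frame_scores: one dict per second, mapping emotion name -> score
    in [0, 1] (hypothetical model output for illustration).
    """
    return [max(scores, key=scores.get) for scores in frame_scores]

per_second = [
    {"happiness": 0.7, "sadness": 0.1, "surprise": 0.2},
    {"happiness": 0.2, "sadness": 0.1, "surprise": 0.7},
]
timeline = dominant_emotions(per_second)
# timeline == ["happiness", "surprise"]
```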
To experience how Facial Coding works, click here: Experience Facial Coding.
Once you are on the website, start Facial Coding by playing the media on the webpage. Ensure your face is within the outline provided next to the media.
Please enable your webcam while accessing the application.
Facial coding results provide insight into viewers’ spontaneous, unfiltered reactions to visual content by recording and analyzing facial expressions in real-time. The collected data yields metrics of the user’s genuine emotions, inferred from facial movements, while experiencing or viewing the content.
The results are made available to users in the Result Dashboards.
You will get the following metrics from facial coding:
To provide a holistic view of emotional responses, the platform utilizes emotion metrics that incorporate voice, facial, and text sentiment analytics. By merging these three sources of data, the platform captures and analyzes positive, negative, and neutral emotions exhibited throughout a discussion or media content. This consolidation of data offers a more comprehensive and simplified overview of the overall emotions displayed.
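The merge described above can be sketched as a weighted blend of the three channels' sentiment shares. The weights below are illustrative assumptions, not the platform's actual blend:

```python
def combined_sentiment(voice, face, text, weights=(0.4, 0.4, 0.2)):
    """Merge per-channel sentiment shares into one overall metric.

    voice/face/text: dicts with "positive", "negative", and "neutral"
    shares that each sum to 1.0. The weights are illustrative only.
    """
    wv, wf, wt = weights
    return {
        k: wv * voice[k] + wf * face[k] + wt * text[k]
        for k in ("positive", "negative", "neutral")
    }

voice = {"positive": 0.5, "negative": 0.2, "neutral": 0.3}
face = {"positive": 0.6, "negative": 0.1, "neutral": 0.3}
text = {"positive": 0.4, "negative": 0.3, "neutral": 0.3}
overall = combined_sentiment(voice, face, text)
# overall == {"positive": 0.52, "negative": 0.18, "neutral": 0.30}
```

Because the weights sum to 1 and each channel's shares sum to 1, the merged shares also sum to 1, which keeps the combined metric directly comparable to any single channel.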
Translation is the process of converting written text from one language into another. It involves taking the meaning of a piece of text in one language and accurately conveying that meaning in another language while also considering the grammar, syntax, and idiomatic expressions of the target language.
In Decode, you can convert your transcript from one language to another. You can translate a transcript into multiple languages and get analytics for each translated version as well.
To translate a transcript into another language, follow these steps:
Step 1: Open the media from the media section, and you will arrive at the media detail page. Navigate to the transcript page from the right navigation bar.
Step 2: Click on the "Translate" button at the top of the transcript section and select the language for translation.
Step 3: If the transcript has already been translated into that language, the translated version appears in the transcript section immediately; otherwise, the translation is generated and populated after a short processing time.
Voice tonality refers to how a person sounds when speaking: the qualities of the voice itself, beyond the words being said.
Voice tonality involves studying users' emotions and behavior through their voices. Speech carries both verbal and non-verbal information, making it a rich source of data about human behavior.
Using Decode, you can detect multiple cues from voice, such as:
At a fundamental level, we extract the following data points from speech for each second:
In addition to emotions, confidence is also calculated for each second.
We provide these three metrics using voice data:
Note: Silence is excluded in these calculations, and the base is always the total speaking time. In Decode, positive and negative emotions are also calculated from facial data if a face video is available, and then a weighted sum is provided as the final AI metric.
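The note above can be made concrete with a small sketch: compute the positive share over speaking seconds only (silence excluded from the base), then blend it with a facial score. The labels, function names, and the 50/50 weight are assumptions for illustration, not Decode's actual parameters:

```python
def voice_positivity(per_second):
    """Share of positive seconds over total speaking time.

    per_second: one label per second of audio, one of "positive",
    "negative", "neutral", or "silence". Silent seconds are dropped
    from the base, as the note describes.
    """
    speaking = [s for s in per_second if s != "silence"]
    if not speaking:
        return 0.0
    return speaking.count("positive") / len(speaking)

def final_metric(voice_share, face_share, w_voice=0.5):
    """Illustrative weighted blend of voice and facial scores."""
    return w_voice * voice_share + (1 - w_voice) * face_share

labels = ["positive", "silence", "positive", "negative", "silence", "neutral"]
share = voice_positivity(labels)      # 2 positive / 4 speaking seconds = 0.5
final = final_metric(share, 0.7)      # 0.5 * 0.5 + 0.5 * 0.7 = 0.6
```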
This Gantt chart shows the sequence of speaker participation in a meeting on a horizontal timeline, with each speaker represented by a distinct bar. The highlighted portion of each bar indicates the duration of that speaker's contribution, so viewers can quickly grasp the varying levels of engagement among participants. By examining the chart, you can easily identify which speakers had longer speaking durations or more significant roles and which were comparatively less involved.
This pie chart provides information on the percentage of involvement from each participant in the conversation. The percentage of involvement is calculated by measuring the total speaking time of each participant and comparing it to the overall duration of the conversation. This calculation provides a relative representation of each participant's speaking opportunities during the discussion.
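The involvement calculation described above reduces to a simple ratio per participant. The sketch below assumes hypothetical speaker names and durations:

```python
def involvement(speaking_seconds, total_seconds):
    """Percent involvement per participant.

    speaking_seconds: dict mapping speaker name -> total seconds
    spoken; total_seconds: overall conversation duration. Each
    share is that speaker's speaking time over the full duration,
    matching the pie chart's calculation.
    """
    return {
        name: round(100 * secs / total_seconds, 1)
        for name, secs in speaking_seconds.items()
    }

# Hypothetical 10-minute conversation with three participants.
shares = involvement({"Asha": 300, "Ben": 180, "Cal": 120}, 600)
# shares == {"Asha": 50.0, "Ben": 30.0, "Cal": 20.0}
```

Note that the base here is the full conversation duration, so shared silence means the percentages need not sum to 100.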
Transcription is the process of converting spoken words into written text. This can be done by a person or a computer program. Transcription is often used in media, medicine, and law to create written records of interviews, speeches, and other audio sources.
In Decode, we provide transcripts for every conversation, with separate sections for each speaker. You can also edit transcripts, translate them into other languages, and generate analysis from them.
All transcripts can be downloaded in the form of a CSV file with the following headers:
Using the transcripts, the following analytics are generated in Decode:
For every conversation conducted or uploaded in Decode, we provide accurate transcripts. From widely spoken languages like English, Spanish, and Mandarin to lesser-known regional dialects, our language support is extensive.
With Decode, embrace the power of multilingual capabilities for a truly global experience. We support the following languages for AI summaries, highlights, and topic generation: