articles/azure-video-indexer/emotions-detection.md
Emotions detection is an Azure AI Video Indexer AI feature that automatically detects emotions in a video's transcript lines.
The model works on text only (labeling emotions in video transcripts). This model doesn't infer the emotional state of people and may not perform well where input is ambiguous or unclear, such as sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
There are many things you need to consider when deciding how to use and implement an AI-powered feature:
- Will this feature perform well in my scenario? Before deploying emotions detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
Introduction: This model is designed to help detect emotions in the transcript of a video. However, it is not suitable for making assessments about an individual's emotional state, their ability, or their overall performance.
Use cases: This emotion detection model is intended to help determine the sentiment behind sentences in the video’s transcript. However, it only works on the text itself, and may not perform well for sarcastic input or in cases where input may be ambiguous or unclear. As such, it should not be used for assessing employee performance or the emotional state of any other person.
Information requirements: To increase the accuracy of this model, it is recommended that input data be in a clear and unambiguous format. Users should also note that this model does not have context about input data, which can impact its accuracy.
Limitations: This model can produce both false positives and false negatives. To reduce the likelihood of either, users are advised to follow best practices for input data and preprocessing, and to interpret outputs in the context of other relevant information.

Interpretation: The outputs of this model should not be used to make assessments about an individual's emotional state or other human characteristics. This model is supported in English and may not function properly with non-English inputs. Non-English inputs are translated to English before entering the model and may therefore produce less accurate results.
## Notes related to the instructions below
1. Specify examples of intended use cases, and use cases for which emotion detection is not designed or tested, such as evaluating employee performance, making assessments about a person's emotional state or ability, or monitoring individuals or employees;
2. Requirements for input data and best practices to increase reliability, and a statement about situations in which the model does not perform well, such as with sarcasm;
3. Best practices to reduce false positive and false negative results;
4. Information about how to interpret outputs, emphasizing that output is not to be used for assessments of people;
5. Disclosure that the system does not have any context about input data;
6. Information about supported and unsupported languages, and how the feature functions with non-English inputs.
## View the insight
When working on the website, the insights are displayed in the **Insights** tab. They can also be generated in a categorized list in a JSON file that includes the id, type, and a list of instances in which each emotion appeared, with their time and confidence.
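For orientation, here is a minimal sketch of what the emotions section of that JSON file might look like. The emotion type, time range, and confidence values below are illustrative and are not taken from this article; the field names follow the general id/type/instances pattern that the paragraph above describes.

```json
{
  "emotions": [
    {
      "id": 1,
      "type": "Joy",
      "instances": [
        {
          "confidence": 0.87,
          "start": "0:01:12.5",
          "end": "0:01:17.2"
        }
      ]
    }
  ]
}
```

Each instance ties a detected emotion to a time range in the transcript, along with a confidence score that downstream processing can threshold when filtering results.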