Automatic connotation for audio and visual content using IoT sensors
Abstract:
In an approach for enhancing an experience of a first user listening to and/or watching audio-visual content by modifying future audio and/or video frames of the audio-visual content, a processor captures a set of sensor data from an IoT device worn by the first user. A processor analyzes the set of sensor data to generate one or more connotations by converting an emotion exhibited by the first user using an emotional vector analytics technique and a supervised machine learning technique. A processor scores the one or more connotations based on the similarity between the emotion exhibited by the first user and an emotion that a second user expects the content to provoke. A processor determines whether a score of the one or more connotations exceeds a pre-configured threshold level. Responsive to determining that the score does not exceed the pre-configured threshold level, a processor generates a suggestion for the producer of the audio-visual content.
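The abstract describes a scoring-and-threshold workflow: derive an emotion vector from IoT sensor data, compare it with the emotion the content is intended to provoke, and raise a suggestion when the similarity score falls below a threshold. The following Python sketch is only an illustration of that flow under stated assumptions; the emotion axes, the cosine-similarity scoring, the threshold value, and all function names are hypothetical and are not specified by the source.

    import numpy as np

    # Assumed emotion representation: a fixed-length vector over named axes.
    EMOTION_AXES = ("valence", "arousal", "dominance")  # assumption, not from the source
    THRESHOLD = 0.75  # stand-in for the pre-configured threshold level


    def emotion_vector(sensor_features: dict) -> np.ndarray:
        """Map IoT sensor features to an emotion vector.

        Placeholder for the 'emotional vector analytics technique and
        supervised machine learning technique' named in the abstract; a
        trained model would normally produce this vector.
        """
        return np.array([sensor_features.get(axis, 0.0) for axis in EMOTION_AXES])


    def connotation_score(exhibited: np.ndarray, expected: np.ndarray) -> float:
        """Score a connotation by similarity between the emotion exhibited by the
        first user and the emotion the second user expects the content to provoke.
        Cosine similarity is an assumption; the patent does not name the metric."""
        denom = np.linalg.norm(exhibited) * np.linalg.norm(expected)
        return float(exhibited @ expected / denom) if denom else 0.0


    def suggest_if_needed(exhibited: np.ndarray, expected: np.ndarray) -> str | None:
        """Return a suggestion for the producer when the score is below threshold."""
        score = connotation_score(exhibited, expected)
        if score >= THRESHOLD:
            return None
        gaps = {axis: round(float(e - x), 2)
                for axis, x, e in zip(EMOTION_AXES, exhibited, expected)}
        return (f"Score {score:.2f} below threshold {THRESHOLD}; "
                f"per-axis gaps between expected and exhibited emotion: {gaps}")


    # Example usage with made-up sensor readings.
    exhibited = emotion_vector({"valence": 0.2, "arousal": 0.4, "dominance": 0.3})
    expected = np.array([0.8, 0.7, 0.5])  # emotion the producer intends to provoke
    print(suggest_if_needed(exhibited, expected))

In this sketch, the per-axis gaps stand in for the "suggestion" generated for the producer; the actual patent does not specify the form that suggestion takes.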