Discuz! Board


[Usage question] The potential and precision of emotional indexes

Posted on 2023-4-16 18:19:53
Emotions can be inferred from particular facial, vocal, and body expressions, and from changes in physiology such as an increased heart rate. Detection technologies and sensors vary with the circumstances, depending on the type of emotion being detected and on their usability. Attempts have also been made to establish a standard, fixed relationship between emotional changes and physiological cues across various cue types, characteristics, and classifiers. However, it has proved relatively difficult to accurately reflect emotional changes using any single physiological cue. Faces, for example, tend to be the most visible form of emotional communication.

They are also the most easily controlled in response to different social situations, compared with voice and other modes of expression. True affective understanding will therefore likely only be achieved through a combination of methods, so that the flaws of each individual method can be worked around. A schematic example of a multimodal system is shown in the figure. Emotion recognition using multiple physiological cues could be significant in both research and real-world applications. The responses of multiple systems, linked in space and time during an emotional episode, are essentially the hallmark of basic emotion theory.

[Figure: Emotion-recognition-emotional-indexes]

Multimodal systems are capable not only of improving the recognition and understanding of emotional states, but also of simulating more vivid expressions in human-computer interaction. Thus, affect recognition is most likely to be accurate when it combines multiple types of user cues with information about the user's context, situation, goals, and preferences. D'Mello et al. (2015) pointed out that emotion recognition systems that base their decisions on multimodal data tend to be nearly 10% more accurate than their unimodal counterparts.

Bibliography

Abdullah, S. M. A., Ameen, S. Y. A., Sadeeq, M. A. M., & Zeebaree, S. (2021). Multimodal emotion recognition using deep learning. Journal of Applied Science and Technology Trends, 2(02), 52–58.

Breazeal, C. (2003). Emotion and sociable humanoid robots. International Journal of
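The multimodal idea described above, combining the outputs of several per-modality recognizers instead of relying on a single cue, is often implemented as decision-level ("late") fusion: each modality produces a probability distribution over the same emotion labels, and the distributions are averaged with per-modality weights. The sketch below illustrates this; the modality names, probability values, and weights are all illustrative placeholders, not outputs of any real recognition system.

```python
# Minimal sketch of decision-level (late) fusion for multimodal emotion
# recognition. Each modality is assumed to output a probability
# distribution over the same set of emotion labels; all numbers here
# are hypothetical, chosen only to demonstrate the mechanics.

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def fuse(modality_probs, weights):
    """Return the weighted average of per-modality class probabilities."""
    fused = [0.0] * len(EMOTIONS)
    total = sum(weights.values())          # normalize weights to sum to 1
    for name, probs in modality_probs.items():
        w = weights[name] / total
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

# Hypothetical per-modality outputs for one emotional episode.
probs = {
    "face":       [0.10, 0.70, 0.10, 0.10],
    "voice":      [0.20, 0.50, 0.20, 0.10],
    "physiology": [0.30, 0.40, 0.20, 0.10],
}
weights = {"face": 0.5, "voice": 0.3, "physiology": 0.2}

fused = fuse(probs, weights)
# Predicted label is the class with the highest fused probability.
label = EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
```

Because each modality's distribution sums to 1 and the weights are normalized, the fused distribution also sums to 1; a misleading cue from one modality (e.g. a controlled facial expression) is then tempered by the others, which is the robustness argument made in the text.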

Powered by Discuz! X3.4

Copyright © 2001-2020, Tencent Cloud.
