{"id":21058,"date":"2017-06-19T09:31:45","date_gmt":"2017-06-19T08:31:45","guid":{"rendered":"https:\/\/www.inovex.de\/blog\/?p=3251"},"modified":"2025-01-08T08:27:42","modified_gmt":"2025-01-08T07:27:42","slug":"affective-robots-emotionally-intelligent-machines","status":"publish","type":"post","link":"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/","title":{"rendered":"Affective Robots: Emotionally Intelligent Machines"},"content":{"rendered":"<p>Automatic emotion recognition is an emerging area which leverages and combines knowledge from multiple fields such as machine learning, computer vision and signal processing. It has potential applications in many areas including healthcare, robotic assistance, education, market survey and advertising. Another usage of this information is to improve Human Computer Interaction with what can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science. The concept of &#8222;affective robots&#8220; refers to leveraging these emotional capabilities in humanoid robots to respond in the most appropriate way based on the user\u2019s current mood and personality traits. 
In this article, we explore the emotion recognition capabilities of Pepper the robot and how they perform in contrast to other cutting-edge approaches.<!--more--><\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_83 counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\"><\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Why-emotion-recognition-in-robots\" >Why emotion recognition in robots<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#How-does-automatic-emotion-recognition-work\" >How does automatic emotion recognition work<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#The-algorithms\" >The algorithms<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#The-training-data\" >The training data<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Pepper-Robot\" >Pepper Robot<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Comparison-of-Peppers-emotion-recognition-of-the-basic-emotions-with-other-cutting-edge-approaches\" >Comparison of 
Pepper\u2019s emotion recognition of the basic emotions with other cutting-edge approaches<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Peppers-ALMood-Module\" >Pepper\u2019s ALMood Module<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Deep-Convolutional-Neural-Network-using-Tensorflow\" >Deep Convolutional Neural Network, using Tensorflow<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Googles-Machine-Learning-Cloud-Vision-API\" >Google\u2019s Machine Learning Cloud Vision API<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Microsofts-Cognitive-Services-Emotion-Cloud-API\" >Microsoft\u2019s Cognitive Services Emotion Cloud API<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Kairos-Emotion-Analysis-Cloud-API\" >Kairos\u2019 Emotion Analysis Cloud API<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Evaluation\" >Evaluation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-13\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#Results-and-conclusion\" >Results and conclusion<\/a><\/li><li 
class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-14\" href=\"https:\/\/www.inovex.de\/de\/blog\/affective-robots-emotionally-intelligent-machines\/#References\" >References<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Why-emotion-recognition-in-robots\"><\/span>Why emotion recognition in robots<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Nowadays, we are accustomed to interacting with technology that helps us considerably in our daily routine. We rely on these devices for numerous tasks from early in the morning, when we use our smartphone to check the best route to work based on current traffic, at the ATM and even to pay at the supermarket. Technology reaches us continuously via notifications when a meeting is approaching, our flight has been delayed, a new album of our favourite artist has been released, and when it&#8217;s our friends&#8216; birthdays.<\/p>\n<p>Amazon Echo and Google Home are recently released voice-enabled wireless speakers that make use of intelligent personal assistants, Amazon Alexa and Google Assistant, respectively, i.e. software agents that understand user requests in natural language and perform appropriate actions. People quickly get used to the simplicity of these genuinely helpful assistants. However, the kind of interaction we have with all these devices is utterly different from our experience of social interaction. An icon on a screen, a phone vibrating or an LED light blinking is a thoroughly impersonal way to interact with the devices that handle our most personal information and details about our lives.<\/p>\n<p>At present, benefitting from these services requires humans to behave more like machines, rather than having the machines adapt to our needs. 
Humanoid robots, on the other hand, which can also use intelligent assistants, are equipped with numerous sensors; this offers the possibility of benefitting from emotion recognition feedback during social interactions and using it to improve the quality of the service, thus making them emotionally intelligent machines.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"How-does-automatic-emotion-recognition-work\"><\/span>How does automatic emotion recognition work<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Of all the ways humans can convey emotions, e.g. facial expression, body gestures, physiological signals, speech or even the written word as sentiment, we focused on facial expression. This goal requires the following steps:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34366 aligncenter\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/steps-300x41.png\" alt=\"\" width=\"918\" height=\"126\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/steps-300x41.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/steps-1024x140.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/steps-768x105.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/steps-1536x209.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/steps-2048x279.png 2048w, https:\/\/www.inovex.de\/wp-content\/uploads\/steps-1920x262.png 1920w, https:\/\/www.inovex.de\/wp-content\/uploads\/steps-400x54.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/steps-360x49.png 360w\" sizes=\"auto, (max-width: 918px) 100vw, 918px\" \/>How many and which emotions are there? And how can we identify them? Discrete emotion theory maintains that all human beings share a set of innate basic emotions that are expressed and recognized across cultures and whose combinations produce all the others. 
Particularly influential is the research of Paul Ekman, an American psychologist and a pioneer in the study of emotions, whose best-known work concluded that facial expressions can be universally recognized. He officially put forth six basic emotions in 1971: anger, fear, disgust, surprise, happiness, and sadness [1].<\/p>\n<p>Each of these needs to be characterized by models that relate them to actually measurable cues, i.e. muscle positions. In computer vision and image processing, a feature is any piece of information relevant to solving the computational task. Features for facial expression recognition are e.g. the positions of facial landmarks such as the eyes and eyebrows, or appearance features such as changes in skin texture, wrinkling and the deepening of facial furrows. The widely used Facial Action Coding System (FACS) [2], published in 2002, systematically categorizes expressions based on these physical features.<\/p>\n<p>The recognition process first detects the face in an image and then identifies the landmarks and the other mentioned features. To track a dense set of landmarks, the so-called active appearance models [3] (AAMs) are a widely used approach. An AAM is a computer vision algorithm for matching a statistical model of object shape and appearance to an image. It decouples the shape from the appearance of a face image. A set of images and the coordinates of landmarks are provided to the algorithm, which then uses the difference between the current estimate of appearance and the target image to drive an optimization process.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"The-algorithms\"><\/span>The algorithms<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>According to the literature, the most promising approach to facial expression recognition is deep neural networks. 
The task of emotion recognition from facial expressions is tackled as a classification problem, for which supervised learning on data labeled with the set of basic emotions is the natural method.<\/p>\n<p>In the last few years the field of machine learning has made extraordinary progress in addressing these difficult image classification tasks. In particular, the deep convolutional neural network has achieved impressive performance, matching and in some domains even exceeding human performance. Researchers have demonstrated steady progress in computer vision by validating their work against ImageNet, an academic benchmark for computer vision. Successive models continue to show improvements, each achieving a new state-of-the-art result: QuocNet, AlexNet, Inception (GoogLeNet), BN-Inception-v2 and the latest model, Inception-v3.<\/p>\n<p>Other frequently used algorithms include variants of dynamic Bayesian networks, e.g. Hidden Markov Models and Conditional Random Fields, support vector machine classifiers and rule-based systems.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"The-training-data\"><\/span>The training data<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Deep neural networks are known for their need for large amounts of training data. The development and validation of these algorithms requires access to large image and video databases; typically, curated databases compiled in academic environments are employed. The data needs to contain relevant variation, which includes different poses, illumination, resolution, occlusion, facial expressions, actions and their intensity and timing, and individual differences between subjects. 
Some well-known databases widely used for research purposes are the extended Cohn-Kanade Dataset (CK+) [4], FER-2013 [5] and the Japanese Female Facial Expressions (JAFFE) [6].<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Pepper-Robot\"><\/span>Pepper Robot<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/www.slashgear.com\/wp-content\/uploads\/2021\/06\/softbank-pepper-1-1280x720.jpg\" width=\"551\" height=\"310\" \/><\/p>\n<p>Pepper is a 1.20 m tall autonomous humanoid programmable robot, developed by SoftBank Robotics and designed for interaction with humans. It communicates in a natural and intuitive way, through speech, body movements and a tablet. It is equipped with numerous sensors (infrared, sonar, laser and bumpers) that allow it to move around autonomously, as well as cameras and a 3D sensor to perceive the environment, detect faces and recognize human emotions. It runs a Unix-based OS called NAOqi OS and is fully programmable in several languages such as C++ and Python, so its capabilities can be extended by implementing new features. 
Because it is connected to the internet, it is also possible to integrate any online services.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Comparison-of-Peppers-emotion-recognition-of-the-basic-emotions-with-other-cutting-edge-approaches\"><\/span>Comparison of Pepper\u2019s emotion recognition of the basic emotions with other cutting-edge approaches<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>To assess how the emotion recognition from Pepper performs compared to other state-of-the-art solutions, we carried out an evaluation, integrating and testing them on the robot.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34364 aligncenter\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-300x47.png\" alt=\"\" width=\"754\" height=\"118\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-300x47.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-1024x160.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-768x120.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-1536x241.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-2048x321.png 2048w, https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-1920x301.png 1920w, https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-400x63.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/emotionrecognition-360x56.png 360w\" sizes=\"auto, (max-width: 754px) 100vw, 754px\" \/><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Peppers-ALMood-Module\"><\/span>Pepper\u2019s ALMood Module<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Pepper\u2019s ALMood Module, part of the standard libraries for Pepper, returns the instantaneous emotion perception extracted from a combination of data sources: facial expression and smile, acoustic voice emotion analysis, head angles, touch sensors, semantic analysis from speech, sound level and energy level 
of noise and movement detection.<\/p>\n<pre class=\"lang:default decode:true\">Results from ALMood::currentPersonState() method:\r\n{\r\n    \"valence\" : { value, confidence },\r\n    \"attention\" : { value, confidence },\r\n    \"bodyLanguageState\" : { \"ease\" : { level, confidence } },\r\n    \"smile\" : { value, confidence },\r\n    \"expressions\" : {\r\n        \"anger\" : { value, confidence }, \"joy\" : { value, confidence },\r\n        \"sorrow\" : { value, confidence }, \"calm\" : { value, confidence }, \"surprise\" : { value, confidence },\r\n        \"laughter\" : { value, confidence }, \"excitement\" : { value, confidence }\r\n    }\r\n}<\/pre>\n<p>Some of these cues rely on Real-Time Facial Expression Estimation Technology services from OMRON. This technology combines the company\u2019s proprietary 3D model-fitting technology with a statistical classification method based on a massive database of facial images.<\/p>\n<p>Facial expression values are represented as real values in the range (0, 1), normalized so that they add up to 1, like the output of a soft-max function, which is often used in the final layer of neural networks applied to classification problems. This function highlights the largest values while suppressing those that are significantly below the maximum. 
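Determining the dominant expression from such a normalized array reduces to an arg-max over the values. A minimal Python sketch; the expression names mirror ALMood's output, while the score values and helper names are invented for illustration:

```python
import math

def softmax(raw_scores):
    """Normalize raw scores so they sum to 1, as a soft-max output layer does."""
    exps = {k: math.exp(v) for k, v in raw_scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def dominant_expression(expressions):
    """Pick the expression with the highest normalized value."""
    return max(expressions, key=expressions.get)

# Invented values, shaped like ALMood's "expressions" field
expressions = {"anger": 0.05, "joy": 0.62, "sorrow": 0.03, "calm": 0.15,
               "surprise": 0.10, "laughter": 0.03, "excitement": 0.02}
print(dominant_expression(expressions))  # joy
```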
In this form, the values are particularly easy to compare with one another, and the final result is simply the maximum value of the array.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Deep-Convolutional-Neural-Network-using-Tensorflow\"><\/span>Deep Convolutional Neural Network, using Tensorflow<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Convolutional Neural Networks, also referred to as ConvNets or CNNs, are a kind of neural network that has proven highly effective in image recognition and classification tasks, such as identifying faces or objects in images.<\/p>\n<p>The implementation uses the TFLearn library on top of TensorFlow. OpenCV (cv2) is also used to extract the contents of the image and reshape them to the format expected by the model\u2019s prediction method.<\/p>\n<p>The model employed has been used in several research studies and is based on a slight variation of the AlexNet model: the network starts with an input layer of 48 by 48, matching the size of the input data. This layer is followed by a convolutional layer, a local contrast normalization layer and a max-pooling layer. The network finishes with two more convolutional layers and one fully connected layer, connected to a soft-max output layer. Dropout was applied to the fully connected layer and all layers contain ReLU units. A second max-pooling layer is also applied to reduce the number of parameters.<\/p>\n<p>The DCNN model must first be trained, a step that can take many hours or even several days depending on the available resources. The training data used is the FER2013 dataset [5].<\/p>\n<p>The trained network returns the likelihood, in a range from 0 to 1, of each expression being shown by the user, i.e. the largest detected face. 
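The layer sequence described above can be sketched with TFLearn roughly as follows. Note this is an illustrative sketch, not the exact network: the filter counts, kernel sizes, layer widths and dropout keep-probability are assumptions we chose for the example.

```python
# Sketch of the described 48x48 architecture with TFLearn on top of TensorFlow.
# Hyperparameters (filter counts, kernel sizes, 3072-unit FC layer, 0.8 keep
# probability) are illustrative assumptions, not the study's exact values.
import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression

net = input_data(shape=[None, 48, 48, 1])            # 48x48 grayscale faces
net = conv_2d(net, 64, 5, activation='relu')         # first convolutional layer
net = local_response_normalization(net)              # local contrast normalization
net = max_pool_2d(net, 3, strides=2)                 # first max-pooling layer
net = conv_2d(net, 64, 5, activation='relu')         # two more conv layers
net = conv_2d(net, 128, 4, activation='relu')
net = max_pool_2d(net, 3, strides=2)                 # second max-pooling layer
net = fully_connected(net, 3072, activation='relu')  # fully connected layer
net = dropout(net, 0.8)                              # dropout (keep probability)
net = fully_connected(net, 7, activation='softmax')  # 7 classes incl. neutral
net = regression(net, optimizer='momentum',
                 loss='categorical_crossentropy')
model = tflearn.DNN(net)                             # ready for model.fit(...)
```

Training on FER2013 would then be a call to `model.fit` with the 48x48 images and one-hot emotion labels.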
The output with the highest value is assumed to be the current emotion from the set of basic emotions: happiness, sadness, anger, surprise, disgust, fear and neutral.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Googles-Machine-Learning-Cloud-Vision-API\"><\/span>Google\u2019s Machine Learning Cloud Vision API<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The neural net-based Machine Learning Platform provides machine learning services with pre-trained models that can be used through APIs with a JSON REST interface, either by making direct HTTP requests to the server or via the client libraries, offered in several programming languages.<\/p>\n<p>The <a href=\"https:\/\/cloud.google.com\/vision\" target=\"_blank\" rel=\"noopener noreferrer\">Cloud Vision API<\/a> exposes a method &#8222;images.annotate&#8220; that runs image detection and annotation for configurable features in one or more images and returns the requested annotation information. The image data is sent either base64-encoded or as a Google Cloud Storage URI. 
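As an illustration, the JSON body of such an annotate request could be assembled as follows in Python. The helper name is ours; authentication and the actual HTTP POST to the Vision API endpoint are omitted:

```python
import base64
import json

def build_annotate_request(image_bytes, max_results=1):
    """Build the JSON body for the images.annotate method, asking only for
    face detection (which carries the emotion likelihoods)."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "FACE_DETECTION", "maxResults": max_results}],
        }]
    }

# image_bytes would come from one of Pepper's cameras; a placeholder here
body = build_annotate_request(b"raw-jpeg-bytes")
print(json.dumps(body, indent=2))
```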
The available annotations are: FACE_DETECTION, LANDMARK_DETECTION, LOGO_DETECTION, LABEL_DETECTION, TEXT_DETECTION, SAFE_SEARCH_DETECTION and IMAGE_PROPERTIES.<\/p>\n<pre class=\"lang:default decode:true\">Results from HTTP POST to https:\/\/vision.googleapis.com\/v1\/images:annotate for face detection annotations:\r\n{\r\n    \"joyLikelihood\": enum(Likelihood),\r\n    \"sorrowLikelihood\": enum(Likelihood),\r\n    \"angerLikelihood\": enum(Likelihood),\r\n    \"surpriseLikelihood\": enum(Likelihood)\r\n}<\/pre>\n<p>The values indicate the likelihood of each of the emotions on the scale: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Microsofts-Cognitive-Services-Emotion-Cloud-API\"><\/span>Microsoft\u2019s Cognitive Services Emotion Cloud API<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Microsoft Cognitive Services, formerly known as Project Oxford, is a set of APIs, SDKs and services exposing Microsoft\u2019s machine-learning-based features.<\/p>\n<p>The <a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/face\/\" target=\"_blank\" rel=\"noopener\">Face API <\/a>takes an image as input and returns information about the faces found in it, such as emotion scores, among many other face attributes, represented as normalized real values in the range from 0 to 1. 
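A sketch of assembling such a detect call in Python; the helper name is ours, and the region in the endpoint as well as the key placeholder are illustrative:

```python
def build_detect_request(image_bytes, subscription_key, region="northeurope"):
    """Assemble the pieces of a Face API detect call that requests only
    the emotion attribute for each detected face."""
    return {
        "url": f"https://{region}.api.cognitive.microsoft.com/face/v1.0/detect",
        "params": {"returnFaceAttributes": "emotion"},
        "headers": {
            "Ocp-Apim-Subscription-Key": subscription_key,
            # raw image bytes go in the request body with this content type
            "Content-Type": "application/octet-stream",
        },
        "body": image_bytes,
    }

req = build_detect_request(b"raw-jpeg-bytes", "your-key-here")
print(req["url"])
```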
Images can be supplied in two ways: via a URL or as image data sent within the request.<\/p>\n<pre class=\"lang:default decode:true \">Results from HTTP POST to https:\/\/northeurope.api.cognitive.microsoft.com\/face\/v1.0\/detect?returnFaceAttributes=emotion:\r\n{\r\n    \"anger\": number, \"contempt\": number,\r\n    \"disgust\": number, \"fear\": number,\r\n    \"happiness\": number, \"neutral\": number,\r\n    \"sadness\": number, \"surprise\": number\r\n}<\/pre>\n<h2><span class=\"ez-toc-section\" id=\"Kairos-Emotion-Analysis-Cloud-API\"><\/span>Kairos\u2019 Emotion Analysis Cloud API<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>According to the documentation, Kairos\u2019 Emotion Analysis software measures total attention time, number of glances, blink detection and attention span, can understand positive, negative and neutral sentiments, and can detect facial expressions including smiles, frowns, anger and surprise. It can be integrated using their APIs and SDKs.<\/p>\n<p>The uploaded media can be an image or a video, but the engine appears to be tuned to perform best on video, where it can learn a person\u2019s baseline expressions over time and compensate for their natural resting expression; this makes emotion recognition from a single image less accurate.<\/p>\n<p>The process involves an HTTP POST request to the API that creates a new media object to be processed; the response includes the ID of the uploaded media. 
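This two-step flow can be sketched as follows; the endpoint path, response field name and helper names are assumptions for illustration, and the rescaling reflects that the returned confidences are on a 0-100, non-normalized scale:

```python
KAIROS_MEDIA = "https://api.kairos.com/v2/media"   # assumed endpoint path

def result_url(post_response):
    """Build the URL of the follow-up GET from the POST response's media ID."""
    return f"{KAIROS_MEDIA}/{post_response['id']}"

def dominant_emotion(scores, scale=100.0):
    """Confidences are 0-100 and not normalized; rescale, then take the max."""
    return max(scores, key=lambda k: scores[k] / scale)

resp = {"id": "abc123"}                            # hypothetical POST response
print(result_url(resp))                            # .../v2/media/abc123
print(dominant_emotion({"anger": 4.0, "joy": 81.5, "sadness": 2.0}))  # joy
```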
This ID is needed for a second HTTP GET request, which in turn returns the results for the uploaded piece of media: the confidence for each of the emotions, not normalized, in the range from 0 to 100.<\/p>\n<pre class=\"lang:default decode:true\">Results:\r\n{\r\n    \"anger\": number,\r\n    \"disgust\": number,\r\n    \"fear\": number,\r\n    \"joy\": number,\r\n    \"sadness\": number,\r\n    \"surprise\": number\r\n}<\/pre>\n<h2><span class=\"ez-toc-section\" id=\"Evaluation\"><\/span>Evaluation<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34360 alignright\" style=\"color: #505d6d;\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/image14-225x300.jpg\" alt=\"\" width=\"339\" height=\"452\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/image14-225x300.jpg 225w, https:\/\/www.inovex.de\/wp-content\/uploads\/image14-768x1024.jpg 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/image14-1152x1536.jpg 1152w, https:\/\/www.inovex.de\/wp-content\/uploads\/image14-400x533.jpg 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/image14-360x480.jpg 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/image14.jpg 1200w\" sizes=\"auto, (max-width: 339px) 100vw, 339px\" \/><\/p>\n<p>We evaluated the performance of each solution by capturing images with Pepper\u2019s cameras of a) emotion-tagged photos from the Cohn-Kanade (CK+) database and b) real subjects, in both cases for the emotions happy, sad, neutral, surprised and angry, and extracted the predominant emotion detected by each algorithm from the results.<\/p>\n<p>To ensure some variation among the participants, the 19 subjects included women and men of different nationalities, aged 25 to 49 (mean=35, stdev=6). 
Also, to allow fair comparisons, the conditions were the same in every test in terms of lighting, position, orientation and distance to the robot.<\/p>\n<p>The emotion recognition was tested with posed expressions, i.e. in the absence of an underlying emotional state, as well as with spontaneous reactions, i.e. congruent with an underlying emotional state induced by means of emotion elicitation techniques. These were evaluated separately because recent research asserts the potential importance of dynamic aspects, such as the speed of onset and offset of the expression and the degree of irregularity of facial movement, for the encoding of spontaneous versus deliberate emotional facial expressions.<\/p>\n<p>The image shows the predominant emotion detected by each algorithm as a demo.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Results-and-conclusion\"><\/span>Results and conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>As in other research studies, not all emotions were recognized with the same accuracy. In general, the most easily detected emotion across all of the algorithms is a happy expression, with an extremely accurate detection rate that reaches 100% in some cases, while angry and sad expressions appear to be the hardest to identify. 
In both of these cases, Pepper\u2019s recognition as well as the self-trained convolutional neural network performed slightly better than the rest.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34362 aligncenter\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/graph-300x122.png\" alt=\"\" width=\"758\" height=\"308\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/graph-300x122.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/graph-1024x418.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/graph-768x313.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/graph-1536x627.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/graph-400x163.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/graph-360x147.png 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/graph.png 1878w\" sizes=\"auto, (max-width: 758px) 100vw, 758px\" \/><\/p>\n<p>The average results graph shows the same tendency for each of the solutions: database images could be interpreted correctly without major problems. The reason these expressions are recognized so accurately may lie in the fact that they are &#8222;pure&#8220;: experts characterized them as standard forms of each emotion. The expressions posed by our subjects are notably more difficult to identify, and the rates are even lower for spontaneous reactions. However, Pepper\u2019s algorithm seems to be trained well enough to maintain a compelling success rate of almost 60% even with these more subtle or ambiguous movements.<\/p>\n<p>Still, Google\u2019s and Microsoft\u2019s services return values that can definitely be relied on, especially when it comes to distinguishing a neutral look from an expression and spotting images in which a smile can be seen, which seems to be their strength. An assumption is that these numbers would be even better if the images used had a higher resolution. 
To match the resolution of Pepper\u2019s algorithm, the pictures were taken at 640 x 480 px which, depending on the position of and distance to the face, may in some cases not be enough to precisely differentiate an emotion, so the scores for two emotions can end up too close together.<\/p>\n<p>As can be seen, Kairos\u2019 algorithm does not come close to such rates when still images are used as input. It appears to have been tuned for best performance on video rather than static images, where some knowledge about the personal characteristics of the individuals can be learned over time; nevertheless, our expectations for still images were definitely higher. It is intriguing that the results obtained with the DCNN implementation do not differ much from the others across all situations, even though the images it was trained on were certainly different from those obtained in the laboratory. This suggests that this kind of algorithm could perform surprisingly well when trained on the same specific data it will later be used with.<\/p>\n<p>One possibility for improving these results in automated emotion recognition is a multi-modal approach, i.e. one that relies on several signals in addition to the facial expression. These relate to different aspects of the subject\u2019s communication, such as vocal expressions, which include words, utterances and pauses, as well as physiological cues like heart rate and skin temperature, or gestures. Combining several sources would result in a more reliable output, decreasing the probability of misinterpreting signals.<\/p>\n<p>Being able to recognize human emotions is just the first step towards emotionally intelligent machines. 
This can be used to adapt the robot\u2019s behavior and thus improve the quality of human-machine interaction.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"References\"><\/span>References<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<ul>\n<li>[1] P. Ekman, Universals and Cultural Differences in Facial Expressions of Emotion. University of Nebraska Press, 1971.<\/li>\n<li>[2] P. Ekman, W. V. Friesen, and J. C. Hager, \u201cThe Facial Action Coding System,\u201c Research Nexus eBook, 2002.<\/li>\n<li>[3] T. F. Cootes, G. J. Edwards, and C. J. Taylor, \u201cActive appearance models,\u201c IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, pp. 681\u2013685, June 2001.<\/li>\n<li>[4] P. Lucey, J. F. Cohn, T. Kanade, J. M. Saragih, Z. Ambadar, and I. A. Matthews, \u201cThe extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,\u201c in IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2010, San Francisco, CA, USA, 13-18 June 2010, pp. 94\u2013101, 2010.<\/li>\n<li>[5] \u201cChallenges in representation learning: Facial expression recognition challenge.\u201c http:\/\/www.kaggle.com\/c\/challenges-in-representation-learning-facial-expression-recognition-challenge. Accessed: 2017-01-23.<\/li>\n<li>[6] M. J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, \u201cCoding facial expressions with Gabor wavelets,\u201c in 3rd International Conference on Face &amp; Gesture Recognition (FG\u201998), April 14-16, 1998, Nara, Japan, pp. 200\u2013205, 1998.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Automatic emotion recognition is an emerging area which leverages and combines knowledge from multiple fields such as machine learning, computer vision and signal processing. It has potential applications in many areas including healthcare, robotic assistance, education, market survey and advertising. 