By

Joana Casanova

Marketing Consultant

Member of an External Project

From the roots of psychology and neurology, the field of Affective Computing (AC) has found room to grow rapidly alongside advances in technology. This interdisciplinary field of study uses computer science to identify, measure, and interact with emotions. The AC market is still under study, but the prospects are high due to its wide range of applications: health, learning, security, safety, video games, marketing, and nearly any other market lead experts to forecast the value of global affective computing at USD 140 billion by 2025. But how can computers identify our emotions, if sometimes not even other humans can?

It all comes down to the information that can be extracted from a human. This data can come from questionnaires (PANAS, SAM, PAM, ESM), which are easy and cheap to administer but lack reliability. People answer questionnaires according to their self-perceptions, which are often distorted, and even when they aren't, expressing those perceptions clearly can be a tough challenge. Due to these limitations, questionnaires are usually a complementary tool, and the substantial analysis relies on physical and physiological methods.

The first, physical methods, relies on facial cues, speech sentiment, gestures, and body posture. As with any machine learning process, it works with algorithms. In the case of facial cues, a database is built of facial expressions and their corresponding emotions (anger, fear, disgust, surprise, joy, sadness, or a mix of them). Most of the data comes from artificially induced emotions: gather a group of participants, expose them to a stimulus that triggers a feeling (e.g. a sad film), record the facial expressions, and then label the associated emotion (e.g. sadness). Later, when analyzing the millisecond-level facial expressions of a user, the computer runs the algorithm and, based on all the data already collected, can recognize the user's emotion.
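The idea of matching a new face against a labeled database can be sketched very simply. The toy example below (all feature vectors and labels are illustrative, not from any real system) uses a nearest-neighbour lookup: the new face is assigned the emotion of the most similar stored example. Real systems use hundreds of facial landmarks and far more sophisticated models.

```python
import math

# Toy database: each entry pairs a hypothetical feature vector extracted from
# a face (e.g. eyebrow raise, mouth curvature, eye openness) with its labeled
# emotion. The numbers here are illustrative only.
labeled_faces = [
    ((0.9, 0.8, 0.7), "surprise"),
    ((0.1, 0.9, 0.5), "joy"),
    ((0.2, 0.1, 0.2), "sadness"),
    ((0.8, 0.2, 0.9), "anger"),
]

def classify(features):
    """Return the emotion of the closest stored example (nearest neighbour)."""
    return min(labeled_faces,
               key=lambda entry: math.dist(entry[0], features))[1]

print(classify((0.15, 0.85, 0.55)))  # closest to the stored "joy" example
```

In practice the "closest example" step is replaced by a trained model (e.g. a neural network), but the principle is the same: recognition quality depends entirely on the breadth and labeling of the collected data.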

The second, physiological methods, can be more invasive. Examples of these technologies are the electroencephalogram (EEG), which measures brain activity and indicates how weak or strong a reaction to a stimulus is; the electrocardiogram (ECG) and photoplethysmogram (PPG), which give data about heart activity and blood flow; breathing sensors, which report breathing rate; galvanic skin response (GSR) sensors; and more. What can we get from all this data?

Neuromarketing, sometimes called intelligent marketing, is a commercial marketing communication area. Knowing that around 90% of decision-making is guided by the subconscious, marketers have been interested in applying this approach to ads. Scientists use high-tech equipment, such as the NeuroCap EEG, to measure brain activity, and eye-tracking software to understand how consumers react to different ads. Mirror neurons are credited with the success of this type of marketing: the behavior shown in a commercial provokes a "mirrored" response in the observer. The consumer is drawn fully into the experience, which triggers sensations such as laughter, sadness, pleasure, and rage. These matter because they can alter the way the consumer perceives and memorizes a brand. This field of study is still developing, and it allows big companies to save millions in advertising.

In 1997, Rosalind Picard, Professor of Media Arts and Sciences at MIT, published the book that introduced affective computing to the world. Her research, which extends to autism, has shown the many potential benefits these technologies can generate: teachers could get better feedback about learning engagement, doctors could better track mental and bodily states, and airport security could be better informed about passengers and their potential threat to flight safety. Nevertheless, the problems come from how expensive, invasive (in terms of data privacy), and inaccurate these systems can be.

Some studies have already raised issues with assumptions about facial cues, for example the BBC article "Why our facial expressions don't reflect our feelings". The article points out that the assumption that certain facial expressions map to specific emotions everywhere in the world cannot be taken as certain. While researching emotions and facial expressions in Papua New Guinea and Mozambique, researchers found that "study participants did not attribute emotions to faces in the same way Westerners do", which invites reflection on accuracy. If emotions are expressed differently in Europe and Papua New Guinea, European facial coding systems would be highly unfair if used to enforce the law on Papua New Guinea citizens. Such a system would fail to account for cultural differences, a serious concern given that we live in multicultural countries.

Also, in a study conducted by Joy Buolamwini of the MIT Media Lab, face recognition tools were tested to evaluate how well machines could identify faces. The results indicated that gender was misidentified for 35% of darker-skinned females in a set of 271 photos, compared with 1% of lighter-skinned males in a set of 385 photos. Based on these numbers, we can tell the systems were poorly trained. More than being inaccurate, the system was "precisely inaccurate": there is a systematic error related to skin color, gender, and ethnic features. This miscategorization happened because the databases had not been fed with sufficiently diverse faces, and this type of problem can lead to unfair decisions.
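The kind of disaggregated audit behind those figures is straightforward to express: instead of one overall accuracy number, errors are counted separately per demographic group. The sketch below uses illustrative counts chosen to match the percentages reported above; the exact raw counts are an assumption.

```python
# Hypothetical per-group results: (misclassified photos, total photos).
# Counts are illustrative, chosen to roughly match the reported error rates.
results = {
    "darker-skinned females": (95, 271),   # ~35% error, as reported
    "lighter-skinned males":  (4, 385),    # ~1% error, as reported
}

def error_rates(results):
    """Compute the misclassification rate for each group separately."""
    return {group: wrong / total for group, (wrong, total) in results.items()}

for group, rate in error_rates(results).items():
    print(f"{group}: {rate:.1%} misclassified")
```

A single aggregate accuracy would hide this gap entirely, which is why per-group evaluation is the standard way to surface systematic bias.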

Moreover, companies like HireVue claim their software can help with recruitment, and studies in law enforcement have been conducted. With affective computing still in the early stages of development, applying its techniques to such fields is dangerous and invasive. Low accuracy becomes less a question of delivering inefficient systems and more a matter of threatening general safety, freedom, and fairness. Privacy is also a concern. In the EU, citizens are individually protected by Articles 6 and 9 of the GDPR, as their consent is needed for the collection of unique personal data. Guaranteeing that individuals deliberately share their privacy, in the form of data, gives higher protection and value to citizens' wishes. Nevertheless, some companies have already collected personal data without consent, which violates individuals' rights.

Summing up, affective computing is forecast to be a USD 140 billion market because it can revolutionize medical diagnosis, learning systems, security, user experience, and much more. Nevertheless, the costs can be high if it is deployed prematurely. The law must adapt to the new technologies in a way that protects citizens against unfairness, and systems need to be optimized to minimize the number and severity of mistakes. If they are not accurate, they do not work and cannot be applied to multicultural societies. As the saying goes, "A.I. software is only as smart as the data used to train it".

References:

https://www.marketsandmarkets.com/PressReleases/affective-computing.asp

https://www.sciencedirect.com/topics/computer-science/affective-computing

https://web.media.mit.edu/~picard/

https://www.bbc.com/future/article/20180510-why-our-facial-expressions-dont-reflect-our-feelings

https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

