Emotions & Machines
Should AI be able to determine our feelings? To interpret emotion? What ethical guidelines exist around this?
Following on from our ongoing commentary on inclusive, multi-disciplinary design in AI, this post will focus on Emotion AI.
1. Attribution & Context in AI
2. What is Emotion AI
3. The VC Emotion Problem | HumeAI & ChatGPT4.0
4. Inclusive design for Emotion AI
1. The Fundamental Challenges of Attribution & Context
Occupying a space at the intersection of cultures and disciplines (tech, policy, finance, ethics, research), and coming from a global north-south context, I care about how we design for multi-cultural, intersectional possibilities.
Because of its power to create both divisions and new universalisms, it really does matter where AI can lead us. Maybe AI is still not very “good” at what it does. But it is definitely very important.
I will speak a lot about Attribution & Context in my work and throughout this Substack: who determines meaning, where data comes from, and the landscape in which it is framed. This is not discussed enough, and it is critical that we have more insights and windows in.
“We have now accepted, after 60 years of AI, that the things we originally thought were easy are actually very hard, and what we thought was hard, like playing chess, is very easy.” — Alan Winfield, Professor of Robotics at UWE, Bristol [Can Artificial Intelligence understand emotions?]
Artists, creative coders, AI researchers & philosophers have been interrogating this space for a long time, and I have been deep in this narrative. So I will also share some of those examples below.
2. What is Emotion AI
Emotions are a complex space. And AI blurs the line between human and machine feeling.
Emotion AI concerns me when we think about both attribution & context. Which emotions are we looking at? Where does the data come from? Who is interpreting it? How is it weighted? What’s going on in the black box?
Also known as Affective Computing, a field which dates back to the mid-1990s, Emotion AI refers to the branch of Artificial Intelligence that aims to process, understand, and even replicate human emotions. With AI becoming increasingly sophisticated, we have moved beyond the early days of sentiment analysis and facial recognition to more complex, multi-sensory emotion recognition technologies.
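To make concrete what those early days of sentiment analysis looked like, and why context matters so much, here is a minimal, hypothetical sketch of a lexicon-based sentiment scorer. The word list and weights are my own illustrative assumptions, not taken from any real product:

```python
# A minimal, hypothetical lexicon-based sentiment scorer -- the kind of
# context-free shortcut early sentiment analysis relied on.
# All word weights below are illustrative assumptions, not a real system.
POSITIVE = {"happy": 1.0, "great": 0.8, "love": 1.0}
NEGATIVE = {"sad": -1.0, "terrible": -0.9, "hate": -1.0}


def naive_sentiment(text: str) -> float:
    """Sum the weights of known words; everything else scores zero."""
    lexicon = {**POSITIVE, **NEGATIVE}
    return sum(lexicon.get(word, 0.0) for word in text.lower().split())


# The scorer has no notion of context: negation, sarcasm, and cultural
# register all read the same as their literal words.
print(naive_sentiment("i love this"))         # 1.0
print(naive_sentiment("i do not love this"))  # still 1.0 -- negation is invisible
```

Even this toy example shows the attribution & context problem in miniature: someone chose which words count as emotional and how much they weigh, and that choice travels silently into every score.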
Unpacking context, as documented by creative technologist Kyle McDonald: Voice In My Head (2023), with Lauren McCarthy.
The piece explores the implications of an AI that listens and intervenes in your social experience in real time, augmenting your personality. It begins with an onboarding session where you place a bud in your ear, and the voice asks you to reflect on the inner voice you were born with. What if it could be more caring? Less obsessive? Less judgmental? More helpful? What if you could change your inner monologue?
3. The VC Emotion Problem | HumeAI & ChatGPT4.0
Should AI be able to determine or channel [all of] our feelings? This is a problem in VC.
Venture Capital is doubling down on this market, relying on mostly unverifiable ethical commitments from engineering teams at startups and within big tech. The global Emotion Detection and Recognition market is expected to be worth upwards of $27B by 2027, which is probably why, as with a lot of AI investment, the hard questions seem to be pushed until after the fact.
Responsible investment in Emotion AI
If Venture Funding does not become more ethics-driven or at the very least nuanced in its approach to Context & Attribution, neither will the tech sector.
Initiatives led by the many Responsible AI venture coalitions, like the examples below, have become meaningless.
Top VC Firms Sign Voluntary Commitments for Startups to Build AI Responsibly
Underwriting Responsible AI: Venture capital needs a framework for AI investing
Venture Coalition Commitments Encourage Responsible AI Development in Absence of Regulatory Guidance
Recent examples of Emotions & AI
Hume AI
Hume AI recently raised a funding round of $50M Series B and released its new Empathic Voice Interface. This Series B round was led by EQT Ventures and joined by Union Square Ventures, Nat Friedman & Daniel Gross, Metaplanet, Northwell Holdings, Comcast Ventures, and LG Technology Ventures.
Source: Hume AI 2024
In terms of the outcomes Hume AI says it hopes to maximise: What is emotional well-being? Who defines this?
VentureBeat has a long and extensive article on this issue, New startup shows how emotion-detecting AI is intrinsically problematic, which I think is worth reading.
But more importantly, it is worth thinking about why and how venture funding followed after this sort of insight and criticism. What sort of criteria and parameters would a startup need to present in order to launch a product like this at scale? I’d love to see answers.
ChatGPT4.0
With AI venturing into more human attributes, there is little to no understanding or mapping of how human well-being will be accounted for in this design process.
Our concerns above may be giving far too much credit to the sophistication of the Hume product, but ChatGPT4.0 is now in the market for feelings, too. The release of ChatGPT4.0 was marked by updated voice protocols which, problematically, sounded just like Scarlett Johansson, who had never agreed to lend her voice.
The interface is now better at sounding human-friendly, kind, thoughtful, and fun… in ways that are great for subverting platform users. Personally, I am not really against a non-mean-sounding AI.
But what I find more problematic are the ways in which GPT, with its new interface, goes beyond just written English at a planetary scale, undermining cultural differences in tone and language; and, most importantly, the risk of extreme bias in measuring people’s emotions.
4. Inclusive design for Emotion AI
Context matters within Emotion AI. With all the vulnerabilities and challenges that human emotions present, allowing an algorithm to determine feelings without context is not acceptable.
Read this beautiful piece, The Automation of Empathy by Grant Bollmer.
Trevor Paglen, Machine Readable Hito, detail, 2017.
Photo: courtesy of the artist & Metro Pictures, New York
Trevor Paglen’s print Machine Readable Hito (2017) is composed of hundreds of images of artist Hito Steyerl’s face. Inspired by the unlikely muse of Microsoft Azure, each image has Steyerl making a different facial expression, and each is captioned with the output of computational algorithms designed to detect age, gender, or emotion.
I feel a little bit schizophrenic when I speak about AI.
Suhair 1: AI-stuff (LLM) is not that good yet
Suhair 2: Gen AI platforms and companies continuously tear the world apart.
Suhair 3: The truth lies somewhere in the middle of all of this.
We are in a beautiful new era where algorithms can create new universalities for making, designing, and building. AI will augment and foster new forms of human creativity; it will extend the mind and body into new directions in the creative sector and beyond. We can build for more inclusive futures.
Inclusive AI is a process, not an outcome. It requires a constant dialogue with Purpose, Landscape and Intention. It requires engagement with diverse perspectives, e.g., age, gender, race and ability, across all touchpoints within the AI ecosystem, from design, development, and deployment, to stakeholders.
The truth is nuanced and involves iteration, inclusive design and ethical frameworks.
Iteration and experimentation have manifested amazing outcomes in technology; but venture investment has to mature, avoid the constant hype of the next big thing, and play a role in holding AI, as an ecosystem, accountable to all of its stakeholders.
Thanks to Ve Dewey for research, insight & editing on this post.