“My name is Alexa and I’m not your personal assistant” — Workshop at Mozfest (London 27/10/19)

Alexa Steinbrück
Nov 8, 2019


For this year’s Mozfest, Mozilla’s annual festival celebrating the open web, I delivered a workshop titled “My name is Alexa and I’m not your personal assistant”. We looked at voice assistants like Alexa, Siri, Cortana and Google Assistant, questioned the status quo, and collectively explored alternative designs.

The participants came from various backgrounds: Designers, developers, artists, educators, writers and marketing people. Twenty-five people were closely packed in the little space on a sunny Sunday morning.

The 90-minute workshop consisted of two parts:

  1. An input part, where I shared my research in the area of personification of AI systems
  2. A hands-on creative part, where participants developed alternative designs in small groups

The fact that Amazon’s voice assistant shares my name is a funny coincidence, but as a woman working in tech who also studied Artificial Intelligence at university (before the hype), I also see it as a strong motivation to examine these systems critically.

Background

Illustration by Tamara Siewert

Today’s voice assistants are given a pseudo-human design by their makers: They speak like humans, they (pretend to) have opinions, preferences and even hobbies.

The majority of today’s voice assistants are gendered female, be it by their name, their voice or more subtle characteristics of their speech. Voice assistants are used in millions of households, and some people say voice technology will replace text and screen-based media very soon. Right now is the time when standards for these technologies are being established. We have to make sure that these standards don’t perpetuate negative stereotypes about gender and “female roles”.

The workshop is a call to action: Let’s not accept the status quo of design decisions made by a very small group of people. Let’s bring diverse people together to use this new medium in a way that feels empowering, transparent, or even poetic.

Let’s re-imagine and re-design these tools: Can we imagine a non-female persona as an assistant? Can we approach AI in a non-anthropomorphic way? I believe there is huge potential for creative exploration beyond a human-like persona.

As a software developer with some background in AI, one thing that is dear to my heart is helping increase AI literacy in the world. I think the personification/anthropomorphising of voice assistants counteracts this AI literacy, because it suggests that Artificial General Intelligence (AGI) is already a reality, whereas today’s systems are very sophisticated examples of narrow (weak) AI. One of the goals of this workshop is to demystify the inner workings of voice assistants and show where in the pipeline AI techniques are applied.

Workshop Design

Activity 1 — Warm-up

We started with a (personal) reflection on our subjective relationship with voice assistants and chatbots: Think of your last interaction. How did it make you feel? How human-like do you think the system is, and why? Participants assigned a number from 1 to 10 and pinned it on the wall, and a collective visualization emerged.

I am always surprised by how mixed people’s emotions are regarding voice assistants. Among the positive emotions, “intriguing” was a frequent response. But contrary to the companies’ narrative of “simplifying people’s lives”, a majority of people expressed discontent with voice assistants (“dehumanising”, “frustrating”).

Input part — “Personified machines”

I presented my research findings: We looked at how Alexa, Siri, Cortana and Google Assistant express their individual personality and pseudo-humanity. We looked behind the scenes and discovered groups of dedicated people (“personality teams”) that shape and script the voice assistants’ personalities.

Then we examined the anatomy of a modern voice assistant: How are sound waves transformed into meaning? Which part of the processing happens directly in the device (smart speaker) and what is sent to the server? Where is the Artificial Intelligence (AI) in them, and what are the different AI techniques in detail?
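To make this anatomy a bit more concrete, here is a minimal sketch of such a pipeline in Python. It only illustrates the generic stages we discussed (wake-word detection on the device, then speech recognition, natural language understanding, dialog management and speech synthesis, typically on the server); all function names and return values are hypothetical stand-ins, not any vendor’s actual API.

```python
# Hypothetical sketch of a generic voice-assistant pipeline.
# The function bodies are stand-ins for real models; names and values are
# invented for illustration and do not correspond to any vendor's API.

def detect_wake_word(audio_frame: bytes) -> bool:
    """On-device: a small, always-on keyword-spotting model listens for the wake word."""
    return audio_frame.startswith(b"WAKE")      # stand-in for a keyword spotter

def speech_to_text(audio: bytes) -> str:
    """Server-side: automatic speech recognition (ASR) turns sound waves into text."""
    return "what is the weather in london"      # stand-in for an ASR model

def understand(text: str) -> dict:
    """Server-side: natural language understanding (NLU) extracts an intent and slots."""
    return {"intent": "get_weather", "slots": {"city": "london"}}

def respond(parsed: dict) -> str:
    """Dialog management: choose an action and produce a textual reply."""
    return f"The weather in {parsed['slots']['city']} is sunny."

def text_to_speech(reply: str) -> bytes:
    """Text-to-speech (TTS): the synthesized voice is where the 'persona' becomes audible."""
    return reply.encode("utf-8")                # stand-in for a TTS model

# Only audio captured after the wake word is (typically) sent to the server.
incoming = b"WAKE what is the weather in london"
if detect_wake_word(incoming):
    reply_audio = text_to_speech(respond(understand(speech_to_text(incoming))))
```

Each stage is where a different AI technique usually comes in: keyword spotting and ASR are speech models, NLU is typically intent classification plus slot filling, and TTS is speech synthesis. The wake-word detection generally runs on the device itself, while the heavier processing happens on the server.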

Activity 2 — Speculative design

Designing and developing voice assistants is a complex endeavor. Just to name a few of the aspects involved:

  • Who are you designing for? (the user)
  • What task should it fulfill? (the purpose)
  • How does it speak? (conversational design)
  • How does it depict itself? (representation)
  • What happens to the user’s data? (privacy)
  • Which platform/infrastructure is used? (e.g. Amazon)

This workshop is focused on the representation of voice assistants. Let’s think of this step as the design of a persona (*): Personas are fictional characters that are used to represent the (future) user of a product. But instead of using a persona to represent the user, we will represent the voice assistant itself. This is a step prior to prototyping. The goal here is not to develop something functional (yet), but to explore the realm of alternative representations of voice assistants.

(*) Comment: The term “persona” might be a bit ironic in this context, where we criticize the personification. In the absence of a better word let’s accept non-persons as personas too.

Participants formed small groups. To facilitate the creative exploration I provided some starting points:

I also provided a list of concrete prompts:

  • Does it express a gender?
  • Does it have preferences or hobbies?
  • Is it please-all or polarizing?

Here we leaned on the excellent pioneering work of Josie Young and Charlotte Webb / The Feminist Internet: the Feminist Design Tool.

Participants soon immersed themselves in intense discussion and brainstorming. In addition to pens and paper to draw on, I provided each group with “smart speaker” props made out of cardboard, which served as a canvas on which the participants could draw their imaginary representation.

Results

At the end of the session, each of the groups presented their ideas. I was positively surprised by the imaginativeness of the outcomes. Many groups decided to think in the direction of a non-anthropomorphised alternative entity that doesn’t bother its user with a shallow imitation of humanness. Instead, it reveals its otherness through features borrowed from the realm of nature (e.g. “the mountain”).

It was also interesting to discover parallels between the work of different groups, e.g. there was the idea of a shifting personality that can adapt to the user’s preferences.

It just takes an hour and a group of diverse people to challenge the status quo of personified, gendered voice assistants with powerful and poetic ideas. Each of the results would be a great candidate for further development and prototyping. I am eager to try out each one of them!


Written by Alexa Steinbrück

A mix of Web Development, Machine Learning and Critical AI discourse. I love dogs and dictionaries.