Let’s hope “The AI Dilemma” never gets turned into a Netflix series

Alexa Steinbrück
11 min read · Jun 28, 2023

The new campaign by the “Center for Humane Technology” is dripping with sensationalist X-risk AGI hype and pseudo-science. That’s not how we should educate the public about the problems associated with AI.

In their one-hour talk, which has been viewed more than 2.6 million times on YouTube, Aza Raskin and Tristan Harris ask the audience to stop for a moment and take a deep breath. Raskin closes his eyes, and you hear the intimate sound of his breath through the microphone. Then, with the serenity of an esoteric mentor, he instructs the people in the audience to practice kindness towards themselves:

“It’s going to feel almost like the world is gaslighting you. People will say at cocktail parties, you’re crazy, look at all this good stuff [AI] does (…) show me the harm, point me at the harm, and it’s very hard to point at the concrete harm. So really take some self-compassion.”

“The AI Dilemma” is the new campaign by the “Center for Humane Technology” (CHT), a non-profit founded by Aza Raskin and Tristan Harris in 2018 to educate the public and advise legislators about the harmful impact that technology can have on individuals, institutions, and society. In their early years, their primary focus was on social media and the so-called “attention economy”, which led to their major involvement in the Netflix documentary “The Social Dilemma” in 2020.

The new campaign focuses on the risks associated with Artificial Intelligence, or, as they call it, “Gollems”: a tongue-in-cheek acronym for the group of AI technologies they summarize as “Generative Large Language Multi-Modal Models”. “Gollems” include ChatGPT and similar systems.

According to Raskin and Harris, there have been two points of contact between humanity and AI so far:

The first contact came with the proliferation of social media and recommendation algorithms (“curation AI”). The results of this first contact have been disastrous for the psyche of the individual and the functioning of society: information overload, addiction, doom scrolling, polarization, fake news, etc.

The second contact is happening right now, in 2023, with “creation AI”, or as they call it, “Gollems”. And this is where the Center for Humane Technology steps in: “We should not make the same mistakes as with social media”. Raskin and Harris want to warn us about the dangers of these “Gollem”-type AIs before they get integrated everywhere and become “entangled” with society. “We can still choose the future we want”.

Raskin and Harris have consulted many experts in the area of AI safety about what the actual problem is with these “Gollem” AIs. “We’re talking about how a race dynamic between a handful of companies of these new Golem class AI’s are being pushed into the world as fast as possible”, they say. “The reason we are in front of you is that the people who work in this space feel that this is not being done in a safe way”.

So they’re all about slowing down and democratic dialogue. That sounds good so far, even if it remains a little unclear where exactly they locate the problems with these “Gollem” systems.

“50% of AI researchers believe…”: wrong numbers, no context

There is a sentence that they show three times throughout their talk; it seems to be the backbone of their alarmist argument:

“50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI”.

This jaw-dropping number appears to come from a 2022 survey conducted by an organization called “AI Impacts” (barely legible in the small print on the slide). Raskin and Harris don’t give the audience any context for these numbers.

It gets even worse: these numbers are plain wrong, and the survey methodology is deeply flawed, as has been explained here and here. How the survey was conducted: the organization approached 4271 AI researchers who had published at the NeurIPS conference in a specific year. Only 738 of them agreed to participate in the survey (already a self-selected, and therefore biased, sample). And only 162 of those actually answered the very specific question that the quote refers to. It is utterly irresponsible to speak of “50% of AI researchers” when it is literally just around 80 people from a biased survey.
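To make the order of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. It simply takes the participation figures cited above at face value (they are the numbers reported about the survey, not independently re-verified here):

```python
# Back-of-the-envelope check of the survey figures as cited above
# (assumed accurate as reported; not independently re-verified).

contacted = 4271         # AI researchers approached by AI Impacts
responded = 738          # agreed to take part in the survey
answered_question = 162  # answered the specific extinction-risk question
behind_headline = round(answered_question * 0.5)  # the "50%" behind the headline claim

print(f"Response rate: {responded / contacted:.1%}")                  # ~17.3%
print(f"Answered the question: {answered_question / contacted:.1%}")  # ~3.8%
print(f"Respondents behind the '50%' claim: {behind_headline} "
      f"({behind_headline / contacted:.1%} of those contacted)")      # ~81 people, ~1.9%
```

Run as written, it shows that the headline figure rests on roughly 80 respondents, which is less than two percent of the researchers the survey even reached out to.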

Raskin and Harris also don’t elaborate on what “uncontrolled” means, or how the extinction of humanity would actually come about.

We’re really NOT talking about the AGI apocalypse. Really, really, really not!

One of the preposterous things about their talk is that they really, really, really want us to know that they are not talking about the “AGI apocalypse” (13:08, 42:20), also known as the “AI takeoff” scenario. They describe this scenario as follows: “AI becomes smarter than humans in a broad spectrum of things, it gains the ability to self-improve, then we ask it to do something — you know the old standard story of ‘be careful what you wish for because it will become true in an unexpected way’ — you wish to be the richest person, so the AI kills everyone else.”

But remember, this is not what they are here to talk to us about!

This is very confusing, because the “AI apocalypse” or “AI takeoff” scenario is exactly what motivated the flawed survey they love so much that they show it three times.

The organization behind the survey goes by the discreet name “AI Impacts” and belongs to the “Machine Intelligence Research Institute” (MIRI), formerly called the “Singularity Institute for AI”, located in Berkeley, California.

“As part of the broader Effective Altruism community, we prioritize inquiry into high impact areas like existential risk”, reads their website, which also lists their sponsors, among them the Future of Life Institute (FLI) and the Future of Humanity Institute (FHI). The FHI was founded by Nick Bostrom, the author of the famous book “Superintelligence” and inventor of the term “existential risk”.

So what are the actual problems with AI?

Here’s a list of things they consider problems due to AI:

Reality collapse, Fake everything, Trust collapse, Collapse of law contracts, Automated fake religions, Exponential blackmail, Automated cyberweapons, Automated exploitation of code, Automated lobbying, Biology automation, Exponential scams, A-Z testing of everything, Synthetic relationships, AlphaPersuade

While some of these points represent actual and even short-term dangers to Western society and will make the internet a more chaotic and hostile place, their selection seems quite arbitrary and, most importantly, it reflects a very privileged and very white view of the effects of AI on humanity.

The fact that they call social media the “first contact of humanity with AI” is plainly ignorant: people, especially marginalized groups, have been negatively affected by AI algorithms for much longer. The automation of inequality is a reality: biased algorithms in areas such as policing, social welfare, finance and recruiting have had, and are still having, huge impacts on real human lives.

They also completely ignore the production conditions behind AI and present it as something that comes out of research and then simply needs to be deployed by companies. This view ignores the environmental impact of training these huge models, as well as issues with data privacy and intellectual property.

“The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value,” as Ted Chiang puts it in The New Yorker.

Last but not least, their argument does not even hold together: they don’t explain how the types of problems they care about (synthetic media, manipulation) would lead to the extinction of humanity.

The audience must take a deep breath now

Background: X-Risk doomerism — a well-funded brand of “AI safety”

There are two camps when it comes to how academics, politicians and tech people think about the risks and harms posed by AI, and there is quite a rift between them.

On the one hand, there is the group commonly referred to as AI ethicists, who are concerned with the risks and impact of AI systems in the here and now. Take, for example, AI researcher Timnit Gebru, a former ethicist at Google, and her paper focusing on the problems with large language models, such as their environmental and financial costs, their biased and discriminatory outcomes, and their potential for deception.

On the other hand, there is a growing group of people whose main concern is that superintelligent AI might terminate human civilization. This group often refers to itself as “AI safety” people. A major stakeholder in this ideology is the “Effective Altruism” community, a predominantly white and male group with increasing influence in politics.

A more extreme version of “Effective Altruism” is “Longtermism”, a dangerous ideology which prioritizes the long-term future of humanity and de-prioritizes short-term problems. Their goal for humanity is to become “technologically enhanced digital posthumans inside computer simulations spread throughout our future lightcone” (Aeon).

Climate change and the increasing gap between the rich and the poor are seen as negligible problems. Nick Bostrom, the author of the book “Superintelligence”, called alleviating global poverty or reducing animal suffering “feel-good projects of suboptimal efficacy”.

Both the Effective Altruism and Longtermist movements are backed by big money: tech billionaires such as Peter Thiel, Elon Musk and Sam Bankman-Fried have pumped money into Effective Altruism organisations. In 2021, the Effective Altruism movement was backed by $46 billion in funding.

Effective Altruism has an increasing impact on AI research: OpenAI was funded by Elon Musk and Peter Thiel. And last year, Sam Bankman-Fried offered $100,000 at the AI conference NeurIPS for papers on the topic of “AI safety”. Timnit Gebru summarizes:

“Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now.” (Wired)

More “AGI apocalypse” rhetoric and suggestions

Intended or not, the “AI Dilemma” campaign is dripping with fear-inducing suggestions of existential risk (X-Risk) and the AGI apocalypse (something they are not talking about, remember?).

  1. They compare the danger of AI with the danger of nuclear weapons multiple times.
  2. The narrative of “emergent capabilities” portrays the AI models as ticking time bombs: “Suddenly”, GPT learned to speak Persian, “and no one knows why”, Raskin breathes in the voice of a scary storyteller.
  3. The phrase “Silently taught themselves research grade chemistry” suggests that these models have the autonomy to teach themselves and reinforces the myth that AI algorithms have agency.
  4. The phrase “They make themselves stronger” reinforces the “AI takeover” myth. They say this makes them more dangerous than “nukes”.
  5. When explaining RLHF (Reinforcement Learning from Human Feedback), they say it’s about “how do you make AIs behave”, and they compare it to clicker training for dogs. This metaphor is already problematic because it compares an ML model to an intelligent animal, but they go further: according to them, when you leave the room, the dog will do what it wants. This suggests that if you leave AI systems alone (“uncontrolled”), they will forget what you told them and go rogue: “As soon as you leave the room they’re gonna not do what you ask them to do” (33:23). This is a wrong and deeply problematic narrative, suggesting that these models have a free will of their own and need to be “tamed”.
  6. Lastly, they chose the cheeky acronym “Gollem” to describe AI technology; in Jewish folklore, a Golem is a human-like being created from inanimate matter. That is yet another reference to AGI.

Conclusion

The “AI Dilemma” campaign amalgamates X-Risk-style alarmist rhetoric with a quite one-sided (social-media-rooted) perspective on AI risks, especially regarding generative models. They mention many here-and-now risks, such as the proliferation of fake content and the speed at which these models are released to the public without sufficient safety assessment. But Raskin and Harris ignore the long history and ongoing reality of AI technology’s negative effects on marginalized groups.

Raskin and Harris are “tech designers turned media-savvy communicators” (Wired). They are masters of storytelling and persuasion. Tristan Harris started his career as a magician and then studied “persuasive technology” at Stanford. It is quite ironic that, while the critique of persuasion and manipulation through social media has been a core theme of their work, they happily apply persuasion mechanisms themselves.

It’s important to note that Raskin and Harris are not AI specialists. They fall victim to the same hype and misleading AI narratives as the general public does, especially when these narratives are backed by big money. As mentioned earlier, the lobby behind X-Risk AGI doomerism is strong.

They say that AI is an abstract topic and that we lack metaphors to help us think about it. This is something I can 100% agree with. They say that they want to provide the audience with metaphors that are grounded in real life to give “a more visceral way of experiencing the exponential curves we are heading into”. If visceral means scaring people and then asking them to do breathing exercises to bring down their blood pressure, that is a bad approach.

“The Social Dilemma” has shown that there is an audience for their flavour of one-sided, populist technology criticism. But this is not education. It is highly questionable whether tech criticism needs to be “entertaining”, and we should ask who benefits from this framing.

Some ideas on how to stay informed on problems and risks associated with AI

If you want to broaden your knowledge about the real harms of AI, here’s a list of things you can do:
