Day 2 of Reshape Forum at Hochschule für Gestaltung Schwäbisch Gmünd (photo: eignerframes)

Opening speech at “Reshape forum for Artificial Intelligence in Art and Design” (May 2023)

Alexa Steinbrück
10 min read · Jun 5, 2023


In spring 2023 I had the opportunity to curate a conference at the Hochschule für Gestaltung Schwäbisch Gmünd as part of my researcher position at AI+D Lab and KITeGG.

From May 10–12, 2023, the third KITeGG summer school took place there. Under the title “reshape — forum for AI in Art and Design” we invited numerous international experts to give us an overview of the many ways in which AI is relevant to designers.

The following text is a speech I gave at the opening of the conference on May 10 in the auditorium of the HfG!

reshape (1)

reshape is the name of a function in the Python programming language, or more precisely in the NumPy library, a library that is used in virtually all AI programs.

What you can do with NumPy is number crunching: it contains functions for working with vectors (i.e. lists of numbers) and matrices. In machine learning, the whole world is mapped into numbers: words as well as images, sounds and movements. These vectors are what goes into a neural network. The reshape function can change the shape of this data, for example turning a 1-dimensional vector into a 2-dimensional matrix, or a 3-dimensional array into a 1-dimensional vector.

The reshape function gives an array a new shape “without changing its data”, as NumPy’s documentation puts it.

Source: https://www.w3resource.com/numpy/manipulation/reshape.php
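
To make this concrete, here is a minimal sketch of reshape at work:

```python
import numpy as np

# A 1-dimensional vector with six numbers
v = np.array([1, 2, 3, 4, 5, 6])

# Reshape it into a 2-dimensional matrix with 2 rows and 3 columns
m = v.reshape(2, 3)
print(m)
# [[1 2 3]
#  [4 5 6]]

# Flatten it back into a 1-dimensional vector: the numbers never change,
# only the shape around them does
print(m.reshape(-1))
# [1 2 3 4 5 6]
```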

reshape (2)

The conference you are attending right now is also called reshape. Our slogan, “reshape the landscape of art and design”, fits easily into the rhetoric that has been surrounding AI more and more lately: “disruption”, “revolutionize”, “blowing up”, “turn upside down”, “massive news”. We are experiencing a new AI hype right now.

It’s a bit surprising and feels like déjà vu, because the last wave of AI hype, the last AI summer, was not that long ago: it started around 2014 with the breakthrough of the Deep Learning technique.

At the center of this new AI summer is “Generative AI” — systems like Stable Diffusion or ChatGPT that are capable of generating realistic artefacts like images or text.

Karl Sims: Evolved Virtual Creatures

Generative AI is nothing new

“Generative AI” has also been around for a long time, and artists and designers have always worked with it (albeit with changing technical foundations):

  • In the 1980s: Harold Cohen and his program “AARON”. This was the first time AI technologies were introduced into the world of computer art. The program could understand and generate colors and shapes. In this picture we see Cohen coloring the generated shapes by hand.
  • In the 1990s: Karl Sims’ “Evolved Virtual Creatures”. Sims used evolutionary/genetic algorithms.
  • From 2015: Deep Learning early adopters: Addie Wagenknecht, Alex Champandard, Alex Mordvintsev, Alexander Reben, Allison Parrish, Anna Ridler, Gene Kogan, Georgia Ward Dyer, Golan Levin, Hannah Davis, Helena Sarin, Jake Elwes, Jenna Sutela, Jennifer Walshe, Joel Simon, JT Nimoy, Kyle McDonald, Lauren McCarthy, Luba Elliott, Mario Klingemann, Mike Tyka, Mimi Onuoha, Parag Mital, Pindar Van Arman, Refik Anadol, Robbie Barrat, Ross Goodwin, Sam Lavigne, Samim Winiger, Scott Eaton, Sofia Crespo, Sougwen Chung, Stephanie Dinkins, Tega Brain, Terence Broad and Tom White.

The artist and researcher Memo Akten was one of those “early adopters”. His video work “Learning to see” (2017) is a good example: here you can see how Akten feeds his video input into a neural network in real time, which then reinterprets the image data, a kind of semantic style transfer.

Memo Akten: Learning to see (2017)

The work of artists has also always been an experiment with the shortcomings, the gaps and the glitches of these technologies, which often came directly from academic AI research and were “misappropriated” by the artists.

These artists often needed a deep technical understanding of the systems to be able to bend and twist them in this way. Even when algorithms were available as open source, designers had to dig deeply into them.

Transform a low-fidelity website sketch into functional HTML (possible with GPT-4)

A revolution of accessibility

The current revolution is, above all, a revolution of accessibility and availability: these technologies have opened up to a much broader group of people.

They have been given a new interface that is usable by everyone, and that interface is called natural language.

This AI hype now even feels almost justified! Concrete consequences are already noticeable in various areas, and we are seeing an enormous push of innovation at unbelievable speed.

A few innovations from the last two months (March–April 2023):

  • Image quality: a comparison of images generated with VQGAN-CLIP one year ago and with the state of the art in April 2023, called “Stable Diffusion XL”.
  • NVIDIA video generation: https://www.youtube.com/watch?v=3A3OuTdsPEk. Here a resolution of 2000×1000 pixels is achievable.
  • GPT-4 has been released. This language model is multimodal, i.e. it can “understand” images (for example: a sketch of a website → working website code, or even deriving complete recipes from photos of meals).
  • The Llama language model from Meta (comparable to GPT-3) runs on a laptop CPU, a smartphone and even on a Raspberry Pi (Link).

At the last KITeGG summer school (November 2022, HfG Offenbach), Stable Diffusion (released in August 2022) was the technology and the revolution everyone was talking about: finally, everyone could generate arbitrary images simply from a text description.

Three months later, on November 30, 2022, ChatGPT came out and has since turned everything upside down. With ChatGPT, we have experienced the “Stable Diffusion Moment”, only for Large Language Models (LLMs).

Ok wow: GPT-3 competitor Llama from Meta even runs on a Raspberry Pi

Large Language Models (LLMs)

Large Language Models are neural networks with several billion parameters that have been trained on large amounts of text. LLMs find patterns in these texts and learn the statistical probability with which one word follows another.
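
To give a rough sense of what “learning which word follows which” means, here is a toy sketch: a simple bigram counter over a made-up mini corpus. This is nowhere near a real LLM, but it illustrates the basic idea of next-word probabilities.

```python
from collections import Counter, defaultdict

# Toy corpus: in a real LLM this would be a huge slice of the written web
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def next_word_probabilities(word):
    """Estimate the probability of each possible next word."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A real LLM replaces this simple counting with a neural network that conditions on whole word sequences rather than a single preceding word, which is where the billions of parameters come in.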

This technique sounds as mundane as the autocomplete function on our cell phones, but at this scale it reaches an astonishing level of complexity:

LLMs can summarize texts, translate, generate essays, write scientific papers, write working code, and generate ideas. Each of you can probably tell your very own story of how you used ChatGPT and were surprised!

Some even go so far as to say that these systems are capable of “reasoning” — one of the holy grails of AI research.

These surprising capabilities have led a number of people to claim that we have achieved, or are at least close to achieving, “AGI” (Artificial General Intelligence). From this point on, the discourse is hard to distinguish from science fiction: people talk about AI as if it were a being with its own will and its own agenda.

You may have heard about the open letter “Pause Giant AI Experiments”, signed by prominent people. It calls on the large AI labs to pause their research so that society and regulators can keep up.

This letter has drawn a lot of criticism: Emily Bender, a well-known AI researcher in the field of Natural Language Processing, wrote on Twitter that it was dripping with AI hype and myths. It also stems from an ideology called “Longtermism”, whose adherents have a very specific agenda for the “future of humanity”.

If we sense a real counterpart, a thinking being, in these systems of automated pattern recognition, then perhaps we are like animals looking into a mirror. The journalist James Vincent calls this the “AI mirror test”.

GIF: Xavier Hubert-Brierre via Tenor

Do we pass the “AI mirror test”?

The mirror test is used in behavioral psychology to find out whether an animal is self-aware. There are a few variations of this test, but the core question is: does the animal recognize itself in the mirror, or does it think it is looking at another creature?

We as humanity are collectively facing a mirror test right now, and the mirror is called Large Language Models.

“The reflection is humanity’s wealth of language and writing, which has been strained into these models and is now reflected back to us. We’re convinced these tools might be the superintelligent machines from our stories because, in part, they’re trained on those same tales. Knowing this, we should be able to recognize ourselves in our new machine mirrors, but instead, it seems like more than a few people are convinced they’ve spotted another form of life.”

Source: “Introducing the AI mirror test, which very smart people keep failing”, James Vincent, The Verge 02/2023

If you compare this kind of discourse about AI with the (technical) reshape concept from the beginning, there are worlds between them. And between these two extreme poles now stand designers and artists.

I think it’s a pretty tumultuous time for creatives right now.

How does this all feel for designers?

On the one hand, we would like to see AI as a tool: image generators are handy for visualizing ideas or creating renderings, and language models can help interaction designers write code so they can build prototypes faster. New tools, integrations and improvements are popping up every day; the speed of innovation is immense.

On the other hand, we are repeatedly confronted in the media with the narrative of a general intelligence, a precursor to superintelligence, that solves complex design tasks with more effectiveness and creativity than one could ever achieve oneself. Being “replaced by AI” has recently become a concern for creative professionals such as designers and software developers.

It really is a paradox: on the one hand, technology promises “superpowers for creatives”; on the other, those same creatives fear for their relevance and future.

What does the Reshape Symposium want?

With this conference we want to take a more nuanced look at the various points of contact between design and AI technology, away from the newly reinforced AI myths and black-and-white narratives of replacement by AI.

We would like to look closely at specific AI systems and their technical properties instead of speaking in general terms about “an AI”.

We ask ourselves: Where does the responsibility of designers lie, what is their role, and how can they influence the course of AI development?

The conference looks at these issues along 3 axes:

  • Designing for AI (Designing AI systems)
  • AI for Design (Creative AI)
  • Teaching AI (to creatives)

Designing for AI (Designing AI systems)

This involves the design of AI-based interfaces and products, for example systems that work via gesture recognition or voice input, as well as the design of generative interfaces and integrations themselves.

What are the challenges and opportunities here? What is the broader social context of these technologies? What do designers need to know in order to design these systems responsibly?

We have prepared a series of talks to address these questions:

  • Nadia Piet — First thing in the morning, Nadia Piet talks about practices for designing the user experience of AI-based systems and interfaces.
  • Catherine Breslin — Next up is a talk by Catherine Breslin on conversational design, how machines and humans converse, and how LLMs will change the future of voice assistants.
  • Ploipailin Flynn — Then Ploipailin Flynn talks about the dark side of pattern recognition, how social patterns like racism are notoriously reproduced by AI-based systems, and design strategies to deal with this.
  • Emily Saltz — Emily Saltz talks about synthetic media, AI-generated artefacts that will become more and more a part of our everyday lives, and what that means for product design.

AI for Design (Creative AI)

How can AI technologies be integrated as tools in the creative toolbox of artists and designers? How can AI serve idea generation and foster human creativity instead of narrowing and flattening it? How do these tools “fit in the palm of your hand”?

Here we look forward to a talk on Thursday afternoon by design studio oio, who work out of a chic tiny house in the middle of London. In their workflows, they focus on “post-human collaboration” and develop products and tools for a “less boring future”.

Also, Tom White, one of those early adopters of AI technologies for creative use, will tell us about his experiments with machine vision and his latest projects.

Teaching AI (to creatives)

That is the main question of the KITeGG project: How can the topic of AI be communicated to creative people, especially to design and art students? What knowledge and skills are important?

At what level of complexity should we operate? From technical basics (remember the Python function reshape) to high-level concepts and ethical issues like bias, privacy or IP rights: what depth is realistic to achieve?

How do we develop an intuition for AI use cases? How do we convey the ability to consciously assess benefits and risks, and to decide when AI-based technologies should not be used?

To tackle these topics, we have prepared 2 panels: “AI Industry” and “KITeGG — Learnings from 1 year of AI education at design schools”.

And let’s not forget the workshops from earlier this week, whose results will be presented on Friday.

I am looking forward to the upcoming 2.5 days of the conference with you!
