AI at the art school — critical and creative AI research at the XLab

Alexa Steinbrück
6 min read · Mar 9, 2021


The University of Art and Design BURG Giebichenstein in Halle (Germany) is known for its great workshops. There are no fewer than 20 of them, including a textile manufactory, workshops for wood, metal and plastics, a screen printing shop and a digital workshop with a considerable number of different 3D printers.

Since summer 2020, this range has been expanded to include the XLab, the lab for artificial intelligence and robotics. In the XLab, we, Alexa Steinbrück (AI) and Simon Maris (robotics), explore the creative applications and critical implications of these technologies, and we serve as a focal point for students who want to learn about them and integrate them into their design practice.

The toolbox of artificial intelligence (more specifically, machine learning) offers a wealth of techniques and processes: from image and text generation, to gesture and object recognition, to speech synthesis. Our lab is open to students from all fields of study, and we see tremendous potential for AI techniques in many creative areas:

In the field of digital animation, the task of motion tracking, which usually requires renting a dedicated studio, can instead be handled by neural-network-based pose estimation (e.g. PoseNet). All that is needed is a simple video recording as input and the computing power of an ordinary laptop.
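
To make this concrete: the minimal Python sketch below uses MediaPipe’s Pose solution as a stand-in for PoseNet-style pose estimation (the principle is the same: a video goes in, body keypoints come out). The video filename is made up.

# Minimal sketch: extract body keypoints from an ordinary video file.
# MediaPipe Pose is used here as a stand-in for PoseNet-style models;
# a laptop CPU is enough. (pip install mediapipe opencv-python)
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(video_path):
    """Yield a list of (x, y) keypoints (normalized to 0..1) per frame."""
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB, OpenCV delivers BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                yield [(lm.x, lm.y) for lm in results.pose_landmarks.landmark]
    cap.release()

for frame_keypoints in extract_keypoints("dance_take_01.mp4"):
    print(frame_keypoints[0])  # e.g. the first landmark (the nose), frame by frame

The per-frame keypoints can then be mapped onto a character rig in whatever animation software you prefer.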

Gesture recognition with machine learning also offers interesting approaches in interaction design for developing new forms of sensory interaction. Last year, the university hosted a workshop in which our students explored this use case through hands-on prototyping with Arduino and Google Colab.

In the field of (artistic) image production, the generative potential of neural networks such as GANs offers promising possibilities for the creation of synthetic imagery. Much has already been written about the almost boundless capacity of these models to generate images so realistic that they could have come straight from the training data. The website thispersondoesnotexist.com shows the results of a StyleGAN trained on a dataset of photos of real people.
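
For those curious about the mechanics: once such a network is trained, “generating” an image simply means sampling a random latent vector and passing it through the generator. The PyTorch sketch below uses a tiny toy generator as a stand-in, not StyleGAN itself; untrained, it only produces coloured noise, but the sampling step is identical.

# Conceptual sketch (PyTorch): a GAN "generates" by mapping a random
# latent vector z through its trained generator network. This toy
# generator is a stand-in, not StyleGAN.
import torch
import torch.nn as nn

latent_dim = 128

toy_generator = nn.Sequential(
    nn.Linear(latent_dim, 8 * 8 * 64),
    nn.ReLU(),
    nn.Unflatten(1, (64, 8, 8)),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32 RGB
    nn.Tanh(),
)

z = torch.randn(1, latent_dim)      # "this image does not exist" starts here
with torch.no_grad():
    fake_image = toy_generator(z)   # shape: (1, 3, 32, 32), values in [-1, 1]
print(fake_image.shape)

On thispersondoesnotexist.com, every new face corresponds to a different latent vector fed through a StyleGAN generator trained on face photos.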

The “XYZ does not exist” paradigm has become so popular (some even call it “GAN-ism”) that there are countless offshoots (thischemicaldoesnotexist.com, thisresumedoesnotexist.com, thisxdoesnotexist.com). Seemingly realistic at first glance, they develop an Uncanny-Valley-like effect on closer inspection, when very strange details (those ears!) begin to stand out.

Less well known are so-called conditional GANs (cGANs), neural networks that take a concrete input image and generate an output image from it. This allows more (artistic) control over the result; the process is less like a filter and more like a genuine semantic transformation. Popular applications of this technique are the transformation of a running horse into a zebra or Memo Akten’s video experiments from his series “Learning to See”.
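
The difference to the unconditional case shows up directly in the architecture: the generator receives an image instead of a noise vector, so the output is a transformation of that input. Below is a toy PyTorch sketch of the idea, not the actual pix2pix or CycleGAN models.

# Conceptual sketch (PyTorch): a conditional GAN generator takes a concrete
# input image rather than random noise, so the output is a transformation
# of that input. Toy encoder-decoder, heavily simplified.
import torch
import torch.nn as nn

class ToyConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(   # 3 x 256 x 256 -> 64 x 128 x 128
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(   # back to 3 x 256 x 256
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, input_image):
        return self.decoder(self.encoder(input_image))

generator = ToyConditionalGenerator()
horse = torch.rand(1, 3, 256, 256)     # stand-in for a real photo
with torch.no_grad():
    zebra_ish = generator(horse)       # same shape as the input
print(zebra_ish.shape)                 # torch.Size([1, 3, 256, 256])

In pix2pix, for instance, the generator is a U-Net trained against a discriminator that judges input/output pairs; the toy version above only keeps the image-in, image-out structure.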

Similar techniques are used to create so-called deep fakes, a branch of “synthetic media” that is of interest to communication scientists, communication designers and artists alike. The fact that deep fakes can now be created with relatively little computer knowledge is a new, fascinating and perhaps also disturbing development.

There is also much to discover in the areas of text and language. Language models such as GPT-2 can be used to generate text and can be adapted to one’s own datasets through transfer learning (fine-tuning).
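
As a small taste, the sketch below generates text with the pre-trained GPT-2 via the Hugging Face transformers library. The prompt is invented; adapting the model to your own texts would be an additional fine-tuning step that is not shown here.

# Minimal sketch: text generation with pre-trained GPT-2 via the
# Hugging Face `transformers` library (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

results = generator(
    "The sculpture studio at night is",  # hypothetical prompt
    max_length=60,
    num_return_sequences=3,
    do_sample=True,       # sampling gives more varied, "creative" output
    temperature=0.9,
)
for r in results:
    print(r["generated_text"])
    print("---")

Fine-tuning the same model on a small corpus of your own texts is what makes the output feel specific to a project rather than generically internet-flavoured.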

With such an almost infinite range of possibilities, where to begin? With making! “Through the hand to the head” is the saying I’ve heard many times since I started working at BURG, and it seems to be a true maxim.

To facilitate this, we offer so-called Getting Started workshops, in which we introduce students to tools and programs they can use right away without much prior knowledge, sometimes even without programming skills.

Our favourite tool here is RunwayML, a groundbreaking desktop application developed by three students at the Tisch School of the Arts in New York to democratize AI and make it accessible to creatives. RunwayML is like an app store for creative AI: each app is a pre-trained machine learning model that performs a specific task, such as recognizing objects in an image, generating images from text, or simply removing the background from a video. If you have your own datasets, you can even train models yourself using the computing power provided by RunwayML’s servers.

Whereas in the past you had to rent your own servers, download code from GitHub and interact with the command line, RunwayML offers a comfortable workflow that hides all this complexity behind a simple GUI. This saves a lot of frustration, especially for novice programmers, and leaves much more room to focus on the actual artistic work.

This democratization is also a goal for us at the XLab. We want to let students work with AI and encourage them to simply train models themselves. That is how you come to understand that AI is not witchcraft. There is no such thing as “an AI” or multiple AIs: AI is a field of research, and everything this field has produced so far is narrow AI. None of the existing systems comes even close to the human-level intelligence or general knowledge we are told about in science fiction (also called “strong AI” or “AGI”). We should not get hung up on thinking of AI as an autonomous system and talking about “an AI that is creative”; instead, we should ask how these narrow but astute AI systems can benefit us artistically, what their limits and flaws are, and how we can hack and extend them!

As with anything hyped, one should always ask oneself the question of meaning: Why do I want to work with it? Do I even need to work with it? Rebecca Fiebrink, a pioneer in the field of creative AI and the creator of the Wekinator, sums it up:

“When and why is it creatively useful to find patterns, make predictions and generate new data?”

In addition to the question of meaning, various ethical questions also arise. Last year, creative AI educator Lia Coleman published a highly recommended handbook in PDF format, a kind of zine on Responsible AI Art. She challenges artists to ask themselves a few questions:

… Where does the training data come from, and how diverse is it? Who is represented and who is not? Where does the machine learning model come from, and who developed it? How much of an impact does training the model have on the environment? Can the training process be shortened or made more effective…?

A page from the Responsible AI Art field guide (Coleman, Saltz, Leibowicz)

These questions may seem like a marathon at first, but they are worth thinking through!

I’m looking forward to the coming months with our students. I look forward to wild experiments, hacks, and weird misappropriations. I look forward to proud students who have programmed something themselves for the first time. I look forward to exciting collaborations with long-established workshops such as Jacquard Weaving. I look forward to critical conversations about socio-political issues, power structures, bias, gender, and what future we actually want for our lives with or without technology!

Let’s connect!

If you’d like to join our journey and follow our discoveries, feel free to subscribe to our newsletter, where we compile a monthly list of exciting developments in AI research and artistic applications.

You can also follow the XLab on Twitter!

Website of the XLab (in German): https://www.burg-halle.de/hochschule/einrichtungen/burglabs/xlab/

The BurgLabs are platforms for cross-disciplinary research at the University of Art and Design Burg Giebichenstein. In three laboratories, they explore material and technological questions through research and design in order to shape possible futures. With the SustainLab, the BioLab and the XLab, they focus on sustainability, biotechnology, artificial intelligence and robotics. At the centre is the relationship between the natural and the man-made environment. The project is funded by the European Union and the state of Saxony-Anhalt.
