
The creative way forward for generative AI


Few technologies have shown as much potential to shape our future as artificial intelligence. Specialists in fields ranging from medicine to microfinance to the military are evaluating AI tools, exploring how these might transform their work and worlds. For creative professionals, AI poses a novel set of challenges and opportunities — particularly generative AI, which uses algorithms to transform vast amounts of data into new content.

The future of generative AI and its impact on art and design was the topic of a sold-out panel discussion on Oct. 26 at the MIT Bartos Theater. It was part of the annual meeting of the Council for the Arts at MIT (CAMIT), a group of alumni and other supporters of the arts at MIT, and was co-presented by the MIT Center for Art, Science, and Technology (CAST), a cross-school initiative for artist residencies and cross-disciplinary projects.

Introduced by Andrea Volpe, director of CAMIT, and moderated by Onur Yüce Gün SM ’06, PhD ’16, the panel featured multimedia artist and social science researcher Ziv Epstein SM ’19, PhD ’23; MIT professor of architecture and director of the SMArchS and SMArchS AD programs Ana Miljački; and artist and roboticist Alex Reben MAS ’10.


Panel Discussion: How Is Generative AI Transforming Art and Design?
Thumbnail image created using Google DeepMind AI image generator.
Video: Arts at MIT

The discussion centered on three themes: emergence, embodiment, and expectations.

Emergence

Moderator Onur Yüce Gün: In much of your work, what emerges is often a question — an ambiguity — and that ambiguity is inherent in the creative process in art and design. Does generative AI help you reach those ambiguities?

Ana Miljački: In the summer of 2022, the Memorial Cemetery in Mostar [in Bosnia and Herzegovina] was destroyed. It was a post-World War II Yugoslav memorial, and we wanted to find a way to uphold the values the memorial had stood for. We compiled video material from six different monuments and, with AI, created a nonlinear documentary, a triptych playing on three video screens, accompanied by a soundscape. With this project we fabricated a synthetic memory, a way to seed those memories and values into the minds of people who never lived them. That is the sort of ambiguity that can be problematic in science, and one that is fascinating for artists, designers, and architects. It is also a bit scary.

Ziv Epstein: There’s some debate about whether generative AI is a tool or an agent. But even if we call it a tool, we need to keep in mind that tools are not neutral. Think about photography. When photography emerged, a lot of painters worried that it meant the end of art. But it turned out that photography freed painters up to do other things. Generative AI is, of course, a different sort of tool because it draws on an enormous quantity of other people’s work. There is already artistic and creative agency embedded in these systems. There are already ambiguities in how these existing works will be represented, and which cycles and ambiguities we will perpetuate.

Alex Reben: I’m often asked whether these systems are actually creative, in the way that we’re creative. In my own experience, I’ve often been surprised by the outputs I create using AI. I see that I can steer things in a direction that parallels what I might have done on my own but is different enough — amplified or altered or modified. So there are ambiguities. But we need to keep in mind that the term AI is itself ambiguous. It’s actually many different things.

Embodiment

Moderator: Most of us use computers every day, but we experience the world through our senses, through our bodies. Art and design create tangible experiences. We hear them, see them, touch them. Have we attained the same sensory interaction with AI systems?

Miljački: As long as we’re working in images, we’re working in two dimensions. But for me, at least in the project we did around the Mostar memorial, we were able to produce affect on a variety of levels, levels that together produce something larger than a two-dimensional image moving in time. Through images and a soundscape we created a spatial experience in time, a rich sensory experience that goes beyond the two dimensions of the screen.

Reben: I guess embodiment for me means being able to interface and interact with the world and modify it. In one of my projects, we used AI to generate a “Dali-like” image, and then turned it into a three-dimensional object, first with 3D printing, and then by casting it in bronze at a foundry. There was even a patina artist to finish the surface. I cite this example to show just how many humans were involved in the creation of this artwork at the end of the day. There were human fingerprints at every step.

Epstein: The question is, how do we embed meaningful human control into these systems, so that they can be more like, for example, a violin. A violin player has all sorts of causal inputs — physical gestures they can use to transform their artistic intention into outputs, into notes and sounds. Right now we’re far from that with generative AI. Our interaction is largely typing a bit of text and getting something back. We’re basically yelling at a black box.

Expectations

Moderator: These new technologies are spreading so rapidly, almost like an explosion. And there are enormous expectations around what they can do. Instead of stepping on the gas here, I’d like to test the brakes and ask what these technologies are not going to do. Are there promises they won’t be able to fulfill?

Miljački: I hope that we don’t go to “Westworld.” I understand we do need AI to solve complex computational problems. But I hope it won’t be used to replace thinking. Because as a tool, AI is actually nostalgic. It can only work with what already exists and then produce probable outcomes. And that means it reproduces all the biases and gaps in the archive it has been fed. In architecture, for example, that archive is made up of works by white male European architects. We have to figure out how not to perpetuate that sort of bias, but to question it.

Epstein: In a way, using AI now is like putting on a jetpack and a blindfold. You’re going really fast, but you don’t really know where you’re going. Now that this technology appears capable of doing human-like things, I think it’s a great opportunity for us to think about what it means to be human. My hope is that generative AI can be a kind of ontological wrecking ball, that it can shake things up in a really interesting way.

Reben: I know from history that it’s pretty hard to predict the future of technology. So trying to predict the negative — what won’t happen — with this new technology is also nearly impossible. If you look back at what we thought we would have by now, at the predictions that were made, it’s quite different from what we actually have. I don’t think anyone today can say for certain what AI won’t be able to do someday. Just as we can’t say what science, or humans, will be able to do. The best we can do, for now, is try to steer these technologies toward the future in a way that will be helpful.
