MIT News
Q: Why did you write this paper?
A: Generative AI tools are doing things that even a few years ago we never thought would be possible. This raises fundamental questions about the creative process and the human's role in creative production. Are we going to get automated out of jobs? How are we going to preserve the human aspect of creativity with all of these new technologies?
The complexity of black-box AI systems can make it hard for researchers and the broader public to understand what's happening under the hood, and what the impacts of these tools on society might be. Many discussions about AI anthropomorphize the technology, implicitly suggesting these systems exhibit human-like intent, agency, or self-awareness. Even the term "artificial intelligence" reinforces these beliefs: ChatGPT uses first-person pronouns, and we say AIs "hallucinate." These agentic roles we give AIs can undermine the credit owed to the creators whose labor underlies the systems' outputs, and can deflect responsibility from the developers and decision makers when the systems cause harm.
We're trying to build coalitions across academia and beyond to help think about the interdisciplinary connections and research areas necessary to grapple with the immediate dangers to humans posed by the deployment of these tools, such as disinformation, job displacement, and changes to legal structures and culture.
Q: What do you see as the gaps in research around generative AI and art today?
A: The way we talk about AI is broken in many ways. We need to understand how perceptions of the generative process affect attitudes toward outputs and authors, and also design the interfaces and systems in a way that is truly transparent about the generative process and avoids some of these misleading interpretations. How do we talk about AI, and how do these narratives cut along lines of power? As we outline in the article, there are themes around AI's impact that are important to consider: aesthetics and culture; legal aspects of ownership and credit; labor; and impacts to the media ecosystem. For each of these we highlight the big open questions.
With aesthetics and culture, we're thinking about how past art technologies can inform how we think about AI. For example, when photography was invented, some painters said it was "the end of art." But instead it ended up being its own medium and eventually liberated painting from realism, giving rise to Impressionism and the modern art movement. We're saying generative AI is a medium with its own affordances. The nature of art will evolve with that. How will artists and creators express their intent and style through this new medium?
Issues around ownership and credit are tricky because we need copyright law that benefits creators, users, and society at large. Today's copyright laws may not adequately apportion rights to artists when these systems are trained on their styles. When it comes to training data, what does it mean to copy? That's a legal question, but also a technical one. We're trying to understand if these systems are copying, and when.
For labor economics and creative work, the idea is that these generative AI systems can accelerate the creative process in many ways, but they can also remove the ideation process that starts with a blank slate. Sometimes, there's real good that comes from starting with a blank page. We don't know how AI is going to influence creativity, and we need a better understanding of how it will affect the different stages of the creative process. We need to think carefully about how we use these tools to complement people's work instead of replacing it.
In terms of generative AI's effect on the media ecosystem, with the ability to produce synthetic media at scale, the risk of AI-generated misinformation must be considered. We need to safeguard the media ecosystem against the possibility of massive fraud on one hand, and people losing trust in real media on the other.
Q: How do you hope this paper is received — and by whom?
A: The conversation about AI has been very fragmented and frustrating. Because the technologies are moving so fast, it's been hard to think deeply about these ideas. To ensure the beneficial use of these technologies, we need to build shared language and start to understand where to focus our attention. We're hoping this paper can be a step in that direction. We're trying to start a conversation that can help us build a roadmap toward understanding this fast-moving situation.
Artists are often at the vanguard of new technologies. They're playing with the technology long before there are commercial applications. They're exploring how it works, and they're wrestling with the ethics of it. AI art has been around for over a decade, and for that long these artists have been grappling with the questions we now face as a society. I think it's critical to uplift the voices of the artists and other creative laborers whose jobs will be impacted by these tools. Art is how we express our humanity. It's a core human, emotional part of life. In that way, we believe it sits at the center of broader questions about AI's impact on society, and hopefully we can ground that discussion with this work.