We are used to consuming art in very specific forms, and as artificial intelligence shakes that up, we must move towards a renewed legal and cultural understanding of art itself. Human beings are renegotiating their place as both creators and consumers of art, even as artificial intelligence becomes embedded in the technology we use every day. What questions should we be asking about its sourcing, its performance and how it is perceived?
In October 2017, work by Rutgers University’s ‘Artificial Intelligence Artist’, AICAN, began being exhibited at mainstream art venues across the world. Its creator, the scientist who coded AICAN, gave the algorithm full credit for each artwork, citing it exactly the way human artists are cited. The lab that produced AICAN presents it as an almost independent artist – programmed to learn aesthetics from pre-existing art while ensuring that no output is too similar to an existing style. It follows psychologist Colin Martindale’s principle of least effort, which proposes that anything too novel will not appeal to viewers. The artwork, therefore, has to tread the line between being exciting and not departing radically from what is generally considered good art. Ultimately, perhaps Andy Warhol was right: “art is what you can get away with”.
Generative AI is founded on the concept of training a system to analyze a dataset of artwork, learn its elements and style, and produce an image in response to instructions. Various artists and creative collectives have raised concerns about the ethics of this process: their art is used to train systems without their consent, and without the ability to opt out. Whether copyright safeguards will be bolstered in relation to such tools is still an open question. While this objection to generative AI is founded on intellectual and creative property rights, there is a wider opposition to the entity of AI art itself – and to whether it deserves the label at all.
In the popular debate over whether AI-generated art should reside under the banner of real art, parallels are often drawn to photography, a socially accepted art form that also happens to rely on a mechanical device. The difference, according to AICAN creator Ahmed Elgammal, is that because it can produce an unpredictable outcome, AICAN is not a mere tool. While all generative AI systems are fed images to analyze, the results can still be unexpected, arising from what the system has learned on its own. However, the notion that AI threatens the creativity fueled by human consciousness overestimates what AI is capable of producing. It functions within initial parameters set by human beings; until it can be taught to intend to express itself, or to feel inspired to create, it is not an autonomous creator of art.
Creatives who generate art in a world that now includes AI as an active participant see it as involving “more negotiation than experimentation”. It is this back and forth between the human and the machine, the consciousness and the code, the result and the viewer’s eye, that determines both acceptability and value.
Widen the lens from traditional paintings to the realm of all things aesthetic – music, non-static media, fashion – and it is hard to ignore the mediating role of artificial intelligence. Just two weeks ago, the streaming service Spotify released its much-awaited ‘Wrapped’, an eye-catching algorithmic curation of a user’s listening statistics over the year. Across social media platforms, people shared what Spotify’s machine learning had learned about them, reveling in a worldwide celebration of data analytics. Similarly, Instagram’s explore page, oft-visited for fresh, personalized content, runs on a common machine learning method: by tracking what users already interact with, the system recommends more pictures, videos and accounts they might enjoy. While the AI at play here is not generating a novel piece of art, it is analyzing, collecting and presenting an aesthetic experience that we consume.
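The track-and-recommend pattern described above can be illustrated with a deliberately small sketch. Everything here is invented for demonstration – the interaction log, the usernames and the overlap-based scoring – and real platforms use far richer signals and models:

```python
# Toy item-based recommendation: users who share interests with you
# "vote" for the posts you haven't seen yet. Purely illustrative.
from collections import defaultdict

# Hypothetical interaction log: user -> set of posts they engaged with.
interactions = {
    "ana":   {"sunset", "street_art", "pottery"},
    "ben":   {"sunset", "street_art", "film_photo"},
    "chloe": {"pottery", "film_photo"},
}

def recommend(user, interactions):
    """Suggest unseen posts, weighted by how much taste overlaps."""
    seen = interactions[user]
    scores = defaultdict(int)
    for other, items in interactions.items():
        if other == user:
            continue
        overlap = len(seen & items)  # shared interests weight the votes
        for item in items - seen:
            scores[item] += overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana", interactions))  # -> ['film_photo']
```

The key point is that no notion of aesthetics is coded in; the system only surfaces what behaviourally similar users already consumed.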
Aesthetics, in its purest form, concerns the perception and enjoyment of beauty. When we are confronted with art – whether it is packaged as an AICAN painting, a Spotify playlist or an Instagram feed – our experience of it lies in what it makes us feel: awe, wonder, disgust, confusion, an endless array of emotional responses. This reaction of the audience has also found its way into emotion-based algorithms. ArtEmis is a dataset of more than 400,000 written responses from people describing the feelings a painting evokes in them, along with an explanation of why. Based on these descriptions, an AI system was trained to produce textual emotional responses to images of art. The computer, then, can view an image it has never seen before and generate a description of the emotions a human might feel on seeing it.
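The underlying idea – learn from annotated examples, then label an unseen image – can be sketched in miniature. This is not the ArtEmis model; the two-number “features” and emotion labels below are invented stand-ins, and a nearest-neighbour lookup stands in for a trained neural network:

```python
# Toy 1-nearest-neighbour emotion labeller, in the spirit of training on
# human emotional annotations. All data here is fabricated for illustration.
import math

# Hypothetical training pairs: (image feature vector, emotion label).
# Imagine the two numbers as, say, brightness and colour warmth.
training = [
    ((0.9, 0.8), "awe"),
    ((0.2, 0.1), "melancholy"),
    ((0.5, 0.9), "wonder"),
]

def predict_emotion(features, training):
    """Return the label of the closest annotated example."""
    _, label = min(training, key=lambda pair: math.dist(pair[0], features))
    return label

print(predict_emotion((0.85, 0.75), training))  # -> awe
```

The sketch makes the limitation in the surrounding argument concrete: the system never feels anything, it only maps new inputs onto emotions humans previously reported.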
As artificial intelligence gains more intelligence on what were previously seen as authentically human experiences, how is it changing the way we navigate art?
Perhaps the answer lies in the philosophy of Deleuze and Guattari and their concept of ‘becoming’ – a change that is neither progressive nor regressive, in which one entity incorporates elements of the other, altering both the elements and the entities themselves. More important to the case at hand is their idea of ‘double becoming’: just as AI art is modeled on existing notions of the aesthetic, so, with successive iterations, those aesthetics may in turn come to resemble AI-generated art.