We Have Always Been Artificially Intelligent: An Interview with Joanna Zylinska, by Claudio Celis and Pablo Ortuzar Kunstmann

Q. Thank you very much for accepting this invitation and for taking part in this interview. We were very impressed by your new book, AI Art: Machine Visions and Warped Dreams (Open Humanities Press, 2020). It’s a really interesting contribution to the discussion on the topic, in particular the post-humanist framework that you develop to explore issues such as art, technology, creativity, and politics. We would like to begin by asking you to please tell us a little bit about this book. What was the main motivation behind it? What are its major claims?

A. Thank you for inviting me. It’s a pleasure and an honour to speak to you. So, to begin with, I would like to give you a bit of context and explain where the book came from. It all started from me witnessing a real outpouring of computer-made artefacts at different art festivals and events over the last few years. For example, computer-generated paintings that resembled Vincent van Gogh’s work or an abstract modernist masterpiece, or Microsoft ordering the production of a ‘new Rembrandt’, or AI-generated music. This deluge of so-called Artificial Intelligence (AI) art has been taking place against the backdrop of a wider debate about AI unfolding in society, where the public has shown fascination with what AI can do and create, but also fear connected with the automation of the labour force (and the potential elimination of people from their jobs), or even the annihilation of the human species. For me, the stories that accompany this rise of AI, the talk about AI’s creativity and the potential threats, have been at least as interesting as the actual artefacts that are being produced under the label of ‘AI art’. So my book responds to this moment, to this production of many different things labelled, rightly or wrongly, ‘AI art’. My aim was to interrogate this term and explore the confluence of ideas, beliefs and socio-political forces behind it. And to do so, I wanted to go beyond the question of whether computers can be creative or not. When people think about AI and computer art, this is the question that often gets raised. Now, I do make an attempt to answer it in the book, but I also try to show why this is not the best question to ask – and that it will not really get us anywhere. Instead, I propose to ask some other, and I think more important, questions. Should the recent use of AI in image making and image curation encourage us to ask some bigger questions about the very purpose of artistic production? Does it encourage us to interrogate, once again, what art is for? Who is it for? Who is the artist now? What is the nature of the art market and the art institution? Can technology and AI challenge the status quo in any way? Does AI create new conditions and new audiences for art? And what will art after AI look like? Who will it be for? So you could say that the book is intended as a provocation, or as an invitation to a discussion.

But I do more than just ask questions. The book also carries a critique, but not in the sense that I want to tell people that AI is really bad and that we should just be afraid. My position does not involve a total rejection of AI as a technology or as a concept. On the contrary, the book comes from a place of fascination for me, a fascination with the possibilities, narratives, stories and technical incarnations of AI. But at the same time, I’m quite suspicious about the current social and political claims about AI, and also about the role of art in validating those claims. As part of this, I’m concerned about some dominant forms of AI aesthetics (with their frequent visual banality), but, more importantly, about the uses to which this kind of banal (though garish) art often gets put, namely legitimating platform capitalism through the application of psychopolitics. This form of art ends up enacting what Franco ‘Bifo’ Berardi has called neurototalitarianism, where our minds and bodies are being colonised by GANs’ [Generative Adversarial Networks’] visual acrobatics while being put into a strange state of euphoric stasis. Hence, the book offers a critique of this form of capitalism by asking to what extent AI art is mobilized to enact and enforce this particular political formation. Finally, it also asks if we can do things otherwise – in art and in politics.

Q. In connection to the previous question, in what way do you feel that this new book relates to your previous works, particularly Nonhuman Photography (MIT Press, 2017) and The End of Man (University of Minnesota Press, 2018)?

A. I am pleased to see you establish the link between these seemingly different texts. The primary concern of my work over recent years has been the constitution of the human as both a species and a historical subject. What I’ve tried to do in all of those books is use the geological probe of deep time by looking at the emergence of the human in conjunction with technologies such as tools and other artefacts, but also communication in its various modes (language, storytelling, ethics, art, and of course media). So, in an attempt to challenge what we would call ‘human exceptionalism’ without giving up on my own species-specific curiosity about humans, my work tries to zoom in on the signal points of the human, such as intelligence, consciousness and perception. With this, I want to interrogate the link between human and nonhuman forms of intelligence, including now the promises and threats offered by AI. It’s this set of interests that connects these three books: Nonhuman Photography, The End of Man (whose subtitle is A Feminist Counterapocalypse), and AI Art. That counter-apocalyptic dimension is shared by all three projects. On the one hand, I do engage in my work with stories about human extinction, expiration and disappearance as a result of climate change, technology (be it in the form of robots and cyborgs, or in the form of artificial intelligence and new forms of consciousness), and, last but not least, the virus, because viruses come to us via all sorts of technologies. (For example, the Covid-19 virus is clearly tied to the technology of the airplane and global communications and connections, and it poses a very real threat to our health, our wellbeing, our existence.) So these concerns are present in all three books.

I adopt in all of them what could be called a planetary perspective, but also one that finds its anchoring in the socio-political concerns of the here and now. This is where feminist and critical race theories come in. Rather than talk about ‘Man’ across History, I prefer to anchor my interrogation in the specific ecological and economic crises of today. This involves realizing that the gendering and racialization of the apocalyptic narratives need to be interrogated. Because the apocalypse does not happen to everyone in the same way and at the same time. Many people, many groups, have already experienced different forms of apocalypse and extinction. This type of awareness of different stories, of different forms of precarity, of different groups (racial, regional, gendered, etc.), comes to the fore in my work. It also requires a certain slowing down in the interrogation of the apocalypse. The source of this interrogation is a network of intertwined natural and technological forces – which we cannot really decouple. Another thing that connects all these projects is the importance I give both to perception as a mode of encountering the world and to the role of images in our (media) culture. Photography is one of the genres I look at, but I’m also interested in film, in the relationship between still and moving images, and in Virtual Reality- and Artificial Intelligence-produced images that go beyond ‘traditional media’. Wrapping up this answer, you could say that I’m trying to combine philosophical inquiry with artistic practice (including my own, as these three books all include works from my own art practice). And I see this hybrid mode of inquiry as more conducive to the interrogation of such complex issues – issues that need to be thought about in a rigorous philosophical manner, but that also need to be sensed and encountered at an affective level as part of the same cognitive space.

Q. You mentioned this idea of perception and images. Something that we find very interesting is the distinction between a representational level of images and a performative level. And we believe that both levels are very important to consider when thinking about the politics of images today. The problem is that theory has been too concerned with representation and hasn’t paid enough attention to performativity. But on the other hand, performativity can be considered an application of power structures. What are your thoughts about this distinction and its importance to reflect on the politics of images today?

A. In fact, I’ve been thinking about this distinction recently while reading some texts from computer science, neuroscience and theories of perception. You may be aware that this very same distinction troubles those disciplines as well, disciplines that shape the philosophical side of the field of AI. Cognitive sciences that are informed by – and that also inform – computer science use a representational model in which objects and things are seen to be out there in the world, and the perceiver (be it the human or the machine) just goes and finds them, i.e. perceives them. Machines may indeed perceive better than humans when it comes to pattern recognition, medical examination, etc. But there are other theories coming from biological research that are challenging this idea of objects being out there in the world for us to see, however this ‘us’ is understood. They are adopting a more performative model of perception, in which perception is seen as a process which involves a body moving through the world. And it is only in that process, in that encounter, that perception takes place and that objects are constituted as objects. Many of the more conventional theories of machine vision, even if they use the notion of neural nets, are still premised on the idea of representation. So I’m interested in looking at these other scientific trajectories that bring in that notion of perception as being ‘in the world’. In this sense, objects are ‘performed’ by an agent who is moving through the world. This understanding would require us to go beyond the static idea of vision as recognition. Obviously the perceiving agent does not have to be human. There is a lot of interesting research on animal perception (in terms of colour spectrum, the field of vision, depth of field, etc.). But the perceiver could also be a machine. I would therefore suggest that to some extent all forms of perception are performative. I think representation becomes a shortcut that we sometimes use to explain something to ourselves. But at a deep ontological level, I’m much more interested in the performative model of perception.

Q. One claim from your book that we find very appealing is that art and technology have always been connected (they even share the same etymological root). Although many people would agree with this by now, the conclusion you draw from it – that intelligence has therefore always been artificial – seems more daring. Could you please develop this idea? What are the main philosophical influences behind this claim?

A. This idea goes back to the recently deceased French philosopher Bernard Stiegler and his concept of ‘originary technicity’: the belief that humans have always been technical, that we have emerged as technical beings through simple technologies such as flint stones, fire, clothing, language. These ideas build on Gilbert Simondon’s notion of subjectivity emerging through technology. So there is no such thing as a pre-technological human. The human in their cognitive, corporeal and affective capacities has been produced in conjunction with technology. This idea is also present in second-order cybernetics and its notion of the system. All of this has led me to think about intelligence as something that is not only limited to humans and that is also a product of a technical relation. And when we think of intelligence in this way, language also appears as a technology, that is, as both a signal of intelligence and a producer of intelligence. Also, if we look at the development of art, art has always been produced with the help of all sorts of technologies and machines (apparatuses, neurological enhancements, hallucinations, dreams, viruses, cultural conventions, etc.). So, the reason I’m introducing this concept as a scaffolding for my own thinking is that I want to depart from this idea of the artist as a singular genius, sitting in his (and it is indeed often ‘his’) garret or studio, and producing, being creative, from the bottom of his soul. Instead, I want to show that creativity and the production of art are premised upon a form of intelligence that is always linked with technology – and that goes back thousands of years. We also find this idea revisited in an interesting way in the work of Brazilian-Czech philosopher Vilém Flusser and his explanation of the conditions of possibility for the production of art and photography, and also of human freedom, within systemic confinement. Flusser recognizes that we are to some extent machines and that we are subject to the operations of apparatuses (with these apparatuses being both socio-political and technical), and yet within this idea he tries to seek conditions of freedom. And, last but not least, feminist and post-colonial critiques have shown us that not all systems, not all machines, are ‘born equal’. Hence, in recognizing our entanglement with technologies we need to interrogate what particular technologies do. The technology of policing, for example, is executed very differently for different people. And it is also different, let’s say, from the technology of education (although this technology in itself can be both productive and oppressive). Cybernetics offers us a systemic view where all dimensions are entangled and communicate with each other. But feminism and post-colonial critique show how we need to stop and examine particular moments within this system.

Q. What about the idea of the cyborg in Donna Haraway?

A. Well, Haraway has always been very important to me. And what has been particularly important is her thinking through figurations. She took the concept of the cyborg from technoscience and redeployed it in her own ironic political gesture, trying to create a figuration that allowed her to think about technology differently. And obviously what happened then with her work was that suddenly the cyborg was not just this Arnold Schwarzenegger-like robot breaking through the world, trying to either kill us or save us. Dogs could be cyborgs too. There is an expansion here of the problem of who our kin are. The question is therefore not which machines are cyborgs and which are not, but rather what kinds of relations exist between humans and technology – and how these relations are deployed. The cyborg can become a critical feminist and socialist figuration that asks serious questions about the economic conditions of technology, about who is excluded and who is included in the technological setup, about who produces technology but cannot use it, etc. So the cyborg can be a critical tool, allowing us to examine gendered, racial, and economic inequalities. It is not so much a question of what the cyborg looks like but rather of what it does and what it can undo as a critical term.

Q. Following on this novel take on human intelligence as artificial intelligence, we would like to address one of the key issues in the book: that of creativity. We think that your claim that the question as to whether machines are creative has been wrongly posed is one of the book’s key contributions. You suggest that instead we should be asking what human creativity is in the first place. And here you refer to Whitehead’s notion of creativity as an organism’s exchange with the environment. Could you please explain the main consequences of shifting from a ‘humanist’ notion of creativity towards an ‘environmental’ one?

A. To begin with, I should say what I’m pushing against here. A very truncated concept of creativity is used in some forms of AI art, one that reduces creativity to the repetition of the same. By this I mean forms of AI creativity which produce style-transfer works that look like copies of van Gogh, or a new Rembrandt. So, while going against this idea of ‘oh, this is amazing, the computer has painted something that looks like the work of a Grand Master’, I was interested in looking at other, more systemic, theories of creativity. The reason I was interested in doing this was that I thought that this truncated model of creativity was just producing works that were both mindless and pointless. I described them in the book as a form of ‘Candy Crush’. This model of creativity ends up propping up the companies of platform capitalism (e.g. Google and its artist programme). But it closes down any actual creativity, which for me involves looking for things that could push the system to get out of sync, to open itself up. Of course, we want and need some systems to run correctly, but we also want to open up others. Again, not all systems are born equal, have equal tasks and equal forms of embeddedness. For example, second-order cybernetics missed out on interrogating more deeply the cultural dimension of systems. It did recognize the existence of cultures, but it failed to grasp those cultures’ agency – as well as their transmissibility across generations. So, the idea that I’m going against is that of creativity understood as repetition of the same, but also as absolute novelty: ‘creation ex nihilo’, the way God supposedly created the world. To do so I rely on A.N. Whitehead’s idea of creativity as something that occurs in the environment, and on what psychologist James Gibson calls ‘affordances’: possibilities emerging from the environment. This links with our earlier discussion about perception, in the sense that things happen in the encounter, or mutual unfolding, between the organism and the world. Here I rely on the work of my ex-colleague at Goldsmiths Mark d’Inverno, who (together with Arthur Still) claims that AI research would benefit from adopting a concept of intelligence based on attentive inquiry into the relationship between the human and the environment. So the questions that we need to ask are: for whose benefit are we designing? How do we ensure that AI does not just become the next step in subjugating the environment?
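[Editorial aside: for readers unfamiliar with how such ‘new van Goghs’ are produced, the following is a minimal, illustrative sketch of a Gatys-style neural style transfer loop in PyTorch. It is not code from the book or from any project Zylinska discusses; the image paths, layer choices and loss weights are hypothetical placeholders. The point is simply to show, concretely, what ‘repetition of the same’ means at a technical level: the generated image is optimized towards the fixed statistical signature (Gram matrices) of an existing painting.]

```python
# Illustrative sketch only: bare-bones Gatys-style neural style transfer.
# "content.jpg", "style.jpg" and "stylized.jpg" are hypothetical file paths.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained VGG19, used only as a frozen feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def load(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content, style = load("content.jpg"), load("style.jpg")

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1
CONTENT_LAYER = 21                  # conv4_2

def extract(x):
    """Run x through VGG, collecting style and content activations."""
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
        if i >= max(STYLE_LAYERS):
            break
    return style_feats, content_feat

def gram(f):
    """Gram matrix: the fixed 'style statistics' the image is pushed towards."""
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

with torch.no_grad():
    target_style = [gram(f) for f in extract(style)[0]]
    target_content = extract(content)[1]

# Start from the content image and nudge it towards the painting's statistics.
image = content.clone().requires_grad_(True)
optimizer = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    optimizer.zero_grad()
    style_feats, content_feat = extract(image)
    style_loss = sum(F.mse_loss(gram(f), t) for f, t in zip(style_feats, target_style))
    content_loss = F.mse_loss(content_feat, target_content)
    (1e6 * style_loss + content_loss).backward()
    optimizer.step()

# De-normalize and save the stylized result.
mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)
save_image((image.detach() * std + mean).clamp(0, 1), "stylized.jpg")
```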

Creativity and design in relation to the environment still require that we ask certain questions which relate to our human responsibility for what we can and can’t do. We need to recognize the nexus of forces, beings, agents, demands, and to act from within that nexus. Rather than having a model of the human as someone that stands outside the world and of technology being a mere tool, we need a more dynamic and entangled model of creativity. This can also be found in the work of cognitive scientist Margaret Boden, who has suggested that being creative means diverging from the established path that we carve out and then follow each day. This is seen in her idea of ‘transformational creativity’. But again, I don’t think that we can explore this without bringing in politics. You could say that Boris Johnson and Donald Trump have diverged from the established path. Some people would describe their actions in terms of ‘creative destruction’. This is why there are other concepts and other frameworks that need to be brought in. Not all sorts of creativity should be valued in exactly the same way. Not all forms of divergence from the track are the same. Also, the logic of Silicon Valley is often very much in the vein of ‘let’s break things and see what happens’. So divergence from the set path and creativity as looking for alternative paths and solutions have to be brought together with concepts and philosophies ‘of the world’, with political models of the world. Creativity needs to be considered in those terms. It can’t be considered as an abstract concept. Because if it is – and it indeed often is in places like Silicon Valley – the theory of creativity as ‘absolute novelty’ sneaks in through the back door and we end up with neolibertarian theories of politics and economics. And I certainly don’t want that.

Q. This takes us to another of our favourite ideas in your book: the call for a post-humanist art history and art theory. Can you please tell us a bit more about what this new approach to the study of images looks like? Is it an interdisciplinary approach? What are the main consequences at stake here?

A. Again, the reason for proposing this idea is to acknowledge the presence of nonhuman elements in the production of all artworks, from cave painting through to the works of so-called Great Masters. These works have already been produced in conjunction with a variety of nonhuman agents (drives, impulses, viruses, drugs, various organic and non-organic substances) but also with all sorts of networks and infrastructures (from the mycelium network through to the internet). Recognizing that art is always produced in those relationships is important in order to depart from the idea that art just happens in the artist’s head or soul. This involves acknowledging the different forces and influences acting upon the artist. And I don’t think that this diminishes the human or takes away from the idea of creativity. But it allows us to reposition the nature of our inquiry and ask: what does it mean for the human to be creative? For me the answer to this question is an ethical and political interpellation. I do believe that even though we recognize different technological, machinic and physiological constraints on the human, degrees of freedom and emancipation are still possible for us. But to figure this out, the model of creativity as ‘creation ex nihilo’ needs to be challenged. Instead, we need to position the artist as being ‘in the world’, always already feeding from the link with other human and nonhuman elements. We also need to remember that art is always already a form of extractivism. The question then is how to make this extractivism a little bit more ethical, a little less self-centered and self-aggrandizing, and a little bit more ‘world-aggrandizing’.

Q. In relation to this post-humanist art theory, what can you tell us about the notion of authorship? What happens to the idea of ‘the author’ in this new type of algorithmic art? Can we speak about the ‘transfer of intentionality’ from the author to these complex technologies?

A. Well, the first question you would need to ask is whether intention ever fully belonged to the author. If we accept the idea that intelligence and agency are already partly technological, then this author was never fully human anyway, but was always partly machinic. So, if we are talking about a transfer, maybe this transfer has already happened even before any artefacts start being made. If we recognize this, then we must stop pitching the human against the machine and instead begin to analyze their co-evolution. But to recognize this doesn’t mean that everything becomes indistinguishable and that there is no longer any difference between humans and machines, that there is just one big flow of matter and energy. (There are philosophies that suggest that, but this is not where I’m going.) Even though I recognize the entanglement between humans and technology, what we need is a new vocabulary, a new model of thinking about that transfer. So it is not just a straightforward transfer, but it’s more a recognition of a certain form of intelligence and agency of the machine that is already there. On the other hand, I think that we should also be critical of some of the more commercially-driven promises of how machines will do all these amazing things for us: save the world, eliminate poverty, etc. All these types of ‘technological solutionism’ basically adopt the language of capitalism with its strategies of public relations. And this is something we need to be suspicious of. So, to recognize machinic agency, which is partly in us already, doesn’t mean that we will go on and say: ‘yes, machines will do our jobs and we will just watch’. We don’t know if machines will be able to reflect on ethical and political issues. Ethics as reflection on theories of good, on how we want to live, is a discourse that is important and meaningful for us humans. Yet even though we have had different forms of ethics for a very long time, we still haven’t really agreed on how to live. If we had, the world wouldn’t be in such a mess! This kind of recognition of the entanglement between technology and humans still presents the human with the task of providing an ethical account of that machinic relationship and intentionality (where intentionality is distributed, rather than simply transferred).

Q. Also regarding this critique of the idea of the author, you distinguish between an author that is ‘above the world’ and a robotic artist that is ‘of the world’ and ‘in the world’. Could you please explain the difference between these three levels and how it connects to the issue of authorship?

A. Here I was in conversation with an essay by Michel Foucault about the function of the author. And Foucault has been very important for undermining this idea of the author as a solitary genius who can stand above the world and produce his work from there. Foucault has shown that the author should rather be understood as an ‘author function’. It’s a temporary snapshot of agency that incorporates not only that particular human being whom we call Pablo Picasso, or Tracey Emin, or whomever, but also the whole network of forces from families and support workers through to studios, technologies, alimentation, drugs, etc. The nexus between all these elements is what we call the author function. Through that we can also talk about Whitehead’s idea of creativity as taking place ‘in the world’. This proposes a different idea of creativity and art, one that removes some responsibility from the artist for the process of creation, and which departs from the narcissistic fantasy of ‘I make these singular interventions, and they are amazing, and they make me wealthy and famous, etc.’ For me, work that recognizes the artist’s embeddedness in the world is more interesting – and it’s also more meaningful politically and culturally. This does not mean that artists cannot bring in different modes of thinking, of understanding the world, or of solving problems. But recognizing that embeddedness ‘in the world’ is also a form of taking responsibility ‘for the world’. All of this means departing from art as narcissistic egotism. And we need to remember that art is not the same as activism (although I recognise that for some people there is a thin line between the two). In any case, for me art has to be underpinned by a sense of responsibility. What this responsibility entails and how this responsibility is enacted is already a task for a particular artist or art collective to figure out. Art needn’t be didactic or prescriptive – or, worse, moralistic – but it does require artists to recognize their embeddedness in the world.

Q. Finally, we would like to conclude with two questions regarding the social and political implications of your work for AI art. We are very interested in the relation between AI technologies and human labour under capitalist conditions of production. In what way do you think that AI art can explore this relationship?

A. Well, this is something that I tried to explore with my ‘View from the Window’ project (https://vimeo.com/344979151) included in the book, which was based on Amazon’s Mechanical Turk. For this project I hired one hundred MTurkers from Amazon’s MTurk platform, which is a sort of low-cost digital labour market. You can employ people to do some very simple tasks that would be too costly to program computers to do, tasks such as filling in surveys, labelling images, etc. So I asked each of these workers to take a picture of the view from their window for me, to show and thus rematerialize this invisible labour environment. This was an attempt to acknowledge crowdsourcing as an inherent feature of art making, but also as something that has become more visible in the age of the internet. But what is different with MTurk, of course, is paying people – very little – for that production of art. And this problematic gesture of creating with other people’s labour, which is also a very exploitative form of labour, was meant to cast light on the work of MTurkers – and on the wider conditions of labour and creativity today. (Incidentally, Amazon has called this platform ‘artificial artificial intelligence’, an inside joke suggesting repetitive machine-like labour activity that is not even worth automating.) Today, ‘corona-capitalism’ has shown that we are all Mechanical Turks in the Amazon, Microsoft, and Zoom factories. And even those of us who find ourselves in relatively middle-class occupations are always threatened with becoming obsolete – because there is another MTurker who will do the same job of cultural production (education, theory, film reviews) cheaper and faster. This sense of precarity and obsolescence, the realisation that creativity can become easily outsourced, has been with us for a long time but it has been made more visible under ‘corona-capitalism’. And it is also something that has been of concern to me as part of this book. Again, the point here is not to hang on to the vestiges of humanism according to which humans are said to create better, but rather to hang on to the things that are essential for our human survival – and for living a ‘good life’. And this involves creating conditions for fair remuneration of labour and establishing limits between labour and leisure. Of course, for many people ‘corona-capitalism’ has only exacerbated the precarious labour conditions under which they have been living for a very long time. In light of all this, we need to investigate how AI and other forms of algorithmic technologies are being mobilized to create more and more precarious spaces for us, while using the language of ‘rationalization’, ‘down-scaling’, ‘necessary shortages’, etc.
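[Editorial aside: by way of illustration, and not as a reproduction of Zylinska’s actual workflow, the sketch below shows how a task like this can be posted programmatically to Amazon Mechanical Turk via the boto3 client. The reward amount, assignment counts and HTML form are hypothetical placeholders, and the sandbox endpoint is used so that no real workers are paid; the point is how little code separates a ‘requester’ from a hundred pieces of underpaid micro-labour.]

```python
# Illustrative sketch only: posting a "photograph the view from your window"
# task to Amazon Mechanical Turk (sandbox) with boto3. Reward, counts and the
# HTML question are hypothetical; AWS credentials come from the standard
# boto3 configuration (environment variables or ~/.aws/credentials).
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A minimal HTMLQuestion: the worker submits a link to a photo of their window view.
question_xml = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html>
      <body>
        <form name="mturk_form" method="post" action="https://workersandbox.mturk.com/mturk/externalSubmit">
          <input type="hidden" name="assignmentId" id="assignmentId" value=""/>
          <p>Please paste a link to a photograph of the view from your window.</p>
          <input type="text" name="photo_url" size="80"/>
          <input type="submit" value="Submit"/>
        </form>
        <script>
          document.getElementById("assignmentId").value =
            new URLSearchParams(window.location.search).get("assignmentId");
        </script>
      </body>
    </html>
  ]]></HTMLContent>
  <FrameHeight>400</FrameHeight>
</HTMLQuestion>"""

hit = mturk.create_hit(
    Title="Take a photo of the view from your window",
    Description="Photograph what you see from your window and share a link to the image.",
    Keywords="photo, image, view, window",
    Reward="0.50",                      # paid per assignment, in USD (hypothetical amount)
    MaxAssignments=100,                 # one hundred workers, as in the project described above
    LifetimeInSeconds=7 * 24 * 3600,    # the HIT stays available for a week
    AssignmentDurationInSeconds=1800,   # each worker has 30 minutes to respond
    Question=question_xml,
)

print("HIT created:", hit["HIT"]["HITId"])
```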
