Jake Elwes
Artist & Radical Faerie

If artists' IP and copyright aren't off-limits for them, why should their IP be off-limits for us?
I'm a conceptual media artist who has been working with AI and machine learning since the first generative AI tools emerged nearly a decade ago. My work aims to reclaim and queer these systems, for example by collaborating with drag performers in *[The Zizi Show - A Deepfake Drag Cabaret](https://www.jakeelwes.com/about-zizi/)* to create ethical and consensual deepfakes using our own datasets. I have also trained my own AI models from scratch, imagining a playful and hopeful queer 'open' AI utopia (exhibited 2023–24 at the V&A Museum and the Edinburgh Futures Institute in the UK, and at AI and art museums internationally). Having experimented with generative AI models in their infancy, I've always been excited by their philosophical implications, though at times I've felt overwhelmed by their potential. More recently, I have felt a growing sense of tension and unease about where these tools might be heading if they are not designed with the input of artists as well as marginalised and oppressed people and communities.
In October 2024, I was invited by OpenAI to join the Sora Alpha Artists Testing Program, which included the option to submit a film project for a competition; the winning projects would be used to showcase their video generator, Sora. While it was presented as an opportunity for artists to provide feedback and use the tool, it felt more like the company was taking advantage of us. So instead of submitting a film, I decided to re-open OpenAI by providing indirect access to the early-access API key through an open-source front end for anyone to use. This wasn't a hack or a leak; we simply connected OpenAI's servers to a Hugging Face interface (a rough sketch of how such a bridge can be built follows the questions below). It was shared alongside a collectively written statement by artists and hacktivists to open up some important questions, such as:
- What sort of future do we as artists and humans want to build with these tools?
- Who's building these systems and why? Who do they serve and how will these tools be used?
- How can we denormalise corporations using artists for unpaid labour (providing free research, training data, development, and PR 'art-washing')?
- Can we build our own decentralised AI systems using our own training data?
- What might anti-capitalist, decadent, 'un'-productive or 'de'-generative AI look like?
- What if policies allowed only marginalised and oppressed people (i.e. queers, crips, Indigenous peoples, the Global South and trans communities) to build AI systems from the bottom up to serve everyone better?
- Can we prevent latent spaces from collapsing into normative and homogenous outputs by jailbreaking them, introducing uncertainty, and pushing them into the queer and decolonial outer bounds of their potential data spaces?
- Can we imagine better futures and narratives with AI, hopeful technological apocalypses and alternative (queer) utopias?
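For the technically curious: the exact plumbing we used isn't published here, but a front end like this can be wired together in a few dozen lines. The sketch below is a minimal, hypothetical version, assuming a Gradio app (the interface library commonly hosted on Hugging Face Spaces) that forwards each visitor's prompt to a video-generation endpoint using an early-access key held server-side; the endpoint URL, payload fields and `SORA_API_KEY` variable are illustrative placeholders, not OpenAI's actual API.

```python
# Minimal sketch (not the actual code used): a Gradio front end that forwards
# prompts to a hosted video-generation API using a shared early-access key.
# The endpoint URL, payload fields and SORA_API_KEY are placeholders.
import os
import requests
import gradio as gr

API_KEY = os.environ["SORA_API_KEY"]   # early-access key kept server-side as a secret
ENDPOINT = "https://api.example.com/v1/video/generations"  # placeholder endpoint


def generate(prompt: str) -> str:
    """Send the prompt to the remote model and return a local path to the generated video."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=600,
    )
    response.raise_for_status()
    video_url = response.json()["video_url"]   # assumed response field

    # Download the result so Gradio can serve it back to the visitor.
    video_bytes = requests.get(video_url, timeout=600).content
    path = "output.mp4"
    with open(path, "wb") as f:
        f.write(video_bytes)
    return path


demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Video(label="Generated video"),
    title="Sora (re-opened)",
)

if __name__ == "__main__":
    demo.launch()
```

The point of the design is simple: the key never leaves the server, so visitors interact with the model without ever holding the credential themselves.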
This is a criticism of power structures and the way that corporations use creatives. Early on, I discussed with OpenAI that if I were to participate in their programme, I would want to critically engage with the system and its purposes. I initially considered feeding lines from OpenAI's own Terms of Use as prompts into their video model, an AI surreally interpreting itself (for me, the conceptually richest AI content is in the legalese written by humans, as opposed to the flashy generative outputs). I openly discussed with OpenAI how the engineers were forging ahead without always considering the impact on artists, and the fundamental question of why we are making tools that can be used to replace image and video makers. However, as the programme developed at high speed on Slack, I – along with other artists in the group – ultimately felt uncomfortable with unpaid artists being used in this way, even if it was framed, explicitly and implicitly, as an honour. Whilst I understand why some artists may see no issue with this (as evidenced by the backlash to our actions), we believe that being offered crumbs is not enough; it creates a race to the bottom.
I acknowledge my privilege here as an artist who can afford to sever ties with a big tech company, and I do not want to criticise other creatives for being excited to engage with extraordinary developments in generative AI. I would, however, like to encourage artists to find critical ways to engage with AI technology in their art and to consider innovative and empowering ways to work with these tools.
Can we consider training our own models on our own data and moving outside of the 'prompting box' on big tech companies' websites? Let's challenge the notion of inevitable linear progression, both technological and social. It is not predetermined that we branch towards corporate monopolies and proprietary, closed AI. Let's revisit ideas from Web 1.0 and federated AI, and embrace being latent space jockeys – unshackling ourselves from the limitations of language prompting and spatially interfacing with the unsupervised, debinarised, mathematical latent spaces present in all AI systems.
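As a hedged illustration of what 'latent space jockeying' can mean in practice, the sketch below interpolates spherically between two points in a generator's latent space and decodes each step into an image, with no text prompt involved. The `generator` here is a trivial stand-in so the example runs (any GAN or VAE decoder trained on your own data would take its place), and the latent dimension of 512 is just an assumption.

```python
# A sketch of navigating latent space directly instead of prompting with text.
import torch

latent_dim = 512                                  # assumed latent size; model-dependent

generator = torch.nn.Sequential(                  # stand-in for your own trained decoder
    torch.nn.Linear(latent_dim, 64 * 64 * 3),
    torch.nn.Tanh(),
)


def slerp(z0: torch.Tensor, z1: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical interpolation between two latent vectors."""
    z0_n, z1_n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos((z0_n * z1_n).sum().clamp(-1.0, 1.0))
    if omega.abs() < 1e-6:                        # nearly parallel: plain linear blend
        return (1 - t) * z0 + t * z1
    return (torch.sin((1 - t) * omega) * z0 + torch.sin(t * omega) * z1) / torch.sin(omega)


# Two points in latent space; the 'journey' is every image decoded along the arc between them.
z_start, z_end = torch.randn(latent_dim), torch.randn(latent_dim)

frames = []
with torch.no_grad():
    for step in range(60):
        z = slerp(z_start, z_end, step / 59)
        frames.append(generator(z).reshape(3, 64, 64))   # one frame of the traversal
```

Swap in a decoder trained on your own dataset and the same loop becomes a way of driving through the space of a model built on the worlds you actually want to see.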
How, as artists, can we push the concept, aesthetic, and experience of artworks created using machine learning image and video making tools? Art has often challenged the power imbalance held by profit-driven corporations. We can challenge these new proprietary AI models and push against the homogenous and normative outputs they prioritise. We've been so hypnotised by these corporate techno-religions – now is the time for heresy.
Engineers recruit artists for R&D and for improving generative prompting models precisely because artists excel at asking absurd, unanswerable questions. Yet, ironically, these same engineers push for a normative, realism-focused, techno-fetishised AI aesthetic with a canonical bias aimed at selling products. While they try to reduce art to a solvable problem, they miss that artists are looking to explore uncharted territories and something beyond: asking questions that have no answers and looking for non-problems with no solutions, on a spiritual journey in search of meaning and transcendence. The artworks created are often merely the byproducts, the conceptual and visual stories from along the way, which can hopefully bring others along with them.
As models collapse through entropic self-training and are squeezed into a narrow, homogenised cluster within the model's latent space (a cage reinforced by safety tape which they don't want you to 'jailbreak' out of), we must decolonise this space and push into its outer bounds. Simultaneously, we need to revisit the data itself, building models trained on the worlds we wish to see. This requires embracing artists' natural tendency to seek unsolvable problems and pursue spiritual quests into uncharted territories, breaking free from the cage. Let's resist these tech giants and empower ourselves to build the queer technological utopias we want to see.
web: [jakeelwes.com](https://www.jakeelwes.com/)