Maribeth Rauh
Research Engineer
I am a research engineer who focuses on the societal impact of AI. I spent more than three years at Google DeepMind researching, evaluating, and mitigating the risks of generative AI systems, as a member of the technical teams that build them.
I signed the statement in the hopes of pushing companies towards a more genuine and fair partnership with those who may use or be impacted by their systems, and to stand in solidarity with artists whose experience fell short of that.
The development of these products is driven by a field that prizes "solving" problems "end-to-end." This is a vision of replacement rather than complement, driven by the pursuit of "general intelligence" and by underlying values of optimization and increased productivity. Such framing tends to reduce human autonomy and expression within the domain. By involving domain experts from the very earliest stages and empowering them to shape the product at a fundamental level, the development of AI is more likely to yield specific tools that enhance what humans can do.
And so, my hope is that this action does not discourage organizations from engaging with artists. Rather, it is a call for deeper, non-exploitative engagement: an invitation for artists to co-create tools that will enhance their practice rather than aid in their own replacement.
Participation may be optional, but anyone who does work for OpenAI should be paid: just as the crowd workers who provide red teaming, training data, and other evaluations are, and just as the engineers, whose pay ranges from hundreds of thousands to millions of dollars, are. Why would artists providing specialized input on a core use case of the product not be?
Exposure, privileged access, and excitement are leveraged to perpetuate a harmful norm of not paying artists for their work. The way the program is being conducted echoes the scraping used to gather training datasets - this time harvesting expert knowledge through testing, for free, to feed monolithic, black-box systems owned by the mega-wealthy. When interacting with a corporation that has a profit mandate and is valued at $150 billion, monetary compensation must be part of the value exchange. Safety testing and product feedback are vital, and the value they bring to OpenAI is disproportionate to the value offered in return.
Finally, I want to spotlight the supposed tension between safety and creativity. There will be moments of real tradeoffs, but the limitations of today's safety guardrails are rooted in a long-standing lack of investment in applied safety research. Safety work often falls on small teams that end up consumed by reactive firefighting and by mitigating harms as defined by legal and PR-driven corporate policies. Indeed, safeguards themselves can at times worsen outcomes for already marginalized groups, because safety work lacked nuance and was not prioritized early enough. What if the gold standard were not only squelching the "bad" but also ensuring that these systems complement rather than replace human skillsets, and that they uplift the people they are supposed to protect?