We, the undersigned, pledge to never use generative AI for any part of our academic research.
We are defining generative AI as systems that produce plausible, yet artificial, new data based on older, analyzed data. Unlike machine learning for specific applications, generative AI is developed and marketed for a potentially unbounded set of use-cases and domains.
Generative AI is being hyped by for-profit corporations such as OpenAI, Adobe, Google, Microsoft, and Apple, and is being placed prominently in web browsers, word processors, and email clients. It is also increasingly being promoted by academic publishers and services. Its wide-scale adoption seems inevitable.
Unless we refuse.
Our refusal to use generative AI for our research stems from several fundamental problems with generative AI.
Generative AI is destroying the planet
We refuse to have our research contribute to the climate disaster.
In the context of the climate crisis, using a technology that is forecast to consume over 1,000 TWh of energy by the end of the decade actively inhibits the process of decarbonization that is urgently required, and which most nation states have committed to undertaking under the Paris Agreement. Over 80% of global energy still comes from fossil fuels, so every watt used to power new technologies like AI is a watt that is not replacing fossil fuels. In addition, these technologies increase pressure on drinking water supplies around the world. Finally, there are real harms associated with the material extraction, refinement, and manufacturing of the hardware used by generative AI.
Generative AI use will amplify research fraud
Generative AI can be used to commit research fraud at an industrial scale. Research fraud is already a serious problem across multiple fields. Generative AI-based fraudulent research is indistinguishable from good-faith attempts to incorporate generative AI into research. This is the academic version of "AI slop."
Its use will undermine the creative processes of research
Research requires writing. The writing process involves invention, drafting, refinement, and editing. While each of these stages can be simulated with generative AI, they are all essential to the process of discovery. Offloading these cognitive processes to unthinking machines removes human ingenuity from the research process. In addition, offloading these practices means we will see our critical abilities atrophy. If we don’t exercise our abilities to critically read, synthesize ideas, craft arguments and counterarguments, or infer meaning from unfamiliar linguistic patterns, we will lose those abilities.
This extends to other research outputs, such as illustrations, figures, presentations, or computer code. Again, each of these can be simulated with generative AI, but the cost to our ability to critically engage with the world would be too high.
In addition, as academics we are modeling research for our students, both undergraduate and graduate. They are now being trained in a world where almost every digital action they take includes a pitch for the use of generative AI: email clients and PDF readers offer to summarize texts; word processors offer to generate text; ChatGPT offers to “chat” through research problems. Our refusal of these systems shows our students that it is possible to say “No!” to this seemingly overwhelming consent to generative AI. Here we take inspiration from writing instructors' call to refuse generative AI.
Its use will legitimize the corporate theft of creativity and concentrate power in the hands of a few
Using corporate generative AI normalizes the idea that corporations should be able to take anything they want and profit from it. While current copyright laws are themselves immensely problematic, the idea that multibillion (or now trillion) dollar corporations should be able to profit by taking millions of individuals’ creative, literary, and other works is deeply disturbing. This extends to research, which is already exploited by highly profitable corporations (e.g., Elsevier, Taylor and Francis, and Sage) that are either actively building generative AI systems or selling our research to companies such as OpenAI.
The appropriation of the work of researchers shifts academic work from a decentralized system, where millions of researchers work on problems, to a concentrated system where a handful of corporations control the means of inquiry. Generative AI systems are black boxes, which means scientific research based on them will be impossible to replicate.
Beyond research, the synthetic creation of text, images, or video destroys the market for artists and creators – the very people whose work trained these models in the first place. There is little reason to believe it would not severely undermine research jobs, as well.
Generative AI reproduces biases, inequalities, and homogeneities
Contemporary generative AI programs exhibit biases, including biases against people with disabilities, and they reinforce covert negative stereotypes. Offloading our thinking onto generative AI runs the risk of perpetuating materially harmful ideas.
Related to this, generative AI contributes to and perpetuates standardized writing and creativity – it operates as if there is a “correct” way to write. If generative AI use becomes ubiquitous in research, the voice of researchers will be silenced in favor of the flat monotone of machines, and research will no longer be innovative.
In addition, these systems have been built upon global inequalities. Using generative AI is an endorsement of the de facto torture of mostly Kenyan workers who are paid starvation wages to curate LLM training corpora.
We are responsible and accountable for our research
Generative AI is well-known for producing errors (or so-called “hallucinations”). In response, the corporations making and promoting generative AI have argued that their systems are to be used at our risk. The logic goes like this: if a person uses generative AI to produce something, the person – not the model – is responsible for its contents. So be it! Given the problems discussed here (so-called hallucinations, bad data, biases), we will take responsibility by refusing to use these systems. If we are to be held responsible for our research, then let us make sure it is in fact our research.
Our refusal will not just affect how we write, but also the peer review process. Using generative AI for articles that undergo peer review would require peer reviewers to comment on the textual output of machines rather than the ideas of their peers. Our refusal is a courtesy to our peer reviewers, who should not be asked to critically assess the outputs of machines; they should be reviewing the work of humans.
An Exception
We do make one key exception: we may choose to use generative AI in order to better develop critiques of it. We may find it necessary to use these systems in order to understand them.
However, we will approach this possible use with extreme caution, especially in light of the environmental costs, privacy violations, and other harms discussed here.
We are not against technology
We are not opposed to the use of digital tools to do research. Far from it! However, as critical thinkers who use technology, we are assessing a specific technical system – generative AI – and we have decided to refuse its use in our work.
We want our research to be for and by people, to be part of a world full of debate, art, conversations, and information created by people, working together in processes that aid us in creating beauty, connection, and worthwhile knowledge. We don’t want to produce more junk that clogs up the world with material that is unreliable, useless, and just there to make money for universities, proprietary journals, or tech corporations.
This is why we refuse to use generative AI for our research.
Signatories
- Robert W. Gehl, York University
- Sy Taffel, Massey University
- Riley Valentine @cyborgneticz@scholar.social, Independent Researcher
- Sky Croeser, Curtin University
- Jonne Arjoranta, University of Jyväskylä
- Christina Dunbar-Hester, USC