Blacklisted: Why Using AI Will Ruin Your Professional Writing Career
or, why to kick the plagiarism machine to the curb
“I understand that I can’t use Gen AI to write my book,” one question on the r/writingtips subreddit read last week, “But is it OK if I use AI to help me plan my novel? Anything AI spits back out to me, I rephrase into my own words. I’m using it like I would a critique partner.”
The answer is, in short, no, but not for the reasons you might think.
I recently made a few social media accounts for the Archetypist podcast/Substack and have been trawling some of the writing subreddits. One question, I’ve noticed, has come up again and again: Can I use AI if I want to professionally publish my novel?
I already posted an article a few months ago about Why AI won’t make you a good writer, and Ted Chiang (who I reference quite a bit in that article) wrote his own, much better piece for The New Yorker a few weeks ago called “Why A.I. Isn’t Going to Make Art.”
Both articles focus on AI through an ethical/creative/technological lens. While I’d love to roast AI once again, it’s clear that the ethical argument simply will not move some writers to abandon the plagiarism machine. There is another reason, though, why novelists and short story writers absolutely should not use AI in any part of the writing, brainstorming, or editing process: many presses now include clauses in their submission guidelines stating that any use of AI will result not only in a rejection, but in a ban from submitting to that press in the future.
Now, it needs to be said that I am not an industry professional. I am not an agent, an editor, or a publisher, nor do I have any real insider information about why a press would include this in its submission guidelines. It is, however, not at all difficult to extrapolate the reasoning behind the decision.
A) Most, if not all, large language models are trained on stolen work. Even if the ideas for your novel are your own and you aren’t using AI to generate ideas or the actual text of your book, you are still using a computer program that only functions because massive amounts of text were fed into it, and the developers of that program did not pay for the rights to use that text. The traditional publishing industry published a good number of those books. Meta even discussed acquiring Simon & Schuster to train its AI models:
According to the recordings, Ahmad Al-Dahle, Meta’s vice president of generative AI, told executives that the company had used almost every book, poem and essay written in English available on the internet to train models, so was looking for new sources of training material.
Employees said they had used these text sources without permission and talked about using more, even if that would result in lawsuits. When a lawyer flagged “ethical” concerns about using intellectual property, they were met with silence.
B) There are at least two lawsuits against OpenAI from groups of authors who are convinced that their work was stolen to train ChatGPT. Again, writers who use ChatGPT are not the ones who stole these authors’ work, but they are using a program that, without that theft of labor, would simply be unable to function.
If these lawsuits are successful, it opens up a can of worms for publishers regarding copyright. Would these authors then have some sort of claim over a book created with a program that is only a viable tool because of their work? What if part of your book closely resembles one of the thousands of novels used to train the AI? Most publishers don’t want to touch that with a flaming two-by-four. Personally, I also don’t think I’d want to look any of these authors in the face and tell them I used a program that quite literally stole from them so I could break into the industry. Read the lawsuit in full here. Excerpt below:
3. Unfairly, and perversely, without Plaintiffs’ copyrighted works on which to “train” their LLMs, Defendants would have no commercial product with which to damage—if not usurp—the market for these professional authors’ works. Defendants’ willful copying thus makes Plaintiffs’ works into engines of their own destruction.
4. Defendants could have “trained” their LLMs on works in the public domain. They could have paid a reasonable licensing fee to use copyrighted works. What Defendants could not do was evade the Copyright Act altogether to power their lucrative commercial endeavor, taking whatever datasets of relatively recent books they could get their hands on without authorization. There is nothing fair about this. Defendants’ unauthorized use of Plaintiffs’ copyrighted works thus presents a straightforward infringement case applying well-established law to well recognized copyright harms.
For both of these reasons, many presses are including clauses like the ones I mentioned above. Here are a few for quick reference:
Asimov’s & Analog
Statement on the Use of “AI” writing tools such as ChatGPT
We will not consider any submissions written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.
Clarkesworld
Statement on the Use of “AI” writing tools such as ChatGPT
We will not consider any submissions written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.
Flash Fiction Online
AI-GENERATED SUBMISSIONS: We are committed to publishing stories written and edited by humans. We reserve the right to reject any submission that we suspect to be primarily generated or created by language modeling software, Chat GPT, chat bots, or any other AI apps, bots, or software. We reserve the right to ban submissions from accounts, emails, or users who we believe or suspect have submitted AI-generated content.
Berkley Open Submission Program
May I submit a project if I have used AI in the creation of that project, whether in the outlining or writing of my manuscript?
No, authors may not make submissions that have used AI in their creation.
Andrea Brown Literary Agency
Prepare and polish your complete manuscript and/or artwork. Submit your best work. Note that we only accept human-created submissions; no AI-generated work will be considered.
The Association of American Literary Agents
Association of American Literary Agents (AALA) firmly believes that, unless clearly and explicitly granted to the publisher in the author/illustrator’s publishing agreement, the right to license a work for the training of generative AI rests with the creator or copyright holder and should not be used without consent and compensation.
AALA favors the establishment of licensing mechanisms and marketplaces to the extent that they facilitate an opt-in model for copyright holders.
These are only a few examples, but similar clauses are becoming pervasive across the industry. I did notice that the Big 5 publishers don’t seem to have an AI policy anywhere I can find, but that may simply be because they don’t generally accept unsolicited submissions. Berkley, whose open submission program is quoted above, is an imprint of Penguin Random House, so it’s fair to assume that if one imprint has this guideline, the others do or soon will.
Additionally, it’s worth noting that if the Association of American Literary Agents follows its own code of ethics, its member agents wouldn’t accept work that is in any way associated with AI, since none of the creators or rights holders of the work used to train these language models were ever consulted.
But what about other writing software, like AutoCrit?
AutoCrit recently rolled out its own LLM called Inspiration Studio. I don’t know much about it, but I would assume it has the same issues that other LLMs do: stolen work, etc.
AutoCrit's original service, though, isn’t generative AI. It’s a regular rule-based algorithm, and it’s totally fine to use for your book. However, tools like AutoCrit do not have the ability to discriminate between author intention and grammatical rules (as the toy sketch below illustrates), so every suggestion the algorithm generates should be weighed by the author.
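For the curious, here is a minimal sketch in Python of what a rule-based style checker does, in spirit. It’s my own toy illustration, not anything from AutoCrit’s actual implementation; the rules and examples are invented. The point is simply that a fixed pattern fires on every match, whether the prose choice was clumsy or deliberate.

```python
# A toy rule-based style checker (NOT AutoCrit's actual code), illustrating
# how a deterministic algorithm flags text against fixed patterns with no
# sense of authorial intent.
import re

RULES = {
    "adverb ending in -ly": re.compile(r"\b\w+ly\b", re.IGNORECASE),
    "filter word": re.compile(r"\b(felt|saw|heard|noticed|realized)\b", re.IGNORECASE),
    "possible passive voice": re.compile(r"\b(was|were|is|are|been)\s+\w+ed\b", re.IGNORECASE),
}

def flag_sentence(sentence: str) -> list[str]:
    """Return every rule the sentence trips, deliberate choices included."""
    return [name for name, pattern in RULES.items() if pattern.search(sentence)]

# The same rules fire on a clumsy sentence and on an intentional stylistic
# choice; only the author can tell which flags to ignore.
for line in [
    "The door was slammed loudly behind her.",
    "Slowly, deliberately, he folded the letter.",  # intentional rhythm, still flagged
]:
    print(line, "->", flag_sentence(line))
```

Run it and both sentences get flagged; only the writer knows which suggestions to take and which to ignore.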
So, you’re saying that if I used A.I. IN ANY WAY, I can’t get my book published?
Again, I’m not a publisher, so I can’t say for sure, but based on the information I’ve found online: yes, using AI may exclude your work from publication with the Big 5, and it will definitely exclude your work from publication at Clarkesworld, Asimov’s, and the other presses listed above.
But the Authors Guild put out a statement saying that as long as I disclose that I used AI, it’s OK to use it for brainstorming.
We’re in a tough time in the industry right now. The Authors Guild extends membership to both traditionally published authors and self-published authors who have met certain sales thresholds. Some readers may not mind that a work was assisted by AI.
However, the Authors Guild cannot dictate to the Big 5, or to any publishing house, how to run its business or its slush pile.
The Archetypist’s best practices: Leave AI in the dirt and simply write your book.
wonderful. just do the hard, hard work of creating our own stuff and let the chips lie.
or is it let the chips fly? and are those cheddar or salt and vinegar chips cuz if they are i am definitely not letting those darlings fly anywhere but into my mouth. Sorry, love chips. Sorry, sorry.
I recently read a fairly major author talking about how she uses AI to plan her books, write all of her marketing copy, and draft her marketing plans. She said she even has it write some characters whose voices she has trouble writing. Give it another few years and I believe she’ll be letting it generate everything. In the same piece, she complained about the terrible things people say to her about it, and how she’s had to put all her comments behind a paywall. Then I read articles like these. You know AI was trained on *all fanfiction*, right? That’s why everything it spits out sounds like a third grader wrote it?