In 2022, I had the privilege of attending Boskone and seeing Ted Chiang (Story of Your Life, Exhalation) speak on a panel about AI technology. The panel consisted of a Raytheon employee, a PhD in motivational psychology, a few writers, and, of course, Ted.
The motivational psychologist introduced himself and spoke about how his field is using psychology to train AI models and assess the health of business leadership.
The Raytheon employee introduced himself and extolled how, thanks to AI technology, Raytheon’s missile targeting systems were more accurate than ever and could hit specific targets with an extremely narrow margin of error, thereby limiting collateral damage and civilian casualties.
Then Ted introduced himself and said something along the lines of: “Let’s call AI what it is: a highly complex, efficient algorithm that gives the impression of intelligence, but ultimately falls short of true intelligence.”
And that statement proved to be both gasoline and a lit match for an absolute barn-burner.
Much to the dismay of the Raytheon employee, enterprising executives trying to cut costs by cutting out the creative process, and tech-bros everywhere, Ted is right: despite its name, AI is not actually “intelligent.”
The American Psychological Association defines intelligence as follows: “n. the ability to derive information, learn from experience, adapt to the environment, understand, and correctly utilize thought and reason.”
At first glance, AI does seem to fit this definition. It can certainly derive information from a prompt, adapt when given feedback, and use its algorithm to do “helpful” things like write cover letters or generate images with too many fingers. As Ted Chiang pointed out on his panel, however, AI can only adapt to stimuli that fit the system for which it was designed. It is not self-learning, able to adapt to any stimulus it may encounter. A language-model AI will always be a language-model AI. It cannot, for example, teach itself to drive a car, operate machinery, or pilot a missile without a human first updating its source code to give it the ability to do these things.
Lab rats, on the other hand, have been taught to do extraordinary things, things that, arguably, the “designer” of rats never intended, or even imagined, rats could do: see “Enriched environment exposure accelerates rodent driving skills” by L.E. Crawford, L.E. Knouse, M. Kent, D. Vavra, O. Harding, D. LeServe, N. Fox, X. Hu, P. Li, C. Glory, and K.G. Lambert.
So, to quote Ted again: “you could say that the average lab rat is more ‘intelligent’ than AI.”
Even though driving rats may sound absurd, it’s worth noting that the scientists did not “edit” these lab rats. They did not enhance their brain structure or their genetic code. Conceivably, any rat in the world has the capacity and intelligence to learn this skill and execute it in order to find food if placed in the correct environment. In other words, the rats adapted to a new stimulus that was not in the original rat “source code” as set out by God or biology or the Flying Spaghetti Monster. AI, on the other hand, cannot even consistently generate images free of embarrassing artifacts, like arms in the wrong places, extra fingers, or warped backgrounds, a task for which it was specifically designed.
So, why are we allowing it a place at the creative table?
The answer falls at the intersection of misunderstanding what AI is, frustration at the state of the greater publishing industry, and the emotional toll of learning a skill as complex and intricate as writing a novel. Over the past year, I’ve seen supporters of AI cite two reasons why they view it as an enhancement to their writing process:
They believe that AI gives them access to quality feedback and writing advice/prompts, thereby accelerating their career trajectories, shortening their writing/editing timelines, and allowing them to generate more work at a faster pace.
They view AI as an anti-capitalist/anti-establishment act of resistance for writers who cannot afford an MFA education, professional editor, workshop residency, or developmental editor, and believe that the feedback they gain will accelerate their career trajectories by cutting out editors.
I also feel that I have to address the biggest strike against AI: using it is theft. There have been multiple lawsuits against the makers of ChatGPT, Microsoft, Google, and others regarding data scraping and the use of private information, like unpublished novels hosted on Google Drive, as well as famous bestsellers, as fodder to train AI models. If this is true and proven in a court of law, then the use of AI in any form is inexcusable and worthy of harsh condemnation.
Setting aside the theft of intellectual property (which feels like an absurd sentence to write, but go off, Silicon Valley, I guess), the other two reasons listed above boil down to the writer believing that:
AI is a teacher worth listening to.
The theft of labor from professionals both inside and outside the “traditional” publishing industry is acceptable collateral for commercial success.
We are at the crossroads of several very strange cultural moments. The first is a misunderstanding and devaluing of education and what it is for. The second is mistaking free access to knowledge for a sufficient substitute for experience. And the third is, of course, AI.
When I started teaching at the university level, I was assigned three courses: a core composition class for non-English majors, a 300-level intro to creative writing course, and the advanced 400-level “fiction writing” course. Quite quickly, especially in the composition class, I encountered a kind of contractual sentiment from certain students: “I’m here. I’m attending class. I (or my parents) am paying you, so I deserve an A.”
That’s not really what education is about. It is not a hoop to jump through to start the next phase of your life (although I understand that sentiment, because that’s how it’s popularly portrayed). Inevitably, I’d have a bold student raise their hand in the first few weeks of class and say something like “No offense, professor, but I’m a business/nursing/biology/accounting major. Why am I in this class?” To which I’d respond with stories about how a friend of a friend is a NASA scientist working on the Europa Clipper, who consistently gets funding for his projects while his coworkers, who cannot express themselves in writing, are passed over. Or how businesses need plain-language writers to develop copy for their websites. Or how I ended up helping my boss’s boss optimize their job postings to make them more accessible for applicants.
In my upper-level creative writing courses, I ran into a different sentiment: “I really want to be a published author. Please tell me how to do that so I can achieve my dreams as quickly as possible.” I thought the same thing when I was in undergrad. I thought “learning to write” looked like doing the exercises to gain the skills to sound “professional.” But that’s not at all how it works.
In the “real world,” this is where the breakdown occurs between educators/industry professionals and aspiring writers, too. The common sentiment is “All this information is available on the internet/through ChatGPT, for free. So what good are editors (or creative writing professors)?” And, begrudgingly, I admit that it’s true: if you plug your manuscript into an AI language model, it will spit out suggestions on how to tighten your sentence structure, remove passive voice, punch up weak vocabulary, and fix your grammar. On the surface, it is a ruthless, emotionless editor that gives the impression of competence. Receiving all that editorial feedback instantly, for free, in the comfort of your home, without having to expose your story to anyone except the friendly-looking interface that runs on spicy math (and did I mention it’s free?) feels good, man.
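(To illustrate just how low that bar is: the entire “free editor” described above amounts to a few lines of glue code around someone else’s model. Below is a minimal sketch, assuming the OpenAI Python client; the model name, file name, and prompt are placeholders of my own, not recommendations.)

```python
# A toy sketch of the "instant free editor" workflow, assuming the
# OpenAI Python client. Model, file name, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

manuscript = open("chapter_one.txt").read()  # placeholder file name

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not an endorsement
    messages=[
        {"role": "system",
         "content": "You are a line editor. Flag passive voice, weak "
                    "vocabulary, and grammar errors in the text below."},
        {"role": "user", "content": manuscript},
    ],
)

print(response.choices[0].message.content)  # the "editorial feedback"
```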
But what I (and most new aspiring authors) didn’t understand when I was first starting out is that learning to write is a long process, much longer than the general public realizes, and there are no shortcuts. Any teacher who tells you something different is either lying, delusional, or trying to get you to buy their Masterclass. There are no tricks of the trade, although there are axioms. “Focus on emotion, summarize facts.” “Good writing is the heart in conflict with itself.” “Show, don’t tell.” “Avoid adverbs.” “Grimace is a word that just means ‘to have a facial expression.’ Stop using it to denote discomfort.” These axioms are pithy and digestible, but simply knowing them is not enough. Books are sold on voice and luck, not on how many adverbs you don’t have.
And this is the crux of it: AI knows these axioms. It will follow these axioms to the letter. It will mark every instance where you’ve violated them in your manuscript. But it does not understand them. It cannot articulate the reasoning behind the axiom in the context of an individual author’s work. It cannot build a mentoring relationship with an aspiring author and encourage that author to develop their voice in new ways, perhaps ways that shun the axiom to enhance a character’s voice. It cannot analyze your progression as a writer and tailor its feedback in the context of your writing journey. It cannot comprehend character arcs, theme, or the development of a magic system. It cannot articulate how your work engages in dialogue with the popular or classic lexicon and offer recommendations to help it better fit or critique that lexicon. It cannot do this because it is not intelligent. It is a complex algorithm that gives the impression of intelligence, an impression that breaks down under even light scrutiny. It simply spits out what it thinks are the answers to your prompt and pasted story and moves on to the next prompt. It is, at best, glorified spell-check and, at worst, a garbage-class charlatan hoping to use your work to further its own “career.”
When you pay an industry professional, you are not paying for an axiom; you are paying for a personalized, tailored critique of your work. You are paying for context. It’s the difference between plugging your symptoms into WebMD and seeing a good specialist who believes you when you tell them your symptoms. The specialist might actually diagnose you. WebMD will shrug and say, “idk, probably cancer, though.”
I know that the industry is deeply frustrating. I know that everything is expensive and no aspiring author has the cash to pay a developmental editor thousands of dollars to look at a book that may or may not be published. I know it feels hard, and it is, because publishing is hard and that’s just the nature of the beast. The good news is that every published author learned their craft from somewhere. It did not spring fully formed into their psyche like a gift from the almighty. It took work, time, and the application of knowledge that is readily available in craft books and on the internet.
I used to tell my students: an MFA won’t get you published. A developmental editor won’t get you published. Networking won’t get you published. AI won’t get you published. The only things that will get you published are tenacity, perseverance, a growth mindset, and luck. So put your head down, do your time, write your 1 million words, put in your 10,000 hours, read widely both in and outside your genre, approach your work with criticism and compassion, hope for the best, and never stop trying to improve.
And whatever you do, don’t run your work through artificial “intelligence” (quotations derogatory) for critique. You’re probably better off putting a lab rat through an MFA program.
Hey Jake,
When I saw the title of this piece I thought, "Not another 'AI is garbage' rant." I saw our mutual mentor make a similar comment on Facebook recently. However, your commentary is very insightful, and it's on the money as far as learning from an AI goes. 100% agree...but...
(Unpopular opinion to follow)
But I would also say that it's not going away; in fact, it is proliferating at an astonishing rate into everything. The genie is out of the bottle and it's not going back in, however it was made. Just yesterday I saw the ad for Google's new AI search algorithm.
I hear the concerns about the theft of IP to make these, and I'm sure it's a real thing for a lot (most) of them. Does that mean we have to boycott the use of AI forever? All these things are rolling through the courts, and there is money to be made, so the companies will prevail eventually. I would be willing to bet they will only continue to reach into everything that can possibly be automated, more thoroughly and more robustly, and it's barely even started. I can't imagine where it will be in five years. Will it ever be actually intelligent? I don't think it will happen in my lifetime, but maybe not long after I'm gone, at the rate our computing power continues to grow. Who knows.
I've used Autocrit for almost 10 years now. Back then it was simply a software program, not touted as an AI, that told you how many repeated words you have, or pacing or dialogue issues, or a thousand other things. This may not be exactly what you are talking about, as it's not specifically AI, but it's certainly doing much of the same stuff. I used it as a first-pass editing tool, to find flaws in my manuscript that could be easily fixed. I didn't use it as a teaching tool. And honestly, I barely ever use it anymore, but I look at it as one tool in the tool bag. It doesn't replace a real person reading the manuscript. I use many beta readers as well, and I also send my manuscripts to a real-life editor when I'm ready for that phase.
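(For what it's worth, the repeated-words pass that tools like Autocrit run is, at bottom, just counting. This isn't their actual code, which isn't public, just a toy sketch of the idea:)

```python
# Toy sketch of a repeated-word report, in the spirit of first-pass
# tools like Autocrit. Not their actual method (not public), just counting.
import re
from collections import Counter

def repeated_words(text, min_count=5):
    """Return (word, count) pairs seen at least min_count times, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())
    return [(w, n) for w, n in Counter(words).most_common() if n >= min_count]

with open("manuscript.txt") as f:  # placeholder file name
    for word, count in repeated_words(f.read()):
        print(f"{word}: {count}")
```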
Confession time: I have played around with ChatGPT out of curiosity. There are more and more AI tools (not ChatGPT) that are specific to different focus areas, like marketing, especially marketing, and even fiction writing, as you allude to in this piece, albeit not directly. One of the things AI does well is summarize. Doing synopses of different lengths is a pain in the butt, and only really useful when you are hunting for an agent, but an AI can pop one out in seconds. It can help you with a back cover blurb the same way. I think of AI as a little bit like Wikipedia: it's a starting point, and they put all the references at the bottom of the page so you can look up the source material. You still have to edit and put your spin on it.
AI is not going away, and maybe it's garbage now, but it's only going to get better and easier to use.
Your post was great. I look forward to more.