Can ChatGPT edit fiction? 4 professional editors asked AI to do their job – and it ruined their short story
We are professional editors, with extensive experience in the Australian book publishing industry, who wanted to know how ChatGPT would perform when compared to a human editor.
- To find out, we decided to ask it to edit a short story that had already been worked on by human editors – and we compared the results.
The experiment: ChatGPT vs human editors
- The story we chose, The Ninch (written by Rose), had gone through three separate rounds of editing, with four human editors (and a typesetter).
- The first version had been rejected by literary journal Overland, but its fiction editor Claire Corbett had given generous feedback.
- We had a wealth of human feedback to compare ChatGPT’s recommendations with.
- By comparing ChatGPT’s recommendations with that human feedback, we tried to determine where, and at what stage in the process, it might be most successful as an editorial tool.
Round 1: the first draft
- (Authors submitting stories to magazines and journals generally don’t give human editors a detailed, prescriptive brief.)
- Interestingly, ChatGPT did not pick up that the story was now published and attributed to an author.
- Nor did it define the genre, which is one of the first assessments an editor makes.
- And the advice for more foreshadowing, dialogue and description, along with shorter paragraphs and an alternative ending, was generally sound.
Stage two: AI (re)writes
- Could you please suggest places in the story where the pace needs to speed up or slow down?
- Could you please suggest places where there is too much imagery and it needs more action storytelling instead?
- ChatGPT also changed the text from Australian English (which all Australian publications require) to US spelling and style (“realization”, “mom”).
What did the human editors do?
- The biggest problem is that final transition – I don’t know how to read the narrator.
- For me stories are driven by choices and I’m not clear what decision our narrator, or anyone else, in the story faces.
- It’s entirely possible I’m not getting something important, but I think that if I’m not getting it, our readers won’t either.
- Editorial expertise incorporates intellectual, creative and emotional capital – all gained from lived experience, complemented by technical skills and industry expertise, applied through the prism of human understanding.
- (After all, the author doesn’t have to do what we say – ours is a persuasive profession.)
Round 2: the revised story
- Next, we submitted a revised draft that had addressed Claire’s suggestions and incorporated the conversations with Nicola.
- Again, it didn’t pick up that the story had already been published, nor did it clearly identify the genre.
- It was a laborious process: the 2,500-word piece had to be submitted in chunks of 300–500 words and the revised sections manually combined (a step that is easy to script, as sketched below).
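For anyone repeating the experiment, that manual chunking is straightforward to automate. Below is a minimal Python sketch – our own illustration, not part of the original workflow – that splits a story into chunks of at most 500 words, breaking on paragraph boundaries so no passage is cut mid-thought.

```python
def chunk_story(text: str, max_words: int = 500) -> list[str]:
    """Split a story into chunks of at most max_words, on paragraph boundaries."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for para in paragraphs:
        words = len(para.split())
        # Close the current chunk if this paragraph would push it past the limit.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# A 2,500-word story yields roughly five to eight chunks of 300-500 words --
# which still have to be reviewed and recombined by hand afterwards.
```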
Round 3: our final submission
- In the third and final round of the experiment, we submitted the draft that had been accepted by Meanjin.
- This time, we followed up with separate prompts for each element we wanted ChatGPT to review: title, pacing, imagery/description (see the sketch after this list).
- ChatGPT came back with suggestions for how to revise specific parts of the text, but the suggestions were once again formulaic.
- There was no attempt to offer – or support – any decision to go against familiar tropes.
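Element-by-element prompting can also be scripted. The sketch below is a hypothetical illustration only: we worked in the chat interface, not the API, and the model name, prompt wording and use of the openai Python package are all our assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One focused prompt per editorial element, mirroring our third round.
ELEMENTS = {
    "title": "Assess the title. Suggest alternatives only if the current one fails the story.",
    "pacing": "Suggest places where the pace needs to speed up or slow down, and say why.",
    "imagery/description": "Suggest places where imagery crowds out the action, and say why.",
}

def review(story: str) -> dict[str, str]:
    """Collect ChatGPT's feedback on each editorial element separately."""
    feedback = {}
    for element, instruction in ELEMENTS.items():
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: any chat-capable model would do
            messages=[
                {"role": "system",
                 "content": "You are a fiction editor. Use Australian English spelling and style."},
                {"role": "user", "content": f"{instruction}\n\n{story}"},
            ],
        )
        feedback[element] = response.choices[0].message.content
    return feedback
```

Note the system message pins the register to Australian English – the style ChatGPT drifted away from in our earlier rounds.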
Sometimes editorial expertise shows itself in not changing a text. Different isn’t necessarily good. It takes an expert to recognise when a story is working just fine. If it ain’t broke, don’t fix it. It also takes a bird’s-eye view to notice when the way type is set creates ambiguities in the text. Typesetters really are akin to editors.
The verdict: can ChatGPT edit?
- But we recommend editors and authors don’t ask it for individual assessments or expert interventions any time soon.
- A major problem that emerged early in this experiment involved ethics: ChatGPT did not ask for or verify the authorship of our story.
- Human editors demonstrate their credentials through their work history, and keep their experience up-to-date with professional training and qualifications.
- In Rose’s case, her oceanic allegory about difference, with a nod to the supernatural, was turned into a story about a fish.
ChatGPT is ‘like the new intern’
- AI suggestions can be scrutinised – and integrated or dismissed – by authors or editors during the creative process.
- But when used by human editors, it’s like any other tool – as good, or bad, as the tradesperson who wields it.
- Renée Otmar is affiliated with the Institute of Professional Editors, the Australian Society of Authors, Writers Victoria, Small Press Network and Life Stories Australia.
- She is an Honorary Research Fellow in the Faculty of Health, Deakin University.