The Essential Human in the AI Loop

AI is now part of our everyday lives, from our homes to our workplaces, and even our healthcare facilities. But how useful and reliable are AI outputs for writers and editors? To quote author Debbie Emmitt, why is ‘…the value of human editors… at risk of getting lost in the noise’? And how can human intelligence enhance the technology?


Illustration: James Hutchinson + AI

How Does AI Work?

According to Google's AI Overview, 'AI works by using computer systems to simulate human-like cognitive functions'. According to IBM, however, that's rather misleading. The article What is an AI model? states that '...an AI model is defined by its ability to autonomously make decisions or predictions, rather than simulate human intelligence’. AI models make predictions based on algorithms. The large language models (LLMs) that generate text are trained on extremely large datasets, and they learn statistical patterns in order to predict the next word in a sentence.
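The next-word prediction described above can be illustrated with a deliberately tiny sketch (the corpus and function names here are hypothetical, invented purely for illustration): count which word follows which in a training text, then predict the most frequent follower. Real LLMs use neural networks over billions of words, but the statistical principle is the same.

```python
from collections import Counter, defaultdict

# Toy corpus: real models are trained on billions of words, not eleven.
corpus = "the editor reads the draft and the editor marks the text".split()

# Count, for each word, how often each other word follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'editor' follows 'the' most often in this corpus
```

Note that the prediction is simply the most *common* continuation in the training data, not the most accurate or most original one, which is the limitation the following sections discuss.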


Inherent Risks of Using AI

If you’ve used ChatGPT, you may have noticed the disclaimer at the bottom of the page: ‘ChatGPT can make mistakes. Check important info’. What kind of mistakes might those be? Here are some examples:

  • Human language is often ambiguous, and AI models might misinterpret tone or meaning.
  • Unless they are connected to real-time data, models are based on information that may be outdated.
  • If the datasets used to train the models contain inherent errors, gaps or biases, AI outputs will reflect them.
  • Even when data is accurate, models might misunderstand a question or miscalculate.

As the BBC reminds us in the 2023 article What is AI, how does it work and why are some people concerned about it?, computers cannot think, empathise or reason. In the words of Rui Queirós de Faria, Assistant Editor at Edições Afrontamento, ‘…digital tools can be used to perform simple tasks’, but it’s important to limit ‘…their use, as they can be counterproductive and undermine editorial standards’.


Homogenised Content

LLMs present us with likely solutions, not the best solutions. AI outputs for writers and editors are therefore useful indications of what most people would write (rather than should write) in a particular context. If you want original content, don’t rely too heavily on AI, because what it generates is, by definition, the opposite. In the words of authenticity/sensitivity reader Davina Bhanabhai, AI is no replacement for ‘…individual lived experiences and human perspectives’.


CIEP Conference 2025 (CIEP)

CIEP Conference 2025

The theme of this year’s Chartered Institute of Editing and Proofreading (CIEP) conference was ‘The Value of the Editorial Profession’. One of the speakers was editor, linguist and translator Dr Sara Kitaoji, who commented on the use of technology for editing:

‘If an editor is mainly correcting spelling and putting commas in the right place, or if a translator is merely substituting words or sentences from one language into another, then a machine could certainly be far more efficient and accurate than a human. But I know I do much more than that, and most importantly my clients know this, too. I focus on developing genuine, meaningful connections with authors, colleagues and peers to foster mutual learning, trust, rapport and solidarity.’

Human editors, therefore, add value by developing meaningful relationships with the people whose content they are optimising. When reviewing an author’s work, an editor makes assumptions based on a contextual understanding of the language and punctuation used, but can also raise queries directly with the author. A mutual understanding can inform future collaboration.


AI and human cooperation. Designed by Freepik

Human in the Loop

Data scientist and engineer Robert Monarch recognised the importance of human intervention for AI in his 2021 book Human-in-the-Loop Machine Learning. In the book’s foreword, Christopher D. Manning stresses the importance of ‘…building AI technology that effectively cooperates and collaborates with people, and augments their abilities’. Likewise, editor and indexer Magda Wojcik spoke at the CIEP Conference 2025 about how retaining human involvement ‘…means achieving what neither a human being nor a machine can achieve on their own… embracing the natural evolution of editorial practices’. Human intervention increases accuracy, introduces judgement, and trains models to be neutral and unbiased.
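One common human-in-the-loop pattern can be sketched in a few lines (the threshold, function and labels below are hypothetical, chosen purely for illustration, not taken from Monarch’s book): machine suggestions the system is confident about are applied automatically, while anything uncertain is routed to a human editor for judgement.

```python
# Hypothetical confidence cut-off: below this, a human editor decides.
CONFIDENCE_THRESHOLD = 0.9

def route(suggestion, confidence):
    """Accept high-confidence machine edits; escalate the rest to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto-apply", suggestion)
    return ("human review", suggestion)

print(route("fix comma splice", 0.95))   # confident: applied automatically
print(route("rewrite for tone", 0.40))   # uncertain: sent to a human editor
```

The design choice is the point: the machine handles the routine, high-certainty corrections, while judgement calls such as tone stay with the human, which is exactly the division of labour the speakers quoted here advocate.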

Recent Decline in the Use of AI

In September, the investment knowledge centre Apollo Academy reported that ‘AI adoption has been declining among companies with more than 250 employees’. This is the first recorded decline since 2023. Conor Cawley of tech.co attributes this to ‘the AI return-on-investment being a lot more lackluster than promised’. This doesn’t mean that people will stop using AI, but it does indicate that companies are starting to realise the associated costs and potential risks. AI can be a useful tool, but only when utilised by humans who are experts in their field. To quote Magda Wojcik once again, ‘Rather than replacing editors, AI can become a tool that enhances editorial work – but only when guided by human expertise’.

