But the paper has forbidden staff from incorporating AI-generated text into copy except in limited circumstances, which require the sign-off of top Telegraph editors and the legal department.
The guidelines reveal significant concerns within Telegraph Media Group over the legal and editorial risks of using AI for editorial purposes – including a fear that sensitive information entered into chatbots may surface elsewhere.
Other major UK and US publishers, including The Guardian, Financial Times, BBC, Associated Press and Reuters, have previously published their own guidelines and principles for using generative AI.
Telegraph warns staff AI-generated text may be subject to future copyright claims
The Telegraph’s managing editors sent the policy to staff on Tuesday morning, saying the guidelines were “intended to be broad and very high level” with “specific guidance in the future for various business use cases”.
Saying AI will be “an increasingly valuable tool” for the business, the editors warned their journalists it also “presents a fundamental challenge to our relationship with our readers.
“The trust and investment of readers in our content requires accountability of attribution: they must be confident about who has created what they are consuming on any Telegraph platform.”
Generative AI companies such as OpenAI have not disclosed what data they used to train the large language models (LLMs) that power their chatbots. Because of this, the Telegraph editors said, “there is the danger of plagiarism as the engines rely on ingested content from multiple sources… It may be in the future that this content will be subject to copyright claims”.
As a result, Telegraph journalists filing copy generated even partly by ChatGPT “will be subject to the same sanctions as there would be for plagiarism”.
The only occasions on which The Telegraph will publish AI-generated copy, the editors said, “would be the use of AI generated content to illustrate a piece about AI”.
However, such cases must be signed off by editor Chris Evans, a managing editor or a deputy editor, and head of editorial legal Rachel Welsh must also be notified.
Any AI-generated text that gets published must be “clearly signalled to the reader” and staff must make sure The Telegraph has the rights to use the content: “This must be checked with editorial legal and cleared.”
AI-generated images – even those created by an agency and supplied to The Telegraph – are subject to the same rules.
Because of the widely acknowledged tendency of generative AI to “hallucinate”, or confidently state false information as though it were true, Telegraph journalists are to “assume as a starting point that any information gathered or created with a GenAI is false…
“You are the human-in-the-loop, and anything you produce that is based on GenAI output(s) is your responsibility and you are accountable for it both in terms of the law and company regulations.”
Telegraph editors: please don’t put our IP into ChatGPT
The Telegraph’s guidelines prohibit the use of generative AI tools for copy editing.
“There are too many instances of incorrect information being produced by AI engines,” the editors said. “Equally, platforms will not sufficiently understand the Telegraph’s style and precepts; neither will there be the required nuance to edit pieces optimally.
“In any case we should on no account be entering entire Telegraph pieces into third party AI services.”
The managers explained that, because generative AI companies have not explained where they obtain the information on which their models are trained, “you are expected to be mindful that anything you type into a GenAI-based tool may resurface elsewhere externally”.
OpenAI says it no longer trains its LLMs on text entered by users into ChatGPT. Nonetheless, Telegraph staff were asked not to enter significant amounts of proprietary copy into an AI, pre- or post-publication, and to avoid entering information that was “company confidential or editorially sensitive” or that contains personally identifiable information such as names or email addresses. Doing so, employees were told, “could breach data protection laws”.
What has The Telegraph permitted staff to do with generative AI?
Despite the prohibitions, Telegraph editors said the company “will adopt a more pragmatic and permissive approach to uses of AI for ‘back office’ activities that do not involve direct publication of AI output”. Permitted uses include:
- Generating story ideas (“but the journalist is responsible for critiquing the ideas and ensuring that if one is advanced it is coherent, relevant and correctly prosecuted”)
- Coming up with ideas for a story illustration (but the illustration suggested by the AI must then be commissioned separately)
- Suggesting headlines
- Research assistance (“but all links and ‘facts’… must be traced back… and verified”).
Staff were also permitted to use AI “to predict story development and lines that might be pursued on a long-running story”.
But the editors said that even when using AI-generated information in “non-publication contexts… employees bear the responsibility to ensure it is correct and appropriately used”.
Telegraph Media Group chief executive Nick Hugh said earlier this year that while there were opportunities in generative AI, “I’m much less pro-automated content generation”.
He was speaking on a panel alongside Guardian Media Group chief executive Anna Bateson, whose publication in June issued its own “three broad principles” on AI use. Like The Telegraph’s, The Guardian’s AI policy says staff should include chunks of GPT-created text only with “human oversight” of its claims and “the explicit permission of a senior editor”.