87% of CNET’s AI content is detectable, new study shows
In an effort to reduce costs and increase publishing frequency, media publications are turning to an unusual tool to write content for them: artificial intelligence (AI).
One such publication is CNET, the tech news outlet. It recently came to light that 78 articles published by CNET were written with the help of AI, and researchers from Search Logistics wanted to see whether those pieces were detectable by a public AI content detection tool.
To test whether the content was detectable, Search Logistics downloaded all of the articles identified as AI-written, denoted by the byline "CNET Money."
The articles were then processed through Originality, a publicly accessible plagiarism and AI content detector. Each article was given an Originality score and tabulated for later review.
The study showed that 87.2% of CNET’s AI-generated content was detectable with a public tool, with 19.2% of the AI-generated content on CNET featuring more than 50% AI writing.
The study also had a few other findings. For example, 12.8% of CNET's AI content was not detectable: Originality found no AI content in 10 of the 78 articles tested.
Expanding on that, 7.7% of CNET's AI content, or 6 of the 78 articles studied, featured more than 75% AI writing.
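The study's headline percentages follow directly from the article counts it reports. A quick sanity check, assuming 68 of the 78 articles were flagged (the count implied by the 87.2% figure):

```python
# Verify the study's percentages from the reported article counts.
total = 78
flagged = 68     # articles Originality flagged as AI-written (implied by 87.2%)
undetected = 10  # articles in which Originality found no AI content
heavy_ai = 6     # articles with more than 75% AI writing

print(round(flagged / total * 100, 1))     # 87.2
print(round(undetected / total * 100, 1))  # 12.8
print(round(heavy_ai / total * 100, 1))    # 7.7
```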
This study matters because of journalism's relationship with Google. Google has stated that it will label AI-generated content as spam, which can lower search traffic for any site publishing it.
Moving forward, any publication that uses AI to generate and publish stories risks reduced site traffic, email deliverability and revenue, alongside the other inherent risks of AI content, such as inaccuracy.
Prior to Search Logistics’ research, CNET published a response covering why it decided to use AI in some of its stories.
In the article, CNET explains that the goal of experimenting with AI was to see if the tech could help its staff of reporters and editors to cover stories from a full 360° perspective.
CNET wanted to see if the AI engine could quickly assist its staff in creating stories from publicly available facts, freeing them to focus on more deeply researched, time-consuming stories.
After the revelation that some stories from the CNET Money editorial team were written with this AI engine's assistance, CNET changed its byline and disclosure system to better inform readers when an AI engine has written the story they are about to read.
CNET also stated that while the AI engine assists in writing these stories, all stories, regardless of the writer, are reviewed, fact-checked and edited by an editor with topical expertise before publication.
The Verge, another media publication, has since reported that CNET temporarily paused the publication of AI-written stories following a staff call held by the site's leadership.