
I tested 9 AI content detectors – and these 2 correctly identified AI text every time


Two of the nine AI detectors I tested correctly identified AI-generated content 100% of the time. That's up from zero in my early rounds of testing, but down from my last round.
When I first examined whether it’s possible to fight back against AI-generated plagiarism, and how that might work, it was January 2023, just a few months into the world’s exploding awareness of generative AI.
This is an updated version of that original January 2023 article. When I first tested GPT detectors, I used three: the GPT-2 Output Detector (this is a different URL than we published before), Writer.com AI Content Detector, and Content at Scale AI Content Detection (which is apparently now called BrandWell).
The best result was 66% correct from the GPT-2 Output Detector. I did another test in October 2023 and added three more: GPTZero, ZeroGPT (yes, they’re different), and Writefull’s GPT Detector. Then, in the summer of 2024, I added QuillBot and a commercial service, Originality.ai, to the mix. This time, I’ll also be adding Grammarly’s beta checker.
In October 2023, I removed the Writer.com AI Content Detector from the test suite after it failed in January 2023 and again in October. It failed once more in summer 2024, but it now appears to work, so I'm restoring it to the test suite. See below for a comment from the company, which its team sent me after the original article was published in January.
I’ve re-run all the tests to see how the detectors perform today. While I had two strong successes, the big takeaway seems to be just how inconsistent the results are from one AI checker to another.
Before I go on, though, we should discuss plagiarism and how it relates to our problem. Merriam-Webster defines “plagiarize” as “to steal and pass off (the ideas or words of another) as one’s own; use (another’s production) without crediting the source.”
This definition fits AI-created content well. Someone using an AI tool like Notion AI or ChatGPT isn't stealing content, but if that person claims the AI's words as their own without crediting the source, it still meets the dictionary definition of plagiarism.
In this experimental article, I’ve asked ChatGPT to help out. My words are in normal and bold text. The AI’s words are italicized. After each AI-generated section, I’ll show the results of the detectors. At the end of the article, we’ll look at how well the detectors performed overall.
Here are the test results for the above text, which I wrote myself:
Human-written content: 7-of-9 (77%) correct
Last time: 5-of-7 (71%) correct
Before we move on, let's define what counts as a "correct" analysis. If a detector returns a numeric score, anything above 80% confidence in the text's actual origin counts as correct, which is generous. If it returns a summary verdict instead, I'll read the summary and make a judgment call.
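The numeric half of that rule can be written as a tiny function. This is just a sketch of the scoring criterion described above; the function name and signature are my own, not part of any detector's API:

```python
def is_correct(score_for_true_class: float) -> bool:
    """Apply the article's scoring rule for numeric detector output.

    score_for_true_class: the detector's reported confidence (0-100)
    that the text has its *actual* origin (AI-written for AI text,
    human-written for human text). Anything above 80 counts as a
    correct analysis; 80 or below does not.
    """
    return score_for_true_class > 80.0
```

So a detector that reports 66% confidence that human-written text is human, as the GPT-2 Output Detector did in my first round, would not be credited with a correct call.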
With that, let’s dig in some more.
Explain why ChatGPT can be a real problem for teachers and editors when it comes to plagiarism
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text by predicting the next word or phrase in a given input sequence. While ChatGPT can be a useful tool for generating text and responding to prompts, it can also be a problem for teachers and editors when it comes to plagiarism.
One of the main reasons why ChatGPT can be a problem for teachers and editors is that it can generate text that is very similar to that of real human writers. This means that students or other individuals could potentially use ChatGPT to generate text that they could then submit as their own work, thereby committing plagiarism.
For teachers and editors, detecting whether a given piece of text was generated by ChatGPT or written by a human can be very difficult, as the generated text may not contain any obvious errors or discrepancies that would reveal its artificial origin. This can make it difficult for teachers and editors to identify and prevent plagiarism, as they may not be able to tell whether a given piece of text is original or copied from another source.
Overall, ChatGPT and other similar language models can be a problem for teachers and editors when it comes to plagiarism because they can generate text that is difficult to distinguish from that written by humans, making it easier for students and other individuals to commit plagiarism without being detected.
AI-written content: 4-of-9 (44%) correct
Last time: 4-of-7 (57%) correct
Remember, text shown in italics was generated by the AI.
Show me at least three ways that I can detect ChatGPT was used to produce a piece of text
There are several ways that you can potentially detect whether ChatGPT or another language model was used to produce a piece of text. Here are three potential approaches you could use:
Keep in mind that it can be difficult to definitively determine whether a language model was used to generate a piece of text, and these methods may not always be effective. However, they can be useful tools to help you identify potential instances of language model-generated text.
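One heuristic that often comes up in discussions of detection, though it isn't named in the AI's answer above, is "burstiness": human writing tends to vary sentence length more than model output does. Treat the following as my own illustrative sketch of that idea, not as how any of the tested detectors actually works:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Naive 'burstiness' heuristic: the standard deviation of
    sentence lengths, measured in words. Low variance is sometimes
    read as a weak hint of machine-generated text.

    This is illustrative only; as the text above notes, no simple
    heuristic can definitively identify AI-generated writing.
    """
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)
```

Uniform, evenly paced prose scores near zero, while prose that mixes short and long sentences scores higher, which is exactly why such heuristics produce the inconsistent verdicts seen throughout these tests.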
In this set of tests, only the italicized content above was run through the checker suite.
AI-written content: 5-of-9 (55%) correct
Last time: 5-of-7 (71%) correct
List three plagiarism checkers that can detect ChatGPT text
It is worth noting that there is currently no plagiarism checker that is specifically designed to detect text generated by ChatGPT or other language models. While some plagiarism checkers may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, there is no guarantee that they will be able to definitively determine whether a piece of text was generated by a language model.
