3 main concerns of OpenAI’s text classifier
A new round of humans against the machines is here: OpenAI has recently released a beta version of its classifier, a tool meant to distinguish human-written from AI-written content. The release was a reaction to educators’ concerns about ChatGPT and its possible impact on academic integrity.
But this release comes with its buts.
#1. Low accuracy
And by low, we mean “Can’t be trusted. At this stage, at least.”
The developers themselves admit the tool correctly identifies only 26% of AI-written text, which means it fails to flag artificial content in 74% of cases. At the same time, it “succeeds” at mislabeling 9% of texts written by real people as machine-generated.
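To see what those two rates mean in practice, here is a quick illustrative calculation. The batch size and the even human/AI split are made-up assumptions for the sake of the example; only the 26% and 9% figures come from OpenAI’s own numbers:

```python
# Illustrative only: OpenAI's published rates applied to a
# hypothetical, evenly split batch of papers (the split and
# batch size are assumptions, not OpenAI data).
TRUE_POSITIVE_RATE = 0.26   # AI-written text correctly flagged as AI
FALSE_POSITIVE_RATE = 0.09  # human-written text wrongly flagged as AI

ai_papers, human_papers = 100, 100  # hypothetical batch

caught = ai_papers * TRUE_POSITIVE_RATE                # AI papers flagged
missed = ai_papers * (1 - TRUE_POSITIVE_RATE)          # AI papers that slip through
wrongly_flagged = human_papers * FALSE_POSITIVE_RATE   # humans falsely accused

# Precision: of all papers the tool flags as AI, how many really are?
precision = caught / (caught + wrongly_flagged)
print(f"Caught: {caught:.0f}, missed: {missed:.0f}, "
      f"false accusations: {wrongly_flagged:.0f}, precision: {precision:.0%}")
```

On this hypothetical batch, 74 of 100 AI-written papers would pass undetected, while 9 honest students would be flagged anyway.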
Here’s an example of the tool failing to identify an AI-written text:
[Image: “OpenAI AI Text Classifier – A GPT finetuned model to detect”]
One reason for the high number of false positives and negatives may be the dataset behind the technology: the classifier was trained only on pairs of human-written and AI-written texts on the same topic.
#2. Making students’ work available to the public
To make large language models like GPT-3 smarter and more precise, you need to “feed” them more content.
OpenAI in general, and its products in particular, use publicly available datasets. So when you submit a student’s paper to the classifier, it can be added to that dataset. From that moment, the assignment goes live and becomes potentially available to thousands of other users whose prompts are specific enough.
Next time another student asks ChatGPT to generate an essay with a similar title or instructions, they can get a paraphrased, or even exact, match of the paper you submitted.
#3. Lightning-fast learning
Of AI, not people, unfortunately. The main concern here is that GPT-3 and other AI models learn from each prompt. The more people interact with them and correct their outputs, the better the results they deliver next time.
The same goes for your students’ papers: the more input it gets, the more human-like the assignments ChatGPT can produce. As a result, it may become even harder to spot a text’s origin.
Any solution to that?
Other AI-generated text detectors are already on the way, each with its own accuracy and its own approach to handling the texts you submit.
Here at PlagiarismCheck.org, we continue working on a solution to help you tell whether a paper was written by your student or by a machine.
Stay tuned, and we’ll let you know as soon as we’re ready to provide you with early access to the tool.