Chatbots seem to have become a magic wand for students who aim to receive the best grades with minimum effort. However, teachers can use special tools to find out whether students did their homework themselves or used AI writing models such as ChatGPT or Gemini. In particular, TraceGPT by PlagiarismCheck.org can be integrated into any LMS, such as Moodle, Canvas, or Google Classroom. A teacher can then immediately see the percentage of AI in each written work submitted for evaluation, just as they see plagiarism with a plagiarism detector.

Can you tell if someone used ChatGPT?

AI detection tools analyze stylometry and many other parameters at the level of individual sentences and the text as a whole. You get a clear percentage result showing not only whether AI was used, but how much of the student’s paper is likely AI-generated relative to the entire text. You will also know exactly which sentences were most likely created by AI rather than a person, because the tool highlights them with color.

However, it is important to analyze the AI detection results. Here is how:


What affects the verdict?

The basic model of the detector is based on the perplexity metric. Human speech and writing are characterized by a higher level of spontaneity and creativity, while machine-generated texts are more predictable. We also take into account other parameters, such as the style and structure of the text.

Here are the key features that influence the verdict:

  • The text is analyzed separately for features characteristic of human writing and for AI patterns;
  • The text is converted into vectors, and the characteristics are calculated both for the text as a whole and for each sentence, according to the appropriate formulas;
  • Depending on the level of each indicator and the weight assigned to it, the reliability percentage is determined.
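The last step, combining weighted indicators into one percentage, can be pictured with a small sketch. The feature names, scores, and weights below are illustrative assumptions, not TraceGPT’s actual formulas:

```python
# Hypothetical sketch: combining per-feature AI-likelihood scores into a
# single reliability percentage via a weighted average. The feature names
# and weights here are invented for illustration only.

def reliability_percent(feature_scores, weights):
    """Weighted average of per-feature scores (each in 0..1), as a percentage."""
    total_weight = sum(weights.values())
    weighted = sum(feature_scores[name] * weights[name] for name in weights)
    return round(100 * weighted / total_weight, 1)

scores = {"perplexity": 0.8, "style": 0.6, "structure": 0.7}   # toy values
weights = {"perplexity": 0.5, "style": 0.3, "structure": 0.2}  # toy weights
print(reliability_percent(scores, weights))  # 0.8*0.5 + 0.6*0.3 + 0.7*0.2 = 0.72 → 72.0
```

A real detector would derive such scores from a trained model rather than hand-picked numbers, but the idea of weighting several signals into one verdict is the same.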

How AI detectors work

The model checking for AI is trained to analyze the text for traits characteristic of AI writing. One of the main ones is perplexity, or the predictability of the text. Text-generating chatbots tend to opt for the most commonly used words in the expressions they compose, trying to make the output meaningful and coherent, while humans are more creative and unpredictable in their word choices.

Compare:

Children go to school every day. – Highly predictable word usage.

Children go to school every autumn when the school year starts. – Less predictable, yet still relatively common.

Children go to school every time the teacher asks them to help with the festival preparation. – Unlikely to be used; needs additional context.

Children go to school every glad to see them. – Unlikely to be used, irrelevant, and grammatically incorrect.

So, the model will analyze the text for AI-common parameters and flag the parts where AI-like writing has been detected. The more such parts the text contains, the higher the AI-content percentage the tool shows.
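The examples above can be made concrete with a toy perplexity calculation. Real detectors score text with large language models; the tiny unigram model below (with a made-up corpus and add-one smoothing) only illustrates why common phrasing scores as less “perplexing” than unusual phrasing:

```python
import math
from collections import Counter

# Toy corpus standing in for "everything the model has seen". Entirely
# illustrative; real detectors are trained on vastly more text.
corpus = ("children go to school every day "
          "children go to school every morning "
          "children like school").split()
counts = Counter(corpus)
total = sum(counts.values())

def perplexity(sentence, alpha=1.0):
    """Perplexity of a sentence under a smoothed unigram model:
    exp of the average negative log-probability per word."""
    words = sentence.lower().split()
    vocab = len(counts) + 1  # +1 bucket for unseen words
    log_prob = sum(math.log((counts.get(w, 0) + alpha) / (total + alpha * vocab))
                   for w in words)
    return math.exp(-log_prob / len(words))

predictable = perplexity("children go to school every day")
surprising = perplexity("children attend lessons whenever festivals happen")
print(predictable < surprising)  # True: the common phrasing has lower perplexity
```

The predictable sentence reuses words the model has seen often, so its perplexity is low; the surprising one is full of unseen words, so its perplexity is high. Detectors apply the same intuition with far more sophisticated models.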

Here lies the reason why AI detectors were believed to be biased against non-native speakers. It is true that the tools may take their texts for AI-generated more often, but it’s not because the model has something against them. The thing is, the writing of non-natives is more predictable: they may use less diverse vocabulary, tend to choose cliché phrases, and feel less free to play with expressions and produce unpredictable word combinations. Hence, their writing has low perplexity, which AI detectors read as a sign of AI cheating.

The situation described is a vivid example of why AI checkers can’t serve as the sole judgment of a work, and the final evaluation must stay with a human. AI detectors, however, are helpful for drawing attention to signs of possible machine writing.

Another characteristic of AI writing is the use of complex sentences and complicated constructions. The output often sounds a bit monotonous and abstract. Chatbots tend to generate sentences of approximately the same length, while human writing is more rhythmic. In addition, AI output often includes general phrases in an attempt to be polite and comprehensive, so the text may lack individuality and character. Humans can also produce such texts, which is when false positives occur, but on the whole AI detectors catch bot-produced writing quite accurately.
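The “monotonous sentence length” signal is easy to illustrate. The sketch below (an assumption for illustration, not TraceGPT’s actual method) measures how much sentence lengths vary in a sample; uniform lengths are one weak hint of machine-generated text:

```python
import statistics

# Illustrative sketch: compare the variability of sentence lengths in two
# samples. Low variability (monotonous rhythm) is one weak AI-writing signal.

def length_variability(text):
    """Population standard deviation of sentence lengths, in words."""
    chunks = text.replace("!", ".").replace("?", ".").split(".")
    lengths = [len(s.split()) for s in chunks if s.strip()]
    return statistics.pstdev(lengths)

monotonous = ("The report covers the topic well. The author states the idea clearly. "
              "The essay presents the facts plainly.")
rhythmic = ("Wow. The author really dug into this, chasing every odd detail. "
            "Then a single blunt claim lands.")
print(length_variability(monotonous) < length_variability(rhythmic))  # True
```

A single number like this proves nothing on its own; it is exactly the kind of weak signal a detector weighs alongside many others.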

Can AI detectors be trusted?

The short answer is yes, but only if you use an AI detector for guidance rather than final judgment.

Even the PlagiarismCheck.org AI checking tool, which distinguishes between machine-generated and human-written texts with 97% accuracy, doesn’t give you an unambiguous answer like “Don’t worry, this is for sure a human-written paper” or “This text is 100% AI-generated.” Instead, the verdict will sound like “Most likely AI” or “Most likely written by a human,” leaving the final decision to you.

The thing is, the only way to be 100% sure is to watch the actual process of writing the text. In all other cases, the AI-detecting model can only determine how similar the paper is to what it knows about AI content. What can you do with the AI detection results, then?

  1. Trust your experience. If something feels off, and the AI detector says the text is probably AI-generated, it is definitely a sign to look deeper into the issue.
  2. Use a plagiarism checker. ChatGPT does not create content from scratch but draws on existing sources without citing them, so AI-generated texts often have matches with already published pages.
  3. Check authorship, especially if you know the writer’s usual style and the text does not sound like theirs.
  4. Analyze the AI check results: if the detector flagged random words or sentences, there is probably nothing to worry about, as it is unlikely that someone used AI to generate them. However, if the checker highlights extracts or paragraphs, it may be a sign of cheating.

How can I tell if a student used ChatGPT?

No AI text detector can give a 100% certain result, because human and AI writing characteristics can be similar. Moreover, AI models keep improving, and chatbots learn to sound more human. The use of additional tools to manipulate artificially generated text and pass it off as human-written further complicates AI detection.

Remember that guidelines regarding AI and plagiarism should be aligned with the Integrity Policy and communicated to students, as cheating accusations can have serious consequences. Before notifying a student, additional verification and a personal conversation are required.

Tools like PlagiarismCheck’s TraceGPT are a solid first step toward a trusting conversation with a student. With a report from machine algorithms in hand, you can ask the student about their train of thought and the conclusions expressed in the work, paying particular attention to statements labeled as AI.

Use these starters for tough AI conversations:

  • Highlight strengths and weaknesses of the assignment.
  • Address the student’s struggles and any intentional or unintentional AI misuse.

Topics for discussion:

  • Time management to avoid AI misuse.
  • Remediating skill deficiencies.
  • Suggestions for review, revisions, and periodic check-ins.

Conversation jump starters:

  1. Let’s work together to improve areas where we see errors or AI misuse in your assignment.
  2. Explain your process of creating this assignment and identify areas for improvement.
  3. Review the AI writing and share your thoughts on why it was flagged as AI and what could be improved in those sentences.

With PlagiarismCheck.org, teachers detect AI misuse, and students perfect their writing, increasing trust and promoting honesty in academia. Join us now to see how it works!