Artificial intelligence is changing teaching and learning in ways we never expected. Students are turning to AI tools like ChatGPT to draft essays, respond to homework questions, and, in some cases, create entire research papers. And although this appears to be a significant jump in educational technology, it also creates a significant problem — how does a teacher maintain academic integrity when AI can produce the work for a student?
In an effort to combat this, many institutions have rallied around AI detection tools, which are designed to flag machine-generated content in student work. But do these tools really make a difference? And if they don’t, what should educators consider instead? As AI evolves, the efficacy of such detection tools is becoming ever more uncertain, putting schools and universities in a challenging position.
AI writing tools have made it easy for students to produce polished, well-structured essays in seconds. In academic settings, AI-generated text is appearing in essays, homework responses, and even entire research papers.
To some, AI is merely a new tool — a calculator for math — that allows students to do more efficiently what they were already doing. But for educators, AI poses serious worries about originality, effort, and learning. Are students cultivating critical thinking and writing skills when they lean too heavily on AI?
Some schools have countered by banning AI tools altogether. Others have taken to AI detection tools to catch students using AI-generated text. However, as Leon Furze explains in his timely article on the subject, “AI Detection in Education is a Dead End,” AI detection tools may not be the silver bullet schools want.
AI detection, at least on the surface, seems like a rational answer to AI-generated assignments. If students are using AI to do their work, shouldn’t schools be able to discern and stop that? But, it turns out, these tools are hardly foolproof.
A recent preprint study on arXiv, cited by Leon Furze, ran 805 writing samples through AI detection tools and found them unreliable in both directions: human writing was sometimes flagged as AI-generated, while AI-generated text sometimes slipped through undetected.
This poses a grave ethical problem. How can students prove that their original work wasn’t written by AI when they are falsely flagged? The risk of false positives means that educators might be punishing students who did nothing wrong, while students who use AI dishonestly might still go undetected.
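The study’s exact figures aren’t reproduced here, but the base-rate arithmetic behind this risk is easy to sketch. The numbers below (cohort size, rate of dishonest AI use, and the detector’s error rates) are illustrative assumptions, not findings from the study; the point is that even a modest false-positive rate produces many wrongful flags when most students write honestly.

```python
# Illustrative base-rate sketch (all numbers are hypothetical assumptions,
# not results from the arXiv study cited above).

def expected_flags(num_students, ai_use_rate, false_positive_rate, detection_rate):
    """Return (honest students wrongly flagged, dishonest AI users caught)."""
    honest = num_students * (1 - ai_use_rate)
    ai_users = num_students * ai_use_rate
    return honest * false_positive_rate, ai_users * detection_rate

wrong, caught = expected_flags(
    num_students=1000,          # cohort size (assumption)
    ai_use_rate=0.10,           # 10% submit AI-written work (assumption)
    false_positive_rate=0.05,   # detector flags 5% of human text (assumption)
    detection_rate=0.60,        # detector catches 60% of AI text (assumption)
)
print(f"Honest students wrongly flagged: {wrong:.0f}")  # 45
print(f"AI users actually caught: {caught:.0f}")        # 60
```

Under these assumptions, nearly as many innocent students are flagged as cheaters are caught, which is why probability-based detection sits uneasily as evidence for disciplinary action.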
Another problem? Different AI models are more or less detectable. The text produced by Google’s Bard is easier to detect as AI-written, but GPT-4’s output is considerably more challenging to catch — and even more so when it’s altered slightly.
Plagiarism detection tools have been used in schools for years to check whether students are copying content from the web. But AI detection operates differently, and that’s where the problems start.
Unlike plagiarism checkers, AI detection tools offer no black-and-white evidence. Instead, they return probability scores, which leave room for dispute between students and educators. This adds stress and workload, because teachers must spend more time reviewing flagged assignments.
As Furze notes, teachers are already under enormous pressure regarding grading, moderation, and reporting. Detection disputes add to their workloads, only making their jobs harder.
Rather than fighting a losing battle with imperfect AI detection tools, education has another way forward: AI-aware assessment strategies. Instead of banning AI or trying to catch students who use it, schools could redesign assessments around transparent, documented AI use.
Some universities are already moving in this direction, enabling AI-assisted coursework in which students are expected to show how they used AI rather than simply submit generated text. The emphasis shifts to critical thinking and responsible AI use, not the detectability of the output.
AI detection tools aren’t going anywhere, but how they’ll fit in education is uncertain. What happens next if AI-generated content becomes harder to detect?
Whatever approaches emerge next, AI detection is at best a short-term fix; in the long run, we need to rethink how we assess learning in a world infused with AI.
AI content detection in education isn’t bulletproof. These tools struggle with accuracy, at times flagging innocent students while missing AI-generated work.
Rather than detection alone, educators must redesign assessment practices. AI is here to stay, and the challenge is not only catching students using AI but making sure they continue to learn and develop critical skills.
AI detection tools like AI Checker can help, but the future of academic integrity isn’t about banning AI — it’s about adapting to it. The question isn’t just whether students are using AI, but how schools can teach them to use it ethically and effectively.