
by Kristine Bowman

Artificial intelligence is changing teaching and learning in ways we never expected. Students are turning to AI tools like ChatGPT to draft essays, respond to homework questions, and, in some cases, create entire research papers. And although this appears to be a significant jump in educational technology, it also creates a significant problem — how does a teacher maintain academic integrity when AI can produce the work for a student?

In an effort to combat this, many institutions have rallied around AI detection tools, which are designed to flag machine-generated content in student work. But do these tools really make a difference? And if they don’t, what should educators consider instead? As AI evolves, the effectiveness of such detection tools is becoming ever more uncertain, putting schools and universities in a challenging position.

Academic Work Going the AI Route

AI writing tools have made it easy for students to produce polished, well-structured essays in seconds. In academic settings, AI-generated text is appearing in:

  • Secondary and tertiary-level essays;
  • Research papers;
  • Admission statements for college applications;
  • Take-home exams.

To some, AI is merely a new tool, much like the calculator in math class, that lets students do more efficiently what they were already doing. But for educators, AI raises serious concerns about originality, effort, and learning. Are students still cultivating critical thinking and writing skills when they lean too heavily on AI?

Some schools have countered by banning AI tools altogether. Others have turned to AI detection tools to catch students using AI-generated text. However, as Leon Furze argues in his timely article on the subject, AI Detection in Education is a Dead End, AI detection tools may not be the silver bullet schools want.

The AI Detection Problem in Education

AI detection, at least on the surface, seems like a rational answer to AI-generated assignments. If students are using AI to do their work, shouldn’t schools be able to detect and stop it? But, it turns out, these tools are hardly foolproof.

A recent preprint study on arXiv, cited by Leon Furze, ran 805 writing samples through AI detection tools and found that:

  • The average accuracy for AI detection tools was only 39.5%;
  • Accuracy fell to 22.1% when adversarial techniques (simple editing tricks) were applied;
  • False positive rates were high — 15% of human samples were wrongly flagged as AI-written.
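To see what these figures mean in practice, here is a back-of-envelope sketch. The class size and the 20% AI-use rate are assumptions for illustration; the 39.5% detection rate and 15% false positive rate are the figures reported above.

```python
# Back-of-envelope illustration (assumed scenario, not from the study):
# combine the reported detection accuracy and false positive rate to see
# what fraction of flagged essays actually used AI.

class_size = 100               # assumption: hypothetical class of 100
ai_users = 20                  # assumption: 20% of the class used AI
honest = class_size - ai_users

detection_rate = 0.395         # average accuracy reported in the study
false_positive_rate = 0.15     # false positive rate reported in the study

true_flags = ai_users * detection_rate        # AI essays correctly flagged: 7.9
false_flags = honest * false_positive_rate    # honest essays wrongly flagged: 12.0

precision = true_flags / (true_flags + false_flags)
print(f"Flagged essays that really used AI: {precision:.0%}")  # ≈ 40%
```

Under these assumptions, roughly six out of every ten flagged essays would be false alarms: more innocent students are flagged than AI users are caught.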

This creates a grave ethical problem. How can students prove their work is their own when they are falsely flagged? The risk of false positives means educators may end up punishing students who did nothing wrong, while students who use AI dishonestly may still go undetected.

Another problem? Different AI models are more or less detectable. The text produced by Google’s Bard is easier to detect as AI-written, but GPT-4’s output is considerably more challenging to catch — and even more so when it’s altered slightly.

AI Detection vs Traditional Plagiarism Checks

Plagiarism detection tools have been used in schools for years to check whether students are copying content from the web. But AI detection operates differently — and that’s where the problems start.

  • Plagiarism detection matches submitted work against existing databases — if a match is found, that’s flagged as plagiarism.
  • AI detection lacks a point of comparison — it merely estimates whether text appears AI-generated, resulting in greater confusion and disputes.

Unlike plagiarism checkers, AI detection tools offer no black-and-white evidence. Instead, they produce probability scores, which leaves room for dispute between students and educators. That adds stress and workload, because teachers must spend more time reviewing flagged assignments.
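The difference in the kind of evidence each approach produces can be sketched as follows. The interfaces here are hypothetical, not any real product’s API; the detector is a constant-returning stand-in that only illustrates the shape of the output.

```python
# Minimal sketch with hypothetical interfaces (not any real product's API)
# contrasting the two kinds of evidence described above.

KNOWN_SOURCES = {
    "the quick brown fox jumps over the lazy dog",
}

def plagiarism_check(text: str) -> bool:
    """Exact-match lookup: a hit points to a concrete source document."""
    return text.lower() in KNOWN_SOURCES

def ai_detector_score(text: str) -> float:
    """Stand-in for a detector. Real detectors use statistical models;
    this constant only illustrates the *shape* of the output: a
    probability, not a pointer to evidence."""
    return 0.72

essay = "an original student essay about photosynthesis"
print(plagiarism_check(essay))   # False: no source matched, case closed

score = ai_detector_score(essay)
# With only a score, the educator must pick a threshold and then argue
# about every borderline case.
print("flag for review" if score > 0.5 else "pass")
```

A plagiarism hit comes with the matching source attached; a detector score comes with nothing to point at, which is exactly why flagged students have so little to dispute with.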

As Furze notes, teachers are already under enormous pressure from grading, moderation, and reporting. Detection disputes only add to that load.

Should AI Detection Give Way to AI-Aware Assessments?

Rather than arguing over imperfect AI detection tools, education can look for a way out: AI-aware assessment strategies. Instead of banning AI or trying to catch students who use it, schools could:

  • Educate when it is OK to use AI — help students know when and how they can use AI responsibly.
  • Reconceptualize assessments — prioritize assignments calling on personal insights, in-class writing, or oral components.
  • Demand transparency — students might hand in drafts assisted by AI alongside their notes tracing their thought process.

Some universities are already moving in this direction, piloting AI-integrated coursework in which students are expected to show how they used AI rather than simply generate text. The emphasis shifts to critical thinking and responsible AI use rather than the detectability of the output.

The Future of AI Content Detection Within Education

AI detection tools aren’t going anywhere, but their place in education is uncertain. What happens if AI-generated content becomes even harder to detect?

The following are possible future approaches:

  • Promoting AI literacy — teaching students how to use AI responsibly rather than banning it.
  • AI-integrated assessment — grading the thought process as well as the end result.
  • Regulation and transparency — rather than trying to “catch” students using AI, schools may ultimately require AI-use disclosures.

AI detection is a short-term fix, but in the long run, we need to rethink how we test learning in a world infused with AI.

Conclusion — A Shift in Academic Integrity

AI content detection in education isn’t bulletproof. These tools struggle with accuracy, at times flagging innocent students while missing genuinely AI-generated work.

Rather than detection alone, educators must redesign assessment practices. AI is here to stay, and the challenge is not only catching students using AI but making sure they continue to learn and develop critical skills.

AI detection tools like AI Checker can help, but the future of academic integrity isn’t about banning AI — it’s about adapting to it. The question isn’t just whether students are using AI, but how schools can teach them to use it ethically and effectively.