The rapid proliferation of artificial intelligence (AI) on college campuses has spawned an escalating "arms race," with professors and students caught in a cycle of anxiety, false accusations, and increasingly desperate attempts to outsmart AI detectors. The introduction of generative AI tools has sparked concerns about cheating, with some institutions imposing strict penalties for suspected academic dishonesty.
To counter this, a growing number of students are turning to a new class of "humanizers": software designed to detect and rewrite text so that it no longer reads as AI-generated. These tools offer a range of features, from basic grammar tweaks to more sophisticated algorithms that rework sentence structure and syntax.
However, the competition between AI detectors and humanizers has raised questions about the reliability and effectiveness of these tools. Many experts have criticized AI detectors for being overly simplistic and prone to false positives, which can lead to innocent students being accused of cheating.
To address this issue, some companies are launching new software that lets students log their browser activity or writing history, providing proof that they wrote the material themselves. Humanizers, however, can often slip past AI detectors, making it difficult for institutions to distinguish genuine student work from AI-generated content.
The situation has become so contentious that some students have taken matters into their own hands, using tools like Superhuman's Authorship to surveil themselves in Google Docs or Microsoft Word as they write. This lets them document when they've consulted Wikipedia or run text through Grammarly, reducing the risk of false accusations.
As the debate rages on, experts are calling for a more nuanced approach to addressing AI-related academic dishonesty. "The most important question is not so much about detection, it's really about where's the line," says Annie Chechitelli, Turnitin's chief product officer. "We need to have a conversation with students and faculty about what counts as acceptable use of AI in their work."
Ultimately, the shift towards more monitoring and surveillance of student activity may be unsustainable for many institutions. As Tricia Bertram Gallant, director of academic integrity at UC San Diego, notes, "If it's an unsupervised assessment, don't bother trying to ban AI."