Turnitin, an online plagiarism-detection service, has activated its AI-writing detection tool, one of a number of platforms being brought into the market meant to flag the possibility of academic fraud.
It comes at a time when generative AI tools, like ChatGPT, have made headlines as the next generation of transformative technologies – while also spawning concerns about how they could increase opportunities for students to cheat.
Other AI-detection programs that have surfaced recently include GLTR — created by the Watson AI lab at the Massachusetts Institute of Technology — and GPT-Classifier, created by OpenAI, the developer of ChatGPT. Learning platform Packback has also added an AI-detection tool to its existing program.
In a BestColleges survey, half of the college students polled agreed that using AI tools on schoolwork constitutes cheating or plagiarism. About 1 in 5 use them anyway, and 61 percent said they think AI tools will become the new normal.
Turnitin’s technology identifies the use of AI writing tools, showing how many sentences in a written submission may have been artificially generated.
According to the company, the tool works with 98 percent confidence.
“Turnitin’s technology has high accuracy and low false positive rates when detecting AI-generated text in student writing,” said Annie Chechitelli, chief product officer for Turnitin, in a written statement.
“To maintain a less than 1 percent false positive rate, we only flag something when we are 98 percent sure it is written by AI based on data that was collected and verified in our controlled lab environment.”
Not a ‘Definitive Answer’
A recent test of the tool by a Washington Post tech columnist, however, found inaccuracies in its results. Of 16 samples of AI-fabricated and mixed-source essays submitted to the detector, the tool got more than half at least partially wrong.
Turnitin correctly identified six of the 16, but failed on three — including falsely claiming that one student had help from AI when she didn’t. On the remaining seven, it misidentified some portion of the ChatGPT-generated or mixed-source writing.
If a detector flags content as possibly generated by AI, it should not result in an automatic accusation of cheating, Chechitelli told EdWeek Market Brief in response to the column. Educators should instead use their own discernment to determine if further review, inquiry, or discussion with the student is needed.
“It definitely needs a human, a teacher, to look at the results, and to analyze them, and to think about their students so they can start a conversation,” she said. “It should not be treated as a definitive answer.”
The technology’s capabilities will continue to be refined as more data is made available to increase accuracy, according to the company.
Turnitin’s AI detector is available as an integration within its existing products and solutions. That means its current user base of 2.1 million teachers and 62 million students does not need to take additional steps to access the detection tool.
Although there is no option to turn off the AI detection feature, Turnitin has made exceptions to suppress the tool for a select number of customers with unique needs or circumstances, a company spokesperson said. Turnitin will continue to have conversations with customers and will adjust to reflect the evolving practices of educators.
“This just gives [teachers] the opportunity to learn in real time and see actual data in front of them to make a decision if this is something they want to incorporate into how they assess students,” Chechitelli said.
The company began working on its AI detection capabilities almost two years before the release of ChatGPT. It has now launched its tool as demand rises and overall familiarity with the implications of generative AI increases.
As more companies look to break into the AI-detection market, vendors will need to ensure that the evidence behind their products holds up and that their security is reliable, industry analysts say.
Educators and district technology buyers will be looking for the most effective software to combat cheating as AI technology expands, and companies will need to keep up in a constantly advancing environment, said Cem Dilmegani, principal analyst at AIMultiple, which provides technology industry insights.
“It’s going to be a bit of an arms race [for vendors],” Dilmegani said, “because the tool capabilities are going to be continuously evolving, and students’ proficiency with the tools will be evolving.”