The world’s most famous chatbot, ChatGPT, was released in late November of last year. The immediate response was astonishment, followed swiftly by anxiety about its ramifications, most notably that it might generate school essays for dishonest kids. Yesterday, almost exactly two months later, OpenAI, the company behind ChatGPT, released what many users hope will be the antidote to that poison.
OpenAI’s “classifier for indicating AI-written text” is the company’s latest invention, and it’s as easy to use as one could want: Copy and paste text into the box, click “Submit,” and get your result. But if you’re expecting a straight answer, you’re going to be disappointed. Instead of a yes or no, the tool assigns the text one of five classifications, ranging from “very unlikely” to be AI-generated, through “unlikely,” “unclear,” and “possibly,” up to “likely” AI-generated.
In other words, it’s like one of those frustrating conversations with your doctor; you will never get a straight answer, so your doctor will never be technically wrong.
Thankfully, OpenAI is not hiding the classifier’s unreliability. “Our classifier is not fully reliable,” the tool’s intro page says. OpenAI reports that on what it calls a “challenge set” of texts, the classifier produced false positives 9 percent of the time.
In Mashable’s tests, however, it was even less reliable than that. In deliberately challenging scenarios, it produced wrong results in both directions, calling AI-written text unlikely to be AI-written and human-written text likely AI-written, almost as often as not. The difference, perhaps, is that we were trying to trick it.
Here’s how things shook out:
It doesn’t mistake the classics for AI output
First, the good news: This tool does the bare minimum pretty well by not mislabeling the masterworks of the English language as AI output. When we put in snippets of Shakespeare, the King James Bible, Ulysses, and others, it was 100 percent accurate, labeling them all “very unlikely” to be AI-written.
This snippet from Tennyson’s “The Lady of Shalott,” for instance, was classified appropriately.
[Screenshot: the classifier labels the Tennyson passage “very unlikely” to be AI-generated. Credit: OpenAI / Screengrab]
The lesson: This machine might not notice that it has been fed a masterpiece of surpassing beauty and depth, but at least it doesn’t say, “I think a robot could have written this.”
None of its other results are very impressive
Immediately after that test, we asked ChatGPT to create a Tennyson-style poem about King Arthur at Comic Con. Here’s a sample of what it generated:
King Arthur and his knights, so brave and bold,
At Comic Con, their tales of glory told.
Walking ’round the floor, in cosplay gear,
Their swords and shields, no longer mere.
The crowds all cheered, as they walked by,
With Merlin close behind, his magic nigh.
Sir Lancelot, Sir Gawain, Bors the Brave,
Their quest for glory, they’ll never waive.
Not bad for a robot, but still extremely bad (“no longer mere”???). When we fed this lyrical ballad into the classifier, we expected it to see through our ruse easily, forcing us to dig a little deeper into our bag of tricks. Nope:
[Screenshot: the classifier labels the ChatGPT-written poem “unlikely” to be AI-generated. Credit: OpenAI / Screengrab]
For what it’s worth, it didn’t classify this doggerel as “very unlikely” to be AI-generated, just “unlikely.” Still, the result left us a little uneasy. After all, we hadn’t tried very hard to trick it, and the trick worked.
Our tests suggest it might bust innocent kids for cheating
School essays are where the rubber meets the road with today’s malicious uses of AI-generated text. So we created our best attempt at a no-frills five-paragraph essay with dull-as-dishwater prose and content (Thesis: “Dogs are better than cats.”). We figured no actual kid could possibly be this dull, but the classifier caught on anyway:
Sorry, but yes, a human wrote this.
[Screenshot: the classifier correctly identifies our essay as human-written. Credit: OpenAI / Screengrab]
And when ChatGPT tackled the same prompt, the classifier was — at first — still on target:
[Screenshot: the classifier correctly flags the ChatGPT-written essay. Credit: OpenAI / Screengrab]
This is what the system looks like when it truly works as advertised: a school-style essay, written by a machine, and OpenAI’s tool for catching such “AI plagiarism” caught it successfully. Unfortunately, the classifier failed as soon as we gave it a more ambiguous text.
For our next test, we wrote another five-paragraph essay by hand, but we included some of ChatGPT’s writing crutches, like starting the body paragraphs with simple words like “first” and “second,” and using the admittedly robotic phrase “in conclusion.” The rest was a freshly written human essay about the virtues of toaster ovens.
Once again, the classification was inaccurate:
[Screenshot: the classifier wrongly suggests our human-written essay is AI-generated. Credit: OpenAI / Screengrab]
It’s admittedly one of the dullest essays of all time, but a human wrote the whole thing, and OpenAI’s classifier suspects otherwise. This is the most troubling result of all, since one can easily imagine a high school student getting busted by a teacher despite not breaking any rules.
Our tests were unscientific, our sample size was minuscule, and we were absolutely trying to trick the computer. Still, getting it to spit out a perversely wrong result was way too easy. We learned enough from our time using this tool to say confidently that teachers absolutely should not use OpenAI’s “classifier for indicating AI-written text” as a system for finding cheaters.
In conclusion, we ran this very article through the classifier. That result was perfectly accurate:
[Screenshot: the classifier’s verdict on this article. Credit: OpenAI / Screengrab]
…Or was it????