About a year and a half ago, I wrote about my kid’s experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut’s Harrison Bergeron—a story about a dystopian society that enforces “equality” by handicapping anyone who excels—and the AI detection tool flagged the essay as “18% AI written.” The culprit? Using the word “devoid.” When the word was swapped out for “without,” the score magically dropped to 0%.

The irony of being forced to dumb down an essay about a story warning against the forced suppression of excellence was not lost on me. Or on my kid, who spent a frustrating afternoon removing words and testing sentences one at a time, trying to figure out what invisible tripwire the algorithm had set. The lesson the kid absorbed was clear: write less creatively, use simpler vocabulary, and don’t sound too good, because sounding good is now suspicious.
At the time, I worried this was going to become a much bigger problem. That the fear of AI “cheating” would create a culture that actively punished good writing and pushed students toward mediocrity. I was hoping I’d be wrong about that.
Turns out … I was not wrong.
I'm accused of being AI on other sites simply because I construct complex sentences with regularity -- and use em dashes.

Techdirt
lmmarsano
in reply to Powderhorn: Other sites?
Happens here, too.
The best answer is to troll them by imitating AI.
Ooops
in reply to mrmaplebar: AI checkers for text (and the same is true for the ones pretending to spot AI pictures and videos) also don't work, by definition.
The AI tries to make its "product" perfect. It does not have the ability to spot its own mistakes and telltale signs, or it wouldn't make them in the first place.
So every AI check is actually cheating: in pictures and videos, with hidden watermarks; in text, with typical clues like the mentioned '–' or vocabulary more prevalent in AI texts than in the average human's work.
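For what it's worth, that "typical clues" approach is easy to sketch, and just as easy to see why it misfires on human writing. The cue list and weights below are entirely made up for illustration; real detectors are opaque, but the failure mode (flagging one fancy word and one dash) is the same one described in the article.

```python
import re

# Hypothetical "AI-ish" word list and weights, invented for this sketch --
# real detectors don't publish theirs, but the brittleness is identical.
AI_ISH_WORDS = {"devoid", "delve", "tapestry", "moreover"}

def toy_ai_score(text: str) -> float:
    """Score text in [0, 1] from two shallow cues: em-dash rate and vocabulary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    dash_rate = text.count("\u2014") / len(words)   # em dashes per word
    vocab_rate = sum(w in AI_ISH_WORDS for w in words) / len(words)
    return min(1.0, 10 * dash_rate + 20 * vocab_rate)

# A perfectly human sentence gets flagged for one word and one dash,
# while the dumbed-down rewrite sails through untouched.
human = "The town was devoid of life\u2014empty streets, shuttered shops."
plain = "The town was without life. Empty streets, shuttered shops."
print(toy_ai_score(human), toy_ai_score(plain))
```

Swapping "devoid" for "without" and dropping the dash takes the toy score straight to zero, which is exactly the "18% to 0%" behavior the essay anecdote describes.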
ORbituary
in reply to Powderhorn: I can't get Idiocracy out of my mind when I read this...
[YouTube link]
Ooops
in reply to Powderhorn: That's more a matter of 95% of people not even knowing how to type a '–' with their standard keyboard layout.
Powderhorn
in reply to 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮: Which is great for one application, but two spaces after each period would be hell to edit down to AP Style.
I mean, Ctrl+H and switching two spaces to one is easily doable, but that's not where I want to start the editing process.
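(That Ctrl+H pass can also be a single regex instead of repeated find-and-replace. A minimal Python sketch, not part of any actual AP tooling, that only touches runs of spaces after sentence-ending punctuation:)

```python
import re

def collapse_sentence_spacing(text: str) -> str:
    """Collapse two or more spaces after sentence-ending punctuation to one,
    leaving other whitespace (e.g. aligned columns mid-line) alone."""
    return re.sub(r"(?<=[.!?])  +", " ", text)

print(collapse_sentence_spacing("First sentence.  Second one.   Third."))
# -> "First sentence. Second one. Third."
```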
in reply to Ooops • • •ByteSorcerer
in reply to Ooops: I've heard of a teacher using that as a test to see which students are using AI: if a student turns in a report full of em-dashes, the teacher puts them in front of a laptop running Word and asks, "Can you please show me how you type those long dashes you used all over your report?"
If they can't do it, their report is considered AI-generated or plagiarism (which the school treats as equivalent). If they can do it, they get the benefit of the doubt; but when I heard this, he hadn't had a single student pass that test yet.
It's a better and likely far more accurate test than those complete bullshit "AI detectors".
definitemaybe
in reply to Dupelet: Exactly the point.
I run teacher training on this stuff, and that's always a core part of the message: education is about relationships. Damaging your relationship with a student over an accusation of AI use is backwards; instead, come with curiosity.
Also, AI writes poorly, so you don't even need to call them out on it. And then when they (inevitably) include a source or fact hallucination, return the paper and explain that the error needs to be fixed, and why. That's your "in" to explain ethical use of AI.
BarneyPiccolo
in reply to Powderhorn: My son has gone back to college in his late 20s, after having a lot more experience in everything, including writing. He's become an excellent writer, but he's worried that his younger peers are such bad writers that the profs will think he must be using AI.
I just told him to keep talking in class, and they'll figure out real quick that he really is that smart, and they won't question his writing. That already seems to be happening.
It's when the dummy shows up with a well-written paper out of the blue that the red flags go up.
Owl
in reply to Powderhorn: Once again the school system of a country makes the lives of children worse.
Just have an AI write it for them, then, but tell it to use simple words (specify [the grade of your kid] minus 1) and leave out a comma or two; works like a charm every time.
its_me_xiphos
in reply to Infinite: You can still fake it. Have AI write the essay, then you "write" a first draft and simulate edits here and there. You can also prompt AI to write a first, second, and third draft and detail the changes, then manually make them over time. Turn it in.
Look, this is a chance for teaching and grading to change. It needs to: the traditional methods were already failing from budget cuts, overuse of shit tools, and so on. Put learning, not evaluation, in the classroom and you can avoid AI abuse. I am an N of 1, but I'm telling you there are teachers out there who are amazing because their teaching relies on neither regurgitation nor grade-based evaluation. AI thrives at both.
Go grab Freire, read Pedagogy of the Oppressed, and then start researching contract grading.
CandleTiger
in reply to its_me_xiphos: And you have to tell us that, because mostly we haven't seen such teachers and wouldn't otherwise know they existed.
its_me_xiphos
in reply to Powderhorn: In my in-person classes I used contract grading and weighted in-class participation and case studies at 75% of their contract. The final was optional and drawn from a list of possible choices. I'd focus on providing mentorship and feedback, not grading them, simulating real-world growth and learning. I had no AI problems, and both I and my students generally loved it.
I taught one online class. It sucked. I hated it. Rampant AI and totally fabricated everything. Even reflection paragraph posts. I need to learn how to design an online class like my in person ones. Until then, never again.
Most of the AI users were student athletes. I can quantify this, so I'm not exaggerating. They would miss classes for travel, turn in AI slop, and I would have to fail them over and over. That online class was 60% student athletes. I tried so hard to talk sense and be accommodating, but it was unabashed AI everything. It was bad.
The student athletes are getting more screwed than normal because they are just faking it through college and getting exploited by the NCAA for money.