Do teachers spot #AI? Evaluating the detectability of AI-generated texts among student essays https://www.sciencedirect.com/science/article/pii/S2666920X24000109
"Generative AI can simulate student essay writing in a way that is undetectable for teachers.
•Teachers are overconfident in their source identification.
•AI-generated essays tend to be assessed more positively than student-written texts."
#AIEd #teaching #teacher #EdTech
Brian :python: :flask:
in reply to Doug Holton
At the risk of a "yeah, but..." style response: this focused on teachers' ability to pick out AI responses from an unknown group. I think the confidence finding is interesting and worth discussing, but I'm not reading random essays day to day - I'm reading writing from teenagers whose work I've been reading for months now.
The citations didn't seem to include any studies on teachers' ability to identify AI-produced work in context (unless I missed it). Any thoughts?
Doug Holton
in reply to Brian :python: :flask:
@brianb It just reminded me of the article below showing we are overconfident in our ability to "sense" whether something is written by AI. But you're right: in context, we can tell when a student's writing style has suddenly changed, regardless of whether that was due to plagiarism, AI, or something else. Hopefully, though, a student isn't using AI for everything.
"Human heuristics for AI-generated language are flawed"
https://www.pnas.org/doi/10.1073/pnas.2208839120