A new academic study has found that artificial intelligence systems used to evaluate student writing may respond differently depending on how a student's identity is presented, suggesting bias in automated educational tools.
The research, titled “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback,” was published in March by a team from Stanford University. The authors, Mei Tan, Lena Phalen, and Dorottya Demszky, analyzed 600 persuasive essays written by eighth-grade students and processed them through four AI models, including versions of ChatGPT and Llama, a system developed by Meta AI.
Read Full Article: https://amgreatness.com/2026/04/28/study-finds-ai-writing-feedback-varies-by-students-race-gender/