“I will be running originality reports to see if this is your work. Don’t try to pull any quick ones on me,” say most teachers looking out for their students’ success. It is no secret that AI is pretty cool. It can be helpful when used with good intent, and it simplifies content when needed, but more often than not it is used as the easy way out. Some may say that AI tools such as ChatGPT have all the answers, but quite honestly, their inability to achieve sentience makes that claim far from true. For those reasons, teachers certainly should not use AI to grade.
It would be hypocritical for teachers to tell their students they cannot use AI to help with their work while using it themselves. Besides, the task of grading acts as a regulator on the workload teachers assign, ensuring that neither party has too much to do. If AI lets teachers grade everything quickly, that buffer may no longer exist, making student life even more difficult than it already is.
More importantly, AI is incapable of human feeling, also known as sentience. Take, for example, an English paper about conflicting motivations. A topic like that is steeped in human emotion and depends heavily on connotation. Given those limitations, a student who writes an award-winning essay that truly connects with readers on a personal level may still receive a failing grade. For these two reasons alone, it should be evident that AI has no place in dictating students’ hard-earned A’s.