Rage Against the AI ChatGPT-4 Machine
I am not giving in.
I have gotten the impression I am supposed to. Not directly, but subtly.
But I won't. I will retire before I have to let students submit AI-generated assignments, passing them off as their own writing and original work.
Forgive the Fox News reference, but this is funny, sad, and informative:
Point: AI gets it wrong. Really wrong.
I have had a number of students try to pass off AI-generated text (I don't call it writing--that is a human act; more on that below), and they do it in two ways:
Wholesale, in that they don't bother to edit or even look at what has been generated; they just paste it into a Word document, label it as the assignment, and submit it, thinking I am too stupid to notice that 1. it doesn't address the assignment or prompt, 2. it reads blandly and inhumanly, 3. it uses words I know they do not understand, and 4. it leaves tell-tale signs of AI generation (if you ask it to write a letter, it creates a template with bracketed placeholders for names, etc.). In these cases, I have told them, "You are a big fat liar and I don't care what happens to your grade." That's harsh, but I don't buy the "Oh, they ran out of time and they just don't know how to cope and they have so many stressors and problems and they are so dependent on their phones and blah blah blah....." They purposely said, "I don't have time to earn this, so I will steal it."
OR
They look at it, change a few words, do some edits and cuts, and maybe put in one or a few self-references or examples. This is harder to detect, but not impossible. At that point it is more plagiarism than AI generation. I do use the detector websites if I suspect it, but those are inconclusive and/or contradictory: one might say "100% human written" and another "94% AI generated" on the same passage.
In the last set I graded, seven students used AI on a specific assignment. Most did some version of the editing, either minor or major, but they all had the same phrases, structure, headings, and turgid prose style. Turgid, meaning "tediously pompous or bombastic." In this case, a style that has no personality and does not take a stand on an issue the way a human would. The text says "one side argues for this, and the other side argues for that."
But the text itself does not take a stance. A machine cannot take a stance; machine-generated text does not take a position. Humans rage against the machine. Machines don't rage against humans.
At least not yet, we hope.
Therefore, let me rage against the machine, or better, our mistaken view of writing.
We have accepted the view that writing is the generation of text. Real writing is anything but.
Writing is thinking. Writing is creativity, it is passion, it is self-consciousness, it is self-correction, it is understanding, it is re-calibration of thought. It is looking at the text and saying, "This is all wrong," and starting over again, the stereotypical crumpling up of the paper and throwing it in the wastebasket. It is procrastinating. It is deciding that the written word isn't the best medium. It is letting other people read it and say it doesn't work, and being mad, and then realizing they are right. It is, as Red Smith said, opening up a vein.
So, yeah, it's understandable that if we want students to create text and doing so is such a struggle, they are going to resort to ChatGPT-4. I mean, if creating text is our only concern. If we don't explain what they really should be doing in the writing process and its output. If we don't get them to understand that their writing is themselves, their expression of their unique identity.
Perhaps we should assign less writing in college, but only writing that the students have to truly wrestle with, take ownership of, invest in. Perhaps we should assign writing and watch them do it in front of us (that would require physical presence in the classroom, another rant I could go off on but will save for another day). Perhaps they should only write what they take ownership of--arguing for a true position that is meaningful to them, and to which we respond in good faith to engage their arguments and worldview. Again, that would require some work and empathy and interrogation of our own views, rather than assuming we are the masters of critical thinking because of our positions.
I say these things because, despite my anger and frustration and disappointment over those AI-generated assignments (which earned very low grades or zeroes), I plan to keep teaching for a while and will have to deal with this technology, which is already losing its shine and appeal. I did try it out with three prompts: explain a concept, write a letter, and brainstorm a list of topics. The most useful was the brainstormed list; the scariest was the essay explaining a concept, because it was pretty good, if impersonal. The letter was too long, verbose, and off point.
My next experiment is to ask it to write an essay about me, which it will do by pulling from website information. But that will be a while. Maybe Halloween, when I really want to be scared.