GAI and Me

As we make our way through the semester, the lure of the module on Artificial Intelligence grows, especially for the teachers in the course. I noticed that a few people focused on the article by The Learning Network, “What Students Are Saying About ChatGPT.” I would love to see an update on this article about every six months to see what students are saying now…and now…and now. My personal experience teaching college undergraduates last February was that they did not all seem to be aware of ChatGPT. Some have suggested that the students really were aware and were trying to hide it, or felt uncomfortable talking about it out of caution or shame. That’s possible. I think discomfort in particular is a strong possibility, as no one, including students, really knows HOW to talk about ChatGPT in a healthy and effective way. I only really started making an effort to do so this fall semester with my students.

My first response to ChatGPT in the press was annoyance. We’re all tired after years of crisis from the pandemic and from the world at large being rather chaotic, politically, economically, and otherwise. As I began to read about it, my next general response was rage at the way people talked about it–this was before I had even tried the application myself. This fall I gave a presentation at the SUNY Council on Writing conference about the manner in which writing teachers who promote the use of ChatGPT talk to instructors who are either resisting its use or undecided/indifferent to it. This came out of early research into books on Amazon and forays into educational social media where faculty and administrators were sharing their experiences and concerns. I was really put off by a general coerciveness in the literature and a condescension towards teachers (K-12 and beyond) who might want to support bans on ChatGPT and similar programs, or who may think they are immune to its effects on their work. To put it frankly, many of the conversations had a bullying tone. I also sensed that some of the promoters of GAI seemed a bit mesmerized by it, enchanted by it, as they used metaphors from science fiction to talk about GAI–either to insist “this is not The Terminator” or to compare it to HAL from 2001: A Space Odyssey, for example, which just felt rather childish to me. Maybe I’m naive, but I don’t think most teachers need to be convinced that ChatGPT is a danger to their families and friends, or even that it is a sentient being; these are educated people we’re talking about, right? My undergraduates don’t think that ChatGPT is a sentient being, so why would teachers?

Much of what I perceived as coercion in the published work on ChatGPT, some of which contains excellent tips for its use by the way, was predicated on reiterating statistical predictions of job loss tied to what is seen as a coming invasion of GAI in the workplace. Judy Estrin, an American entrepreneur, business executive, and philanthropist, and currently CEO of JLABS, LLC, a privately held company focused on furthering innovation in business, government, and nonprofit organizations, published a short essay in Time, “The Case Against AI Everything, Everywhere, All at Once.” Obviously Estrin is not a technophobe; she is someone especially privy to how people in the Silicon Valley culture communicate with the public about innovative technologies. She drew on an article in The Guardian by Timothy Snyder to describe the politics of technological promotion in Silicon Valley, and to warn how that politics is being used to shape the entire future of humankind around these technological innovations.

Estrin refers to the current hype around AI as a “politics of inevitability,” borrowing Snyder’s political terminology. Troubling red flags and facts are put aside to be dealt with at a later time or for others to sort out, as indicated by phrases like “government needs to look into this further” or “educators need to sort out these problems.” Concerns are suppressed as the audience is shown the extraordinary benefits that will come in the face of what’s already inevitable. Those who are still reluctant may be urged to get with the program, act like grown-ups, deal with reality, and be told that if they don’t see the potential wonders now, they will. If they refuse, they’ll be left behind in the dust. The other polarity, the politics of eternity, fixates on the letdowns of the past; its adherents see nothing but pain in current technological innovations, only the bad potential outcomes. They resist change because they believe nothing good will ever come of it, pointing to past failures. This contributes to a cycle of promotion and resistance. In the AI conversations, we see this as a “war” between sides–those who are pro-AI and those who are not. Everyone can get swept up in it–teachers, students, parents, the media. But according to Estrin, this polarity is fueled by Silicon Valley in its desire to promote its products, and it is nothing new. AI is new, but the polarizing fueled by commercial interests is not.

While this coercive tone in published work, like books, is tempered (somewhat) by a more respectful approach to the audience, in social media conversations it can get a lot more raw and strange. On a social media group for educators interested in AI, one gentleman made it his duty to attack every teacher who expressed concern about their students turning in work they were sure was AI-generated. I posted that his language was disempowering and unsupportive, to which he replied that my feelings don’t matter, because AI has no feelings. Now, that’s bizarre, but it does show you that some teachers really do not know how to handle the emotions that this emerging technology generates for them. Educator blogs, which are not peer reviewed, can also get a bit aggressive, like the ones that have declared the “death of the essay.” I really think this aggressiveness is a byproduct of a grieving process (grieving over the loss of one kind of control) that some educators are going through, and that instead of acknowledging their anxiety, they project incompetence and denial onto other teachers who more actively express their feelings of anxiety and concern.

All this aside, I did end up playing around with ChatGPT eventually, and I did find it interesting. I invited my undergraduates to research it for their Writing 102 papers, and in the fall, more have taken this up. Some confessed to having used it, either to play with or, yes, to cheat (not in my class of course-lol; honesty goes only so far and self-preservation kicks in, but I still think the students who admit to this are less likely to use it “dishonestly” in a class that has a clear policy about it). I suspect some of them do use it to draft their essays in various ways. While my assignments are hardly foolproof in preventing its use or overuse, they do require students to do a variety of things that ChatGPT cannot do for them, at least on its own. For one, ChatGPT really cannot do citation. Also, my particular way of teaching tends to make students dig and explain and contextualize a lot, so if they were going to use the chatbot, they might end up doing more work than if they did not. One way I’ve encouraged students to use ChatGPT is for locating and researching stakeholders for their argument essays, which is something they have to do and document (in other words, they must prove what their potential audience/stakeholders think about their research topic). I’m also letting them actively use it to find arguments for the research they have already done. Since all the literature about educational uses of GPT says it’s important for students to learn how to prompt chatbots in order to be viable in future jobs, I figured we should try learning how to do this together. Our relationships with our students are, in my opinion, our best superpower as teachers, and the most valuable tool in learning to navigate this new “invasive” technology.
