Four Ways Instructors Can Manage ChatGPT in Our Classrooms

Was this article written by AI? Just for the record: it’s not. But in the classroom, where teaching, learning, and grading all come into play, questions about ChatGPT and other emerging technologies are rapidly turning the education world inside out--and generating splashy headlines like “The End of the English Major” in national publications such as The New Yorker.

To help give community college educators coping tools in this rapidly evolving classroom environment, Stanford’s Global Educators Network sponsored an online Inquiry Meetup in March entitled “Responding to Generative AI Tools such as ChatGPT.” To kick off our discussion, Mission College Academic Senate President Aram Shepherd (who earned his PhD in English from the University of North Carolina) opened with a quick overview of the current state of AI technology--with the crucial caveat that any overview is instantly out of date, given the sheer speed at which these technologies are evolving. But regardless of the marketing hype, one fact is incontrovertible: huge numbers of our students are already using these technologies routinely, in classes from STEM to the arts and beyond.

So, at least for now, the best educators can do, Shepherd stressed, is choose among four basic classroom strategies, each with distinct pros and cons:

  1. Forbid it

  2. Allow it

  3. Work around it

  4. Teach it

Forbidding the use of AI for all assignments and tests is a frequent first response, and a blanket prohibition does have some advantages, at least as a stopgap measure. “But realistically,” Shepherd admits, “it’s pretty hard to enforce that.” Yes, AI-detection tools are already on the market, so it’s at least possible to imagine scanning and verifying each assignment. But students could then simply ask ChatGPT to “write an essay that can’t be detected”--a request already well within the technology’s capabilities. Worse yet, Shepherd stressed, “there’s no way to conclusively prove that ChatGPT was ever used, so taking this to a disciplinary level seems fraught, to say nothing of the inevitable risk of false positives.”

Similarly, simply allowing students free use of ChatGPT is another tempting solution, at least temporarily. Here’s one upside to that policy you might not have considered: no one doubts that employees will be using advanced AI in the workplace--indeed, they already are. So in the workplace of the future, won’t the student who knows how to use ChatGPT be at an advantage, not a disadvantage? But then comes the equally obvious question: are students learning the course content, or just learning how to use ChatGPT? “Currently I’m allowing it in my classes,” Shepherd reports, “but many students still say, ‘I don’t want to use it because I really want to learn on my own.’” That’s admirable. But as Shepherd himself asks, “how much work are these students putting in versus those who choose to use it?” And how is that reflected in our grading?

One compromise involves designing work-arounds that allow only limited use of AI--for example, more carefully constructed assignments. “Yet here again,” Shepherd reports, “the problem is that ChatGPT can already, for example, write a reflection on how it wrote the essay.” So simply asking students to reflect on their writing process, or to critically assess their problem-solving steps, won’t always help. Similarly, it’s true that ChatGPT doesn’t currently have access to information on current events. However, this will soon change as Microsoft Bing and other products enter the market. Given these limitations, requiring the use of social annotation tools might discourage reliance on ChatGPT. Likewise, requiring students to create podcasts, videos, and project-based learning demonstrations might provide a work-around by moving assessment away from the traditional essay.

Finally, there’s the option to teach it, building in assignments that require students to both use and evaluate AI output directly. Here Shepherd sees numerous advantages, especially for community college instructors whose students come from diverse socioeconomic backgrounds, with widely varying levels of exposure and access to high-tech innovations. Teaching AI, instead of forbidding or ignoring it, ensures that all students are equally aware of these new tools and empowers them to confront the many pros and cons of these emerging technologies. The downside: AI technologies are changing so fast that designing such assignments demands constant updates and enormous additional labor from the instructor. And teaching AI applications may not align easily, if at all, with the overall course objectives.

In an English course, Shepherd noted, “You might ask students to respond to text generated by ChatGPT, and then ask students to evaluate feedback given on an assignment. For example, with a student’s permission I submitted their essay to ChatGPT and asked for feedback--then asked the student to respond. Four of the suggestions were good. The fifth wasn’t. So students need to reflect critically on that advice. Initially students may think, ‘it’s a computer, so it’s always right.’ But this is a very different kind of technology.”

Regardless of which of these four strategies a teacher adopts, Shepherd believes all instructors should try feeding their own assignments, prompts, and tests into ChatGPT to see what it produces. Equally important, instructors should incorporate clear written guidelines regarding ChatGPT and related AI tools into their syllabi and assignments immediately--so that students understand these expectations from day one.