#OPINION

Catching Cheaters

Published On: October 4, 2024 08:30 AM NPT By: Shyam Sharma

Students deserve to know the fundamentals of AI, including the fact that it depends largely on stolen intellectual property


As more AI tools help students to bypass more types of learning experiences, teachers must shift from banning AI or changing assignments to modeling and fostering motivation, honesty, and ethical behavior.

The other day in class, I pulled up a new artificial intelligence (AI) tool, Google's NotebookLM, on the projector screen. Tongue in cheek, I showed students how they could now use this tool to do their regular homework without investing much time or attention. Instead of reading, summarizing, and commenting on the text due for class discussion, they could simply upload the text and ask the tool to do every part of the homework for them.

Because the tool is designed more specifically for students than most bots, it generated a summary and suggested questions, adding buttons for generating FAQs, a study guide, a table of contents, a timeline, an audio conversation between two characters, and a briefing document. That is a lot of potentially helpful study tools in one application. But I was showing students that they could, if they so chose, simply complete the homework for a good grade without the learning. They smiled.

Recognizing where we are

Since generative AI tools, based on large language models (LLM), burst into public view in 2022, educators have been responding in different ways to AI’s dramatically expanding and diverse abilities to help students bypass learning. Some teachers have tried to ban AI (including by using AI-based detection tools), use harsher warnings and punishments, or talk students out of using them. Others have stopped using traditional assignments like essays and research papers, replacing them with more creative tasks like essays based on personal experience, multimodal assignments like podcasts, in-class writing, group projects, and even writing by hand.

Unfortunately, neither trying to ban AI nor running from time-tested teaching and learning tools will work anymore. For instance, NotebookLM generated the following items within its “briefing doc”: themes, key findings, important ideas, and impressions. Because my students needed a brief commentary or reflection on a key issue in the text (in addition to a summary), I asked the app to “Give me a paragraph of response in which….” The app started with “A key issue in this text is that…” and then offered, as instructed, a few interesting responses: “This idea resonated with me because I hadn't considered….” It went on: “From what I learned, I am going to ….” Intrigued, I replied: “Now give me a few smart aleck points from the text for impressing my professor and classmates during class.” And it started: “Certainly, here are some smart aleck … impress your professor and classmates….” It certainly showed no qualms.

There went the personalized assignment. If we ask students to write a story about the impact of the death of an important person on them, today’s bots will narratively and reflectively kill granny in the blink of an eye. NotebookLM, again, can draft scripts for recording podcasts and deliver the audio conversation, including the students’ voices (if fed a text, a prompt, and voice samples). It offers, unprompted, juicy quotations for citing in a research paper. Given a paper’s argument, it can cite a text in that paper, engaging the source and elaborating a supporting argument. A few AI tools can quite easily handle the twenty-plus literacy tasks into which I have broken up the research paper. These tools haven't impressed some of my students yet, but they will soon.

That leaves us teachers with a thorny challenge: if changing or updating assignments, or imposing harsher policies and penalties, does not work on students who choose to cheat, will other strategies, like fostering motivation, work on them? I am afraid those too will fail to stop dishonesty among a few. And yet, for now, that is the best we can do.

Fostering motivation

We must focus on motivation to boost honesty not only because we can’t catch all cheaters. This focus will also help us look past poor premises like these: that students cheat because they can; that they are dishonest by default; and that the objective of writing an essay, for instance, is to learn to write an essay, rather than to use that process as an occasion to acquire knowledge, skills, and experiences. The focus also helps us approach the challenge of cheating and inept use more broadly, helping students seek knowledge and skills for future careers and cultivating a character that will enhance their success and happiness in the future.

Certainly, it is hardly possible to instruct motivation and honesty directly, or to measure them as course outcomes. They require a shift in students’ behavior and outlook on education. If one student earns a grade by cheating with AI, others may follow suit. There may also be larger, more complex dynamics behind dishonesty, such as the cost of education, students’ inability to meet their teachers’ or school’s demands, low quality of teaching, and inadequacy of resources. These underlying causes should also be addressed.

But educators must seek to foster motivation and honesty by creating a culture of trust and treating students with respect. Doing so can help more students try harder, not fear failure, and believe in their ability and seek to do their best. Teachers can also involve students in creating a code of conduct for using AI tools honestly and effectively. More students abide by rules they helped to create.

Modeling honest use

More than ever before, educators must model ethical and honest behavior, as well as effective use of educational tools. In various social media groups on AI and education, I am alarmed to see some teachers saying that they use AI tools to generate their syllabi and assignments. Others add that they use AI to generate content, ignoring that AI tools typically generate merely “plausible patterns” of words and sentences. Yet others say that they use AI tools to give feedback to students.

The double standard is blatant: educators want students not to cheat with AI, yet they cheat themselves by not investing their own time, energy, expertise, and attention. It is embarrassing. It is unprofessional and unethical, especially for people who are in the business of shaping the character of the next generation.

Of course, that does not mean that teachers should not use AI tools. In fact, we should test out AI tools before students do, figure out how we can help students use them to enhance learning, and integrate them into our teaching. We should create our own comfort zones, carving out safe and educationally productive spaces for students relative to specific educational levels, learning tasks, and objectives in our respective disciplines. And, most importantly, we should be transparent about our own use of AI tools, not only showing students how to use them to enhance learning but also modeling ethical use by revealing how, when, and why we used them. On that note, I used ChatGPT to generate a one-sentence subhead (below the title) for this op-ed, then rewrote it to my satisfaction. With students, I help them draw their own ethical boundaries, as well as learn how to use the tools effectively to enhance learning.

Tackling the tough case

One might still ask: what can teachers do with that one student with whom nothing works? No matter what we do, a few students will violate our trust. The first and best thing we can do, given the vast variety of tools and affordances available, is to resist turning the whole classroom into a hostile space in order to catch the incorrigible. Distrust increases dishonesty.

It is also good to deal with dishonesty at the individual level. Teachers should also assess their teaching strategies and their relationships with students, including the tough cases. Students take shortcuts with some teachers more than with others. They try not to let down some of us more than others.

But the most effective broader approach is educating all students to use AI tools well. Helping students use AI effectively can also help them overcome the learner’s paradox inherent in its use: the more knowledge and skills students have, the more they benefit from AI (and the less they are fooled by its deficits). Students deserve to know the fundamentals of AI, including the fact that it depends largely on stolen intellectual property, consumes alarming amounts of water and electricity in training and in delivering responses, perpetuates colonial matrices of power and exploitation through evolving forms of data colonialism, and undermines democracies by massively empowering bad actors.

Many teachers are still on the sidelines, and others are actively resisting. Both of these responses are essential in the larger landscape. Still, students need guidance for using AI tools, as well as knowledge of the tools’ harms to society and the environment. So at least some of us should provide that guidance and knowledge. Doing so will help to decrease the number of cheaters, and of those who don’t know the boundaries of ethical and effective use. It will also help the public push back against the excesses of AI companies, powerful economic forces with often corrupting political might. We should help students compare AI-assisted learning with pre-AI learning strategies, showing what the benefits and harms are. We should reward students for productive, honest uses.

The text that I used for showing my students how they can skip the learning and just earn the grade was titled “Sentence Mining.” It reported a study showing that students (long before AI) didn’t cite texts by trying to understand and summarize their main ideas; they instead picked a quote from the first or second page to meet the teacher’s demand for the number or style of citations. AI is dramatically aggravating that tendency while adding serious new challenges. It is far harder to stop those who try to cheat today. But we owe our students our best support. We must model ethical behavior so we may better foster motivation and honesty. When supported and inspired, most students understand that if the goal is learning, whether a machine can do the task is irrelevant. To keep teaching effective, teachers cannot take shortcuts either.

As I told my students to wrap up my sarcastic demonstration of NotebookLM: You might ask your neighbor to fix your bike for a fee, but would you ask them to go to the gym for you? Similarly, we should ask ourselves as teachers: We might ask a neighbor to help us in our backyard garden, but would we harvest their vegetables and call ourselves gardeners?