#OPINION

Three Thresholds for AI Use

Published On: June 22, 2024 09:00 AM NPT By: Shyam Sharma


The author teaches writing and rhetoric at the State University of New York in Stony Brook. He can be reached at shyam.sharma@stonybrook.edu

AI use is exposing people’s lack or disregard of knowledge, honesty, and accountability. If we want to avoid letting AI make us ignorant, dishonest, and unaccountable to broader social good, we must commit to elevating and maintaining fairly high standards in these three regards.

At a gathering of department chairs and other academic leaders some time ago, my university’s faculty development center ran a fascinating warm-up activity. They gave all attendees green, yellow, and red stickers and asked them to attach one to each of several statements posted on the wall, answering the question: how well do you think ChatGPT can do this task? After we all went around sticking green stickers where we thought ChatGPT could do an excellent job, yellow where it could do an okay job, and red where it could not do that kind of task, we had an open conversation.

The conversation revealed an extremely important reality about public perception of artificial intelligence (AI) tools: people, including professors, overvalue their capacity in fields outside their own expertise, and that has serious consequences. For example, I was the only writing professor in the room, and I noticed that besides me, only a colleague from the philosophy department stuck a red sticker under the statement that ChatGPT can draft the kinds of essay assignments we give to our students. To me, it was shocking that computer scientists, economists, medical science scholars, and business professors alike would believe that AI tools can “do writing just fine” (to borrow the words of a faculty trainer at another event). Similarly, the conversation made it clear that while most others put green stickers on the statement about ChatGPT’s ability to complete coding assignments, the computer science professors in the room did not believe that. They knew better, as I knew better about writing. And the same was true of other disciplines.

Before the advent of AI, computer scientists didn’t have machine writing assistants, just as writing teachers didn’t have machine assistants that claimed to know everything about financial decision making, and financial managers didn’t have machine assistants that seemed to “do computing just fine.” When we sought human assistance, it cost us considerable time, money, and effort. Those costs are now so low that it is easy to problematically lower our expectations on critical issues. In this essay, I discuss how AI use is exposing people’s lack or disregard of knowledge, honesty, and accountability, and how to raise the thresholds in these three areas. That is, if we want to avoid letting AI make us ignorant, dishonest, and unaccountable to broader social good, we must commit to elevating and maintaining fairly high standards in all three regards.

Maintaining the thresholds 

AI tools seem to be prompting people to lower their thresholds for accuracy and comprehensiveness in content, for trust and credibility in communication, and for honesty and accountability in work. Public, media, and academic discourses increasingly show that scientists seem okay with AI-generated arguments when applying for grants that have serious social implications. Communication professionals seem to trust AI tools to make consequential financial decisions for them. And financial managers seem comfortable using AI applications to automate financial transactions on behalf of their companies and clients.

AI technologies are exposing weaknesses in all kinds of professions and in society at large. They are exposing our ignorance: when we take merely plausible patterns of words as fact or knowledge, rather than recognizing the plausibility for what it is, our knowledge threshold is likely lower than we need on the subject at hand. We’re using AI to pretend to know what we don’t. They are exposing our dishonesty: if we’re skipping the hard work of researching, reading, thinking, and investing labor to develop our ideas, we’re asking others to invest their time in ideas we didn’t invest our own time in. We’re using AI to give the impression that we have skills we don’t. And they are exposing our irresponsibility toward the environment and the social good: every time we use AI “for fun,” we are contributing to the mind-boggling amount of energy the systems behind these tools consume. Every time we use AI to get work done, we could be validating a knowledge system that doesn’t represent societies equally and fairly.

AI data sets will continue to over-represent a few societies, cultures, and epistemologies. AI algorithms will continue to reflect the rhetorical thought patterns of the dominant societies that create and control the systems. And the AI market will continue to advance the interests of the rich and powerful, especially in societies that have colonized, marginalized, and often erased the epistemologies of others.

Of course, AI is a wonderful new development in that it makes new things possible for all professionals, just as fire, automobiles, and computers did in the past. But the mad pace at which this new technology is advancing is not matched by growth in knowledge, skills, and accountability among its users. It is not enough for the writing teacher to have reasonable expectations about AI writing; to the extent that their writing has consequences in the world, the scientist and the financial manager also need a reasonable threshold of knowledge, skill, and honesty about writing. They shouldn’t use generated text in interpersonal, legal, or financial exchanges without considering all potential harms to others. In fact, we should all raise our thresholds as everyday users. We cannot afford to push our societies to a point where voters are not sensitive to AI-generated misinformation, neighbors communicate with each other through the made-up language of AI, and public leaders lack the nuanced understanding and courage to prevent the various harms of AI.

Noticing exposure

Every technological change, or social change for that matter, not only brings out what is in us but also opens up new possibilities for us. When radio and television were developed, we got to hear what people would say when they were not interacting with an audience; the internet, and social media in particular, added a massive wrinkle by allowing strangers to interact. AI is exposing what people will do when they can say and write things they didn’t create themselves or didn’t think through; further developments in AI are going to mediate communication, text, and ideas in ways we cannot currently imagine. One thing seems clear: AI will expose our ignorance, our dishonesty, and our irresponsibility toward society and the environment. How high we set our thresholds of knowledge, honesty, and ethical and social responsibility is up to us.

I had an eye-opening experience about these thresholds while facilitating an AI workshop for graduate students. For an end-of-year lunch and discussion for a writing support group, I created a few activities, with help from my graduate assistant, to help doctoral and master’s students see how they react to AI use by themselves versus by others. For the first activity, we asked the students to define a complex term in their discipline without any AI assistance, and then to do the same thing with ChatGPT only. When asked to grade AI’s definition, they were willing to give it 7 or 8 points out of 10. However, when asked how many points they would give as a teaching assistant if an undergraduate student submitted that definition, they said they would give no credit or just a few points, depending on how well the student understood the concept. It was clear that they didn’t want to give credit without evidence of learning.

We could say that the graduate students were justified in giving ChatGPT a higher grade, but they were perhaps not realizing whose time and energy the generated response was saving. So we gave them a second activity, in which they were asked to imagine using ChatGPT to write a job application letter, upon completion of their graduate degree, for a position as an industry researcher or university faculty member. Would they be comfortable submitting the letter, with some adaptation, because it is much stronger in the “quality of writing” than their own? They said they would. However, when asked whether, as members of a hiring committee reading that application, they would hire someone who submitted the same letter, they said no. This time, the concern was no longer credit for learning; it was whether the applicant actually had the skills to qualify for the job. And while they indicated that the gap in their responses had to do with job qualification, their responses seemed driven by convenience and benefit to themselves more than by honesty and accountability.

In the final activity, we asked the students to imagine that they had worked in the same company for five years and were writing up a SWOT analysis report, having spent six months on the study with a team. Would they use ChatGPT to “write it up” by feeding the “data” to it? The responses were mixed. How would they feel if the company’s manager fed their report into ChatGPT to generate an email response and praised their work with that email? Many said they would consider starting to look for a new job.

The opportunity

It must be noted that the flipside of this exposure of our ignorance, dishonesty, and irresponsibility is a massive opportunity. If we learn how to filter out junk and use AI assistance to create valid and valuable knowledge, we could elevate our thought processes and advance and apply new knowledge for greater social good. If we learn how to be transparent and honest about AI use, and focus on benefits to others as much as to ourselves, we could help make the world a better place for everyone. And if we learn how to mitigate AI’s environmental harms and mobilize it to advance social justice, we could find new opportunities to do both.

Sadly, for now, the status quo is not inspiring. AI companies are releasing products that are far from ready, gutting safety and ethics teams, finding ways to ignore or bypass government regulations (where any exist), and listening more to their investors than to public safety and social concerns. Worse, as AI use penetrates societies globally, more and more people are adopting AI tools with very low thresholds of knowledge, honesty, and accountability toward the broader social good.

Educators can begin by helping to raise and maintain those thresholds, but it is not clear how societies will do so for the broader public. There are no simple solutions to the epistemic bias. Still, scholars and other professionals must take the ethical responsibility to push back against the spread of disinformation, the aggravation of dishonesty, and the reinforcement of irresponsibility in society. We can do this by fostering critical AI literacy that has a strong global DEI dimension, and through public scholarship. Discourse can and must shape practice.

