OPINION

We’re Hallucinating, Not AI

Published On: March 17, 2024 08:30 AM NPT By: Shyam Sharma


The author teaches writing and rhetoric at the State University of New York in Stony Brook. He can be reached at shyam.sharma@stonybrook.edu

When lawyers lie, doctors kill, or teachers fool us and then deflect responsibility to a machine, we must see these problems as the visible tip of humanity’s collective hallucination. 

Michael Cohen, a former lawyer of a former US president, was recently caught submitting AI-generated (that is, fake) legal cases to a court. He was asking the court to shorten his probation for having earlier lied to Congress. But Cohen is not a rare case. Artificial intelligence tools are misleading millions of educated people in highly sensitive professions across the world, not just everyday users with mundane purposes. Doctors are embracing AI-generated assessments and solutions, and engineers even more so. 

Most people know that AI tools are designed to generate plausible sentences and paragraphs, rather than locate or synthesize real information. The “G” in ChatGPT stands for “generative,” that is, for making things up. But tools like this are so powerful at processing the information in their datasets that they can produce stunningly credible-looking responses on a wide range of topics, ranging from the most reliable to the most absurd (the latter called “hallucination”). And they cannot distinguish the absurd from the reliable: human users have to do that. The problem is that because people increasingly trust AI tools without exercising their own judgment, humanity, more than AI, is becoming the party that is collectively hallucinating. Speed, convenience, gullibility, and a seeming desire to worship the mystic appearance of a non-human writer/speaker are all leading humans to ignore the warnings that even AI developers themselves place everywhere from their login screens to their “About” pages. 
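To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The words, scores, and function name are made up for illustration and do not come from any real product; the point is simply that a generative model picks its next word by plausibility, and no step in the process checks whether the result is true.

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    # Scale the model's scores by a "temperature" setting: low values make it
    # stick to its top guess; high values flatten the distribution so less
    # plausible words get picked more often.
    scaled = [s / temperature for s in scores.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - max_s) for s in scaled]
    # Sample a word in proportion to plausibility; nothing here verifies facts.
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

# Hypothetical scores a model might assign to continuations of
# "The capital of Nepal is ..."
toy_scores = {"Kathmandu": 4.0, "Pokhara": 1.5, "Paris": 0.5}
print(sample_next_token(toy_scores, temperature=0.7))  # almost always "Kathmandu"
print(sample_next_token(toy_scores, temperature=3.0))  # fluent wrong answers become likelier
```

Under these assumptions, the only thing separating a reliable answer from an absurd one is the probability the model happens to assign it, which is why a human reader still has to do the judging.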

Deflecting responsibility 

An AI response is useful to the extent that a human user has skillfully prompted for it and can then knowledgeably judge it. Did the tool make up imaginary people and past events without telling the user? Did the underlying dataset have adequate information on the topic, and do the algorithms adequately maintain ethical boundaries? Or is the human user impressed the way cows are “impressed” by a remote-controlled toy car: simultaneously curious and scared into a frenzy? Indeed, AI promoters have been arguing that “we don’t know how it is learning so fast” or “how it is doing all these things.” That’s quite a marketing feat. 

When machines make things up and take no responsibility, and help humans deflect responsibility back onto them, society loses its established anchors of trust. If the gentleman above had deliberately fabricated the legal cases, he would have faced severe consequences. He didn’t face them, because he could simply say he didn’t know that Google Bard makes things up rather than working like an advanced search tool. People and institutions seem more willing to overlook the dishonesty involved, or the harm caused, when AI tools are in the picture. The implicit circular blame game is creating a vast ethical minefield across the landscapes of civic and professional discourse. 

Trust and credibility were already on thin ice in the social media era, when alarming experiments were perpetrated on society by companies that became so powerful that governments grew increasingly hesitant to regulate them. With the rise of AI technologies and their integration into most domains, from media to finance to politics and education, accountability is collapsing further, along with human agency. Even powerful people and organizations, and not just ordinary users, are getting away with it by blaming machines. 

In traditional human discourse, communication starts with a rhetor (speaker/writer) who creates a message and delivers it to an audience (listener/reader). Add a text generator to that process, and we have a proxy agent that lacks consciousness of its own ideas, awareness of context or culture, regard for values and norms, and accountability for its actions. Discourse no longer has shared points of reference in reality, trust in rhetors, or accountability for dishonesty or harm toward the audience and other relevant parties. What we have instead is a Mr. Know-It-All that is as arrogant as whoever turns up the “heat level” inside a black box owned by a corporate entity. 

With the entry of the AI proxy into discourse, the idea of knowledge itself is being redefined in disturbing ways. First, AI systems are trained on an extremely small fragment of human knowledge: selected sites on the internet. Most books produced by societies around the world over the centuries are excluded. So are, so far, vast academic databases containing far more reliable information than current datasets. And of course, AI datasets have no way to access the vastly broader scope of human knowledge that is embodied in professions and occupations, communities and cultures, lifestyles and living traditions around the world, not to mention the skills and experiences embodied in individuals and shared by groups, or the experience that is reflected when humans speak, write, and perform in limitless ways. Yet AI tools effectively pretend that they know everything there is to know. 

Epistemic colonization on steroids

Beyond the collective hallucination of trusting makers-uppers as know-it-all agents, humanity is also hallucinating at a global geopolitical scale by not asking whose bodies of knowledge AI tools have access to. Which societies and cultures (or on-the-ground realities) do they serve? Whose patterns of thought and discourse do they reproduce and promote? The political implications of people around the world believing that AI tools serve everyone’s interests equally, equitably, and fairly are mind-boggling. Just ask who is able to pay the monthly subscriptions. Who gets to influence the design? Whose harms go overlooked? Whose lifeworlds do AI tools reflect and help to universalize? Whose benefits and harms are addressed, and how quickly and effectively? Which governments can force AI creators and deployers to provide guardrails and safety responses? In the social media era, Facebook, for instance, refused to give many countries seriously harmed by its platform even a single safety staffer. Will AI companies, with far more potential for harm, treat all countries and communities equitably? Not likely.  

According to New York University’s AI Now Institute, less than 1 percent of the total training data of current AI systems is from or about the entire African continent, which has 54 countries and 20 percent of the global population, and which, as the cradle of human civilization, has advanced a stunning diversity of cultures and epistemologies. Asia, home to the majority of the world’s population, does not fare much better. AI training data is also mostly in English. 

Basic facts about AI like these remind me of the story of the drunken man who kept searching for his keys under a streetlamp; asked why he didn’t look beyond that spot, he said, “This is the only place in town that is lit.” People around the world are relying on the metaphorical drunken man to tell them where to look for their keys, while the larger global town square waits for more equitable systems of making and sharing knowledge. 

As it is, AI systems are perpetuating an epistemological colonization of the world, far worse than before and long after the colonizers left. It is sad to see scholars across the world happily “co-authoring” books with ChatGPT, sharing knowledge that shockingly lacks roots in local society, and turning into mimic men who don’t believe they can address their own local social and professional needs. 

Is there hope?

Could AI systems and tools themselves be made better? They only “know” what is in their datasets, and that curse of knowledge will persist. But designers can drastically expand the datasets to make them globally inclusive, sociopolitically equitable, and ethically sound. They can update their algorithms with the extremely rich and diverse modes of discourse, rhetorical traditions and practices, and value systems underlying the patterns of thought that the world’s many cultures offer. 

To give a small, positive example, Sal Khan of the famous Khan Academy partnered with OpenAI to develop the ChatGPT-based Khanmigo, reportedly pushing OpenAI to respond to users with greater flexibility. Khan realized that his tutoring tool needed the personality of a helpful guide, not an overconfident schoolmaster. Indeed, OpenAI now acknowledges that “ChatGPT is not free from biases and stereotypes, so users and educators should carefully review its content. . . . Bias mitigation is an ongoing area of research for us.” ChatGPT feels humbler now than it did early on: it not only increasingly refuses to generate illegal or unethical content but also often gives users suggestions for situating its responses in real-world contexts. OpenAI further notes that “the model [behind ChatGPT] is skewed towards Western views and performs best in English,” adding that “some steps to prevent harmful content have only been tested in English.” This inspires some hope. 

Yet AI systems and tools will likely stay close to the center of Western, male-dominated, white, middle-class understandings and expectations in how they view the world, how they engage in discourse, and where they draw lines on critical issues. Knowing this, users must push back, raise their own awareness, and use AI systems mindfully. Scholars must seek to make small gains in the right direction from wherever we are.

AI systems and tools can and must be made better for all of humanity; there is no other choice. The harms and dangers will persist, but the positive potentials can and must also be realized far better, helping the global community move far beyond today’s collective hallucination. Developers must vastly enhance AI tools’ ability to interpret and analyze a globally expanded and inclusive knowledge base, to respond to diverse users across the world, and to adapt to the contextual realities of vastly more communities beyond the West. Today, even academic researchers are offered imagined authors and books, even as doctors and engineers trust generated content without vetting it. This is shameful. 

What the world needs

Ultimately, the world needs AI systems and tools that both represent and serve people across national, cultural, racial-ethnic, professional, socioeconomic, and other borders. That is, AI developers must adopt a diversity-equity-inclusion (DEI) framework for updating current systems and creating new ones. The majority of human beings cannot remain an afterthought, as they are now. Acknowledging that the current approach doesn’t distinguish between useful knowledge and absurd “hallucination” is the first step. But a global DEI framework is necessary to earn the trust of the world’s diverse peoples and cultures, and to serve them well.  

When lawyers lie, doctors kill, or teachers fool us and then deflect responsibility to a machine, we must see these problems as the visible tip of humanity’s collective irresponsibility. Only waking up to the larger reality under the surface will help us rise and overcome challenges that AI will continue to bring to the world. 

