OPINION

AI and the Test of Teaching

As AI dominates news and education, educators must develop better tests than Turing did, seeking to foster human intelligence while confronting the dishonesty of the AI industry and the powerful financial forces behind it.
By Shyam Sharma

Educators have a professional and social responsibility to push back against the hype and dishonesty inherent in AI technology today. 


One of my silly childhood memories from northeastern India involves a dishonest boy whom we started calling “Snotty” when we found out how he had been cheating us. This clever playmate would throw rocks at random things, then claim that each landing spot was his target. It was embarrassing to learn that Snotty was not supremely skilled. That must be why I have no memory of Snotty’s real name, though his antics are still fresh in my mind.


I remembered the story above while reading a book on the opposite end of silly, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Written by computer scientist and entrepreneur Erik Larson, it is a serious investigation into the history and current public perceptions of AI. Larson starts by exploring the famous “Turing Test,” a challenge that the British mathematician Alan Turing set up for determining if, or when, future computers might become “intelligent.” Larson shows that Turing built the test on a fundamentally flawed premise: a computer’s ability to fool humans into mistaking it for a person. Turing ignored the complexities of human intelligence, including our social and emotional intelligences.


As an educator, I am reminded of Snotty’s dishonest tricks and Turing’s flawed intelligence test every day. As AI dominates news and education, educators must develop better tests than Turing did, seeking to foster human intelligence while confronting the dishonesty of the AI industry and the powerful financial forces behind it.


Fostering human intelligence


I remembered the Snotty story most vividly when ChatGPT, then powered by GPT-3.5, went viral in 2022. The trick of shooting first and claiming targets later, or even the act of setting up low bars against which to compare human intelligence, is playing out in all kinds of offensive ways in AI-dominated landscapes today. For instance, AI tools produce merely “plausible” responses, and we are asked to ascribe meaning or value to them. AI companies scrape every piece of intellectual property they can with their enormous harvesters, then ask us to pay for the remixed material (with added fabrication).


This is an extension of Larson’s critique, but I think we humans can confuse ourselves far more easily than any machine can confuse us in the Turing test. We can confuse ourselves with a ball whose other half is missing (which we naturally interpret as a full ball in our minds) and with a variety of optical illusions, with emotion-laced illogical ideas, harmful ideologies and propaganda, biases and conspiracy theories, wishful thinking, and deeply ingrained habits of thought that resist change.




None of the above confusions makes us less (or more) smart than machines, nor does the fact that machines can process far more numbers, words, and data points than our brains can possibly handle. They make us uniquely, incomparably intelligent. As the tech industry muddies the waters, reckless and unaccountable for the harms it is causing to education, democracy, and the environment, fostering human intelligence has become more urgent and important than ever before.


The test of teaching 


Let us take the intelligence claims of today’s artificial intelligence (AI) industry into the educational context. Teaching is a vastly more complex enterprise than delivering or following plausible word patterns (even where AI tools have become reliable). We must not define teaching downward to a silly level, such as the delivery of content and multiple-choice quizzes, and then say that AI can do it. We will be dishonest about our profession if we start giving automated feedback or otherwise offload our teaching responsibilities onto AI tools. Effective teaching requires the facilitation of social, emotive, and transformative human experiences.


Google argues through NotebookLM, its education-adapted AI, that students and scholars alike can now “think smarter, not harder” by letting it do the heavy lifting. NotebookLM invites users to upload documents, which automatically prompts a summary, with tabs to produce an audio overview, a “briefing doc,” a study guide, an FAQ, and a timeline. It has even started giving a thematic title to its summary, offering potential discussion questions to click on and generate answers, and so on, all of it trying to convince us that we no longer need to expend our intellectual energy on burdensome tasks like reading or analyzing texts, gathering or synthesizing information, or visualizing and interpreting data.


The problem is not just that AI tools cannot yet do what they claim to. For instance, the claim that AI tools can “read” assumes they can “understand” texts, and the belief that they can “summarize” assumes that they can process texts to infer context, audience, genre conventions, and purpose the way humans do. Those claims are dishonest because the premises are false. The deeper problem is that even if and when AI tools become able to read and summarize texts reliably, or as well as humans can, asking them to “read” or “summarize” will undermine student learning.


AI-dependence makes learning shallow, as it does teaching. What I learned last week from NotebookLM’s summaries of texts has mostly disappeared by today; in contrast, I remember ideas from texts that I actively read a few decades ago by investing my time, attention, and labor. So it is both silly and dishonest to tell someone to skip the process of reading (or summarizing, researching, or analyzing). It is no different from telling someone on a morning walk to “just get in my car, walking is silly,” especially if you plan to eventually charge them a monthly subscription or find a sneaky way to make them pay.


Educators’ social responsibility 


Even beyond education, using AI tools can be, in effect, a lazy, unproductive, or dishonest thing to do. Imagine receiving lengthy AI-generated emails from a stranger, colleague, or friend. Would you spend your time and energy on something they didn’t invest theirs in? This is why educators must stand their ground and help new generations develop the skills to read, with and without AI tools.


Our students aren’t going into a future where they can (or would want to) pull out a phone to record every conversation or ask ChatGPT how to answer a question at a job interview. They should still be trained to read books and articles, reports and memos, emails and text messages, as well as to use AI assistance when the goal is not learning, when they can adequately judge the machine’s “reading” as reliable, when the effect of its use is not dishonest, and so on.


If we keep lowering the bar for what counts as human intelligence, while letting the speed, efficiency, and convenience of computers undermine our honesty and ethics, the muddied waters of discourse about both human and machine intelligence will make these values less and less appealing to the public. If “reading” comes to mean uploading a file and pressing the “summarize” button to skim through the synthetic text in a minute or two, then that will become the new normal even in the university (not to mention in society). If we keep introducing powerful new AI technologies into our education, government, professions including medicine and law, and public discourse without adequate guardrails, we will do to society and its institutions what supercars going 300 miles an hour would do to highways built to sustain 100 mph at most.


Of course, if we only oppose the influences of AI, we and our students will miss opportunities. For example, if we use NotebookLM to generate a summary before actually reading (or at least purposefully scanning) the full text, especially while annotating, summarizing, or discussing the ideas with someone, then the tool can enhance the experience of reading, engaging with, and using texts. But that is not what Google is selling: it is selling laziness, which will severely undermine learning, and teachers who adopt such tools uncritically are betraying their students and society at large.


We should not be steamrolled by the hype, or go wherever personal convenience or self-interest takes us. We should not abuse our power over students, or fail to fulfill our professional responsibilities to cultivate ethical behavior. The public is more fascinated by AI’s ability to answer legal or medical exam questions than inclined to ask why such important educational tests are machine-answerable. And AI is reshaping education and careers. So teachers and scholars must challenge the power dynamic, asking tough questions even when it is risky.


Cultivating character


Turing was undeniably talented, but as Larson points out, he became arrogant about technology’s power after his work with computers helped win World War II. Turing was not dishonest, but today’s AI industry depends on often dishonest and always exaggerated claims, leading to the widespread false belief that AI can think like humans. With enormous financial power and political influence, to the point of taking over governments, the tech industry is reshaping media and educational discourse about its products. The most significant of its strategies is the use of metaphors about human capacities (thinking and learning, reading and writing, arguing and opining, even feeling and dreaming). This hijacking of human metaphors makes it harder to point out the difference between human agency and machine action. Scholars must expose this abuse of metaphors, deception of the public, and manipulation of systems like education, whether these are done intentionally or are products of social power dynamics or mere market forces.


One other thing I remember about the story of my childhood playmate Snotty is this: in trying to compete with him, his peers practiced hitting targets so much that, to this day, I still seem quite good at that skill. I realize this when I play games like seven stones with my kids on New York’s Long Island beaches. While engaging with AI is inevitable, and often beneficial, we must also expose the hype and dishonesty.


The world’s educators can and should do what we can to challenge the AI industry to stand on more honest ground than Snotty’s tricks. To do so, we must develop our own Turing tests: more informed, humble, and focused on human intelligence and wisdom. Our test in the years and decades ahead will be how well we foster human intelligence while using increasingly powerful and disruptive technologies. It will be a test of how we defend the ground on which society can cultivate human competence, culture, and character.


 
