Educational discourses about AI should have matured by now, but unfortunately they have not. The hype cycle is likely to last quite a while on this one. That doesn’t mean AI isn’t a serious and often beneficial disruptor, or that we shouldn’t use it to advance educational innovation. But AI is only likely to help transform education and other knowledge-intensive professions if we can get through the current fog and rebuild the professions more solidly upon their respective foundations. In this essay, I describe AI, as it is currently developing, as a paradigm of wild paradoxes that are inherent in how its systems are designed, marketed, and adopted.
The Learning Paradox
AI advocates most often market it as a companion for learners: they suggest that it is most helpful when students need clarity on a topic or are learning something new. This assumes that AI tools “know” their outputs to be true, relevant, and appropriate for the user. It also assumes that the learner can adequately prompt and use the tool, not just pick the right tool for the right purpose. In reality, users with adequate background knowledge on a topic can recognize whether an AI tool’s clarification or new information is reliable; novices, however, may be fooled by AI’s pretense of omniscience and the polish of its responses. Just ask a system like ChatGPT to “compare a potato and the movie 3 Idiots,” and you’ll see that it produces fluent and logical-sounding nonsense. Novice users are also more easily overwhelmed by the computational power, scale, and speed of AI’s responses, all of which go far beyond human capacities. Especially when AI tools save time and effort, learners can make all kinds of compromises with their learning. So, when promoters of AI describe it as a learning assistant, their claim overlooks how easily AI can harm learners’ long-term capacity building, especially during the early stages of learning.
The Teaching Paradox
Teachers who uncritically recommend or require their students to use AI fail at their primary duty as educators. Adequate prompting requires adequate content knowledge, disciplinary skills, and familiarity with the particular AI tool being used. Teacher advocates of AI tend to assume that a bit of “prompt engineering” will do the trick. In fact, learners need adequate language proficiency, logical and critical thinking skills, metacognition, reflective abilities, and more to use AI tools effectively. Ask a system like Perplexity (a common research assistant AI tool) to “give me a 2-page literature review on [specify your topic],” and you will see that it produces a polished synthesis that mixes peer-reviewed research with non-scholarly sources, makes absurd connections, and makes a mockery of the half-dozen or more intellectual tasks students must practice in order to develop the intellectual position that a literature review teaches. Even if students have already learned basic lit review skills, teachers who assign literature reviews must teach the more advanced skills their courses require. Furthermore, given the pressures of grades, workload, stress, the cost of education, and more, students may simply complete the task without doing the learning. Without adequate instructional scaffolding, AI use can remove the productive friction that learning generally requires.
The Expertise Paradox
As users gain expertise, AI systems may appear to “level the playing field” by giving emerging experts access to expert-like information and analytical assistance. In practice, those with deeper expertise are better able to recognize AI’s shortcuts, hallucinations, and design flaws, while novices are more likely to mistake fluent output for reliable knowledge. AI tools can give even emerging experts a false sense of competence outside their own fields; in fact, at an experts’ meeting, I was struck by how readily university professors marked “AI can do this” for complex tasks beyond their disciplinary expertise. In reality, AI systems have no access to most academic databases or specialized repositories, let alone non-digitized, multilingual, indigenous, or community-based knowledge. To test this, ask an AI tool a question about a local issue that is too new or uncommon for there to be information on the web. Yet AI is luring users away from empirical research in the lab and clinic, fieldwork in nature and community, and historical work in archives and faraway sites. By convincingly presenting an appearance of expertise, based mostly on internet junk and often limited or flawed datasets, AI companies are encouraging even institutions to substitute AI outputs for professional knowledge. Expertise cannot be “scaled” by scraping more of the internet; it is contextual and entangled in complex social realities.
The Flattery Paradox
Most generative AI tools seem designed to agree with, even flatter, the user. Just ask one of them to argue that puppies are ugly, and it will do a brilliant job. Or ask it to defend an obviously weak or absurd claim, and it will do so persuasively. This is not a bug but a key design feature; that is how the product markets itself to our subconscious minds. Speed, fluency, and confidence also flatter the user by making them feel more efficient and capable. Many people don’t pause to distinguish fluency from usefulness. When users are “wow’ed” by all of the above, that wow factor erodes critical thinking: the user’s experience of being impressed becomes the value of the tool. The charm of the tool becomes a liability, especially in educational contexts, where learners should be cultivating the “friction” of learning and experience, of struggle and discovery, of forming sound intellectual and ethical habits. Users are most fooled by flattery, perhaps also feeling intellectually enhanced, when they don’t have to do the hard work of thinking.
The Power Paradox
AI is framed as empowering individuals because it offers tools with unprecedented capacities to “extend” our intelligence and enhance our work. One colleague shared an inspiring story about farmers who used AI to write grant proposals and acquire funding; that is an undeniable benefit, because it added access, equity, and justice for the farmers. Yet this narrative obscures an asymmetry of power. The sheer magnitude of the disruption to broader knowledge ecosystems and the professions, which is already costing jobs in shocking numbers, and the concentration of wealth and power in the hands of a few companies, will dwarf all such benefits combined. While farming may be further down the line, the political and economic leverage of the tech industry will render insignificant the benefits derived by individual users or groups here and there. The empowerment rhetoric further masks the consolidation of epistemic and infrastructural power on a global scale, with entire nations standing to lose control of their information ecosystems.
The Employment Paradox
Schools and universities are urged to prepare students for an AI-based future of work, often by teaching them to use AI tools efficiently. The “career ready” argument has never been at a higher fever pitch than it is with AI. Most ironically, and hypocritically, AI advocates also admit that AI is going to take away a lot of jobs soon (if not most of them), often sounding weird and cringey about it. Safety researchers such as Roman Yampolskiy have warned that the pace and scope of displacement make stable workforce planning increasingly implausible. So, what serious educators need to do instead is update academic curricula and programs to better foster lifelong learning and adaptation. We can do this by making general education AI-responsive, helping students explore AI’s disruptions as they develop disciplinary skills and identity/confidence, and mentoring and socializing/professionalizing the more advanced students upon stronger intellectual and interdisciplinary foundations.
The Responsibility Paradox
AI systems distribute decision-making across companies (and their political and financial interests), designers and engineers (and their limited knowledge), and institutions and end users. But the end user, who is made to feel in control, has to take responsibility, since there are so far no regulations holding AI companies (or AI-adopting institutions) accountable for harms. Those with the least power, unless they are intentional bad actors, should not have to bear the burden. Students shouldn’t be expected to use AI “ethically” while more powerful instructors merely police its use without taking responsibility for teaching how to use or limit AI. Frontline workers shouldn’t have to bear the burdens that integrating AI often adds while working with a black box, with little knowledge about training data, model limitations, or system updates. Tech leaders frequently emphasize “responsible use” and “human-in-the-loop” models, but these phrases are rhetorical smokescreens, dishonest ways to avoid accountability. AI systems are designed to individualize and decentralize responsibility while keeping power and control centralized.
The Equity Paradox
AI is described as democratizing access to knowledge and opportunity, but the benefits are highly concentrated among the already privileged. Access to AI tools is unequal (whether within classrooms, communities, or countries), and the gap is magnified by unequal knowledge and skills, exacerbating digital divides. AI advocates tend to emphasize mere access (or exposure), but two students using the same AI tool may experience very different outcomes, depending on differences in background knowledge, time, and institutional support. Less privileged users face higher risks of misinformation, bias, and misguidance. From a technocratic perspective, mere “scaling” might also look like democratizing. But AI is democratizing inequities far more than it is creating access. Thought and communication are becoming flattened and performative, leaning toward dominant groups and cultures rather than anchored in diverse contexts and dynamic human relationships. Generative AI systems are also built on vast accumulations of human labor – everyday writing and literature, art and music, photography and videography, code and documentation, and the most dangerous work of labeling and removing disturbing texts and images – yet the benefits are sucked to the top (i.e., the few companies, or the men who run or own them). Collective knowledge becomes private property, while the appearance of benefit for all serves as an effective cover for the exploitation of everyone’s labor and knowledge, stories and emotions.
The Transparency Paradox
AI outputs appear accessible and easy to understand, yet the systems producing them remain opaque. Users are asked to trust what they cannot see. The processes behind an AI tool’s responses and behaviors – from training data to weighting, filtering, and optimization – are inaccessible to them. Beneath the clarity on the surface lies obscurity in tool design and function. This paradox extends into another: individual and public knowledge is harvested and turned into private property, with the promise that both individuals and the public benefit, even as someone charges a monthly fee (once the “freebie” window is over). And even during the freebie window, users are giving away their private information, with many people sharing details about the most intimate parts of their lives.
It seems to me that there are so many paradoxes because the industry is driven by people with the ethics of drunk car salesmen. And I am most concerned about these paradoxes as an educator. So, I will follow up on these paradoxes with a discussion of the faulty premises and false hopes affecting education. The benefits of AI will be better harnessed if we first expose the wild paradoxes and false promises.
The author is a professor of writing and rhetoric at Stony Brook University and serves as AI Fellow for the Public Good for New York’s state university system (Email: shyam.sharma@stonybrook.edu)