OPINION

Using AI Responsibly

As artificial intelligence becomes increasingly capable, we must shift our focus from what it can do to what we should do with it—guided by knowledge, skill, goals, and values that preserve human responsibility and integrity.
By Prof. Dr Shyam Sharma

“AI tools cannot read,” argue the authors of two books I read recently, one titled Rebooting AI and the other The Myth of Artificial Intelligence. I tested ChatGPT-4 on some of the reading tasks the authors say AI tools cannot perform, and it has already proven many of their claims mostly wrong. While both experts make larger, valid points, it is time to shift our focus from asking what AI can or cannot do toward asking what we should or should not do with AI, and what it should and should not do for us.


If AI tools can do what we are supposed to do as humans, should we use them to impersonate us, help us fake expertise, undermine our agency, or hurt our relationships? No. The key issue is our judgment: what can we delegate to AI, how, when, and why? Our values must guide how we use AI for our own benefit, and especially when that use affects others.


Knowledge


If AI tools can already take on cognitive tasks and are increasingly performing social and professional ones, which tasks should we delegate or “partner” on? And which ones undermine our agency, ethics, responsibilities, or relationships, and when and how? As I explore in this essay, we must develop clear personal and professional value systems. The first step toward doing so is knowledge and experience. We need knowledge of what AI tools can and cannot do in order to develop clear values for the age of AI; but it is knowledge of the subject matter, of people and society, and of culture and value systems that we must master even more. It is certainly ignorant to say that “AI knows more than I do”; but even if we cannot know how AI tools do what they do (the “black box” problem), we must learn how they are affecting our learning and knowledge work, our professions, and our society.


Whether we work in education, health, law, policymaking, child-rearing, spiritual care, or social service, we will need more knowledge, not less, to learn how to use AI tools well and to maximize benefits while mitigating harms. Uncritical reliance on AI technology for knowing about the world and each other will do more harm than good; knowledgeable use can harness the good and reduce the harm.


Skills



One common criticism of AI is that the tools do not (or cannot) do what the industry claims, and we should salute those who point out the flaws in a technology backed by untold amounts of money, centuries of colonial and inequitable knowledge foundations, and often corrupt politics. The problem with focusing on AI’s capacity, even when that focus is valid, is this: what are we going to do if and when AI tools start doing reliably what they currently cannot?


A wide range of private services has already replaced human connection with AI bots, seemingly wherever consumer experience can be ignored, such as when there is a monopoly or more demand than supply. Neither the users replacing their human attention or labor with AI nor the AI tools themselves are ready for the changes being made. More troubling still, the more power people have (teachers, doctors, politicians), the more they are already misusing AI in ways that harm those under their care. Even when people become skilled enough and the technology becomes effective, that progress is unlikely to be matched with responsibility, patience, and experience. Harms seem likely to continue outpacing help.


Goals


Many harms caused by AI result from people forgetting the goal and focusing instead on getting a task done. Knowledge and skills, when applied toward clear goals, help us act with better judgment. Remembering the purpose behind what we are doing can make individuals and systems more thoughtful and less careless. 


Focusing on goals can keep students focused on learning, not merely on completing assignments or getting a good grade. It can help teachers use AI to make learning happen, not to dump content on students or automate the tests students must take. It will challenge doctors to remain attentive and not abdicate care to machines. It could remind legislators to study, deliberate, and consult, rather than generate “laws” with AI. It could remind the police to use AI to help ensure justice, not to aggravate injustice. Parents who focus on higher goals can help their children grow emotionally, socially, and intellectually, instead of helping them compete dishonestly with their peers. Researchers could hold themselves accountable for exploring real-world complexities, instead of copy-pasting synthetic “data” just to get a publication out.


Values


What must ultimately change, on a social scale, however, is our value system. We must ground AI use in values that help us focus on knowledge, skills, and goals. AI is not just a faster computer or a more powerful version of the internet; it is disrupting our cognitive abilities, ethical responsibilities, and social relationships as well. AI’s capability to create seemingly new ideas and products out of thin air is different from the internet’s role in helping people sell books and clothing with greater speed and convenience. As such, we must ask questions of value: what pulls me toward conveniences that may lead me to lie or cause harm? What would push me toward what is right, fair, and honest?


As citizens, we must cultivate habits of mindful AI use and critical reflection, because our daily decisions, especially when mediated by AI, have ethical implications. In personal use, it is not enough to ask, “Why can’t I just look at the AI-generated abstract instead of reading the whole book? After all, I’m hurting no one other than (potentially) my own understanding.” For our own sake and for others’, we should not become people who never read the full book. It is fine in many contexts, wherever AI becomes reliable, to use it to preview books and research articles, such as when we are deciding what to read. But even when no one is directly harmed, taking shortcuts will shape our habits, shift our thresholds for effort, and influence how we treat intellectual work and each other. What we permit ourselves in moments of convenience may become norms that diminish personal growth and collective standards, harm others, and, not least, harm the environment.


Compare AI with another “person” whose ideas we copy and paste into our communication with a third person: even if AI can “produce” the text we need or want, using it this way remains problematic. In the human world, we use our own thinking and we credit others for theirs. If we further keep in mind that AI tools bear no responsibility for the strings of words or plausible images they find or create for us, then even after indicating that we got them from or via AI, we have still potentially undermined interpersonal and social ethics. We do not make things up without bearing responsibility, and we do not credit ideas or artifacts to someone who cannot take responsibility for them. Simply replacing the “someone” with “something” does not make it better; it makes it worse.


Responsibility


In the professional world, the stakes are even higher: the consequences of delegating to AI without appropriate oversight are not just ethical but often legal. Professionals must remain the ultimate bearers of responsibility, no matter how advanced the tools they use. This means establishing clear boundaries for when and how AI may assist, disclosing its use, and ensuring that human judgment remains central to decisions affecting others’ rights, safety, or well-being. Institutions should require training, create accountability mechanisms, and enforce professional standards that align AI use with core ethical (as well as legal) obligations.


In the world of education, we must not confuse efficiency with effectiveness or speed with ethics, nor let convenience corrupt the profession. The act of teaching is fundamentally relational and context-sensitive. Treating students as recipients of automated content, rather than as participants in a process of inquiry, turns education into a transaction and undermines its transformative potential.


In research and scholarship, scholars are increasingly normalizing what I call “machine vomit” and publishing copy-pasted content. Generating plausible patterns, brainstorming ideas with an unconscious partner (even one that is technically “reliable”), or presenting combinations and solutions that AI spots and the human partner merely confirms: none of these is convincing as new knowledge. To make matters worse, AI content feeding back into AI systems is making AI even more dangerous. If research is a tool for producing new knowledge, we must ask: is my AI use a mere convenience, or is it dishonesty? Am I using it in a knowledgeable and skillful way that is guided by solid values? Am I taking ultimate responsibility toward the stakeholders? It is only through relevant knowledge, contextual skills, and goal-driven uses that we can develop the values that should guide our AI use.


AI tools can go far beyond “generating” plausible text to helping with a wide variety of research tasks, and yet researchers must ask: is this use of AI a shortcut, a deception, or a responsible application? Convenience cannot be the compass for scholarly integrity. Unless researchers preserve the distinction between generating information and creating new knowledge, and between plausibility and truth, they risk corrupting the whole scholarly enterprise. Without getting out of the “cave” of plausible words and into the world of discovery, they will not advance knowledge; they will merely multiply noise.


We must use AI transparently, take full responsibility for its outputs, and avoid using it to bypass meaningful effort and attention. Incidentally, when my initial draft of this essay grew very long, I used ChatGPT-4 to help me identify paragraphs I could condense. I use AI tools for purposes like this, where they help me pay more attention, remember things better, and so on, and yet I worry whether I am practicing enough of what I preach. Outright rejection of and resistance to AI are not likely to be realistic or effective. But, ultimately, the question is not what AI can do, but what we choose to do with it, especially when no one is watching or when unethical uses become normalized.


As AI blurs ethical boundaries, we must hold on to principles like the golden rule, the obligation to do no harm, and the TARES test of truthful, authentic, respectful, equitable, and socially responsible communication. Values like these do not change with technology or even society; the advent of AI has only made them more urgent than ever. Our future with AI depends less on its power and more on our judgment, our ethics, and our shared responsibility toward each other.


The author is a professor of Writing and Rhetoric at the State University of New York at Stony Brook. Email: shyam.sharma@stonybrook.edu
