OPINION

Using AI: Credit, Content, Credibility

By Shyam Sharma

AI is blurring boundaries of all kinds, so we all need a communicative philosophy to help us set boundaries for content, credit, and credibility when using AI tools.


Faced with machines that seem to (and often actually do) match our linguistic abilities, students, professionals, and the general public alike are struggling to maintain boundaries for effective learning, professional relationships, and honest communication. The usefulness of any AI tool lies at the intersection of its ability to generate content and our ability to judge that content knowledgeably and skillfully. If a tool generates more than we can judge, we leave the zone of safety and enter the zone of danger, which risks undermining the value of the content, making the credit we seek undeserved, and threatening our credibility and human relationships.


AI and credit


Let us begin with learning. Imagine that you were a college professor before the internet, and you learned that one of your students submitted to you a research paper that he had asked his cousin to write. Imagine that he actively guided his cousin to meet your expectations for the top grade. Most likely you would not have given that student the top grade. Now imagine that you are a college professor today, and you just learned that a student has submitted a paper he prompted an AI tool (such as ChatGPT) to write. Imagine that he skillfully used ChatGPT to produce the paper, and the final product meets your expectations for the top grade, while he learned little from the process. Would you give the student the top grade?


The premise in both “learning” scenarios above is that the learner gets credit for showing evidence of learning. And if a student short-circuits their own learning, the teacher should not give them credit. The teaching-learning contract, often written up in course policies, is violated when the student uses a cousin or a human-like assistant. In the educational context, the question is NOT whether the aid (human or machine) can produce content or text as well as the learner can, a question educators should not be distracted by. The educational question is if, how, and how well available technologies can assist and enhance the student’s learning. It is how teachers can facilitate that assistance, or the meaningful harnessing of technological affordances. It is how students must develop their own knowledge, skills, and agency to be able to decide when, how, and why to use which particular tools, or not, for facilitating or enhancing their learning.


One approach to AI technologies for assisting and enhancing learning is to break down a particular educational assignment or task into the smallest (or atomic) learning tasks necessary and to identify a specific tool and its specific affordance to assist or enhance the intended learning. For example, an undergraduate student may break down a research paper into ten, twenty, or thirty component skills and learning tasks, such as doing preliminary research on a potential topic to locate books and journal articles on it, formulating a research question for further inquiry, developing a working thesis, annotating relevant sources, drafting, introducing and engaging sources effectively, doing peer review, revising to make paragraphs more cohesive, and so on. The student could then take the first task (doing preliminary research on a potential topic) and pick a particular AI tool that might help her practice that task, exploring if and how well that tool helps her learn how to do it. The educational outcome is not completing the task: it is using the task and the tool to learn how to do it. The credit her teacher will give her is for the process and experience of learning, even though the product may be viewed as a reflection of that process and experience.


There is also an elephant in the educational room: transparency. If the teacher has not endorsed the particular use of a particular AI tool, or has not yet taught the student to use it effectively, then the student must reveal to the professor what specific tool and affordance she used, and for what particular purpose, in the process of doing the research and writing the paper. She can do that with a footnote, in addition to using the standard citation method for quoting text or summarizing ideas from AI tools. If the professor uses an AI tool, he too must reveal that, with specifics, to his students. That two-way transparency is the basis of trust in the educational contract, without which communication breaks down and integrity and honesty are undermined.


AI and content


Are there situations in education where the focus is not learning but content? Not really. Imagine that you are a doctoral student working on your dissertation and that you are able to use AI tools without short-circuiting your learning, and your advisor knows this. Say also that you effectively use specific tools and harness their specific affordances for specific needs. You ask and answer sophisticated questions, such as: (How well) can a given tool help me develop a hypothesis or research questions? (To what extent and how effectively) can any tool assist me in the literature review process? Is any tool yet designed (or adaptable) for data gathering, analysis, or interpretation? What classification, translation, or context extraction tools could help me perform or enhance tasks related to these functions? And, say, you only use AI-generated content that is real, reliable, and relevant, closely judging and cross-checking it.


But your advisor will not, and should not, give you credit for producing text instead of learning how to develop hypotheses or research questions, conduct literature reviews, gather, analyze, and interpret data, or classify, translate, and contextualize it. They should not grant you a degree for producing “text” without doing all of the above. That is because your dissertation is a reflection of your learning process and intellectual experience, your development of a disciplinary identity, and your professional growth. Your doctoral degree is not just a credit: it is evidence that you are a person of significant intellectual expertise and stature, academic integrity and rigor, and professional honesty and ethics. And only uses of AI tools that “assist” you in meeting your learning and development goals should receive credit.


What credit you receive rests on the single foundation of credibility, which depends on transparency: the disclosure that the only credit you seek is for the work and the learning you have actually done. This does not prohibit you from using the calculator, the computer, the internet (or books and journal articles, for that matter), or AI tools: it requires you to use all of the above not only effectively and reliably but also with full agency and honesty.


In the academic context, it is not enough to just avoid the charge of plagiarism by citing others’ ideas (including the machines’). Learning with and from AI is flawed because its content is inaccurate or incomplete, owing to the tools’ extremely narrow datasets. It is also problematic because AI’s algorithms (or thought patterns) are limited to certain discourse traditions and practices. AI content can be unreliable because the tools are mostly designed to “generate” plausible patterns rather than to locate and synthesize information (so hallucination is a feature, not a bug).


AI and credibility


We can best understand issues of credibility in AI use if we focus on professional communication. Imagine that an AI tool provides you with what looks like a far better job application letter than you can currently write. Will you submit it (maybe with some tweaking) because the letter is better written than one you could write yourself? Or imagine that you are the hiring manager, and you read a brilliantly written job application letter that is clearly written by AI (mostly or entirely). Would you extend a job offer to that candidate, or would you look for another candidate, other things being equal? Or imagine that you complete a months-long analysis of a major project in your organization and submit a 20-page report with a one-page executive summary to your boss. Or imagine that you sent an email to your boss about a complicated problem affecting hundreds of staff members in your unit. Your boss uses ChatGPT to respond in detail, but you can see that he didn’t read your report or your email. Would you be happy? Does it matter that the AI tool got most or even all of the context and complexity of the issue “right”? Do you expect your boss to give you, and the goals of your communication, his human attention? Where do you draw the line on his AI use?


Issues of trust and credibility become visible to us when we put ourselves on the receiving end; trust and credibility are the fuel on which a profession runs. Tools and strategies should enhance professional communication, and that happens only when we maintain trust and goodwill.


There is no broad, bright line separating human and AI content (AI detection tools are ineffective). But we can (and must) learn to identify or create small and specific boundaries, based on the contexts, audiences, media and affordances, and purposes of communication. These boundaries, created and adapted individually and carefully, can constitute our personal communicative philosophies.


To share my own principles of AI use: I sometimes use AI tools to jog my memory on topics I am knowledgeable about. I use AI-generated ideas, such as when brainstorming a topic, only within my expertise and only after verifying them. And I always acknowledge what tools I used (I didn’t use any for writing this essay). I use AI to challenge my own thoughts and perspectives, but I don’t let it “improve” my style where style is substance, or whenever I should or want to speak in my own voice (which is almost always). And I always try to put front and center my audience (colleagues, friends, family, students) and my relationship with them, as well as their trust in me and my credibility among them.
