On the opening day of the CPN-UML’s 11th general convention in Sallaghari, Bhaktapur, leader Mahesh Basnet posted photos meant to signal momentum — a sea of bodies, a party back on its feet. One of those images had an AI-generated crowd added using Google’s Gemini, and it spread online as if it were real. Basnet had publicly talked about mobilizing 500,000 people; police estimates for the same gathering were far lower — around 70,000. Whether the edit came from a campaign aide, an overexcited supporter, or someone closer to the center, the point is the same: Nepal’s election season has already started with manufactured “proof.”
Nepal now heads toward parliamentary elections on March 5, 2026, after the September “Gen Z” protests and the government’s social-media clampdown tipped into violence, upheaval, and a profound trust crisis. Reuters initially reported a death toll in the dozens, later updated to 77, along with thousands injured and major damage to public buildings and the economy.
That is the backdrop for the coming vote: not just a contest of parties, but a contest over what citizens can still agree is true.
Platform bans don’t fix an information crisis
Nepal tried the bluntest tool first: shutting down major platforms. The government blocked services like Facebook, X, and YouTube for failing to register, and the ban helped ignite mass protests — then got rolled back after blood had already been spilled.
Bans do not disinfect an ecosystem; they rupture it. Ordinary people lose their channels for news, work, and family. Determined actors route around restrictions, shifting rumors into private groups, fringe apps, and encrypted forwards. And when platforms come back, the public returns to feeds that are hotter, angrier, and easier to manipulate.
In other words, bans don’t end misinformation. They push it underground — and make the next lie harder to trace.
The next distortion is a voice
If the Basnet crowd image is the warm-up act, the main event is coming in audio.
This year, Swarnim Wagle, a vice-president of the Rastriya Swatantra Party, filed a complaint with Nepal Police’s Cyber Bureau alleging that deepfake audio of his voice had been used in video content, including material presented as if he were in conversation with India’s Prime Minister.
That case matters because voice notes are Nepal’s political bloodstream. They travel faster than headlines, jump across family chats without context, and feel intimate. Audio deepfakes exploit exactly that. And detection is still a mess: Poynter’s fact-checking coverage has noted that tools for spotting AI-manipulated audio are inconsistent and often fall short, especially for everyday users who lack forensic software and newsroom time. (The Poynter Institute is a nonprofit journalism school that, among other programs, operates the International Fact-Checking Network.)
In a high-emotion, high-stakes election, one believable voice clip can outrun a dozen corrections.
Nepal’s rulebook is older than the problem
Nepal is not starting from zero. The Election Commission has a “Policy on the use of social media in Electoral Management, 2077” (2021), which recognizes the need for structured communication and the risks of online harm.
And ahead of the March 5 election, the commission has floated a draft code of conduct that explicitly prohibits spreading false or misleading information on social media and bans fake accounts intended to influence the election.
But the generative-AI era needs an addendum that is blunt, specific, and fast:
What counts as synthetic or manipulated media (image, video, and audio)?
What disclosures are required when campaigns use AI tools?
What is the triage system for election-timeline complaints (hours, not weeks)?
What penalties apply — and who gets punished: the creator, the publisher, the candidate, or all of the above?
Without that clarity, enforcement becomes arbitrary — and arbitrariness is rocket fuel for conspiracy theories.
What works better than bans
Newsrooms and fact-checkers — provenance first.
Treat every “explosive” clip like a crime scene. Who posted it first? When? Where is the original file? If it’s a video, grab key frames and reverse-search. Listen for audio seams: unnatural breaths, clipped pauses, room tone that changes mid-sentence. Publish the receipts (not just the verdict), so audiences learn the method. Poynter has practical verification guidance that can be localized quickly for Nepali reporting teams.
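For newsrooms that want to operationalize the frame-grab step, here is a minimal Python sketch, assuming OpenCV is installed (`pip install opencv-python`); the file names and the two-second sampling interval are illustrative choices, not a standard. It pulls periodic frames from a suspect clip and records a SHA-256 hash of each saved image, so a fact-check can publish exactly which frames were reverse-searched:

```python
import hashlib
import cv2  # pip install opencv-python

def extract_key_frames(video_path: str, every_n_seconds: float = 2.0):
    """Save one frame every N seconds and hash it, ready for reverse-image search."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            out_path = f"frame_{index:06d}.png"
            cv2.imwrite(out_path, frame)
            with open(out_path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            saved.append((out_path, digest))
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    # "suspect_clip.mp4" is a placeholder for whatever file is under review.
    for path, digest in extract_key_frames("suspect_clip.mp4"):
        print(path, digest[:16])
```

The frames go into a reverse-image search; the hashes are the “receipts,” letting readers confirm which exact images were examined.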
Parties and candidates — pledge, label, verify.
If leaders want trust, they must behave like they want trust. That means:
a no-deepfake pledge with clear internal discipline,
labeling AI-assisted campaign content (even when it’s “just” enhancement),
and maintaining verified libraries of official photos, logos, speeches, and channels (a lightweight starting point is sketched after this list).
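A verified library can start very small. The sketch below, in Python with hypothetical folder and file names, publishes a manifest of SHA-256 hashes for a party’s official assets. It only proves exact, byte-for-byte matches, and platform recompression changes bytes, so treat it as a fast first check rather than full provenance:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(asset_dir: str, out_file: str = "official_assets.json"):
    """Hash every file in an official-asset folder into a public manifest.

    Anyone can later hash a circulating file and check whether it is a
    byte-for-byte copy of an officially published original.
    """
    manifest = {}
    for path in sorted(Path(asset_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(asset_dir))] = digest
    Path(out_file).write_text(json.dumps(manifest, indent=2))
    return manifest

if __name__ == "__main__":
    build_manifest("official_assets")  # placeholder folder of originals
```

Cryptographically signing the manifest, or adopting an emerging provenance standard such as C2PA, would be the stronger long-term move; a public hash list is simply the cheapest credible start.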
The Basnet episode is exactly why. Even if supporters call it “harmless,” it trains voters to assume that campaign visuals are marketing, not reality.
Platforms — Nepal-specific integrity lanes.
Global “community standards” are not enough when disinformation moves in Nepali and spreads through local political networks. Major platforms should build:
local-language rapid escalation,
partnerships with Nepali fact-checkers,
and public reporting on manipulated media takedowns affecting Nepal during the election period.
If platforms can build “war rooms” for major markets, they can build them for fragile democratic moments.
Citizens — the ten-second triage.
Here’s the habit that actually scales:
Pause before forwarding.
Check the account: Is it new? Does it have a real history?
Look for corroboration from credible outlets.
Reverse-search a screenshot (for images/video).
Scrutinize audio: does the voice have odd pacing, robotic smoothness, or mismatched background sound? (A rough screening sketch for technically minded readers follows this list.)
Wait 24 hours before amplifying high-impact claims.
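That audio check can be pushed one step beyond listening. The Python sketch below, assuming the `librosa` audio library and a placeholder file name, is emphatically not a deepfake detector; it only flags moments where loudness jumps sharply between adjacent half-second windows, one of the room-tone tells mentioned above, so a human knows where to listen closely:

```python
import numpy as np
import librosa  # pip install librosa

def flag_room_tone_jumps(audio_path: str, window_s: float = 0.5, ratio: float = 2.0):
    """Print timestamps where loudness changes sharply between adjacent windows."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    hop = max(1, int(sr * window_s))
    rms = librosa.feature.rms(y=y, frame_length=hop, hop_length=hop)[0]
    jumps = np.abs(np.diff(rms)) / (rms[:-1] + 1e-9)  # relative change per window
    for i, jump in enumerate(jumps):
        if jump > ratio:  # arbitrary threshold; tune on known-clean recordings
            print(f"possible splice near {(i + 1) * window_s:.1f}s "
                  f"(loudness changed ~{jump:.1f}x)")

if __name__ == "__main__":
    flag_room_tone_jumps("suspect_voice_note.wav")  # placeholder file name
```

A flagged spike is not proof of manipulation, since speakers move and phones compress audio, but it tells a reviewer exactly where to slow down.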
A million small hesitations do more for election integrity than any single government filter.
From “TikTok” to toolbox
Nepal’s uprising was unmistakably Gen Z: fast, visual, and networked. That same energy can defend the vote if it is redirected from outrage to skill.
Instead of trying to chase down every rumor, flood the zone with verified micro-lessons: how to spot deepfakes, how to check a source, what the ballot process looks like, and where to report violations. Poynter’s MediaWise approach (breaking verification into simple, repeatable behaviors) is a useful template for that kind of “short, teachable” civic content.
Imagine a daily Election Fact-Check Reel: 30–60 seconds, in Nepali, mirrored across platforms, and republished via radio and SMS for low-data users. When attention is short, good information must get smaller, faster, and easier to share.
Policy updates before the polls
Election Commission
Update the 2021 social-media policy to explicitly cover synthetic audio/video and AI-generated imagery.
Align the March election code of conduct with clear synthetic-media definitions and a rapid complaint pathway that matches election speed.
Run a real-time myth-buster feed that media and citizens can cite.
What success looks like
A credible election is still possible: one where a viral fake dies in hours because citizens, creators, newsrooms, platforms, and parties share the same playbook. Nepal has already seen how fragile trust becomes when institutions wobble. It has also seen how fast young people can mobilize when the system fails them.
To honor the dead and protect the living, Nepal needs one simple outcome in March: let truth travel faster than lies.
The author is a PhD student in Machine Learning and Bioinformatics.