In the era of AI and real-time virality, the old adage “seeing is believing” no longer holds. What you see might be a synthetic illusion, generated by lines of code designed to manipulate, mislead, and mobilize. The internet is now at the center of an information war, not just between nations but between truth and technology. From viral lies to global crises, AI is playing an alarming role in the spread of misinformation.
The recent India-Pakistan misinformation storm
Recently, tensions between India and Pakistan escalated following reported military exchanges along the border. But beyond missiles and media briefings, a different kind of weapon was being deployed, one that thrives in the shadows of virality: AI-generated deepfakes and digitally altered content. Below are several notable cases that the Entrackr team closely analyzed:
Example 1: The Phantom fighter jets
A video went viral across social media platforms claiming that Pakistan had admitted to losing two JF-17 fighter jets in the conflict. The footage featured what appeared to be a Pakistani Army spokesperson making the statement. On closer analysis, fact-checkers at BoomLive found it to be a deepfake, with misaligned lip movements and inconsistencies in audio syncing. Fact-checker and Alt News co-founder Mohammed Zubair also flagged the clip as a deepfake in a tweet, pointing out the manipulated audio and mismatched lip-syncing. These timely interventions helped expose the video as a fabricated narrative.
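The mismatch the fact-checkers describe, mouth movement that does not track the audio, is also the intuition behind automated lip-sync checks. The sketch below is a toy illustration only, not how BoomLive or any real detector works: production systems (e.g. SyncNet-style models) learn audio-visual embeddings, whereas here we simply correlate a per-frame “mouth openness” series (assumed to come from an upstream face tracker) with per-frame audio energy. Genuine footage tends to correlate; an overlaid synthetic voice often does not.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def sync_score(mouth_openness, audio_energy):
    """Higher score = visual mouth movement tracks audio loudness."""
    return pearson(mouth_openness, audio_energy)

# Toy per-frame data: an in-sync clip vs. a clip with overlaid audio.
mouth          = [0.1, 0.8, 0.9, 0.2, 0.1, 0.7, 0.8, 0.1]
audio_synced   = [0.2, 0.7, 0.9, 0.3, 0.2, 0.6, 0.9, 0.2]
audio_overlaid = [0.9, 0.1, 0.2, 0.8, 0.9, 0.2, 0.1, 0.8]

print(sync_score(mouth, audio_synced))    # strongly positive: consistent
print(sync_score(mouth, audio_overlaid))  # negative: flag for review
```

A real pipeline would extract these signals from video frames and the audio track, and would treat a low score as a signal for human review, not as proof of fakery.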
Example 2: Chinese official’s alleged anti-India comment
Another widely circulated clip showed a Chinese government official allegedly stating that “China is more committed to Pakistan than India.” The video, picked up from Reddit and amplified on social media, raised red flags. Its authenticity remains in question, but AI experts suspect a manipulated audio overlay, part of a new trend in which real footage is used as a base for synthetic voice insertion.
There have been numerous instances where misinformation has spread at scale, leaving behind long-lasting perceptions and distrust between India and Pakistan, even after the original claims were debunked.
While speaking to Entrackr, a senior mainstream journalist said: “In an era where misinformation spreads six times faster than the truth, the human brain is already at a disadvantage. Cognitive biases, especially our tendency to believe content that aligns with our emotions or pre-existing beliefs, make us highly vulnerable to deepfakes and AI-generated propaganda. Even when false claims are debunked, the damage is rarely undone, due to the continued influence effect. In this hyper-viral ecosystem, the internet favors outrage over accuracy, leaving slow, factual journalism gasping for attention. Against this backdrop, government regulation of AI and synthetic media is no longer optional; it is a national security imperative and a civil stability safeguard.”
Media’s TRP trap: When verification loses to virality
The most troubling aspect was that these videos were not merely viral on Telegram groups or fringe Twitter handles. Reputed Indian mainstream media houses, from TV news to digital outlets, aired or quoted them without proper verification, caught in the race for exclusivity and engagement.
Channels that once prided themselves on credibility fell for video game footage repurposed as “war zone visuals” or AI-generated soldier testimonials, inadvertently becoming tools of misinformation themselves. “Media has become the first victim and the biggest amplifier of synthetic lies,” says a senior digital forensics analyst at a fact-checking NGO.
Deepfakes and synthetic media aren’t inherently evil; they power entertainment, dubbing, accessibility tools, and virtual influencers. But in the wrong hands, they become digital weapons capable of destroying reputations, inciting communal violence, triggering diplomatic breakdowns and creating alternative realities that drown out factual journalism.
The collapse of trust in the Internet age
The result? A trust deficit in the very mediums meant to inform us. If a respected anchor can be deepfaked to say something inflammatory, or a war update can be faked with cinematic realism, what remains real? The age of internet misinformation has entered a new phase, one where artificial intelligence doesn’t just report the news, but rewrites it. If the media, governments, and citizens don’t act now, the line between truth and fiction will be permanently blurred.
According to a senior executive at a leading fact-checking website: “We are now dealing with misinformation that doesn’t just mislead; it fabricates entire people and moments. Media organizations need internal AI-detection workflows before publishing sensitive footage, and TRP should never trump truth; exclusivity without verification is complicity.” Of course, we understand that this is easier said than done.
The need for government regulation, law and accountability
The rise of synthetic media demands an urgent and coordinated response from governments, technology platforms, and media institutions. India must move swiftly to introduce robust legislation that criminalizes malicious deepfakes and mandates the clear disclosure of AI-generated content. Without legal deterrents, digital impersonation will continue to thrive unchecked. “What we’re witnessing with deepfakes during the India-Pakistan conflict isn’t just a security concern — it’s a legal vacuum. India still lacks clear legislation around synthetic media, and without accountability, real harm will continue unchecked. This isn’t a future threat — it demands urgent legal action now,” said Vasundhara Shankar, Managing Partner at Verum Legal.
With the legal vacuum around synthetic media unlikely to be addressed anytime soon, social media platforms must take greater responsibility by proactively flagging, demoting, and removing fake content—especially during times of geopolitical tension. Collaboration with independent fact-checkers and the development of real-time detection systems must become core components of platform accountability in the AI era, not optional add-ons.
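One primitive building block of the detection systems described above is matching uploads against a database of media that fact-checkers have already debunked, so a known fake cannot simply be re-uploaded. The sketch below is a simplified illustration, not any platform’s actual implementation: real systems use perceptual hashes that survive re-encoding and cropping, while exact SHA-256 is used here only because it needs no third-party library.

```python
import hashlib

# Registry of hashes of media already debunked by fact-checkers.
DEBUNKED_HASHES: set[str] = set()

def register_debunked(media_bytes: bytes) -> str:
    """Record a debunked clip so future uploads of it can be caught."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    DEBUNKED_HASHES.add(digest)
    return digest

def moderate(media_bytes: bytes) -> str:
    """Return an action for an upload: 'remove' if it matches a known fake."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return "remove" if digest in DEBUNKED_HASHES else "allow"

# A debunked clip (placeholder bytes) gets registered once…
fake_clip = b"\x00placeholder-bytes-of-debunked-clip"
register_debunked(fake_clip)

# …and subsequent re-uploads of the same bytes are caught.
print(moderate(fake_clip))            # -> remove
print(moderate(b"genuine-footage"))   # -> allow
```

Exact-match hashing breaks as soon as a clip is re-compressed, which is why platforms layer perceptual hashing and ML classifiers on top of lookups like this; the point here is only the workflow, not the matching algorithm.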