In the video, a cloned voice resembling Shashi Tharoor's appears to praise Pakistan's diplomatic move after it withdrew its threat of a boycott.

Shashi Tharoor dismissed the video as fake. (Image source: X/Shashi Tharoor)
Why AI-generated content poses serious risks
Deepfakes can be used to attribute false statements to political leaders, celebrities, or ordinary individuals, potentially damaging their reputations.
Cybersecurity is another concern: AI voice cloning can be used to impersonate people, enabling financial fraud or identity theft.
How to recognize images generated by artificial intelligence
One of the most common signs is anatomical inconsistency. Look for extra fingers, fused hands, or misshapen faces; AI-generated content often contains these subtle flaws.
Another sign is background distortion: buildings may appear asymmetrical, objects may blend into each other, or parts of the image may look melted or warped. Lighting and shadows may not align with the visible light sources. Text within AI-generated images is often mistyped, jumbled, or unreadable.
How to recognize videos generated by artificial intelligence
Facial expressions may look unnatural in AI-generated videos. Lip sync is another key indicator: the audio often fails to match the mouth movements exactly. Background elements may also appear distorted.
Best AI image and video detection tools
Hive Moderation provides a probability score that indicates whether an image or video was created using AI tools.
SightEngine offers deepfake detection and image management services.
Illuminarty helps identify AI-generated visuals and manipulated text.
Deepfake detection tools and private videos
Sensity AI is known for identifying manipulated faces and edited videos. Meanwhile, Reality Defender scans video, audio, and images in real time, and is often used for AI-based fraud prevention.
Deepware Scanner is designed to scan videos for deepfake manipulation. Microsoft Video Authenticator provides frame-level probability scores to highlight exactly where a video may have been edited.
Forensic and advanced analysis tools
For a deeper investigation, use FotoForensics, which applies Error Level Analysis (ELA) to detect manipulated regions in images. Forensically Beta offers advanced features such as clone detection.
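The core idea behind ELA can be reproduced in a few lines: re-save a JPEG at a known quality and look at where the image differs from its re-saved copy, since regions pasted in or edited after the last save tend to compress differently. This is a simplified sketch using the Pillow library, not FotoForensics' actual implementation, and the function name is illustrative.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG at a fixed quality and diff it against
    the original. Edited regions often show a different error level
    than the rest of the image in the amplified residual."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Amplify the usually faint residual so it is visible to the eye
    max_diff = max(hi for lo, hi in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda p: min(255, int(p * scale)))
```

In the returned image, uniformly dark areas compressed consistently, while bright or blotchy regions compressed differently and may warrant a closer look. ELA is a heuristic, not proof of manipulation: heavily re-compressed or screenshot images can produce misleading results.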
Do's when sharing content
It’s important to clearly label AI-generated material when sharing it further. Manually review and verify AI-generated content before passing it on, and always check the source of the content.
Check if the video or photo promotes stereotypes or harmful discrimination.
Don'ts when sharing content
Never share deepfakes that show real individuals making false claims, as in the Tharoor case; sharing such harmful content may result in legal action.
Never share sensitive information such as financial data, medical records, or trade secrets using AI tools. Finally, if platforms mark content as AI-generated, users should not attempt to remove or bypass these warnings.
Delhi, India
13 February 2026 at 13:53 IST