As deepfakes become increasingly easy to create, YouTube has made minor adjustments to simplify the process of reporting videos that use someone's likeness without permission.
Users can now submit a request to YouTube to take down deepfakes through the privacy request process. The company will review whether the content is a parody or satire and whether the person requesting the takedown is a real person or a bot. Previously, such impersonations could only be reported as misleading content.
This change signals that YouTube now views deepfakes primarily as a privacy issue rather than just a content moderation challenge. As AI tools become more widely used, the potential for their misuse grows with them. On November 15 of last year, ET reported that YouTube had introduced several updates to strengthen its approach to responsible artificial intelligence innovation.
ETtech reported on April 25 that over 75% of Indians surveyed online by cybersecurity firm McAfee had encountered various types of deepfake content in the past year, with at least 38% experiencing a deepfake scam during this period.
In India, the discussion on deepfakes gained momentum after a deepfake clip of actor Rashmika Mandanna went viral last year. A week later, Prime Minister Narendra Modi highlighted the potential harms arising from the misuse of the technology.
“This issue is magnified in India, as many people unknowingly forward deepfake content on social media, mainly WhatsApp and Telegram groups, without verifying its origin, causing a multiplier effect,” McAfee said.