The Indian government has issued a stern warning to social media giants Facebook and YouTube, urging them to take stricter measures to combat the spread of deepfakes on their platforms. Deepfakes are manipulated videos or audio recordings that use artificial intelligence to make it appear as if someone is saying or doing something they never did.
The government’s concerns stem from the potential harm that deepfakes can cause, including inciting violence, spreading misinformation, and damaging reputations. In a closed-door meeting with representatives of Facebook and YouTube, India’s Deputy Information Technology Minister Rajeev Chandrasekhar emphasized the need for these platforms to proactively enforce their existing rules against deepfakes.
Chandrasekhar reportedly expressed dissatisfaction with the current level of compliance from social media companies, stating that many have not updated their usage terms to reflect the 2022 regulations prohibiting content that is “harmful” to children, obscene, or impersonates another person.
To address these concerns, Chandrasekhar has proposed two potential solutions:
- Login Reminders: Social media platforms should remind users at every login that posting deepfakes and other prohibited content is strictly against the rules.
- Periodic Notices: Social media platforms can periodically issue notices to users reiterating the prohibition on deepfakes and other harmful content.
If social media companies fail to take adequate action, Chandrasekhar has indicated that the government may issue binding directions mandating compliance.
The Indian government’s stance on deepfakes aligns with growing global concerns about the potential misuse of this technology. Deepfakes have the ability to manipulate public perception, sow discord, and undermine trust in institutions. As deepfake technology continues to advance, it is crucial for governments and social media platforms to work together to establish effective safeguards against its harmful applications.