TikTok ban: Apps featuring user-generated content must be regulated to protect children
The Madras HC recently passed an interim order banning TikTok, a Chinese app used to create and share short videos, in a PIL alleging that the app promoted the sharing of inappropriate and pornographic content. The court is considering whether the Central Government should enact a law similar to the Children's Online Privacy Protection Act in the US. While the official court order is not yet publicly available, the interim order directs the government to prohibit downloads of the TikTok app and the television telecast of TikTok videos.
The Madras HC order has been challenged in the Supreme Court.
This is not the first time that a ban on TikTok has been debated in India. In February 2019, the TN state government hinted that it would urge the Centre to ban TikTok over concerns related to obscenity and law and order; the incidents cited include morphed images of women downloaded from TikTok being used to lure customers in a flesh trade racket, and the suicide of a youth who had been bullied on social media over his TikTok videos. Around the same time, the Swadeshi Jagran Manch (SJM), the economic wing of the RSS, wrote to Prime Minister Modi seeking a ban on Chinese apps like TikTok, citing data security concerns.
In February 2019, a UK-based charity reported that children as young as five were being groomed by paedophiles on TikTok; the predators were able to establish contact with children using the live comments feature on the app. A BBC investigation found that TikTok had failed to take stringent action against child predators on its platform. One of the biggest concerns is that apps like TikTok are unsafe for children: they violate children's privacy, make it easier for sexual predators to contact children, and promote the distribution of pornographic content. This raises the question of whether there is a need to regulate user-generated video content apps like TikTok.
TikTok in India
TikTok has gained popularity among preteens and adolescents especially in Tier II and Tier III cities in India.
In February this year, it was reported that TikTok did not have a grievance redressal officer in India; presently, however, the TikTok website does list contact details for a grievance officer in India. In March, TikTok announced a new initiative to protect Indian users against cyberbullying, namely a user-defined filter to block obscene comments on the app.
Measures taken by TikTok to promote online safety include a moderation team covering major Indian regional languages, including Tamil, Hindi and Bengali. Further, the Digital Wellbeing feature on TikTok allows users to manage the time they spend on the app and to limit the appearance of content which is not appropriate for all audiences. Under TikTok's General Terms for India Resident Users, the minimum age for TikTok users is 13 years.
Existing law on liability of internet intermediaries in India
Under section 79 of the IT Act, an internet intermediary is not liable for third-party content hosted on its platform. Internet intermediaries are broadly defined under the IT Act, and the definition would cover social media platforms like TikTok. Under section 79, an intermediary can avail of safe harbour against liability for the acts of third parties if it exercises due diligence in accordance with the IT Rules and does not play an active role in the information transmitted by third parties (i.e. the intermediary does not initiate the transmission, modify the information, or select its receiver).
In the case of cyber-crimes committed through social media, internet companies often take the defence that they are merely platforms and are therefore protected by the safe harbour provisions of the IT Act. However, in light of the fake news, hate speech and disturbing content that is often circulated through these platforms, it is becoming increasingly difficult to accept the argument that internet platforms bear minimal responsibility for user-generated content. It is also debatable whether TikTok plays only a passive role with respect to content, since its key features include using AI to determine users' interests and preferences and displaying a personalised content feed accordingly; one could argue that this amounts to TikTok selecting the recipient of information transmitted on its platform.
Ban on TikTok in other jurisdictions
As recently as February 2019, the Bangladeshi government banned TikTok along with close to 20,000 websites as part of its anti-pornography efforts. The pornography ban in Bangladesh follows a November 2018 ruling of the High Court Division of the Supreme Court of Bangladesh, wherein the court ordered the government to ban all pornography websites for six months (and to consider a permanent ban). Interestingly, the court also referred to the use of the National Identity Card (NID) number by social media platforms (including Facebook) to verify users' age. In Bangladesh, the NID functions as a voter ID card and is mandatory for opening a bank account or getting a new mobile connection.
Indonesia banned TikTok in July 2018 for hosting content which was considered blasphemous and pornographic. The Indonesian government lifted the ban only after TikTok agreed to establish a censorship team to monitor negative content and open a liaison office in Indonesia. TikTok also promised to raise the minimum age requirement for users from 12 to 16 years.
Cases involving breach of privacy of children by TikTok
In February 2019, TikTok paid the US Federal Trade Commission (FTC) $5.7 million to settle a case involving claims that the app had collected personal data of children under the age of 13 in violation of the Children's Online Privacy Protection Act (COPPA). Under section 312.5 of COPPA, an operator of general audience websites or online services must obtain "verifiable parental consent" before the collection, use or disclosure of personal information from children (defined as individuals under the age of 13). To register for the app (then known as Musical.ly), users were required to provide a username, first and last names, a phone number, an email address, a short bio and a profile picture. Musical.ly received thousands of complaints from parents that their children had created accounts on the app without parental consent.
While Musical.ly would respond to such complaints by deleting the accounts, the app retained the users' videos and profile information on its servers. It was also reported that adults were attempting to contact children via the app. Apart from paying the fine, TikTok was also required to take down all videos made by under-age children. In the aftermath of the FTC fine, TikTok sought to verify that its users were above the age of 13 by asking users to submit copies of government ID proofs.
In Australia, it was reported that a man posing as Justin Bieber had asked an 8-year-old girl on Musical.ly to send him nude photos.
In May 2018, a Post investigation in Hong Kong revealed that children as young as nine had exposed their identities on TikTok, raising concerns that this placed them at risk of harassment.
Possible measures which TikTok could employ
The Indian government is strengthening the legal framework for holding internet intermediaries liable. MeitY recently issued the draft IT (Intermediary Guidelines) Rules, 2018 to replace the existing intermediary guidelines. Some of the proposals include local incorporation requirements for internet intermediaries with more than 50 lakh users in India and the use of automated tools by intermediaries to proactively monitor unlawful content on their platforms.
In the meantime, TikTok can take measures to make its platform more child-friendly. First, taking a cue from YouTube, TikTok could disable comments on videos featuring minors; this would prevent predators from contacting children through the app's comments feature. Second, TikTok could develop its facial recognition technology (which currently lets users add filters or funny features to their photos) to verify that its users are not under-age.
It is important that internet intermediaries take on greater responsibility voluntarily to ensure that the internet remains a healthy space for people to exchange information and ideas.
Devika Agarwal is a Policy Analyst at Nasscom. The views expressed are personal and should not be attributed in any way to Nasscom or its members.