TikTok removed more than 580,000 videos in Kenya between July and September 2025 for violating its content policies, according to the company’s latest transparency report.
The figures highlight the scale of content moderation on one of Kenya’s most widely used social media platforms, as debate intensifies over online safety, consent, and covert recording.
TikTok said 99.7% of the videos taken down in Kenya during the quarter were removed proactively before users reported them, while 94.6% were removed within 24 hours of being posted. The company also interrupted roughly 90,000 livestreams in the country, about 1% of all live sessions, for breaching its rules.
Globally, TikTok removed 204.5 million videos during the same period, representing about 0.7% of all uploads. Of those, 99.3% were removed before being flagged by users, and nearly 95% were taken down within a day. Automated systems accounted for 91% of global removals, the company said.
TikTok also deleted more than 118 million fake accounts and over 22 million accounts suspected to belong to users under the age of 13.
The report was released days after public outrage erupted in Kenya over allegations that a foreign content creator secretly recorded women and uploaded the clips to social media platforms, including TikTok and YouTube. The incident reignited concerns over how quickly platforms detect exploitative content and whether moderation tools can keep pace with emerging technologies.
Online speculation suggested that smart glasses may have been used in the alleged recordings, although no official confirmation has been provided. Manufacturers such as Meta say their smart glasses include visible recording indicators and prohibit privacy violations under platform policies. Privacy advocates, however, argue that public awareness of such safeguards remains limited.
Kenyan lawyer Mike Ololokwe said that consent to interact with someone in public does not amount to consent to be recorded or to have footage published online.
“Digital platforms need to treat hidden recording as a serious rights violation and policy breach, because harm spreads long after posting,” he said.
TikTok said it continues to combine automated detection tools with human moderators to address harmful content, including harassment and misinformation, and has expanded in-app wellbeing tools aimed at helping users, particularly teenagers, manage screen time and digital habits.