NextFin News: TikTok confirmed on August 22, 2025, that it will lay off approximately 300 staff members working in content moderation and trust and safety roles at its London office in the United Kingdom. The move is part of a global restructuring aimed at shifting toward artificial intelligence (AI) to assess and moderate problematic content more efficiently.
The company stated that it is concentrating its trust and safety operations in fewer locations worldwide to maximize effectiveness and speed, leveraging technological advances such as large language models. Affected employees will be able to apply for other internal roles and will receive priority consideration if they meet the minimum job requirements.
The layoffs coincide with the recent enforcement of the UK Online Safety Act, which came into force in July 2025. This legislation requires social media platforms to implement stricter controls on harmful content and user age verification, with potential fines of up to £18 million or 10% of global turnover for non-compliance.
TikTok has introduced new parental controls and age assurance technologies, including AI-based systems that infer user age from behavior and social interactions, although these systems have not yet received formal approval from the UK regulator Ofcom.
The Communication Workers Union (CWU), which represents TikTok staff, criticized the timing of the layoffs, announced just days before a planned union recognition vote. The CWU's National Officer for Tech, John Chadfield, argued that the AI systems are immature and that the cuts prioritize corporate interests over worker and public safety.
TikTok reported that over 85% of videos removed for violating its community guidelines are flagged by automated tools, and that 99% of problematic content is removed proactively, before any user reports it. The company also said that AI has reduced human moderators' exposure to distressing content by 60%.
Similar workforce reductions are planned in South and Southeast Asia as part of TikTok's global strategy to automate content moderation. The company currently employs over 2,500 people in the UK and plans to open a new office in central London next year.
This restructuring reflects broader industry trends toward automation in content moderation, driven by regulatory pressures and cost-efficiency goals. However, it raises ongoing debates about the balance between AI efficiency and the quality and accountability of content governance.
Explore more exclusive insights at nextfin.ai.