NextFin News - On December 12, 2025, the University of Cambridge unveiled the Cambridge Online Trust and Safety Index (COTSI), a pioneering platform tracking real-time prices for fake account verification across more than 500 digital platforms, including TikTok, Instagram, Amazon, Spotify, and Uber. The study underlying COTSI maps global costs for phone number verification tied to fake accounts, finding prices as low as $0.08 in Russia and $0.10 in the United Kingdom, with slightly higher costs in the United States and much steeper costs in countries with strict SIM regulations, such as Japan (€4.25). Researchers identify these low-cost verifications as enablers for building vast bot armies that simulate genuine users to manipulate online public discourse, propagate scams, amplify products, and execute coordinated political influence operations.
The study reveals that this cheap and accessible market fuels misinformation in a thriving underground economy. Vendors manage extensive inventories of SIM cards and millions of pre-verified accounts, with some offering bundled services that inflate likes, comments, and follower counts, or generate politically charged content on demand. The rise of generative artificial intelligence enhances bots' ability to mimic human behaviors and tailor messages contextually, making them increasingly persuasive and difficult to detect.
Political influence campaigns drive temporal surges in demand, particularly on messaging apps like WhatsApp and Telegram before national elections, with prices for verified fake accounts spiking 12-15% in the 30 days preceding polls. Such real-time correlations strongly suggest weaponization of these fake accounts for election interference. Platforms where account verification is cheaper, like Facebook and Instagram, see less localized pricing because accounts verified cheaply in one country can target audiences globally. Notably, the study documents significant ties to Russian and Chinese payment and SIM card systems, with linguistic cues pointing to Russian actors operating many suppliers.
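The reported 12-15% pre-election spike is the kind of signal COTSI-style price data can surface. As a minimal illustrative sketch (the function name, thresholds, and daily price series below are hypothetical, not drawn from the study), one could compare the mean verification price in the 30 days before an election against the earlier baseline:

```python
from datetime import date, timedelta

def pre_election_spike(prices: dict, election_day: date, window: int = 30) -> float:
    """Percent change of the mean price in the `window` days before
    `election_day` versus the mean price over all earlier days."""
    cutoff = election_day - timedelta(days=window)
    pre = [p for d, p in prices.items() if cutoff <= d < election_day]
    baseline = [p for d, p in prices.items() if d < cutoff]
    if not pre or not baseline:
        raise ValueError("not enough data on either side of the window")
    base_mean = sum(baseline) / len(baseline)
    return 100.0 * (sum(pre) / len(pre) - base_mean) / base_mean

# Hypothetical daily prices: a flat $0.10 baseline, then a jump in the final 30 days.
election = date(2025, 11, 4)
series = {election - timedelta(days=i): (0.113 if i <= 30 else 0.10)
          for i in range(1, 91)}
print(round(pre_election_spike(series, election), 1))  # → 13.0
```

In practice a real analysis would control for platform, country, and seasonal trends rather than a single flat baseline, but the core comparison is this simple.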
This research is contextualized within a shifting social media landscape. Major platforms have reduced content moderation while adopting engagement-driven monetization models that may incentivize fake interactions. Concomitantly, governments like the UK have acted to outlaw SIM farms, with COTSI set to measure the effectiveness of such regulatory interventions.
The underlying causes of this phenomenon stem from the convergence of inexpensive telecommunications verification, insufficient identity authentication on platforms, and sophisticated AI technologies automating fake content creation. These factors collectively lower barriers for malign actors seeking to influence public opinion and election outcomes, further aggravated by existing geopolitical tensions and information warfare tactics.
The impact is multifaceted. The proliferation of fake accounts undermines the authenticity of digital discourse, distorts democratic processes, and erodes public trust in online platforms and institutional actors. Election interference risks delegitimizing democratic outcomes and increasing political polarization. Economically, the market creates a shadow ecosystem where criminal and state-sponsored entities profit from misinformation campaigns.
Looking forward, the escalating sophistication of AI-driven bots portends growing challenges for detection and prevention. Without robust identity verification frameworks, especially around SIM registrations and multilayered authentication protocols, platforms remain vulnerable. Policymakers face difficult trade-offs, balancing privacy rights with security imperatives. Future regulatory frameworks might include mandatory SIM card registration linked to verified identities, stronger platform accountability for user verification, and international collaboration to dismantle cross-border disinformation networks.
Technological innovation also offers opportunities for mitigation. AI-based detection algorithms, digital provenance tracking, and user behavioral analytics can identify and curtail bot networks. Simultaneously, transparency tools such as COTSI provide critical data insights enabling evidence-based policy decisions and public awareness. Election commissions and civil society groups will increasingly rely on such digital forensic capabilities to safeguard electoral integrity.
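One concrete behavioral-analytics signal of the kind alluded to above is timing regularity: automated accounts often post at near-constant intervals, while human activity tends to come in irregular bursts. A toy sketch (the function, threshold, and timestamps are all hypothetical, not from the study) that flags accounts whose inter-post gaps are suspiciously uniform:

```python
import statistics

def looks_automated(post_times: list, cv_threshold: float = 0.2) -> bool:
    """Flag an account whose inter-post intervals have a low coefficient of
    variation (stdev / mean), i.e. machine-like regularity in posting times."""
    if len(post_times) < 3:
        return False  # too little activity to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return False
    return statistics.pstdev(gaps) / mean_gap < cv_threshold

# Hypothetical timestamps (seconds): a metronomic bot vs. a bursty human.
bot = [i * 600.0 for i in range(20)]                # exactly every 10 minutes
human = [0, 40, 95, 3600, 3700, 9000, 9050, 20000]  # irregular bursts
print(looks_automated(bot), looks_automated(human))  # → True False
```

Production detection systems combine many such features (content similarity, network structure, device fingerprints) and are adversarially evaded in turn, which is why the article stresses layered defenses rather than any single signal.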
In conclusion, the Cambridge study elucidates how cheap fake account verification is a critical enabler of global disinformation and election interference. With U.S. President Donald Trump in office amid a politically charged environment, addressing this complex, evolving threat is all the more urgent. Coordinated efforts combining regulation, technological innovation, and international cooperation are essential to stem the tide of online manipulation and protect foundational democratic processes worldwide.
Explore more exclusive insights at nextfin.ai.