NextFin

Meta Allegedly Profited Billions from Scam Ads to Finance AI Advancements

NextFin News: Meta Platforms, the social media giant headquartered in Menlo Park, California, has come under intense scrutiny following the leak of internal documents in November 2025. These documents indicate that during calendar year 2024, an estimated 10% of Meta's total revenue, approximately $16 billion, was generated from advertisements identified as scams or promoting banned goods. The disclosure follows a Reuters investigation that exposed the breadth and depth of fraudulent ad campaigns on Facebook, Instagram, and WhatsApp, all part of Meta's ecosystem.

The internal reports span 2021 to 2025 and show that Meta strategically protected fraud-related revenues because of their significant contribution to the firm's resources, notably its ambitious artificial intelligence (AI) research and development programs. With users exposed to an estimated 15 billion "high-risk" scam ads per day across its platforms, the documents show Meta allowed "high-value" scam accounts to persist even after accruing more than 500 policy strikes, effectively monetizing these fraudulent actors by charging them premium ad rates. Meta's behavioral ad-targeting algorithms, designed to maximize engagement, have also become powerful tools that scammers exploit to precisely target vulnerable users. Examples of deceptive ads include impersonations of celebrities such as Elon Musk and President Donald Trump, as well as promotions of fake medical products, illegal online casinos, and unlawful investment schemes.

Meta's spokesperson, Andy Stone, publicly disputed the implication that the company knowingly profited from scam ads to the extent alleged. Stone described the leaked documents as presenting a "selective view" that misrepresents Meta's overall approach, and declined to confirm the exact revenue earned from scam ads, saying only that the figure is significantly lower than the 10% estimate.

Nevertheless, insiders have confirmed a reluctance within Meta to aggressively purge these advertisers, citing concerns that a sudden drop in revenue could jeopardize the $72 billion in capital expenditures earmarked for AI advancements. A 2025 internal directive reportedly capped the financial impact of banning questionable ad accounts, constraining enforcement teams to revenue losses of no more than 0.15% of total company revenue, roughly $135 million, thereby institutionalizing a revenue-first approach that indirectly tolerates scam advertising.

The ethical and regulatory ramifications are considerable. Former Meta executives and fraud experts have called for increased transparency and external audits, arguing that traditional regulatory frameworks applicable to financial institutions should extend to digital advertising platforms. With Meta’s platforms accounting for approximately one-third of successful scams in the United States, public trust is eroding amid perceptions of complicity and negligent governance.

Furthermore, experts from Cornell University highlight the systemic flaws in algorithmic ad personalization, which magnify the reach and recurrence of scam advertisements. The financial incentives built into Meta’s ad auction mechanisms perversely reward fraudulent advertisers who generate higher engagement via deceit, allowing the company to collect disproportionately higher advertising fees from bad actors.

This revelation emerges against the backdrop of Meta’s aggressive push into AI and the metaverse, as underscored by its robust Q3 2025 earnings report revealing a 26.2% revenue growth and a user base of 3.54 billion daily active people. Yet, this growth is juxtaposed with escalating capital expenditures and mounting investor concerns about the long-term profitability of AI investments financed, in part, by questionable ad revenues.

Looking ahead, the persistence of scam ads presents a multifaceted challenge. Financially, while the revenue from these ads currently supports Meta’s AI ambitions, it risks inciting stricter regulatory crackdowns, potential fines, and litigation that could curtail future profitability. Strategically, Meta must balance the imperative of protecting users from fraud with the imperative of funding transformative technologies. Failure to adequately disrupt the scam ecosystem could undermine user engagement and brand integrity, providing rivals like Alphabet and TikTok opportunities to capture market share by offering safer advertising environments.

To mitigate these risks, Meta could adopt several forward-looking measures: enhancing transparency by releasing detailed scam ad metrics to external auditors; implementing proactive user notification systems when scam ad exposures are detected; dedicating a portion of ill-gotten revenue to fund consumer education programs; and recalibrating ad-targeting algorithms to reduce susceptibility to exploitation without sacrificing personalization benefits.

In conclusion, the internal revelations about Meta's revenue reliance on scam advertisements underscore a compelling case study in the ethical quandaries of platform monetization. By prioritizing immediate financial gains to bankroll AI innovation, Meta confronts an inflection point that will not only shape its own corporate governance and market standing but also set precedents for accountability in the global digital advertising ecosystem under the current Trump administration. The unfolding situation calls for heightened regulatory oversight, industry-wide reforms, and sustained public scrutiny to realign incentive structures toward safeguarding user trust while embracing technological progress.

Explore more exclusive insights at nextfin.ai.