NextFin news — On November 12, 2025, a class-action lawsuit titled Thiel v. Google (case number 25-cv-09704) was filed in San Jose, California, against Google LLC, accusing the company of unlawfully spying on users through its AI assistant, Gemini. The complaint alleges that Google secretly enabled Gemini by default across its communication platforms—Gmail, Google Chat, and Google Meet—without explicit user consent, allowing the AI to scan and analyze private email content, instant messages, and videoconference data. This alleged covert activation, the suit contends, grants Google unauthorized access to sensitive user communications in violation of the California Invasion of Privacy Act (CIPA), a 1967 statute that prohibits surreptitious wiretapping and recording without all parties' consent.
The lawsuit explains that while users initially had to opt into Gemini features, in October 2025 Google switched Gemini on automatically for all users of these platforms. Although users retain the ability to deactivate Gemini, the opt-out controls are buried deep within Google's complex privacy settings, making them difficult for average users to find. Plaintiffs claim that absent deliberate deactivation, Gemini can access the entire recorded history of messages and attachments in a Gmail account—conduct they say goes beyond typical data collection and amounts to digital wiretapping.
According to the complaint, Google's use of Gemini undermines core privacy protections and exploits the AI assistant's capabilities to harvest data for purposes that likely extend to product improvement and monetization. Google had not responded to requests for comment as of November 12, 2025.
The Gemini AI assistant is a central pillar in Google's artificial intelligence strategy, a domain in which Alphabet Inc., Google’s parent company, is heavily investing — recently announcing a $25 billion bond issuance aimed at expanding AI infrastructure and innovation. However, this lawsuit places Google among the growing list of tech giants confronted with legal and regulatory challenges over AI’s intersection with user privacy and data protection.
The case emerges amidst heightened scrutiny of Google's business practices in the United States and Europe, where similar concerns have led to regulatory probes and fines, especially under frameworks such as the EU Digital Markets Act. Privacy advocates warn that the covert activation of AI tools like Gemini, without clear informed consent, erodes user trust and calls into question industry-wide norms regarding transparency and ethical AI deployment.
The lawsuit reflects a broader tension between technological innovation and privacy rights. The surge of AI integration into everyday digital communication tools stems from industry pressure to enhance user experience and drive monetization through data analytics. Yet this rapid AI rollout often outpaces comprehensive privacy safeguards and user education, creating vulnerabilities that watchdogs and litigants can seize upon.
Recent privacy surveys indicate that over 70% of U.S. internet users want stricter control over how technology firms use their personal data. The complexity of privacy settings remains a significant barrier, however: a 2024 study found that only 15% of users successfully configured app permissions to match their preferences. Against that backdrop, Google's alleged practice of enabling Gemini by default may be seen as leveraging user inertia and interface opacity to maximize data intake.
The impact of the lawsuit could be extensive. Should the court find Google violated the California Invasion of Privacy Act, repercussions might include substantial financial penalties and enforced policy changes that mandate explicit opt-in consent for AI features processing private communications. Furthermore, this case could catalyze regulatory frameworks nationwide aiming to reconcile AI adoption with privacy protections, particularly regarding AI assistants’ passive data collection capabilities.
Financially, litigation risk introduces uncertainty into Alphabet's valuation: provisions for legal costs, together with reputational damage, may weigh on investor sentiment. Conversely, balanced AI governance could enhance long-term user trust and market positioning in a privacy-sensitive environment.
Looking forward, as AI assistants like Gemini become ubiquitous across communication platforms, this lawsuit signals a trend towards more assertive legal and regulatory responses to digital privacy breaches in the AI era. Companies must prioritize transparency, consent mechanisms, and user agency to mitigate privacy risks and comply with evolving legal standards.
In summary, the Gemini lawsuit encapsulates the complex interplay of innovation, data privacy, and regulation in 2025, marking a critical moment in how AI’s surveillance potentials are policed and balanced against fundamental user rights.
According to Law360, with corroborating reports from Sada Elbalad and Ukrainian National News, the case has only just begun but already carries significant implications for Big Tech's AI governance and user privacy worldwide.
Explore more exclusive insights at nextfin.ai.
