NextFin News - On February 2, 2026, Julie Brill, Microsoft’s Chief Privacy Officer and former Commissioner of the U.S. Federal Trade Commission (FTC), addressed a high-profile gathering at the Berkman Klein Center for Internet & Society at Harvard University. The event, held in Cambridge, Massachusetts, focused on the rapidly evolving landscape of technology governance, specifically how data privacy laws and corporate practices must adapt to govern artificial intelligence (AI). Brill, who also serves as Microsoft’s Corporate Vice President for Global Privacy, Safety, and Regulatory Affairs, used the platform to explore the challenges of surveillance, microtargeting, and the shifting public-private balance in tech regulation during a period of unprecedented federal and state legal friction.
The timing of Brill’s discussion is critical. As of early 2026, the United States is navigating a fragmented regulatory environment. According to White & Case LLP, U.S. President Trump signed Executive Order 14365, titled “Ensuring a National Policy Framework for Artificial Intelligence,” on December 11, 2025. This order seeks to establish a minimally burdensome national policy and directs the Attorney General to challenge state AI laws that conflict with federal interests. However, Brill’s remarks highlight the practical reality for multinational corporations: despite federal efforts to preempt state action, a “patchwork” of state laws—including California’s Transparency in Frontier Artificial Intelligence Act and Colorado’s AI Act—is already moving toward enforcement in 2026.
Brill’s analysis centered on the concept that privacy is the foundational architecture for safe AI. She argued that the current debate over “preemption versus state experimentation” often overlooks the technical necessity of data protection in training large language models (LLMs). For a company like Microsoft, which has integrated AI across its productivity suite, the challenge is not merely legal compliance but maintaining “truthful outputs” and preventing the algorithmic bias that the Trump administration has explicitly targeted. The administration’s “Genesis Mission,” launched in late 2025, aims to expand federal scientific datasets, yet Brill cautioned that such expansions must be balanced against the evolving public expectation of data sovereignty.
The impact of this regulatory tug-of-war is most visible in the corporate sector’s compliance costs. Data from MultiState indicates that over 1,000 AI-related bills were introduced in 2025 alone, with 136 becoming law. Brill noted that while the federal AI Litigation Task Force is currently evaluating “onerous” state laws for potential legal challenges, businesses cannot afford to wait for judicial clarity. According to Orrick, at least 21 state AI laws are set to go into effect throughout 2026. This creates a “compliance ceiling” where companies must often adhere to the strictest state standard—typically California’s—to ensure nationwide operational continuity, effectively rendering federal preemption efforts secondary to market realities.
Looking forward, Brill’s insights suggest a trend toward “regulatory diplomacy.” As the Trump administration pushes for a uniform federal standard through the FCC and FTC, industry leaders like Microsoft are increasingly acting as intermediaries between state Attorneys General and federal regulators. Brill emphasized that the future of AI governance will likely be defined by how companies manage “microtargeting” and “surveillance” risks—areas where state AGs have remained bipartisan and aggressive despite the federal shift toward deregulation. The emergence of the “AI Civil Rights Act of 2025” in various state legislatures further complicates this picture, suggesting that the legal battleground is shifting from technical safety to the social impact of automated decision-making.
Ultimately, Brill’s remarks at Harvard underscore a pivotal moment for the tech industry. The transition from 2025’s legislative surge to 2026’s enforcement phase requires a sophisticated dual-track strategy: supporting federal efforts to streamline innovation while maintaining rigorous, state-compliant privacy frameworks. As Brill concluded, the “public-private balance” is no longer just about who writes the rules, but about who can provide the most stable environment for the next generation of AI deployment amidst a period of intense constitutional and regulatory volatility.
Explore more exclusive insights at nextfin.ai.
