NextFin

Algorithmic Governance: The Trump Administration’s Shift Toward AI-Generated Transportation Regulations

NextFin News - The U.S. Department of Transportation (DOT) has launched a controversial initiative to use artificial intelligence, specifically Google Gemini, to draft federal transportation regulations. According to internal agency records and interviews with staff reported by ProPublica on January 26, 2026, the plan aims to revolutionize the traditionally slow rulemaking process. Gregory Zerzan, the agency’s general counsel, reportedly told staff that U.S. President Trump is "very excited" about the initiative, positioning the DOT as the "point of the spear" for a broader federal transition toward AI-driven governance. The primary objective is to cut the time required to produce a complete regulatory draft from months or years to just 30 days; Zerzan noted that an initial draft could be generated in as little as 20 minutes.

The implementation of this technology is already underway. According to ProPublica, the DOT has used AI to draft a still-unpublished Federal Aviation Administration (FAA) rule. During a demonstration in December 2025, agency officials showcased how Gemini could handle 80% to 90% of the workload involved in writing a Notice of Proposed Rulemaking. The push for efficiency has met significant internal resistance, however. Zerzan’s reported comment that the agency does not need "perfect" or even "very good" rules, but rather those that are "good enough," has alarmed career civil servants responsible for the safety of the nation’s airspace, pipelines, and rail networks. Critics within the agency argue that outsourcing the drafting of life-critical safety standards to a technology prone to "hallucinations" (the generation of false or nonsensical information) is a dangerous gamble with public safety.

This shift toward algorithmic rulemaking is a logical extension of the administration’s broader deregulation and efficiency agenda. Since President Trump returned to office in 2025, the administration has issued multiple executive orders aimed at removing barriers to AI leadership and accelerating its use across the federal government. The trend is heavily influenced by the Department of Government Efficiency (DOGE), led by Elon Musk, which has advocated using AI to eliminate half of all federal regulations. By automating the drafting process, the administration seeks to bypass what Justin Ubert, a cybersecurity official at the Federal Transit Administration, described as the human "choke point" in bureaucracy. The transition is occurring against the backdrop of a shrinking federal workforce: DOT data shows a net loss of nearly 4,000 employees since 2025, including more than 100 attorneys, creating a vacuum that AI is now being asked to fill.

From a technical and legal perspective, the use of Large Language Models (LLMs) for regulatory drafting presents profound risks. Rulemaking is not merely an exercise in "word salad," as some proponents suggest, but a rigorous legal process that must be based on reasoned decision-making and a deep understanding of existing statutes and case law. According to Bridget Dooling, a professor at Ohio State University, the mere production of words does not equate to high-quality government decisions. If the DOT cedes too much responsibility to AI, it may produce regulations that are legally vulnerable to challenges under the Administrative Procedure Act, which requires agencies to provide a rational connection between the facts found and the choices made. Furthermore, the "good enough" philosophy ignores the high-stakes nature of transportation safety, where a single technical oversight in a rail or aviation rule can result in mass casualties.

Looking ahead, the DOT’s experiment serves as a bellwether for the future of the American administrative state. If the administration successfully normalizes AI-generated regulations in transportation, other agencies—from the Environmental Protection Agency to the Securities and Exchange Commission—are likely to follow suit. This could lead to a "flooding the zone" effect, where the sheer volume of new, AI-drafted rules overwhelms the capacity of public interest groups and the judiciary to review them. While proponents argue this will spur innovation by clearing regulatory backlogs, the long-term impact may be a degradation of regulatory clarity and a shift in power from human subject-matter experts to the tech companies providing the underlying AI models. As the DOT moves forward, the tension between the speed of the "AI culture" and the precision required for public safety will remain a central conflict of the current administration’s tenure.
