NextFin

Algorithmic Governance: The Trump Administration’s Shift Toward AI-Generated Transportation Regulations

Summarized by NextFin AI
  • The U.S. Department of Transportation (DOT) has initiated a plan to use AI, specifically Google Gemini, to draft federal transportation regulations, aiming to cut drafting time from months or years to just 30 days.
  • The plan faces internal resistance at the DOT, where critics warn that using AI to draft life-critical regulations risks introducing false information ("hallucinations").
  • This shift aligns with the broader deregulation agenda of the Trump administration, which seeks to streamline processes by leveraging AI, despite potential legal vulnerabilities in the resulting regulations.
  • If successful, the DOT's approach could lead to widespread adoption of AI-generated regulations across various federal agencies, raising concerns about regulatory clarity and public safety.

NextFin News - The U.S. Department of Transportation (DOT) has launched a controversial initiative to utilize artificial intelligence, specifically Google Gemini, to draft federal transportation regulations. According to internal agency records and interviews with staff reported by ProPublica on January 26, 2026, the plan aims to revolutionize the traditionally slow rulemaking process. Gregory Zerzan, the agency’s general counsel, reportedly informed staff that U.S. President Trump is "very excited" about the initiative, positioning the DOT as the "point of the spear" for a broader federal transition toward AI-driven governance. The primary objective is to reduce the time required to produce a complete regulatory draft from months or years to just 30 days, with Zerzan noting that a draft could be generated in as little as 20 minutes.

The implementation of this technology is already underway. According to ProPublica, the DOT has already used AI to draft a still-unpublished Federal Aviation Administration (FAA) rule. During a demonstration in December 2025, agency officials showcased how Gemini could handle 80% to 90% of the workload involved in writing a Notice of Proposed Rulemaking. However, the push for efficiency has met significant internal resistance. Zerzan’s reported comment that the agency does not need "perfect" or even "very good" rules, but rather those that are "good enough," has alarmed career civil servants responsible for the safety of the nation’s airspace, pipelines, and rail networks. Critics within the agency argue that outsourcing the drafting of life-critical safety standards to a technology prone to "hallucinations"—the generation of false or nonsensical information—is a dangerous gamble with public safety.

This shift toward algorithmic rulemaking is a logical extension of the administration’s broader deregulation and efficiency agenda. Since U.S. President Trump returned to office in 2025, the administration has issued multiple executive orders aimed at removing barriers to AI leadership and accelerating its use across the federal government. This trend is heavily influenced by the Department of Government Efficiency (DOGE), led by Elon Musk, which has advocated for using AI to eliminate half of all federal regulations. By automating the drafting process, the administration seeks to bypass what Justin Ubert, a cybersecurity official at the Federal Transit Administration, described as the human "choke point" in bureaucracy. This transition is occurring against the backdrop of a shrinking federal workforce; DOT data shows a net loss of nearly 4,000 employees since 2025, including over 100 attorneys, creating a vacuum that AI is now being asked to fill.

From a technical and legal perspective, the use of Large Language Models (LLMs) for regulatory drafting presents profound risks. Rulemaking is not merely an exercise in "word salad," as some proponents suggest, but a rigorous legal process that must be based on reasoned decision-making and a deep understanding of existing statutes and case law. According to Bridget Dooling, a professor at Ohio State University, the mere production of words does not equate to high-quality government decisions. If the DOT cedes too much responsibility to AI, it may produce regulations that are legally vulnerable to challenges under the Administrative Procedure Act, which requires agencies to provide a rational connection between the facts found and the choices made. Furthermore, the "good enough" philosophy ignores the high-stakes nature of transportation safety, where a single technical oversight in a rail or aviation rule can result in mass casualties.
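The hallucination risk is concrete: an LLM can confidently cite a regulation that does not exist. A minimal, purely illustrative sketch of one mitigation is to screen a draft's CFR citations against a verified index before human review; the regex, the sample text, and the toy index below are assumptions for illustration, not any actual DOT tooling.

```python
import re

# Matches citations like "14 CFR 25.1309" or "49 C.F.R. § 192.619".
# This pattern is an illustrative simplification of real citation formats.
CFR_CITATION = re.compile(r"\b(\d+)\s+C\.?F\.?R\.?\s+(?:§+\s*)?(\d+(?:\.\d+)?)\b")

def find_unverified_citations(draft_text: str, known_sections: set[str]) -> list[str]:
    """Return CFR citations in the draft that are absent from a verified index."""
    flagged = []
    for title, section in CFR_CITATION.findall(draft_text):
        ref = f"{title} CFR {section}"
        if ref not in known_sections:
            flagged.append(ref)
    return flagged

# Toy verified index; a real check would query the actual eCFR corpus.
known = {"14 CFR 25.1309", "49 CFR 192.619"}
draft = ("Pursuant to 14 CFR 25.1309 and 14 CFR 999.99, "
         "the operator shall comply with 49 CFR 192.619.")
print(find_unverified_citations(draft, known))  # ['14 CFR 999.99']
```

The invented section 14 CFR 999.99 is flagged for human review while the two genuine citations pass, which is the kind of automated backstop critics argue must sit between an LLM draft and a published rule.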

Looking ahead, the DOT’s experiment serves as a bellwether for the future of the American administrative state. If the administration successfully normalizes AI-generated regulations in transportation, other agencies—from the Environmental Protection Agency to the Securities and Exchange Commission—are likely to follow suit. This could lead to a "flooding the zone" effect, where the sheer volume of new, AI-drafted rules overwhelms the capacity of public interest groups and the judiciary to review them. While proponents argue this will spur innovation by clearing regulatory backlogs, the long-term impact may be a degradation of regulatory clarity and a shift in power from human subject-matter experts to the tech companies providing the underlying AI models. As the DOT moves forward, the tension between the speed of the "AI culture" and the precision required for public safety will remain a central conflict of the current administration’s tenure.


