Bipartisan Bill ANNOUNCED – Could This End It?

When a bipartisan AI bill hands sweeping, unprecedented power to the executive branch under the pretense of protecting America, the question isn’t just how we regulate technology—it’s who really controls our future.

Story Snapshot

  • Hawley and Blumenthal’s AI Risk Evaluation Act would force advanced AI systems to pass Department of Energy scrutiny before launch.
  • Critics argue this centralizes executive authority and could weaken U.S. national security by slowing innovation.
  • The bill uniquely empowers the DOE, not traditional tech regulators, with AI oversight, raising separation-of-powers alarms.
  • Industry and civil liberties groups are sharply divided, with warnings of regulatory overreach and global competitiveness risks.

DOE Oversight: The Power Shift AI Developers Fear

Senators Hawley and Blumenthal’s Artificial Intelligence Risk Evaluation Act of 2025 would require every “advanced” AI model—defined as one trained using more than 10^26 computational operations (a hundred trillion trillion)—to clear a federal safety evaluation by the Department of Energy before deployment. This is not a minor bureaucratic hoop; it places the DOE, an agency more familiar with nuclear reactors than neural networks, in the AI driver’s seat. The Act hands the executive branch power to halt or greenlight the most ambitious AI projects in the name of national security, civil liberties, and labor protections. Critics warn that this shift would delay, or even freeze, American innovation while adversaries race ahead. The political gravity of this move cannot be overstated: it places the future of U.S. AI under a single agency’s purview, with Congress watching from the bleachers.

DOE’s new mandate would include reporting to Congress on AI threats, issuing compliance orders, and imposing crushing penalties for non-compliance. The bill’s supporters say this is the only way to prevent catastrophic misuse—whether from rogue actors or foreign adversaries. Yet, industry leaders and civil liberties advocates see another danger: a regulatory chokehold that could make U.S. AI as sluggish as the most plodding government agency, with innovation bogged down by endless review cycles and risk-averse decision-making. The DOE’s ascendancy would create inter-agency turf wars, with traditional tech regulators sidelined as the new AI overlords assert their claim.

National Security or National Slowdown?

The bill’s backers present it as a national security imperative. Hawley and Blumenthal argue that, without federal guardrails, America risks losing control of AI to both domestic bad actors and foreign powers eager to weaponize this technology. The DOE’s involvement is explained as a matter of technical capacity and national security tradition, not a lack of faith in the FTC or FCC. Yet, industry voices warn that the bill’s pre-market approval model—more typical of medical devices than software—could grind progress to a halt. In the race against China, where AI regulations are designed to accelerate, not restrain, deployment, a bottleneck at the DOE could mean American companies fall hopelessly behind. National security, critics claim, is not just about preventing disaster; it’s about staying ahead. Slow the pace, and the U.S. could lose both the AI arms race and the economic spoils that come with technological dominance.

Industry groups like Americans for Responsible Innovation praise the bill’s ambition, citing its potential to force transparency and accountability on an industry notorious for secrecy and “move-fast-and-break-things” bravado. On the other hand, the Chamber of Progress and major tech developers argue that the bill’s penalties and red tape would drive investment offshore, stunt domestic innovation, and expose the nation to even greater risk as rivals exploit America’s self-imposed delays. The debate is not just technical; it’s existential, pitting the need for safety against the imperative of speed.

Separation of Powers: Executive Overreach or Sensible Safeguard?

The choice to empower the DOE—rather than a new or existing tech regulator—has become the bill’s most polarizing feature. Proponents defend the move as pragmatic, leveraging the DOE’s national security expertise and technical muscle. Detractors, however, see a brazen executive power grab that upends two centuries of American regulatory tradition. The bill’s centralized authority structure could, in practice, give a single agency—and, by extension, the President—unprecedented veto power over the nation’s technological destiny. Civil liberties groups warn that, once the precedent is set, similar “risk evaluation” schemes could spread to other digital sectors, from biotech to quantum computing, with every new innovation forced to clear a government checkpoint before reaching the public.

Academic and national security experts remain split. Some see value in rigorous risk evaluation at the highest levels, especially as AI systems approach superintelligence. Others argue that slow-moving oversight could itself become the greatest risk, stifling innovation and ceding ground to adversaries without the same regulatory hang-ups. The open question: can a single government agency keep pace with the exponential acceleration of technology, or will the attempt to control AI’s rise simply ensure America falls behind?

Sources:

  • NextGov
  • Secure AI Now
  • Hawley Senate Press Release
  • Axios
  • Americans for Responsible Innovation (ARI)
  • Bill Text (PDF)
  • Chamber of Progress