Key Trends that Will Shape Tech Policy in 2026

In 2025, the central AI policy debate in Congress revolved around whether to impose a federal “AI moratorium” that would block states from regulating AI for a set period of time. This proposal, strongly supported by the Trump administration, nearly passed as part of the summer’s “Big Beautiful Bill” and later resurfaced in the year-end National Defense Authorization Act, but ultimately failed due to insufficient congressional backing. As an alternative, U.S. President Donald Trump issued a December executive order aimed at limiting the impact of state-level AI laws without full federal statutory preemption. The order directed the Department of Justice to develop ways to challenge state-level AI laws deemed contrary to administration policy and, furthermore, instructed executive branch agencies to withhold certain funds from states maintaining AI regulations viewed as restrictive. It also committed the White House to draft a legislative framework for a uniform federal AI policy that would preempt state laws, while preserving state authority over issues like child safety, data center infrastructure, and government AI procurement.

Looking ahead to 2026, the debate is shifting from whether to “preempt something with nothing” to whether to “preempt something with something.” In other words, the key question will no longer be about eliminating state AI laws, which continue to proliferate, without a federal substitute, but will instead become about replacing them with a concrete federal regulatory framework. This change fundamentally alters the conversation: both supporters and critics of the 2025 moratorium must recognize that preemption with a substantive policy is a different proposition from preemption without one. What that federal framework will actually look like remains uncertain, but its details will be critical in determining the level of support for renewed legislation. The bottom line: expect a very different, and potentially more consequential, discussion about federalizing AI law in 2026.

The biggest AI federalism story of 2026 will not be about algorithms. It will be about silicon and steel. The National Conference of State Legislatures predicts that data centers will be a central legislative concern. While the dominant political narrative focuses on energy affordability and sustainability, the grassroots data center backlash runs deeper. People vote how they feel, and many Americans feel negatively about an AI-driven future. Data centers are vessels for AI anxiety and antipathy toward big tech more generally. This matters for two related reasons. First, the backlash reflects a broad coalition, spanning affordability, sustainability, job security, and corporate accountability. Second, even if energy costs are contained, the backlash probably will not be. For constituents anxious about AI, job loss, and cultural decay, blocking a local land-use permit or a corporate tax credit is how their voices will be heard.

Beyond infrastructure, states will continue to regulate AI itself. However, comprehensive AI acts are losing momentum. Colorado’s flagship law illustrates why. Originally passed in 2024, Colorado’s AI Act was designed to regulate AI discrimination across employment, housing, healthcare, and more. As the effective date approached, however, Colorado’s Governor and Attorney General backed the industry’s effort to gut the law. That effort fell short; instead, the Colorado legislature delayed the effective date to mid-2026, and further delays or rollbacks are likely. States are now pivoting to more targeted approaches, focusing on high-risk applications and legacy sectors. AI chatbots, for example, are in the legislative crosshairs, following headline news that linked chatbots to suicide, defamation, and deception. In 2026, states likely will respond with transparency laws, age-gating, and other guardrails. Pricing algorithms are also on the agenda. Some states may take a general approach, for example, by amending antitrust codes. But most states will seek to regulate price-setting algorithms in specific domains, like housing and insurance. Meanwhile, major legislation enacted in 2025 will take effect this year, including California’s “companion chatbot” law and Illinois’ employment-decision protections.

None of this sits well with the Trump administration. Acceleration and deregulation are twin pillars of the White House’s domestic AI agenda. Most recently, Trump issued an executive order to limit state AI laws through a multi-pronged approach: litigation, federal funding conditions, and regulatory preemption. The order’s ambition makes it legally vulnerable. The executive branch cannot unilaterally preempt state law without a delegation from Congress. Nor can the executive branch impose spending conditions that Congress itself rejected. Agencies will be hard-pressed to demonstrate otherwise in court. Legal issues aside, the order is politically tone-deaf. By large margins, Americans favor AI regulation. States are delivering. The federal government has not. Expect more of the same in 2026.