
The end of the Chevron Doctrine complicates US lawmakers’ efforts to regulate AI – but there is another way

At a time when the passage of basic legislation is already a challenge, the Supreme Court’s decision in Loper Bright Enterprises et al. v. Raimondo has transformed the United States’ room to maneuver with respect to highly dynamic areas of innovation such as artificial intelligence (AI). By ending the deference to executive agencies enshrined in the Chevron doctrine, the Court weakened the ability of federal agencies to interpret and enforce laws on issues of public interest and transferred the authority to interpret federal law to the courts.

America is in a battle for global leadership in AI, both in developing cutting-edge technologies and in ensuring that new models and applications are developed safely and for the benefit of society. Against this backdrop, determining how the U.S. will govern, innovate, and compete globally in AI is critical, especially when it is nearly impossible for a partisan Congress to legislate clearly on complex technologies. The full implications of this new reality will become clear only as the dust settles, and the questions it raises are especially urgent as Congress and the White House grapple with AI governance.

To meet the challenges of AI governance in this new environment, the United States needs flexible, forward-looking strategies to protect against the risks of AI and promote American innovation.

We propose three principles to enable an agile approach to AI governance and maintain the U.S. technological edge. Our conclusions are based on extensive analysis by the Center for Security and Emerging Technology (CSET) research team, which has shaped our understanding of the practical realities of AI governance. This work drew on assessments of the technical capabilities of existing AI models, examinations of AI risks and incidents, comparative policy analysis of the U.S. and other countries, reviews of current regulatory and technology-assessment methodologies, and insights from face-to-face conversations with members of Congress, current and former senior officials, and other key stakeholders.

First, effective protection against AI harms depends on our ability to identify them. Prioritizing AI incident reporting is critical to mapping the landscape of AI risks. New AI governance frameworks should encourage companies to report incidents involving their systems to regulators or a neutral body such as the National Institute of Standards and Technology. This approach would increase public awareness of AI risks and help identify cross-industry patterns.

Implementing a robust harm-measurement system and mandating comprehensive incident reporting can deter risky development practices without hindering progress. A well-designed, phased self-reporting amnesty would motivate AI developers to learn from mistakes and act responsibly. Combining this mandatory approach with voluntary and citizen-led reporting would provide a more complete picture of AI safety. While setting up such a system carries initial costs and challenges, a reporting framework that creates the right incentives for industry could inform and catalyze future AI governance efforts.

Second, an adaptive and flexible approach to AI governance is critical. Federal agencies should leverage existing regulatory authorities for AI rather than creating new regulations. But they must also recognize their limitations in areas such as human capital, expertise, and infrastructure for testing and evaluation. The Supreme Court’s decision will inevitably raise questions about how exactly agencies understand their existing authority, but it also underscores the need to adopt governance that incorporates expertise from the private sector while avoiding regulatory capture.

Improving AI competence among policymakers and, now, judges is essential. This includes developing a fundamental understanding of the strengths, weaknesses, applications, and limitations of AI. This knowledge will be key to developing adaptive governance strategies that can keep pace with rapid technological advances.

Third, AI governance should leverage America’s strengths: our culture of innovation and our decentralized, dynamic economy. This is especially critical in the context of technological competition with China. While the Biden administration has taken defensive measures, such as restricting exports of advanced semiconductors and controlling foreign investment in critical technologies, these steps are likely to slow China’s progress only temporarily.

Rather than focusing on defensive strategies, policymakers should aim to accelerate innovation within the distributed U.S. innovation environment. This approach plays to our strengths rather than competing on the terms of China, which has long experience with both legal and illicit forms of technology transfer. Policymakers should complement necessary controls with regulatory incentives that strengthen America’s ability to build robust innovation ecosystems and attract top talent. By encouraging breakthrough research, creating tax incentives to relocate critical supply chains to the U.S. and friendly countries, and continuing to foster a favorable startup environment, we can outpace our competitors through relentless creativity and adaptability rather than constraints.

While the Loper Bright decision challenges existing regulatory approaches, it also offers an opportunity to create a more agile, distributed, and innovation-friendly governance environment for AI. If we focus on incentives, promote adaptability, and leverage our strong innovation ecosystem, Loper Bright could push us to develop smarter governance of artificial intelligence, and perhaps an approach that serves as a model for other emerging technologies.


The opinions expressed in Fortune.com’s commentaries reflect solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.