
California’s controversial AI safety bill raises fears of nuclear war and catastrophic damage

A new bill in the US state of California that would regulate large frontier AI models has faced fierce opposition from various stakeholders in the technology industry, including startup founders, investors, AI researchers, and organizations that advocate for open-source software. The bill, titled SB 1047, was introduced by California Senator Scott Wiener.

According to Wiener, the bill would require developers of large and powerful AI systems to adhere to universally accepted safety standards. But opponents of the bill are convinced that it would stifle innovation and bankrupt the AI industry.

In May of this year, the California Senate passed the controversial bill, which is currently moving through committees in the State Assembly. After a final vote in August, the bill could be sent to the state’s governor, Gavin Newsom, for his signature. If that happens, SB 1047 would become the country’s first major law regulating AI, enacted in a state that is home to many of the world’s largest technology companies.

What does the bill propose?

SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, seeks to hold leading AI companies like Meta, OpenAI, Anthropic and Mistral accountable for the potentially catastrophic dangers associated with the rapidly evolving technology.

The bill mainly applies to companies building large frontier AI models, with “large” defined in the bill as models trained using more than 10^26 floating-point operations (FLOP) of computing power, at a training cost of more than $100 million (approximately Rs 834 crore). AI models fine-tuned using more than 3 x 10^25 FLOP of computing power are also covered under the bill.
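To make those thresholds concrete, here is a minimal Python sketch of the scope test as the article describes it. The function and constant names are our own illustration and do not appear in the bill, and the exact way the compute and cost tests combine is an assumption based on the description above.

# Illustrative sketch only: thresholds are taken from this article's
# description of SB 1047; the function and constant names are hypothetical.
TRAINING_FLOP_THRESHOLD = 1e26             # total floating-point operations
TRAINING_COST_THRESHOLD_USD = 100_000_000  # training cost in US dollars
FINE_TUNE_FLOP_THRESHOLD = 3e25            # total floating-point operations

def is_covered_model(training_flop, training_cost_usd, fine_tune_flop=0.0):
    # Trained models are covered when both the compute and cost thresholds
    # are exceeded; fine-tuned models are covered when the fine-tuning
    # compute alone exceeds its threshold.
    trained_in_scope = (training_flop > TRAINING_FLOP_THRESHOLD
                        and training_cost_usd > TRAINING_COST_THRESHOLD_USD)
    fine_tuned_in_scope = fine_tune_flop > FINE_TUNE_FLOP_THRESHOLD
    return trained_in_scope or fine_tuned_in_scope

# Example: a model trained with 2e26 FLOP at a cost of $150 million
print(is_covered_model(2e26, 150_000_000))  # True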


“If the future development of artificial intelligence is not subject to adequate human control, there is also the potential for it to be used to create new types of threats to public safety and order, including by enabling the production and proliferation of weapons of mass destruction, such as biological, chemical and nuclear weapons, as well as weapons with cyber offensive capabilities,” the bill states.

According to the latest draft of the bill, developers of large AI models can be held liable for “critical harms.” These harms include the use of AI to create chemical or nuclear weapons and to mount cyberattacks on critical infrastructure. The bill also covers crimes committed by AI models operating with limited human oversight that result in death, bodily injury, or property damage.


However, developers cannot be held responsible if the AI-generated output that causes death or injury is based on information that is also publicly available elsewhere. Interestingly, the bill also requires large AI models to have a built-in kill switch for emergencies. It further prohibits developers from launching large AI models that pose an unreasonable risk of causing or enabling critical harm.

To ensure compliance, AI models would have to undergo audits by independent third-party auditors. Developers who violate the provisions of the proposed law could face legal action from the California Attorney General. They would also have to comply with safety standards recommended by a new AI certification body called the Frontier Model Division, which is to be established by the California government.

Why did the bill cause an uproar?

Essentially, the bill reflects the views of AI doomsayers. It is supported by AI pioneers such as Geoffrey Hinton and Yoshua Bengio, who believe that AI could pose an existential threat to humanity and therefore needs to be regulated. One of the bill’s sponsors is the Center for AI Safety, which published an open letter stating that the risks posed by AI should be treated as seriously as nuclear war or another pandemic.

While the bill has support from these quarters, it faces heavy criticism from nearly every other corner of the industry. One of the main arguments against it is that it would effectively eliminate open-source AI models.

When AI models are open source, their inner workings can be freely accessed and modified by anyone, which provides greater transparency and can improve security. However, the proposed California bill could deter companies like Meta from making their AI models open source, since they could be held liable when other developers misuse the technology.

Experts have also pointed out that preventing AI systems from misbehaving is harder than it seems. It is therefore not entirely fair, they argue, to place the regulatory burden on AI companies alone, especially since the safety standards required by the bill are not flexible enough for a fast-evolving technology like AI.

© IE Online Media Services Pvt Ltd

First uploaded on: 04.07.2024 at 18:16 IST