The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act seeks to regulate the development and use of advanced artificial intelligence (AI) models in California to ensure public safety while fostering innovation.

Summary of Bill

Before diving into the discussion, let's take a look at the bill's main provisions.

  • The Act applies to "covered models," defined as AI models trained using more than 10^26 integer or floating-point operations, or models expected to have similar capability. Developers must determine whether their models qualify for a "limited duty exemption," which is available only if they can reasonably exclude the possibility of hazardous capabilities (a rough sketch of the compute threshold follows this list).
  • Developers of non-exempt models must implement cybersecurity protections, full shutdown capability, applicable covered guidance, safety protocols, capability testing, and other safeguards. They must certify compliance to the Frontier Model Division annually.
  • The Act requires reporting AI safety incidents, transparent pricing for model access, and allows the Attorney General to bring civil actions for violations with injunctive relief and civil penalties.
  • It creates the Frontier Model Division to review developer compliance, issue guidance, advise on emergencies, appoint advisory committees, and help prevent unreasonable risks from AI models with hazardous capabilities.
  • The Division will develop guidance on technical thresholds for covered models and limited duty exemptions by July 1, 2026 and review it every 24 months.
  • The Department of Technology will commission consultants to create CalCompute, a public cloud computing cluster for safely researching large-scale AI models and promoting equitable innovation. This requires analyzing the cloud ecosystem, establishing partnerships, determining use parameters, evaluating impacts, and reporting annually to the legislature.
  • The Act's provisions are severable and to be liberally construed to effectuate its purposes of ensuring AI safety and security while enabling innovation in California.
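
To make the 10^26 threshold concrete, here is a minimal sketch of how a developer might estimate whether a training run crosses it. It relies on the common "6 × parameters × tokens" approximation for training FLOPs, which is my assumption, not a method prescribed by the bill, and the numbers at the bottom are purely illustrative.

    # Rough estimate of training compute versus SB 1047's covered-model threshold.
    # Assumes the common ~6 * params * tokens approximation for dense transformer
    # training FLOPs; the bill itself does not specify an estimation method.

    COVERED_MODEL_THRESHOLD_FLOPS = 1e26  # operations figure named in the bill

    def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
        """Approximate total training FLOPs using the 6*N*D heuristic."""
        return 6 * num_parameters * num_training_tokens

    def is_likely_covered_model(num_parameters: float, num_training_tokens: float) -> bool:
        """True if the estimated training compute exceeds the 10^26 threshold."""
        return estimated_training_flops(num_parameters, num_training_tokens) > COVERED_MODEL_THRESHOLD_FLOPS

    # Illustrative numbers only: a 70B-parameter model trained on 15T tokens.
    if __name__ == "__main__":
        flops = estimated_training_flops(70e9, 15e12)
        print(f"Estimated training compute: {flops:.2e} FLOPs")
        print("Likely a covered model:", is_likely_covered_model(70e9, 15e12))

For scale, a 70-billion-parameter model trained on 15 trillion tokens works out to roughly 6 × 10^24 FLOPs under this heuristic, well below the bill's threshold.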

California Bill Regulating AI

As the capabilities of artificial intelligence continue to grow, policymakers are grappling with the challenge of crafting effective regulations to address the potential risks and ensure the safe development of AI technologies. In California, a new bill has been proposed that aims to establish a comprehensive framework for AI governance, sparking a heated debate among researchers, industry leaders, and civil society groups.

Proposed Legislation to Address AI Risks and Safety Measures

The proposed California bill, known as SB 1047, seeks to address the growing concerns around AI safety and the potential for these powerful technologies to cause harm if left unchecked. The legislation aims to establish a set of guidelines and requirements for the development, testing, and deployment of AI systems, with a particular focus on high-risk applications.

Key provisions of the bill include:

  • Mandatory testing and evaluation of covered AI models to identify and mitigate potential risks
  • Requirement for developers to implement appropriate safety measures and safeguards
  • Creation of a new regulatory agency, the Frontier Model Division, to oversee the development and deployment of AI technologies
  • Liability provisions that hold developers accountable for any harm caused by their AI systems

Proponents of the bill argue that these measures are essential to ensure that AI is developed responsibly and with the public interest in mind, while critics warn that overly restrictive regulations could stifle innovation and hinder the progress of this transformative technology.

Contrasting Views from AI Researchers on the Bill's Approach

The proposed California bill has generated a wide range of responses from the AI research community, with experts divided on the effectiveness and appropriateness of the proposed measures.

Some prominent AI researchers, such as Geoffrey Hinton and Yoshua Bengio, have expressed support for the bill, arguing that it represents a sensible approach to addressing the risks associated with AI. They emphasize the importance of establishing clear guidelines and accountability measures to ensure that AI is developed and deployed in a safe and responsible manner.

However, other researchers have raised concerns about the potential unintended consequences of the proposed regulations. They argue that overly broad or restrictive measures could hamper the ability of researchers and developers to innovate and push the boundaries of what is possible with AI. Some have also questioned the practicality of certain provisions, such as the requirement for mandatory testing and evaluation of all AI systems.

Concerns About Impact on Startups, Open-Source, and Innovation

One of the most contentious aspects of the proposed California bill is its potential impact on startups, open-source projects, and the overall innovation ecosystem surrounding AI.

Critics of the bill argue that the compliance costs and regulatory burdens associated with the proposed measures could disproportionately affect smaller companies and open-source initiatives, which often lack the resources and legal expertise to navigate complex regulatory frameworks. They warn that this could lead to a consolidation of power among a few large tech giants, stifling competition and diversity in the AI landscape.

Proponents of the bill, on the other hand, argue that clear regulations and accountability measures are necessary to ensure a level playing field and protect the public interest. They contend that the long-term benefits of responsible AI development outweigh the short-term challenges of adapting to new regulatory requirements.

As the debate around the California bill continues to unfold, it is clear that finding the right balance between innovation and safety, between progress and responsibility, will be a critical challenge for policymakers and the AI community alike. The outcome of this legislative effort could set a precedent for AI governance not just in California, but around the world, shaping the future trajectory of this transformative technology.

Controversy Around "Derivative Model" Clause

One of the most contentious aspects of the proposed California bill regulating AI is the inclusion of a "derivative model" clause, which has sparked intense debate and controversy among AI researchers, developers, and legal experts. The clause, as currently written, has the potential to criminalize the use and development of open-source AI models, raising concerns about the unintended consequences for innovation and collaboration in the field.

Potential Criminalization of Open-Source Models Under Broad Definition

The derivative model clause in the California bill defines a derivative model as "a combination of an artificial intelligence model with other software, an unmodified copy of an artificial intelligence model, or a modified copy of an artificial intelligence model." Critics argue that this broad definition could effectively sweep in the use and development of open-source AI models, which often build upon and modify existing models to create new and innovative applications.

Under this interpretation, developers who use open-source models as a starting point for their own work could be held liable for any harm caused by their AI systems, even if the original model was created by someone else. This could have a chilling effect on the open-source AI community, discouraging developers from sharing their work and collaborating with others.

Debate Over Scope and Unintended Consequences for Developers

The controversy surrounding the derivative model clause has sparked a heated debate among AI experts and legal scholars about the appropriate scope of AI regulation and the potential unintended consequences for developers and researchers.

Proponents of the clause argue that it is necessary to ensure accountability and prevent bad actors from using open-source models for malicious purposes. They contend that developers who modify or build upon existing models should be held responsible for the safety and integrity of their AI systems.

However, critics warn that the clause is overly broad and could have far-reaching implications for the AI community. They argue that holding developers liable for any harm caused by their AI systems, regardless of the origin of the underlying model, could stifle innovation and collaboration, as developers become more hesitant to share their work or build upon existing models.

Clarifications from Senator on Civil Sanctions vs. Criminal Liability

In response to the growing controversy surrounding the derivative model clause, the bill's sponsor, Senator Scott Wiener, has sought to clarify the intent and scope of the provision. In a series of tweets and public statements, Senator Wiener has emphasized that the clause is intended to establish civil sanctions, not criminal liability, for developers who fail to adequately test and mitigate the risks associated with their AI systems.

According to Senator Wiener, the derivative model clause would not automatically trigger liability for developers who use or modify open-source models. Instead, liability would only apply if a developer "unreasonably fails to test and mitigate the catastrophic risks" of their AI system, and if that system causes harm exceeding a certain threshold (e.g., $500 million in damages).
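
Read that way, the clause amounts to a two-part test, which the hypothetical sketch below expresses in code. The function name, the boolean prong, and the $500 million figure are my paraphrase of Senator Wiener's description as reported in the debate, not language from the bill itself.

    # Hypothetical paraphrase of the liability trigger Senator Wiener described:
    # civil sanctions are only in play if BOTH conditions hold. This is an
    # illustrative reading, not the bill's statutory language.

    HARM_THRESHOLD_USD = 500_000_000  # the ~$500 million figure cited in the debate

    def liability_may_attach(unreasonably_failed_to_test_and_mitigate: bool,
                             damages_usd: float) -> bool:
        """Both prongs must be satisfied for civil sanctions to be in play."""
        return unreasonably_failed_to_test_and_mitigate and damages_usd > HARM_THRESHOLD_USD

    # Merely fine-tuning an open-source model does not, by itself, trigger liability:
    print(liability_may_attach(False, 750_000_000))  # False
    print(liability_may_attach(True, 750_000_000))   # True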

Despite these clarifications, many in the AI community remain skeptical of the derivative model clause and its potential impact on open-source development and collaboration. As the debate continues, it is clear that finding the right balance between accountability and innovation will be a critical challenge for policymakers and the AI community alike.

The controversy surrounding the derivative model clause in the proposed California bill highlights the complex and evolving nature of AI regulation, and the need for careful consideration of the unintended consequences of broad or overly restrictive measures. As the field of AI continues to advance at a rapid pace, it will be essential for policymakers, researchers, and developers to work together to craft effective and balanced regulations that promote responsible innovation while protecting the public interest.

Challenges in Legislating AI Models

As policymakers grapple with the task of regulating artificial intelligence, they face a host of challenges and complexities that arise from the unique nature of AI technologies. One of the most significant hurdles is determining the physical location and jurisdiction of AI models, which can have far-reaching implications for the applicability and enforceability of AI regulations.

Determining Physical Location and Jurisdiction of AI Models

In the era of cloud computing and distributed systems, determining the physical location of an AI model can be a daunting task. AI models can be developed, trained, and deployed across multiple geographic locations, often spanning different countries and legal jurisdictions. This raises important questions about which laws and regulations should apply to a given AI system, and how those laws can be effectively enforced.

For example, if an AI model is developed in one country, trained on data from another, and deployed on servers located in a third, which jurisdiction has the authority to regulate its use and hold its developers accountable for any harm it may cause? These questions become even more complex when considering the use of decentralized technologies, such as blockchain, which can further obscure the physical location and ownership of AI systems.

Applicability to Cloud-Based Services and Distributed Computing

The challenges of determining the physical location and jurisdiction of AI models are compounded by the widespread use of cloud-based services and distributed computing in the development and deployment of AI technologies. Many AI developers rely on cloud platforms, such as Amazon Web Services, Microsoft Azure, and Google Cloud, to access the computational resources and data storage necessary to train and run their models.

In these cases, the physical infrastructure supporting the AI model may be spread across multiple data centers and geographic regions, making it difficult to pinpoint a single location or jurisdiction. Moreover, the terms of service and data policies of cloud providers can further complicate the legal landscape, as they may impose additional constraints or requirements on the use and regulation of AI models hosted on their platforms.

Triggering Liability Based on Location of Harm vs. Model Development

Another key challenge in legislating AI models is determining the appropriate trigger for liability and accountability. Should liability be based on the location where harm occurs, or on the location where the AI model was developed and deployed?

This question has significant implications for the enforceability of AI regulations and the ability of victims to seek redress for harm caused by AI systems. If liability is triggered based on the location of harm, it may be easier for victims to pursue legal action in their local jurisdiction. However, this approach could also create a patchwork of conflicting regulations and legal standards, as AI developers may face different requirements and liabilities depending on where their models are used.

Alternatively, if liability is triggered based on the location of model development and deployment, it may provide a more consistent and predictable legal framework for AI developers. However, this approach could also create jurisdictional challenges, as victims may need to pursue legal action in foreign courts or navigate complex international legal frameworks to seek redress.

As policymakers and legal experts continue to grapple with these challenges, it is clear that developing effective and enforceable AI regulations will require a careful balancing of competing interests and a deep understanding of the unique characteristics of AI technologies. Collaborative efforts between policymakers, industry leaders, and civil society groups will be essential to crafting regulations that promote responsible innovation while protecting the public interest and ensuring accountability for any harm caused by AI systems.

What Should We REALLY Be Doing?

Instead of a fear-based crackdown, let's focus on:

  • Education: Teach people (including legislators) how AI actually works, not just sci-fi fantasies.
  • Ethical Development: Collaborate with AI creators to build in safety from the start, not regulate after the fact.
  • Transparency: Make companies explain how their AI works in plain English, so we can make informed choices.