Artificial Intelligence

For the past few years, the Legislature has been interested in topics related to artificial intelligence (AI) and algorithmic decision making. This interest largely stems from rapid advancements in the technology sector (e.g., Google Gemini and OpenAI's ChatGPT). More than 40 bills on the topic of AI have been introduced this year. The CA State Senate held a hearing on February 21st titled "California at the Forefront: Steering AI Towards Ethical Horizons." The hearing largely focused on the state's role and responsibility in promoting the ethical use of AI without stifling innovation.

PIFC is part of a coalition of businesses and organizations, led by the California Chamber of Commerce, opposed to overburdensome bills intended to create a standard for disclosures by developers who use AI. Assemblymember Rebecca Bauer-Kahan is the Chair of the Assembly Privacy Committee, and it is expected that many of the bills will be consolidated into a vehicle she will introduce in order to avoid conflicts or redundancy.

AB 2930 (Bauer-Kahan) Automated decision tools.

This bill 1) requires "deployers" of automated decision-making tools (ADMT) to annually conduct an impact assessment for any ADMT they use and report it to the Civil Rights Department, 2) requires deployers to notify a consumer before or at the time an ADMT is used in a "consequential decision," 3) prohibits deployers from using ADMT in a way that results in algorithmic discrimination, and 4) creates a penalty of $25,000 per violation, enforceable by certain public attorneys.

This bill is a huge undertaking, particularly given the work already taking place at various state agencies on this very topic. AB 2930's scope is both inordinately broad and vague, and its overregulation is likely to impede efforts to actually reduce bias and discrimination.

AB 2013 (Irwin) Artificial intelligence: training data transparency.

This bill is narrower in scope. It requires AI developers to post on their websites documentation of the data sets used to train their AI systems.

Unfortunately, we have significant concerns with the approach taken in AB 2013 as currently drafted, specifically around its overburdensome mandates; the technical feasibility of the bill's transparency measures, including its assignment of responsibilities and the unique challenges open-source developers would face in meeting its standards; insufficient clarity around key terms; and potential exposure to liability. Moreover, we are heavily concerned about AB 2013's failure to provide protections for trade secrets and intellectual property, though we do not believe that is the intended outcome of this bill.

Insurers are not opposed to creating a standard around disclosures by developers, and in fact share in the goal of this bill insofar as it can enable greater trust and confidence in these technologies, which is in the interest of consumers and businesses alike.

SB 1047 (Wiener) Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

SB 1047, by Senator Wiener, creates new obligations for companies that develop their own algorithmic systems. It requires them to certify annually that extensive training requirements were met before the AI was launched and that safety mechanisms are built in, including the ability for the AI to automatically shut itself down.
