ABC7 News: A.I. Bill Passes California Legislature

Conversica VP of Legal and General Counsel Lewis Barr joins ABC7 News Bay Area to discuss California SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

California AI Bill SB 1047 passes legislature
Conversation Automation
Published 09/05/24

Transcript

Kumasi Aaron

We have been following SB 1047 closely as it moved through the legislature and now it's on Governor Gavin Newsom's desk to sign and a lot of big names in tech have weighed in including Elon Musk. Joining us live to talk about this AI bill and why it is so controversial, also to talk about what's next, is Lewis Barr who's the Vice President of Legal and General Counsel for Bay Area based AI company, Conversica. Thank you so much for being here.

Lewis Barr

Oh, thank you, Kumasi. Good morning.

Kumasi

Good morning. Okay. So for people who might not understand exactly what this bill is about, what's the easiest way to explain your interpretation of it?

Lewis

It's a bill designed to prevent large AI models from causing or enabling critical harm.

Kumasi

And why is it so controversial?

Lewis

Well, there are a few areas. One is, should this be something a state is involved in, or should this actually be the work of the federal government? Another is, would this bill, if signed into law by Governor Newsom, result in unintended consequences that could be detrimental to the growth of the open source community, as well as to what some people refer to as small tech?

Kumasi

When you were explaining the bill and your interpretation of it, it's to prevent harm. For some people who are like—

Lewis

Critical. Well, critical harm.

Kumasi

Yes. How do you define that?

Lewis

Yeah, that's a really interesting piece of this bill, because it's defined in the bill as an event causing damage over a certain monetary amount, if I recall correctly, $500 million, or causing harm on the order of a major chemical warfare attack or a major infrastructure attack. Things like that.

And it's interesting because if you contrast this bill with what the European Union has done with their AI Act, which recently was enacted over there, their focus is on harm to individual rights. Whereas this bill's focus is on major societal impacts that are negative.

Kumasi

Gotcha. We have been having this conversation about regulating AI, and its importance, much more in the last year, especially when we think of potential outcomes like the ones you just mentioned. But we have Speaker Emerita Nancy Pelosi calling this bill ill informed, and Google and Meta are against it. They believe that it regulates broad AI, not specific harmful applications of AI. Where do you stand on that?

Lewis

Well, I agree on that. And again, if I can draw a contrast with the European AI Act: that one focuses on what we call use cases, particular uses of AI. Some of those uses are banned outright.

Other uses would require similar types of safety protocols and other measures to those required in Senator Wiener's bill. But the big critique for a lot of people about the California bill is that it seems to put the blame on the system as a whole. I shouldn't say blame, but it puts the obligation on a system provider as a whole, where some argue that they really can't control what happens downstream.

Now the flip side of that argument would be: well, sure you can put in the right safety controls, and you can be sure it won't be used for ill-intended purposes.

Kumasi

Gotcha. So how do you think the governor is going to move on this? I know it's impossible to tell, but what do you think?

Lewis

I think he's going to veto the bill.

And I think that would be good. Because right now, obviously, there's great fear out there, understandably, but I think we're still in the early days, and some of these critical harm fears haven't quite manifested yet. And there are already practices in place. For example, OpenAI and others are starting to cooperate with the federal government to actually test their new models before they're released.

So some of this work is going on. And I'd just say that it may make more sense to veto this bill at this stage, because there's always tomorrow, Kumasi, right? The legislature meets on a regular basis. So I think they can work out these harms.

And I do take note when people like Dr. Fei-Fei Li, who is considered the godmother of AI, are opposed to this because of the possible negative impact on downstream users, on open source, on public users of AI, because they may be considered within the scope of this bill and have to devote a lot of resources to putting these controls in place. When people like that, who clearly have the public interest at heart, want us to pause, I think we should listen.

Kumasi

Well, we appreciate you Lewis, for coming and explaining something that can be a little controversial and complicated. So thank you as always for sharing with us and we will, like you, be watching this story.

Lewis

Okay, thank you. Pleasure speaking.

Kumasi

Have a good morning and we'll be right back.
