California’s new AI safety law shows regulation and innovation don’t have to clash 

SB 53, the AI safety and transparency bill that California Gov. Gavin Newsom signed into law this week, is proof that state regulation doesn’t have to hinder AI progress. So says Adam Billen, vice president of public policy at youth-led advocacy group Encode AI, on today’s episode of Equity. “The reality is that policy makers themselves know that we have to do something, and…

Read More

California lawmakers pass AI safety bill SB 53 — but Newsom could still veto

California’s state senate gave final approval early on Saturday morning to a major AI safety bill setting new transparency requirements on large companies. As described by its author, state senator Scott Wiener, SB 53 “requires large AI labs to be transparent about their safety protocols, creates whistleblower protections for [employees] at AI labs & creates…

Read More

Google Gemini dubbed ‘high risk’ for kids and teens in new safety assessment

Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google’s Gemini AI products on Friday. While the organization found that Google’s AI clearly told kids it was a computer, not a friend — AI posing as a friend is associated with helping drive delusional thinking and psychosis in emotionally…

Read More

A brazen attack on air safety is underway — here’s what’s at stake

At the end of July, the National Transportation Safety Board (NTSB) convened a three-day public hearing to investigate January’s mid-air collision over Washington, DC that killed 67 people. After the hearing, two conclusions were inescapable. First, the disaster should have been prevented by existing safety rules. And second, the government regulators responsible for air safety…

Read More

A safety institute advised against releasing an early version of Anthropic’s Claude Opus 4 AI model

A third-party research institute that Anthropic partnered with to test one of its new flagship AI models, Claude Opus 4, recommended against deploying an early version of the model due to its tendency to “scheme” and deceive. According to a safety report Anthropic published Thursday, the institute, Apollo Research, conducted tests to see in which…

Read More