OpenAI’s latest AI models have a new safeguard to prevent biorisks

OpenAI says it has deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. According to OpenAI's safety report, the system aims to prevent the models from offering advice that could help someone carry out potentially harmful attacks. O3 and o4-mini represent…


California AI bill SB 1047 aims to prevent AI disasters, but Silicon Valley warns it will cause one

Update: California’s Appropriations Committee passed SB 1047 on Thursday, August 15, with significant amendments that change the bill. You can read about them here. Outside of sci-fi films, there’s no precedent for AI systems killing people or being used in massive cyberattacks. However, some lawmakers want to implement safeguards before bad actors make that dystopian…
