Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks


In a new report, a California-based policy group co-led by AI pioneer Fei-Fei Li suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policy.

The 41-page interim report, released on Tuesday, comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom after his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year that AI risks needed a broader assessment to inform legislators.

In the report, Li and her co-authors argue in favor of laws that would increase transparency into what frontier AI labs are building. Stakeholders from across the ideological spectrum reviewed the report before its release, including AI safety advocates such as Turing Award winner Yoshua Bengio as well as critics of SB 1047 such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may require laws that compel AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates increased standards around third-party evaluations of these metrics and corporate policies, as well as expanded whistleblower protections for AI company employees and contractors.

Li and her co-authors write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. At the same time, they argue that AI policy should not only address current risks, but also anticipate future consequences that could occur without sufficient safeguards.

"For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could cause extensive harm," the report states, taking seriously the possibility that those who speculate about the most extreme risks are right.

The report recommends a two-pronged strategy to increase transparency in AI model development: trust, but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit their testing claims for third-party verification.

The report, whose final version is due in June 2025, endorses no specific legislation, but it has been well received by experts on both sides of the AI policy debate.

Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to state Senator Scott Wiener, who introduced SB 1047 last year. In a press release, Wiener said the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."

The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety advocates, whose agenda has lost ground in the last year.


