OpenAI warns that the new ChatGPT Agent has the ability to help develop dangerous bioweapons



OpenAI’s newest product promises to automate data collection, build schedules, book travel, and create slide decks. The company warns it could also, in principle, help build biological weapons. The new ChatGPT Agent, which can act on a user’s behalf, is the first product OpenAI has classified as having a “high” capability for biorisk.

This means the model could provide meaningful assistance to “novice” actors, enabling them to create known biological or chemical threats. The real-world consequence, the company says, is that biological or chemical terror attacks by non-state actors would become more likely and more frequent.

“Some might think that biorisk is not real and that models only provide information that could be found through search. That may have been true in 2024, but is definitely not true today,” OpenAI researcher Boaz Barak wrote in a social media post.

“While we can’t be sure that this model could enable a novice to create severe biological harm, I believe it would have been deeply irresponsible to release this model without comprehensive mitigations.”

OpenAI said it classified the model as high-risk for bio-misuse as a precaution, and that the classification triggers additional safeguards for the tool.

Keren Gu, a safety researcher at OpenAI, said that although the company does not have definitive evidence the model could meaningfully guide a novice to create severe biological harm, it has activated its strongest safeguards as a precaution, including systems to detect and block risky requests and faster incident-response processes.

One of the main difficulties in reducing the potential for biorisk is that the very same capabilities could unlock life-saving medical breakthroughs, one of the great promises of advanced AI models.

The company has grown increasingly concerned about the potential misuse of its models in the development of biological weapons. In a blog post last month, OpenAI said it was stepping up safety testing to reduce the risk of its models being used to help create biological weapons. The AI lab warned that without these measures, models could soon enable “novice uplift”: allowing people with little scientific background to develop dangerous weapons.

“Unlike nuclear and radiological threats, obtaining materials is less of a barrier to creating bio-threats, so security depends more heavily on the scarcity of knowledge and laboratory skills,” Barak said. “Based on our evaluations and those of external experts, an unmitigated ChatGPT Agent could narrow that knowledge gap and offer advice closer to that of a subject-matter expert.”

ChatGPT Agent

OpenAI’s new ChatGPT feature is an attempt to cash in on one of the most hyped, and riskiest, areas of AI development: agents.

The new feature functions as a personal assistant that can handle tasks such as making reservations, shopping online, and compiling lists of job candidates. Unlike previous versions, the tool can actively control a web browser, interact with files, and use a virtual computer to navigate applications such as spreadsheets and slide decks.

To develop the new tool, the company merged the teams behind Operator, its first AI agent, and Deep Research into a single group.

AI labs are racing to build agents that can handle complex digital tasks independently, and the launch follows similar releases from Google and Anthropic. Big Tech companies see AI agents as a commercial opportunity as businesses increasingly deploy AI to automate workflows and routine tasks.

OpenAI acknowledged that greater autonomy introduces greater risk, and said it has emphasized user control to mitigate those risks. For example, the agent asks for permission before taking any significant action, and the user can pause or stop it at any time.


