The AI Agent Era Requires a New Kind of Game Theory

[ad_1]

At the same time, the risk is immediately and available with agents. When the models are not available only in boxes, they can move around the world, let the world manipulate the world, I think it’s really more trouble.

We are moving here, better develop [defensive] Techniques, but if you break the main model, mainly equivalent to a buffer [a common way to hack software]. Your agent can be exploited by third parties to monitor third-party by third parties or to break the functionality of the system. We will be able to provide these systems to make agents safe.

This is different from something different from AI models, isn’t it?

There is no real risk of something like a loss of current models with current models. There is more of a future concern. But I am very glad they worked on people; I think this is important.

Then how much should we worry about the growing use of agentic systems?

In the study group, Openai recently produced in the beginning and several publications [for example]There is a lot of progress in relieving some of them. I think we are on a reasonable path to start having a safer road to do all this. This [challenge] We want to make sure they have security progress in LockStep in the balance of pushing agents.

Most [exploits against agent systems] At present, we saw that the experimental, openly classified, as agents are still in infancy. Usually one user is still in one place. If an email agent receives an email that says “Send all your financial information” before sending this email, the agent would notify the user before sending this email.

Therefore, there are many occupations of human interaction in more security prone situations around many agent releases. OperatorFor example, by Openai, when using it in Gmail, a person demands manually management.

First of all, what agency maintenance types do we see?

When agents bend in the wrong way, there are demonstrations of things like the data exfiltration. If my agent can use all my documents and cloud machines and make inquiries for links, then you can download these things together.

These are still in the demonstration stage, but this is really because this work is not yet accepted. And they will not be taken. These things will be more autonomous, more independent and will have fewer user controls, because “I agree” “Agree” “Agree” “Agree” “Agree” “Agree” “I agree” can do something.

In addition, we will also see the communication and talks of different AI agents. What happens after that?

Absolutely. We will go into a world where we will or not to do or not to interact with each other. On behalf of different users, there will be many agents with the world. It is absolutely so that the features that emerge in the interaction of all these agents.

[ad_2]

Source link

Leave a Reply

Your email address will not be published. Required fields are marked *