Researchers from OpenAI, Google DeepMind, Anthropic, and a broad coalition of companies and nonprofit groups are calling for deeper investigation into techniques for monitoring the so-called thoughts of AI reasoning models, in a position paper published on Tuesday.
A key feature of AI reasoning models, such as OpenAI's o3 and DeepSeek's R1, is their chains-of-thought, or CoTs — an externalized process in which AI models work through problems, similar to how humans use a scratch pad to work through a difficult math question. Reasoning models are a core technology for powering AI agents, and the paper's authors argue that CoT monitoring could be a core method for keeping AI agents under control.
"CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions," the researchers wrote in the position paper. "Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved."
The position paper asks leading AI model developers to study what makes CoTs "monitorable" — in other words, what factors can increase or decrease transparency into how AI models really arrive at answers. The paper's authors say CoT monitoring may be a key method for understanding AI reasoning models, but they note that it could be fragile, and caution against any interventions that could reduce its transparency or reliability.
The paper's authors also call on AI model developers to track CoT monitorability and to study how the method could one day be implemented as a safety measure.
Notable signatories of the paper include OpenAI chief research officer Mark Chen, Safe Superintelligence CEO Ilya Sutskever, Google DeepMind co-founder Shane Legg, xAI safety adviser Dan Hendrycks, and Thinking Machines co-founder John Schulman. First authors include researchers from the AI Safety Institute and Apollo Research, with other signatories coming from Amazon, Meta, and UC Berkeley.
The paper marks a moment of unity among many of the AI industry's leaders in an attempt to boost research around AI safety. It comes at a time when tech companies are caught in fierce competition — which has led Meta to poach top researchers from OpenAI, Google DeepMind, and Anthropic with million-dollar offers. Some of the most highly sought-after researchers are those building AI agents and AI reasoning models.
"We're at this critical time where we have this new chain-of-thought thing. It seems pretty useful, but it could go away in a few years if people don't really concentrate on it," said Bowen Baker, an OpenAI researcher who worked on the paper. "Publishing a position paper like this, to me, is a mechanism to get more research and attention on this topic."
OpenAI publicly released a preview of the first AI reasoning model, o1, in September 2024. In the months since, the tech industry has been quick to release competitors that exhibit similar capabilities, with some models from Google DeepMind, xAI, and Anthropic showing even more advanced performance on benchmarks.
However, relatively little is understood about how AI reasoning models work. While AI labs have excelled at improving the performance of AI over the last year, that hasn't necessarily translated into a better understanding of how these models arrive at their answers.
Anthropic has been one of the industry's leaders in figuring out how AI models really work — a field called interpretability. Earlier this year, CEO Dario Amodei announced a commitment to crack open the black box of AI models by 2027 and to invest more in interpretability, calling on OpenAI and Google DeepMind to research the topic further as well.
Early research from Anthropic has indicated that CoTs may not be a fully reliable indication of how these models arrive at answers. At the same time, OpenAI researchers have said that CoT monitoring could one day be a reliable way to track alignment and safety in AI models.
The goal of position papers like this one is to signal-boost and draw more attention to nascent areas of research, such as CoT monitoring. Companies like OpenAI, Google DeepMind, and Anthropic are already investigating these topics, but this paper may encourage more funding and research into the space.