
Anthropic, now valued at $61 billion, says its newest AI models are its most powerful yet, giving it an edge over OpenAI and Google


Anthropic unveiled its latest generation of "frontier," or cutting-edge, AI models, Claude Opus 4 and Claude Sonnet 4, at its first developer conference in San Francisco on Thursday. The AI startup, valued at more than $61 billion, called the much-anticipated new Opus model "the world's best coding model," saying it delivers sustained performance on long-running tasks that require focused effort and thousands of steps. AI agents powered by the new models can analyze thousands of data sources and carry out complex actions.

The release underscores the fierce competition among companies racing to build the world's leading AI models. Google, for its part, demoed an experimental research model called Gemini Diffusion this week. On a benchmark measuring how well large language models handle software engineering tasks, Anthropic's two new models outperformed OpenAI's latest models, while Google's best model trailed behind.

Some early testers have already had access to the models for real-world assignments. In one example provided by the company, e-commerce company Rakuten deployed Opus 4 on a complex coding project, where the model worked through the task autonomously.

Dianne Penn, a member of Anthropic's technical staff, told Fortune that "this is a pretty significant change and leap," particularly as AI systems evolve from "copilots," or assistants that work alongside a user, toward "agents" that can operate more independently, progressing toward something like virtual staff.

Claude Opus 4 has new capabilities, including following instructions more precisely and improved "memory." Historically, these systems have not remembered everything they did before, Penn said, but "we were very intentional about unlocking long-term task awareness." The model uses a file system to track its progress, then checks what is stored in that memory to adjust its plan and strategy based on real-world conditions.

Both models can also alternate between reasoning and tool use, such as searching the internet, and they can use multiple tools at once.

"We really see this as a race to the top," said Michael Gerstenhaber, AI platform product lead at Anthropic. "We want to make sure that AI improves for everyone, so we put pressure on all the labs to increase it in a safe way." That includes, he explained, the company demonstrating its own safety standards.

Claude Opus 4 launches under stricter safety protocols than any previous Anthropic model. The company's Responsible Scaling Policy (RSP), first released publicly in September 2023, commits Anthropic not to train or deploy models capable of causing catastrophic harm unless it has put in place safety and security measures that keep the risks at an acceptable level. Anthropic was founded in 2021 by former OpenAI employees who were concerned that OpenAI was prioritizing speed and scale over safety and controls.

In October 2024, the company updated the policy with what it described as a more flexible and nuanced approach to assessing and managing AI risks, while maintaining its commitment not to train or deploy models unless it has implemented adequate safeguards.

Until now, Anthropic's models have all been classified as AI Safety Level 2 (ASL-2), which provides a baseline level of safe deployment and model security. An Anthropic spokesperson said the new Claude model might still meet the ASL-2 threshold, but the company chose to ship it with stronger protections, including tighter internal safeguards against theft of the model's weights and stronger defenses against misuse.

Models classified at Anthropic's third safety level meet more dangerous capability thresholds under the company's Responsible Scaling Policy and are powerful enough to pose significant risks, such as assisting in the development of weapons or automating AI R&D. Anthropic confirmed that Opus 4 does not require the highest level of protections, which would be classified as ASL-4.

"We anticipated that we might need to do this when we launched our last model, Claude 3.7 Sonnet," the spokesperson said. "In that case, we determined that the model did not require ASL-3 protections. However, we acknowledged the very real possibility that, given the pace of progress, near-future models might."

When it came time to release Claude Opus 4, the spokesperson explained, Anthropic decided to proactively launch it under the ASL-3 standard. "This approach allowed us to develop, test, and refine these protections before we needed them." The model does not require ASL-4 security. Anthropic did not say what prompted the decision to move to ASL-3.

Companies have typically released model or system cards, which provide detailed information about a model's capabilities and safety evaluations, alongside new launches. Penn told Fortune that Anthropic will release model cards with the new launch of Opus 4 and Sonnet 4, and a spokesperson confirmed they would be published today.

Recently, however, companies including OpenAI and Google have delayed releasing model cards. In April, OpenAI was criticized for releasing its GPT-4.1 model without a model card; the company said GPT-4.1 was not a "frontier" model and did not require one. And in March, Google published the Gemini 2.5 Pro model card weeks after the model's release, in a document critics described as thin and worrisome.

This story was originally featured on Fortune.com.
