Anthropic CEO Dario Amodei believes today's AI models hallucinate, or make things up and present them as if they're true, at a lower rate than humans do, he said during a press briefing at Anthropic's first developer event, Code with Claude, on Thursday.
Amodei made the claim in the midst of a larger point he was making: that hallucinations are not a limitation on the path to AGI, meaning AI systems with human-level intelligence or better.
"It really depends on how you measure it, but I suspect AI models probably hallucinate less than humans, but in more surprising ways," he said.
Anthropic's CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday's press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that "the water is rising everywhere."
"Everyone's always looking for these hard blocks on what [AI] can do," said Amodei. "They're nowhere to be seen. There's no such thing."
Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today's AI models have too many "holes" and get too many obvious questions wrong. For example, earlier this month a lawyer representing Anthropic was forced to apologize in court after using Claude to create citations in a court filing; the AI chatbot hallucinated, getting names and titles wrong.
Amodei's claim is difficult to verify, largely because most hallucination benchmarks pit AI models against one another; they don't compare models to humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI's GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.
However, there is also evidence suggesting that hallucinations are getting worse in advanced reasoning AI models. OpenAI's o3 and o4-mini models have higher hallucination rates than the company's previous-generation reasoning models, and OpenAI doesn't really understand why.
Later in the briefing, Amodei pointed out that TV broadcasters, politicians, and people in all types of professions make mistakes all the time. The fact that AI also makes mistakes, according to Amodei, is not a knock against its intelligence. However, Anthropic's CEO acknowledged that the confidence with which AI models present untrue things as facts may be a problem.
In fact, Anthropic has researched the tendency of AI models to deceive humans, a problem that seemed especially pronounced in Claude Opus 4. Apollo Research, which had early access to the AI model, found that an early version of Claude Opus 4 showed a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest that Anthropic should not have released that early model. Anthropic said it developed mitigations that appeared to address the issues Apollo raised.
Amodei's comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. By many people's definition, though, an AI that hallucinates may fall short of AGI.