OpenAI still has a governance problem


It can be hard being a chatbot. Last month, ChatGPT’s “default personality” became so sycophantic that OpenAI had to roll back an update. (Perhaps the company’s training data drew on transcripts of US President Donald Trump’s cabinet meetings. . .)

OpenAI had wanted to make its chatbot more intuitive, but found that users were receiving overly supportive and disingenuous responses. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right,” the company said in a blog post.

Curing sycophancy may not be the most pressing dilemma facing the AI company, but it points to a bigger one: forging a coherent personality for the company as a whole. This week, OpenAI was forced to backtrack on its latest planned corporate restructuring, which was designed to turn the company into a for-profit organisation. Instead, its for-profit arm will become a public benefit corporation, remaining under the control of the non-profit board.

That will not resolve the structural tension at the heart of OpenAI. Nor will it appease Elon Musk, one of the company’s co-founders, who has taken legal action against OpenAI for abandoning its original purpose. Should the company accelerate AI product deployment to keep its financial backers happy? Or should it take a more deliberate approach that stays true to its humanitarian intent?

OpenAI was founded in 2015 as a non-profit research laboratory dedicated to developing artificial general intelligence for the benefit of humanity. But both the company’s mission and its definition of AGI have been evolving ever since.

Sam Altman, OpenAI’s chief executive, quickly realised that the company would need vast amounts of capital to stay at the frontier of AI research. To that end, OpenAI created a for-profit subsidiary in 2019. Its chatbot ChatGPT has been a smash hit, delighting investors who valued the company at $260bn during its latest fundraise. With 500mn users a week, OpenAI has become an “accidental” consumer internet giant.

Altman, who was fired and rehired by the non-profit board in 2023, says he wants to build a “brain for the world”, which could require hundreds of billions of dollars of investment. The only problem with this wild-eyed ambition, as the tech blogger Ed Zitron rants in salty terms, is that OpenAI has still not developed a viable business model. Last year the company spent $9bn and lost $5bn. Is its financial valuation based on a hallucination? The pressure on OpenAI to commercialise its technology will only mount.

Moreover, the definition of AGI keeps shifting. Traditionally, it referred to the point at which machines surpass humans across a wide range of cognitive tasks. But in a recent interview with Stratechery’s Ben Thompson, Altman acknowledged that the term had become “almost completely devalued”. He accepted, however, that AGI might be defined as an autonomous coding agent that could write software as well as any human programmer.

By that definition, the big AI companies appear to think they are close to achieving AGI. One tell is their hiring practices. According to data from Zeki, the leading US AI companies hired some 500,000 software engineers between 2011 and 2024. But recently their net hiring rate has fallen to zero, as these companies anticipate agents taking over many of those tasks.

[Chart: monthly hires of software engineering workers by US AI companies]

In a recent research paper, Google DeepMind, which itself aims to develop AGI, highlighted four main areas of risk from increasingly autonomous AI models: misuse by bad actors; misalignment, when an AI system pursues unintended goals; mistakes that cause accidental harm; and multi-agent risks, when unpredictable interactions between AI systems produce bad results. These are all plausible concerns, some of them carrying potentially catastrophic risks, and they demand collaborative solutions. The more powerful the AI models become, the more careful their developers must be in deploying them.

How frontier AI companies are governed is therefore not just a matter for corporate boards and investors, but for all of us. OpenAI, with its conflicting impulses, remains a worry in this regard. Wrestling with sycophancy may prove to be the least of our problems as we approach AGI, however you define it.

john.thornhill@ft.com
