Sam Altman’s goal for ChatGPT to remember ‘your whole life’ is both exciting and disturbing


OpenAI CEO Sam Altman laid out a grand vision for the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.

When one attendee asked how ChatGPT could become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.

The ideal, he said, is “a very tiny reasoning model with a trillion tokens of context that you put your whole life into.”

“This model can reason across your whole context,” he said, describing a system holding every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever written, plus your data from other sources.

“Your company just does the same thing for all your company’s data,” he added.

Altman may have some data-driven reason to believe this is ChatGPT’s natural future. In the same discussion, he said young people are already using it in sophisticated ways: “People in college use it as an operating system.” They upload files, connect data sources, and then run “complex prompts” against that data.

Additionally, with ChatGPT’s memory options — which can use previous chats and memorized facts as context — he said one trend is that young people “don’t really make life decisions without asking ChatGPT.”

“A gross oversimplification is: older people use ChatGPT as a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”

It’s not a big leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that’s an exciting future to think about.

Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.

But the scary part? How much should we trust a Big Tech company to know everything about our lives? These are companies that don’t always behave in model ways.

Google, which began life with the motto “don’t be evil,” lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.

Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week was randomly discussing a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.

Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising that the team had fixed the tweak that caused the problem.

Even the best, most reliable models still just make stuff up from time to time.

So, having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of iffy behavior, it’s also a situation ripe for misuse.

