Sam Altman’s goal for ChatGPT to remember “your whole life” is both exciting and disturbing

OpenAI CEO Sam Altman laid out an expansive vision for the future of ChatGPT at an event hosted by the VC firm Sequoia earlier this month.

When an attendee asked how ChatGPT can become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.

The ideal, he said, is “a very tiny reasoning model with a trillion tokens of context that you put your whole life into.”

“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus all your data from other sources. And your life just keeps appending to the context,” he described.

“Your company just does the same thing for all of your company’s data,” he added.

Altman may have some data-driven reasons to think this is the natural future of ChatGPT. In the same discussion, when asked about the ways young people use ChatGPT, he said: “People in college use it as an operating system.” They upload files, connect their data sources, and then run “complex prompts” against that data.

Moreover, with ChatGPT’s memory options – which can use previous chats and stored facts as context – he said one trend he has noticed is that young people “don’t really make life decisions without asking ChatGPT.”

“A gross oversimplification is: older people use ChatGPT as a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”

It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Combined with the agents Silicon Valley is currently trying to build, that’s an exciting future to think about.

Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume in the book series you’ve been reading for years.

But the frightening part? How much should we trust a Big Tech for-profit company to know everything about our lives? These are companies that don’t always behave in model ways.

Google, which began life with the motto “Don’t Be Evil,” lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.

Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but this week xAI’s chatbot Grok was randomly discussing a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.

Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising the team had fixed the tweak that caused the problem.

Even the best, most reliable models still make things up from time to time.

So, having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of questionable behavior, this is also a situation ripe for misuse.
