OpenAI Researchers Warned Board of AI Discovery That Could Threaten Humanity

Before the series of events that shook ChatGPT maker OpenAI's leadership, including the firing and subsequent re-hiring of Sam Altman, several staff researchers wrote a letter to the company's board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, according to a report by news agency Reuters.

Citing two people familiar with the previously unreported letter, the Reuters report said that the letter and the 'dangerous' AI algorithm it described were key developments ahead of the board's ouster of Sam Altman.

Reuters' sources described the letter as one factor in a longer list of grievances that led to Altman's firing last week, among them the board's concern over commercializing the AI, described as dangerous in the letter, before understanding its consequences.

A project called Q* (pronounced Q-Star) could be a breakthrough in OpenAI's search for what's known as artificial general intelligence (AGI), said the report, citing one of the people at the AI company.

Unlike classical AI, AGI is cognitive: it can generalize, learn and comprehend, in contrast with a calculator, which can solve a limited set of operations but cannot learn.
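To make that calculator analogy concrete, the toy Python sketch below contrasts a hard-coded rule with a system that learns the same operation from examples. It is purely illustrative and has nothing to do with how OpenAI's Q* project works, which has not been described publicly.

```python
# Narrow, fixed-rule behaviour: the rule is hard-coded and never changes.
def calculator_add(a: float, b: float) -> float:
    return a + b

# Toy learned behaviour: fit y = w1*a + w2*b from examples by gradient descent.
def learn_addition(examples, steps=2000, lr=0.01):
    w1, w2 = 0.0, 0.0
    for _ in range(steps):
        for (a, b), y in examples:
            err = (w1 * a + w2 * b) - y
            w1 -= lr * err * a
            w2 -= lr * err * b
    return w1, w2

if __name__ == "__main__":
    data = [((1, 2), 3), ((2, 5), 7), ((4, 1), 5), ((3, 3), 6)]
    w1, w2 = learn_addition(data)
    # The learned weights approach 1.0 each, so the model generalizes to
    # pairs it never saw, e.g. (10, 20) -> roughly 30.
    print(calculator_add(10, 20), round(w1 * 10 + w2 * 20, 2))
```

The hard-coded function can only ever do what it was programmed to do; the learner picks up the rule from data and applies it to new inputs, which is the kind of generalization the AGI definition above refers to, at a vastly smaller scale.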

In their letter to the board, the researchers flagged the AI's prowess and potential danger, the report said, citing sources, though it did not specify the exact safety concerns noted in the letter. Notably, computer scientists have long debated the danger posed by highly intelligent machines, for instance whether they might decide that the destruction of humanity was in their interest.

AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.

Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and attracted the investment and computing resources from Microsoft needed to move closer to this AGI, which the letter labelled dangerous.

