The mystery of what led the OpenAI board to take the precipitous step of firing CEO Sam Altman may now have been solved. A new report says a number of researchers warned the board of a new breakthrough which they said could threaten humanity – after Altman seemingly failed to do so himself.
In a letter to the board, the researchers suggested that the breakthrough – dubbed Q* and pronounced Q-Star – could let AI “surpass humans in most economically valuable tasks” …
The story so far
OpenAI announced on Friday that four members of the company’s board had fired Altman and removed president Greg Brockman from the board. Only the vaguest of reasons was given: that Altman was allegedly “not consistently candid in his communications with the board.”
The tech world came out in support of Altman, and major OpenAI investors tried to get him reinstated. Negotiations were held between the board and senior execs, but these were not successful.
Microsoft offered jobs to Altman, Brockman, and anyone else from the OpenAI team who wanted to join them. Almost the entire staff then sent an open letter to the board stating that they would resign unless Altman was reinstated and the board fired.
OpenAI initially said that CTO Mira Murati would act as interim CEO, but within 48 hours said that Twitch co-founder Emmett Shear would replace her – also as an interim hire.
A second set of negotiations was then held. These resulted in all but one of the board members being removed, and Sam Altman being reinstated as CEO – with some notable compromises.
What is the Q* breakthrough?
Currently, if you ask ChatGPT to solve a math problem, it will still use its predictive-text-on-steroids approach of compiling an answer by using a huge text database and deciding on a word-by-word basis how a human would answer. That means it may or may not get the answer right, but either way it has no actual mathematical skills.
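To make that distinction concrete, here’s a deliberately crude toy sketch: the first function mimics word-by-word prediction of a remembered continuation, while the second actually computes the result. It bears no relation to how GPT models are really implemented, and every name in it is made up for illustration.

```python
# Toy sketch only: contrasts "predict the next word from remembered text"
# with "actually do the math". Not how GPT models work internally.

def toy_next_token(prompt: str) -> str:
    """Pretend language model: returns whatever word it has seen follow this
    prompt before, with no understanding of arithmetic."""
    memorised_continuations = {
        "What is 7 x 8? The answer is": "54",  # plausible-looking, but wrong
    }
    return memorised_continuations.get(prompt, "unknown")

def predictive_text_answer(question: str) -> str:
    # Appends the most likely next word; only right if the remembered text was.
    return toy_next_token(f"{question} The answer is")

def computed_answer(a: int, b: int) -> int:
    # Genuinely solving the problem, which is what Q-Star is claimed to do.
    return a * b

print(predictive_text_answer("What is 7 x 8?"))  # "54" - confidently wrong
print(computed_answer(7, 8))                     # 56 - always correct
```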
OpenAI appears to have made a breakthrough in this area, successfully enabling an AI model to genuinely solve mathematical problems it hasn’t seen before. This development is said to be known as Q*. Sadly the team didn’t use a naming model smart enough to avoid something which looks like a pointer to a footnote, so I’m going to use the Q-Star version.
Q-Star’s current mathematical ability is said to be that of a grade-school student, but it’s expected that this ability will rapidly improve.
Does the Q-Star model threaten humanity?
On the face of it, an AI system which can solve equations doesn’t seem like the stuff of dystopian nightmares. Either humans go to work in the salt mines, or Q-Star works out which of four lines is parallel to 2y=x+7.
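For reference, that sort of question needs only a single rearrangement into slope-intercept form; the working below is included purely to illustrate the grade-school level involved.

```latex
2y = x + 7 \;\Longrightarrow\; y = \tfrac{1}{2}x + \tfrac{7}{2}
```

Any parallel line must share that gradient of ½, so whichever of the four candidates has a slope of ½ is the answer.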
But a Reuters report says that the Q-Star research could point the way to the holy grail of AI: artificial general intelligence (AGI).
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
AGI is the name given to an AI system smart enough to perform any task a human being can perform. If this goal is ever achieved, it would effectively lead to almost every human job being replaced by AI.
The firing of Altman may now make sense
If the Q-Star breakthrough does indeed make the development of AGI even slightly more likely, and Altman failed to inform the board of this fact, then that would explain the board’s comment about his lack of candor, and the perceived urgency of the dismissal.
However, it’s worth noting that this is – as far as we know – a concern shared by a relatively small number of researchers, as indicated by the vast majority of OpenAI staff backing Altman against the board. The smart money (or smart people) would appear to be on the side of this being an overblown concern.
All the same, it does make sense for the existing corporate structure to be retained, in which an independent board – with no financial stake in the company’s commercial wing – provides oversight, and makes decisions on how far and fast the profit-making company should push.
Or not …
However, The Verge cites one source disputing the Reuters story.
A person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing.
That’s just one person, and the Reuters piece offers a very plausible explanation, but it seems the mystery and the drama may continue for some time yet!