Technology

What is Project Q-Star that allegedly led to the firing of OpenAI's Sam Altman

Sushim Mukul | November 24, 2023, 14:00 IST

Just before the abrupt departure of OpenAI CEO Sam Altman on November 17, a group of OpenAI researchers penned a letter to the board of directors, raising concerns about a new artificial intelligence discovery with potential implications for humanity's safety, as reported by Reuters.

The undisclosed letter, which may have raised other concerns as well, described a new AI algorithm known as Project Q* (pronounced Q-Star), which reportedly played a significant role in Altman's removal from the company's leadership, according to sources familiar with the matter cited by Reuters.

However, after a four-day absence and a brief stint at Microsoft, Altman was reinstated at OpenAI.

ALSO READ: After being fired as OpenAI CEO, Sam Altman joins Microsoft

Project Q*

  • The undisclosed letter highlighted concerns over the commercialisation of technological advancements without a full understanding of their consequences.
  • The letter reportedly had mentioned "Project Q*," or Q-Star, which some insiders believe could mark a significant breakthrough in OpenAI's pursuit of artificial general intelligence (AGI).
  • Sources speaking to Reuters disclosed that, given substantial computing resources, the Q* model was able to solve certain mathematical problems.

AGI refers to autonomous systems that surpass humans at most economically valuable tasks.

ALSO READ: OpenAI's Sam Altman invests $2.3 million in Indian teens' AI startup. But what are they doing?

AI vs AGI

  • Generative AI platforms, like ChatGPT, Midjourney and Stability AI, excel at tasks like writing, language translation, and generating images and other multimedia content based on the prompts fed to them.
  • Mathematical calculations, where there is a single correct answer, demand a higher level of reasoning, closer to human intelligence. Researchers anticipate that such capabilities, which AGI could bring to the table, might be applied to scientific research, broadening the scope of AI applications.

ALSO READ: OpenAI CEO Sam Altman does not trust ChatGPT blindly, and you shouldn't either

Safety concerns

  • The researchers' letter to the board reportedly raised concerns about the prowess and potential dangers of AI, although specific safety issues were not detailed, Reuters said.
  • Discussions about the potential dangers posed by highly intelligent machines, including the possibility of machines deciding that the destruction of humanity is in their interest, have long been ongoing among computer scientists, the report added.

ALSO READ: What living with AI looks like

Altman's role

  • Sam Altman, instrumental in driving ChatGPT's unprecedented growth, drew significant investment and computing resources from Microsoft to advance towards AGI.
  • Altman had hinted at the development of new tools for major advances at a summit of world leaders in San Francisco before his dismissal by the board.

ALSO READ: Sam Altman and OpenAI saga summed up in 5 memes

Last updated: November 24, 2023 | 14:07