In the ever-evolving landscape of artificial intelligence, a recent upheaval at OpenAI sent shockwaves through the tech world. This article delves into the mysterious ‘Q*’ project, rumored to be OpenAI’s bridge to Artificial General Intelligence (AGI). We explore the controversies, ethical considerations, and potential implications.
Introduction
November 17th, 2023, was a date that shocked the artificial intelligence community and the tech industry as a whole. Sam Altman, co-founder and CEO of OpenAI, the developers of ChatGPT, was removed from his position. According to a report from Reuters, a letter had allegedly been circulating among OpenAI's board of directors claiming that a powerful and concerning discovery had been made, with even prominent names in the tech industry expressing alarm. Even a month later, no solid explanation has been announced for why Altman's firing was necessary, nor for his subsequent rehiring after a second letter, signed by over 700 employees threatening to walk out and join him at Microsoft, demanded his reinstatement.
While there is an argument that Altman was acting without sufficient caution in building and releasing ever more advanced and capable AI models, there is one consensus most accept: OpenAI had developed something beyond the current capabilities of artificial intelligence. This something, often referred to as Q-star and written as Q*, is speculated to be a new advancement in artificial intelligence, a progression towards the future, but perhaps one arriving too soon.
What is Q*?
Q* is believed to be a project that OpenAI is actively working on, though the company has made no official statement confirming or denying it. Nonetheless, the Q* name has been attached to OpenAI's rumored breakthrough, and it is strongly speculated that the project is real. Q*'s capabilities are not well understood, but it is likely a progression towards artificial general intelligence (AGI): an artificial intelligence with the potential to match or surpass human cognitive abilities. AGI can learn, adapt, and understand across a wide variety of domains and tasks, similar to human intelligence. A model such as what is speculated of Q* is unlike the current artificial intelligence most people are familiar with. A model such as ChatGPT is an LLM, a Large Language Model, which has been deliberately designed with a narrow scope to excel at natural language tasks. LLMs can generate and process human language but do not possess general intelligence; models like ChatGPT are considered "specialists," restricted purely to linguistic tasks. In contrast, AGI could be regarded as a "generalist," possessing knowledge across a broad range of domains.
Reportedly, Q* taught itself how to solve basic math problems, raising concern about this advancement. Math is a benchmark in the development of artificial intelligence; it is an indicator that a model has the ability to reason. An LLM is effective with language because it can reliably predict text, but solving problems where there is only one correct answer implies near-human reasoning capabilities, and with the ability to reason come other implications, such as the ability to write code or draw conclusions from a research paper. It should be noted that the problems Q* reportedly solved were only elementary in difficulty, not problems that would require multiple steps or an understanding of hard rules on subjects that can be abstract in nature.
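To make the benchmarking idea concrete, below is a minimal sketch of how an exact-answer math evaluation could be scored. Nothing here reflects OpenAI's actual method; `ask_model` is a hypothetical stand-in for a call to any language model, and its canned answers exist only so the snippet runs. The point is that, unlike open-ended text generation, each problem has exactly one correct answer, so credit is all or nothing.

```python
# A minimal sketch, not OpenAI's method: exact-match scoring on
# elementary math problems, the kind of benchmark described above.

def ask_model(question: str) -> str:
    """Hypothetical stand-in for a call to any LLM API."""
    canned = {"What is 7 + 5?": "12", "What is 9 * 6?": "54"}
    return canned.get(question, "unknown")

# Each problem has exactly one correct answer.
problems = [
    ("What is 7 + 5?", "12"),
    ("What is 9 * 6?", "54"),
    ("What is 100 - 37?", "63"),
]

def exact_match_score(problems) -> float:
    """Fraction of problems where the model's answer matches exactly."""
    correct = sum(
        ask_model(question).strip() == answer
        for question, answer in problems
    )
    return correct / len(problems)

print(f"Accuracy: {exact_match_score(problems):.0%}")  # Accuracy: 67%
```

Exact-match scoring is deliberately unforgiving: a model cannot hedge its way to partial credit the way it can with free-form language, which is why performance on such problems is read as a signal of reasoning.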
As a counterpoint, mathematics may not be the best benchmark for evaluating a model. Earlier in December, Google revealed the project its artificial intelligence team, DeepMind, had been working on: Gemini, a multimodal model that, while still an LLM, is capable of working on mathematical problems.
What is the Surrounding Controversy?
The development of Q* is seen as controversial by some, while others see it as a natural and expected progression. Yann LeCun, Chief AI Scientist at Meta (Facebook) and a highly regarded voice in the AI community, has stated that nearly all top AI labs are working towards some level of AGI. Additionally, this is not the first time the release of an AI model has evoked such strong emotions. Google DeepMind released Gato in May 2022 as one of the first "generalist" AI models. Gato can perform over 600 tasks, including captioning images, stacking blocks with a robotic arm, and even playing Atari video games; however, as a generalist model, it performs each task less effectively than a model developed for a single task.
Q* faced further controversy behind the scenes, within the office politics of OpenAI. Sam Altman and Ilya Sutskever, OpenAI's chief scientist and a board member, reportedly differed ideologically, with Sutskever taking a cautious approach to developing the technology and harboring a concern that Altman was pushing its development too rapidly.
The internal drama also involved other board members, many of whom found Altman to be calculating and deceptive; according to Sutskever, Altman reportedly had two board members working on the same project separately and offered them inconsistent opinions.
Will Q* Be Available?
As with various other technologies, it is hard to say whether it will become available, particularly for a model whose existence has yet to be officially confirmed or denied. However, assuming OpenAI has developed Q*, it is unlikely to be released anytime soon, especially as the general public is still becoming familiar with GPT-4, custom GPTs, and API assistants. It makes sense to let people grow comfortable with the current technology before releasing anything more complex, unless it arrives in a smaller, scalable form, such as the Nano versions of Gemini, which, while small, are still highly capable.
How Can AGI Be Used Safely?
Currently, with the AI models available to the general public, there is little need for concern about the safe use of AGI, as such systems remain in isolated research environments and are being developed to build more robust tools for the future. However, the risks of AGI and other increasingly capable AI models can be mitigated by a few factors, beginning at the development stage. During Google DeepMind's development of Gemini, the team continuously evaluated and assessed the model, allowing time to fine-tune and correct notable deficiencies before advancing the technology further. Additionally, with AI becoming more integrated into daily life, concerns have reached the federal level of government: the White House has released an AI Bill of Rights to provide safeguards for users, and on December 19th, 2023, the Biden Administration announced that it would seek public input on AI concerns to develop the first set of standards for the safe development of generative artificial intelligence.
Conclusion
The events surrounding Sam Altman's reinstatement as CEO of OpenAI, driven by the unwavering loyalty and commitment of over 730 employees willing to join him at a new organization, should reflect the ethical standards guiding OpenAI as it continues developing stronger models. This dedication to ethical principles is a testament to OpenAI's core values and its commitment to responsible AI development. As news continues to highlight the existence and capabilities of Q* and its role as a potential stepping stone towards AGI, it is crucial to approach this technology with understanding, respect, and appreciation for its transformative potential. Moving forward, the integration of AGI and its precursors should be guided by principles prioritizing the greater good, ensuring that these technologies become invaluable assets in addressing the complex challenges of our time.