
What Is AGI: Artificial General Intelligence And New Ethical Implications

Ethical concerns follow any industry where innovation moves quickly, and artificial intelligence is no exception. While AI has delivered practical solutions that are now widely accepted, from everyday conveniences to new industrial applications, a new development looms on the horizon carrying the ethical questions once reserved for early AI: Artificial General Intelligence.


Introduction

When OpenAI first introduced ChatGPT, it accrued over 100 million monthly active users within just two months of launch. UBS, the Swiss investment bank whose analysts track consumer internet trends, noted that it had never seen a consumer application grow that quickly in roughly 20 years of following the internet; by comparison, TikTok took about nine months and Instagram roughly two and a half years to reach the same number of users. Then, on December 6, 2023, the AI community received the latest development news from Google's DeepMind team with the release of Gemini, billed as the company's first natively multimodal model, an announcement that followed the leadership turmoil at OpenAI and reports of an internal project referred to as Q*. All of this raises one question, both for those already working with AI and for those just becoming familiar with it: What is AGI?


What is AGI?

AGI is an acronym for Artificial General Intelligence, which is not just another step in the historical development of artificial intelligence but is widely considered a paradigm shift in intelligence and problem-solving.

As with many modern marvels, the roots of AI reach back to the first half of the 20th century, starting with Alan Turing, who developed the theoretical concept of a universal machine. That idea fed into the first electronic computers built during and just after WWII, such as Colossus and ENIAC, machines designed to perform specific calculations that marked the pragmatic origin of electronic computing and launched the notion of building intelligent machines. At the Dartmouth Workshop in 1956, the term "artificial intelligence" was coined; pioneers such as John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon discussed the possibility of creating intelligent machines and established AI as a field of study. Early research, however, was limited to rule-based systems and symbolic reasoning, built on the notion that human-like intelligence could be achieved through a set of simple rules; it produced programs such as the Logic Theorist and the General Problem Solver, which could solve only narrowly defined problems. The next few decades saw development cool off: expectations had been high, progress was slow, and funding dried up until a resurgence in the 1990s and 2000s driven by machine learning. Machine learning, a subfield of AI focused on algorithms that learn from data, brought advances in neural networks, support vector machines, and other techniques with practical applications in computer vision and speech recognition. Deep learning, a subset of machine learning built on multi-layered artificial neural networks, took off in the 2010s; with large datasets and far more powerful GPUs readily available, it drove breakthroughs in natural language processing and image recognition, which is where the field stands today and continues to advance.

Unlike common AI platforms, referred to as narrow AI, AGI is a generalist. In broad terms, AGI is theorized to be a form of artificial intelligence that mimics human-like cognitive abilities: the capacity to learn, understand, and apply knowledge across a wide spectrum of subjects at a level that matches or surpasses human intellect. Narrow AI, by contrast, is optimized for a specialized application; large language models such as ChatGPT, for example, are tuned specifically to understand and respond to written language. For now, AGI remains hypothetical, but with multimodal systems such as Google Gemini arriving and speculation surrounding OpenAI's Q* project, it is fair to say that AGI is being actively pursued as the next major advancement in artificial intelligence.


What are the Ethical Implications of AGI?

When AI began to gain wide acceptance, many people raised concerns about its ethical implications; Elon Musk and more than a thousand other prominent names in technology signed an open letter calling for a six-month pause on training the most powerful AI systems, citing ethical and philosophical concerns.

The primary concerns surrounding AGI are inherent in its defining ability to learn and act autonomously, and specifically in the possibility that it could develop its own moral code that is not aligned with humanity's. Artificial intelligence is exactly what the name implies: artificial. While a narrow model is created for a specific purpose, a general model can learn much like a human does, and that learning could produce values that conflict with human ethics and ultimately cause harm. Ensuring that these systems remain aligned with human values will be imperative, yet designing systems that genuinely understand and respect those values is hard, which makes value alignment one of the most critical areas of AGI research. Failing to solve the alignment problem would be a severe miscalculation, potentially unleashing systems that act maliciously. One way to counteract this is to restrict the level of autonomy and decision-making power that AGI systems are given, limiting the scope for unintended consequences.

Datasets are another major concern, because data can contain biases. If an AGI model learns from biased data and then builds its capabilities on top of it, it is likely to amplify that bias, leading to morally objectionable results. The model's ability to learn and improve itself raises further concerns: as its components become more capable, an AGI could develop at an exceptional rate and quickly reach superhuman intelligence. That would limit our ability to control, or even understand, the decision-making processes the system builds for itself, making it difficult to intervene in its moral code or to attribute accountability for its actions. It could also lead to value drift, with goals developing independently of the initial programming and resulting in potentially dangerous behavior. And if this technology is not rolled out carefully and in controlled settings, there is the further, dire concern that AGI systems could be deliberately used with malicious intent.


Future Progression of AGI

The future progression of AGI will depend on responsible development along with public education and awareness. Several efforts to encourage responsible development already exist, with organizations such as the Partnership on AI and the IEEE helping to establish accountability and safety standards as artificial intelligence development continues. A key responsibility will also lie with individual AI companies themselves; Google DeepMind, for example, outlined its process for responsibly developing its first multimodal model in its Gemini report. Regulatory oversight will be essential to reinforce safe development practices for AGI, and industry will need to contribute its expertise to help shape those frameworks, but both will matter only as much as public education and knowledge, which means informing the public about the potential risks of AGI as well as its benefits. In 2023, the White House took several steps toward public safety in artificial intelligence, including an executive order to help manage the risks of AI development, building on the earlier Blueprint for an AI Bill of Rights aimed at both consumers and developers.


Conclusion

The idea of AGI understandably evokes strong feelings, both a sense of excitement at the innovation and legitimate concern. AGI is far more than the next incremental step in artificial intelligence; it would be a full shift in how the industry and its services function, and it will surely not be the final development in the history of AI. While that prospect does raise concerns, those concerns are not being neglected by industry leaders or by those who can provide oversight. The regulatory frameworks and guidelines may still be a work in progress, but combined with continued public education on AI, they offer a manageable path toward a smarter tomorrow.
