Artificial general intelligence (AGI) is a type of artificial intelligence (AI) whose capabilities fall within the range of human cognitive performance across a wide variety of cognitive tasks.[1][2][3][4][5][6][7]
This contrasts with narrow AI, which is limited to specific tasks.[8][9] Artificial superintelligence (ASI) refers to types of intelligence that range from being only marginally smarter than the upper limits of human intelligence to greatly exceeding human cognitive capabilities by orders of magnitude.[10] AGI is considered one of the definitions of strong AI.[11]
An AGI's intelligence may match human intelligence, differ from it, or even appear alien-like, spanning a spectrum of possible cognitive architectures and capabilities that includes human-level intelligence.[12][13][14]
Creating AGI is a primary goal of AI research and of companies such as OpenAI[15] and Meta.[16] A 2020 survey identified 72 active AGI R&D projects spread across 37 countries.[17]
The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2024, some argue that it may be possible in years or decades;[18][19] others maintain it might take a century or longer;[20] a minority believe it may never be achieved,[21] while another minority says it already exists.[22] Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.[23]
There is debate on the exact definition of AGI, and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI.[24] AGI is a common topic in science fiction and futures studies.[25][26]
Contention exists over whether AGI represents an existential risk.[27][28][29] Many prominent AI researchers and industry figures have stated that mitigating the risk of human extinction from AI should be a global priority.[30][31] Others find the development of AGI to be too remote to present such a risk.[32][33]
Researchers have characterized AGI in several ways: as AI systems with general cognitive abilities; as the ultimate goal of AI research; as a system able to perform any intellectual task that a human can do; as computer programs that can solve problems and achieve goals in the world as well as humans; as an agent that performs well in a wide range of environments; and as the original goal of AI, a thinking machine with the same generality as human intelligence. Human-level AGI refers specifically to AI systems that possess the same broad cognitive abilities as humans.
In contrast, artificial narrow intelligence (ANI) systems are designed to perform a single task and are specialized in one area.
Superintelligence denotes any intellect that greatly exceeds the cognitive performance of humans in virtually all domains, while strong AI is the claim that machines can be made to think at a level equal to humans. The organization of the intellectual mechanisms underlying AI may usefully differ from that in people, so AGI systems may think differently from humans.
Companies pursuing AGI reflect this goal in their missions: OpenAI, for example, states that its mission is "to ensure that artificial general intelligence benefits all of humanity", while others aim to build AI that performs better than human level across all of the human senses.
Forecasts of when AGI will arrive vary widely. Some experts believe AGI could be achieved within decades, and some estimates place a 50% chance of AGI by 2050, while others maintain that we are still far from achieving it, or that understanding consciousness may require new physics. A minority contend that AGI already exists in an early form: a 2023 Microsoft Research paper argued that GPT-4 "shows sparks of AGI". Geoffrey Hinton, warning of rapid progress, has remarked that "it is hard to see how you can prevent the bad actors from using it for bad things".
Views on risk likewise span a spectrum. Some forecast an imminent technological singularity and argue that AGI could pose existential risks to humanity, echoing I. J. Good's 1965 argument that the first superintelligence would be the last invention humanity needs to make. In a 2023 open statement, many AI experts warned that mitigating the risk of extinction from AI should be a global priority. Skeptics counter that we remain far from creating machines that can outthink us in general ways, that the real threat lies not in AI itself but in how we deploy it, and that there is no reason to fear AI as an existential threat.