Artificial general intelligence

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) whose performance across a wide range of cognitive tasks falls within the range of human cognitive capabilities.[1][2][3][4][5][6][7]

This contrasts with narrow AI, which is limited to specific tasks.[8][9] Artificial superintelligence (ASI), by contrast, refers to intelligence that exceeds the upper limits of human capability, whether only marginally or by orders of magnitude.[10] AGI is considered one of the definitions of strong AI.[11]

AGI's intelligence may match human intelligence, differ from it, or even appear altogether alien, spanning a range of possible cognitive architectures and capabilities that includes human-level intelligence.[12][13][14]

Creating AGI is a primary goal of AI research and of companies such as OpenAI[15] and Meta.[16] A 2020 survey identified 72 active AGI R&D projects spread across 37 countries.[17]

The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2024, some argue that it may be possible in years or decades;[18][19] others maintain it might take a century or longer;[20] a minority believe it may never be achieved,[21] while another minority says it already exists.[22] Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.[23]

There is debate on the exact definition of AGI, and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI.[24] AGI is a common topic in science fiction and futures studies.[25][26]

Contention exists over whether AGI represents an existential risk.[27][28][29] Many prominent AI researchers and industry leaders have stated that mitigating the risk of human extinction from AI should be a global priority.[30][31] Others find the development of AGI to be too remote to present such a risk.[32][33]

  1. ^ Goertzel, Ben (2014). "Artificial General Intelligence: Concept, State of the Art, and Future Prospects". Journal of Artificial General Intelligence. 5 (1): 1–48. doi:10.2478/jagi-2014-0001. AGI refers to AI systems with general cognitive abilities.
  2. ^ Russell, Stuart; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. p. 27. ISBN 978-0-1360-4259-4. The ultimate goal of AI research is to create an artificial general intelligence.
  3. ^ Nilsson, Nils J. (2010). The Quest for Artificial Intelligence. Cambridge University Press. p. 15. ISBN 978-0-5211-2293-1. An AGI would perform any intellectual task that a human can do.
  4. ^ McCarthy, John (2007b). What is Artificial Intelligence?. Stanford University. The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans.
  5. ^ Legg, Shane; Hutter, Marcus (2007). "Universal Intelligence: A Definition of Machine Intelligence". Minds and Machines. 17 (4): 391–444. doi:10.1007/s11023-007-9079-x. An agent that can perform well in a wide range of environments.
  6. ^ Wang, Pei (2008). "From NARS to a Thinking Machine". In Goertzel, Ben; Wang, Pei (eds.). Artificial General Intelligence 2008: Proceedings of the First AGI Conference. IOS Press. pp. 75–93. ISBN 978-1-5860-3833-5. AGI aims at the original goal of AI: a thinking machine with the same generality as human intelligence.
  7. ^ Adams, S.; Arel, I.; Bach, J. (2012). "Mapping the Landscape of Human-Level Artificial General Intelligence". AI Magazine. 33 (1): 25–42. doi:10.1609/aimag.v33i1.2322. Human-level AGI refers to AI systems that possess the same broad cognitive abilities as humans.
  8. ^ Krishna, Sri (9 February 2023). "What is artificial narrow intelligence (ANI)?". VentureBeat. Retrieved 1 March 2024. ANI is designed to perform a single task.
  9. ^ Ertel, Wolfgang (2018). Introduction to Artificial Intelligence. Springer. p. 12. ISBN 978-3-3195-8487-4. Artificial Narrow Intelligence systems are specialized in one area.
  10. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0-1996-7811-2. Superintelligence is any intellect that greatly exceeds the cognitive performance of humans in virtually all domains.
  11. ^ Haugeland, John (1989). Artificial Intelligence: The Very Idea. MIT Press. ISBN 978-0-2625-8095-3. Strong AI claims that machines can be made to think on a level equal to humans.
  12. ^ "Basic Questions". Stanford University. Retrieved 9 September 2024. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people.
  13. ^ Kurzweil 2005.
  14. ^ Voss, Peter (2007). "Essentials of General Intelligence: The Direct Path to Artificial General Intelligence". Artificial General Intelligence. Springer: 131–157. doi:10.1007/978-3-540-68677-4_8. AGI systems may think differently from humans.
  15. ^ "OpenAI Charter". OpenAI. Retrieved 6 April 2023. Our mission is to ensure that artificial general intelligence benefits all of humanity.
  16. ^ Heath, Alex (18 January 2024). "Mark Zuckerberg's new goal is creating artificial general intelligence". The Verge. Retrieved 13 June 2024. Our vision is to build AI that is better than human-level at all of the human senses.
  17. ^ Baum, Seth D. (2020). A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF) (Report). Global Catastrophic Risk Institute. Retrieved 13 January 2022. We identified 72 AGI R&D projects.
  18. ^ Barrat, James (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. St. Martin's Press. ISBN 978-1-2500-5878-2. Some experts believe AGI could be achieved within decades.
  19. ^ Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain (2018). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807. Experts estimate a 50% chance of AGI by 2050.
  20. ^ Marcus, Gary; Davis, Ernest (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books. ISBN 978-1-5247-4825-8. We are still far from achieving AGI.
  21. ^ Penrose, Roger (1989). The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press. ISBN 978-0-1985-1973-7. Understanding consciousness may require new physics.
  22. ^ Agüera y Arcas, Blaise (10 October 2023). "Artificial General Intelligence Is Already Here". Noema. I believe we are already in the presence of AGI.
  23. ^ "AI pioneer Geoffrey Hinton quits Google and warns of danger ahead". The New York Times. 1 May 2023. Retrieved 2 May 2023. It is hard to see how you can prevent the bad actors from using it for bad things.
  24. ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric (2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712. GPT-4 shows sparks of AGI.
  25. ^ Butler, Octavia E. (1993). Parable of the Sower. Grand Central Publishing. ISBN 978-0-4466-7550-5. All that you touch you change. All that you change changes you.
  26. ^ Vinge, Vernor (1992). A Fire Upon the Deep. Tor Books. ISBN 978-0-8125-1528-2. The Singularity is coming.
  27. ^ Morozov, Evgeny (30 June 2023). "The True Threat of Artificial Intelligence". The New York Times. The real threat is not AI itself but the way we deploy it.
  28. ^ "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 23 March 2023. Retrieved 6 April 2023. AGI could pose existential risks to humanity.
  29. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0-1996-7811-2. The first superintelligence will be the last invention that humanity needs to make.
  30. ^ Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. Mitigating the risk of extinction from AI should be a global priority.
  31. ^ "Statement on AI Risk". Center for AI Safety. Retrieved 1 March 2024. AI experts warn of risk of extinction from AI.
  32. ^ Mitchell, Melanie (30 May 2023). "Are AI's Doomsday Scenarios Worth Taking Seriously?". The New York Times. We are far from creating machines that can outthink us in general ways.
  33. ^ LeCun, Yann (June 2023). "AGI does not present an existential risk". Medium. There is no reason to fear AI as an existential threat.