Technological singularity

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence that would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.[4]

The Hungarian-American mathematician John von Neumann (1903–1957) was the first known person to use the concept of a "singularity" in the technological context.[5][6]

Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for the contemporary discourse on the technological singularity. His pivotal 1950 paper, "Computing Machinery and Intelligence," introduces the idea of a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human.[7]

Stanislaw Ulam reported in 1958 an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[8] Subsequent authors have echoed this viewpoint.[3][9]

The concept and the term "singularity" were popularized by Vernor Vinge—first in a 1983 article claiming that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[10] and later in his 1993 essay The Coming Technological Singularity,[4][9] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to the wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, which predicted the singularity by 2045.[9]

Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.[11][12] The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated.

Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[13] Jeff Hawkins,[14] John Holland, Jaron Lanier, Steven Pinker,[14] Theodore Modis,[15] and Gordon Moore.[14] One claim is that artificial intelligence growth is likely to run into diminishing returns rather than accelerating ones, as has been observed in previously developed human technologies.

  1. ^ Cadwalladr, Carole (22 February 2014). "Are the robots about to rise? Google's new director of engineering thinks so…". The Guardian. Retrieved 8 May 2022.
  2. ^ "Collection of sources defining "singularity"". singularitysymposium.com. Archived from the original on 17 April 2019. Retrieved 17 April 2019.
  3. ^ a b Eden, Amnon H.; Moor, James H.; Søraker, Johnny H.; Steinhart, Eric, eds. (2012). Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Dordrecht: Springer. pp. 1–2. doi:10.1007/978-3-642-32560-1. ISBN 9783642325601.
  4. ^ a b c Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era" Archived 2018-04-10 at the Wayback Machine, in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993. - "There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is 'yes, we can', then there is little doubt that beings more intelligent can be constructed shortly thereafter.)"
  5. ^ Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era". NASA Conference Proceedings.
  6. ^ Shanahan, Murray (7 August 2015). The Technological Singularity. MIT Press. p. 233. ISBN 978-0-262-52780-4.
  7. ^ "What Is the Technological Singularity?". www.ibm.com. 13 August 2024. Retrieved 14 November 2024.
  8. ^ Ulam, Stanislaw (May 1958). "Tribute to John von Neumann". Bulletin of the American Mathematical Society. 64 (#3, part 2): 1–49.
  9. ^ a b c Chalmers, David J. (2010). "The Singularity: A Philosophical Analysis". Journal of Consciousness Studies. 17 (9–10): 7–65.
  10. ^ Dooling, Richard (2008). Rapture for the Geeks: When AI Outsmarts IQ. New York: Harmony Books. p. 88.
  11. ^ Sparkes, Matthew (13 January 2015). "Top scientists call for caution over artificial intelligence". The Telegraph (UK). Archived from the original on 7 April 2015. Retrieved 24 April 2015.
  12. ^ "Hawking: AI could end human race". BBC. 2 December 2014. Archived from the original on 30 October 2015. Retrieved 11 November 2017.
  13. ^ Allen, Paul G.; Greaves, Mark (12 October 2011). "Paul Allen: The Singularity Isn't Near". MIT Technology Review.
  14. ^ a b c "Tech Luminaries Address Singularity". IEEE Spectrum. Special Report: The Singularity. 1 June 2008.
  15. ^ Modis, Theodore (2012). "Why the Singularity Cannot Happen". In Eden, Amnon H.; Moor, James H.; Søraker, Johnny H.; Steinhart, Eric (eds.). Singularity Hypotheses: A Scientific and Philosophical Assessment. Dordrecht: Springer. pp. 311–346.