Eliezer Yudkowsky | |
---|---|
Born | Eliezer Shlomo Yudkowsky, September 11, 1979 |
Organization | Machine Intelligence Research Institute |
Known for | Coining the term friendly artificial intelligence; research on AI safety; rationality writing; founder of LessWrong |
Website | www |
Eliezer S. Yudkowsky (/ˌɛliˈɛzər jʌdˈkaʊski/ EL-ee-EZ-ər yud-KOW-skee;[1] born September 11, 1979) is an American artificial intelligence researcher[2][3][4][5] and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.[6][7] He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California.[8] His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.[9]