An intelligence quotient (IQ) is a total score derived from a set of standardized tests or subtests designed to assess human intelligence.[1] Originally, IQ was a score obtained by dividing a person's mental age score, obtained by administering an intelligence test, by the person's chronological age, both expressed in terms of years and months. The resulting fraction (quotient) was multiplied by 100 to obtain the IQ score.[2] For modern IQ tests, the raw score is transformed to a normal distribution with mean 100 and standard deviation 15.[3] This results in approximately two-thirds of the population scoring between IQ 85 and IQ 115 and about 2 percent each above 130 and below 70.[4][5]
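The two scoring schemes described above can be made concrete with a short sketch. The Python snippet below is illustrative only and is not part of any standardized test's scoring procedure; the function name and example values are invented. It computes a historical ratio IQ and uses a normal distribution with mean 100 and standard deviation 15 to recover the population proportions quoted above.

```python
# A minimal sketch (illustrative only; names and values are invented, not part
# of any standardized test's scoring procedure).
from statistics import NormalDist

def ratio_iq(mental_age_months: float, chronological_age_months: float) -> float:
    """Historical ratio IQ: mental age divided by chronological age, times 100."""
    return 100.0 * mental_age_months / chronological_age_months

# A ten-year-old performing at the level of a typical twelve-year-old:
print(ratio_iq(12 * 12, 10 * 12))  # 120.0

# Modern deviation IQ: raw scores are rescaled to a normal distribution with
# mean 100 and standard deviation 15, so population proportions follow
# directly from the normal curve.
iq = NormalDist(mu=100, sigma=15)
print(iq.cdf(115) - iq.cdf(85))  # ~0.683, about two-thirds between 85 and 115
print(1 - iq.cdf(130))           # ~0.023, roughly 2 percent above 130
print(iq.cdf(70))                # ~0.023, roughly 2 percent below 70
```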
Scores from intelligence tests are estimates of intelligence. Unlike quantities such as distance and mass, intelligence cannot be measured concretely, given the abstract nature of the concept.[6] IQ scores have been shown to be associated with such factors as nutrition,[7][8][9] parental socioeconomic status,[10][11] morbidity and mortality,[12][13] parental social status,[14] and perinatal environment.[15] While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[16][17] and the mechanisms of inheritance.[18]
IQ scores are used for educational placement, assessment of intellectual ability, and evaluating job applicants. In research contexts, they have been studied as predictors of job performance[19] and income.[20] They are also used to study distributions of psychometric intelligence in populations and its correlations with other variables. Raw scores on IQ tests for many populations have been rising at an average rate equivalent to about three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect; a rough illustration of its scale is sketched below. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.
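As a rough illustration of the scale of the Flynn effect, the sketch below assumes a constant rate of three points per decade; the actual rate varies by population, era, and subtest, and the function name and years are hypothetical.

```python
# A minimal sketch assuming a constant Flynn effect of three IQ points per
# decade; the true rate varies by population, era, and subtest.
FLYNN_POINTS_PER_DECADE = 3.0

def norm_drift(norm_year: int, test_year: int,
               rate: float = FLYNN_POINTS_PER_DECADE) -> float:
    """Approximate score inflation when a test is scored against outdated norms."""
    return rate * (test_year - norm_year) / 10.0

# A test normed in 1990 but taken in 2020 would, under this assumption,
# overstate IQ by roughly nine points relative to contemporary norms.
print(norm_drift(1990, 2020))  # 9.0
```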
^ Markus Jokela; G. David Batty; Ian J. Deary; Catharine R. Gale; Mika Kivimäki (2009). "Low Childhood IQ and Early Adult Mortality: The Role of Explanatory Factors in the 1958 British Birth Cohort". Pediatrics. 124 (3): e380–e388. doi:10.1542/peds.2009-0334. PMID 19706576. S2CID 25256969.
^ Winston, Andrew S. (29 May 2020). "Scientific Racism and North American Psychology". Oxford Research Encyclopedias: Psychology. The use of psychological concepts and data to promote ideas of an enduring racial hierarchy dates from the late 1800s and has continued to the present. The history of scientific racism in psychology is intertwined with broader debates, anxieties, and political issues in American society. With the rise of intelligence testing, joined with ideas of eugenic progress and dysgenic reproduction, psychological concepts and data came to play an important role in naturalizing racial inequality. Although racial comparisons were not the primary concern of most early mental testing, results were employed to justify beliefs regarding Black "educability" and the dangers of Southern and Eastern European immigration.
^ Newitz, Annalee (4 June 2024). "Chapter 4". Stories Are Weapons: Psychological Warfare and the American Mind. W. W. Norton & Company.
^ Bird, Kevin; Jackson, John P.; Winston, Andrew S. (2024). "Confronting Scientific Racism in Psychology: Lessons from Evolutionary Biology and Genetics". American Psychologist. 79 (4): 497–508. doi:10.1037/amp0001228. PMID 39037836. Recent articles claim that the folk categories of race are genetically meaningful divisions, and that evolved genetic differences among races and nations are important for explaining immutable differences in cognitive ability, educational attainment, crime, sexual behavior, and wealth; all claims that are opposed by a strong scientific consensus to the contrary.