Genetics unleashed the specter of scientific racism in the nineteenth century. Genomics, thankfully, has stuffed it back into its bottle. Or, as Aibileen, the African-American maid, tells Mae Mobley plainly in The Help, “So, we’s the same. Just a different color.”
In 1994, the very year that Luigi Cavalli-Sforza published his comprehensive review of race and genetics, Americans were convulsed by anxiety over a very different kind of book about race and genes. Written by Richard Herrnstein, the behavioral psychologist, and Charles Murray, a political scientist, The Bell Curve was, as the Times described it, “a flame-throwing treatise on class, race and intelligence.” The Bell Curve offered a glimpse of how easily the language of genes and race can be distorted, and how potently those distortions can reverberate through a culture obsessed with heredity and race.
As public flame-throwers go, Herrnstein was an old hand: Crime and Human Nature, the 1985 book he coauthored with James Q. Wilson, had ignited its own firestorm of controversy by claiming that ingrained characteristics, such as personality and temperament, were linked to criminal behavior. A decade later, The Bell Curve made an even more incendiary set of claims. Murray and Herrnstein proposed that intelligence was also largely ingrained—i.e., genetic—and that it was unequally distributed among races. Whites and Asians possessed higher IQs on average, and Africans and African-Americans possessed lower IQs. This difference in “intellectual capacity,” Murray and Herrnstein claimed, was largely responsible for the chronic underperformance of African-Americans in social and economic spheres. African-Americans were lagging behind in the United States not because of systematic flaws in our social contracts, but because of systematic flaws in their mental constructs.
To understand The Bell Curve, we need to begin with a definition of “intelligence.” Predictably, Murray and Herrnstein chose a narrow definition—one that brings us back to nineteenth-century biometrics and eugenics. Galton and his disciples, we might recall, were obsessed with the measurement of intelligence. Between 1890 and 1910, dozens of tests were devised in Europe and America that purported to measure intelligence in some unbiased, quantitative manner. In 1904, Charles Spearman, a British statistician, noted an important feature of these tests: people who did well on one test tended to do well on others. Spearman hypothesized that this positive correlation existed because all the tests were obliquely measuring the same mysterious common factor. This factor, Spearman proposed, was not knowledge itself, but the capacity to acquire and manipulate abstract knowledge. Spearman called it “general intelligence.” He labeled it g.
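Spearman’s reasoning can be made concrete with a small statistical sketch. The Python code below is my own illustration, not Spearman’s actual 1904 procedure (he worked with rank correlations and “tetrad differences”): it simulates scores on five hypothetical tests that share a single latent ability, confirms that every pair of tests is positively correlated, and then recovers the shared factor from the correlation matrix—the modern shorthand for what Spearman meant by g. The number of tests and the loading values are assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_tests = 1000, 5

    # One latent ability per person ("g"), plus test-specific noise.
    # The loadings (assumed values) set how strongly each test taps g.
    g = rng.normal(size=n_people)
    loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
    noise = rng.normal(size=(n_people, n_tests))
    scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

    # Spearman's observation: every pairwise correlation is positive.
    corr = np.corrcoef(scores, rowvar=False)
    print(np.round(corr, 2))

    # His hypothesis, restated: a single factor accounts for the
    # correlations. The leading eigenvector of the correlation matrix,
    # scaled by the square root of its eigenvalue, approximately
    # recovers the loadings.
    eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues ascending
    recovered = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
    print(np.round(recovered, 2))  # close to the assumed loadings

Because the correlation between any two such tests is roughly the product of their loadings, all the off-diagonal entries come out positive, which is exactly the pattern that led Spearman to posit one common factor rather than many independent aptitudes.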
By the early twentieth century, g had caught the imagination of the public. First, it captivated early eugenicists. In 1916, the Stanford psychologist Lewis Terman, an avid supporter of the American eugenics movement, created a standardized test to rapidly and quantitatively assess general intelligence, hoping to use the test to select more intelligent humans for eugenic breeding. Recognizing that this measurement varied with age during childhood development, Terman advocated a new metric to quantify age-specific intelligence. If a subject’s “mental age” was the same as his or her physical age, the “intelligence quotient,” or IQ, was defined as exactly 100. If the mental age lagged behind the physical age, the IQ was below 100; if the mental age ran ahead of it, the IQ was above 100.
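In arithmetical terms—a detail implied but not spelled out above—the metric took the ratio form usually credited to the German psychologist William Stern, which Terman scaled by a factor of 100:

    IQ = 100 × (mental age ÷ chronological age)

By this formula, a six-year-old who solved problems at the level of a typical nine-year-old scored 100 × 9/6 = 150, while a six-year-old performing at a four-year-old’s level scored roughly 67.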
A numerical measure of intelligence was also particularly suited to the demands of the First and Second World Wars, during which recruits had to be assigned to wartime tasks requiring diverse skills on the basis of rapid, quantitative assessments. When veterans returned to civilian life after the wars, they found their lives dominated by intelligence testing. By the early 1940s, such tests had become accepted as an inherent part of American culture. IQ tests were used to rank job applicants, place children in schools, and recruit agents for the Secret Service. In the 1950s, Americans commonly listed their IQs on their résumés, submitted test scores with job applications, or even chose spouses based on the scores. IQ scores were pinned on babies on display in Better Babies contests (although how IQ was measured in a two-year-old remained mysterious).
These rhetorical and historical shifts in the concept of intelligence are worth noting, for we will return to them in a few paragraphs. The quantity g originated as a statistical correlation among tests given under particular circumstances to particular individuals. It morphed into the notion of “general intelligence” on the strength of a hypothesis about the nature of human knowledge acquisition.