We finished Part Six of this series by referencing the terms mean, mode and median. What do these terms tell us statistically? To explore them, consider the following fictional set of data about the appraised value of a set of houses in a small cluster of homes.

| Number of Houses | Value ($K) |
|------------------|------------|
| 1                | 1,000      |
| 2                | 300        |
| 1                | 200        |
| 2                | 100        |
| 3                | 50         |
| 1                | 40         |

Again, this data is fictional, but it serves the purpose of exploring the terms.

To determine the mean, we add up the values of all ten houses ($2,190K) and divide by ten to get $219K. The median is the value of the house at the middle of the sorted list – houses five and six, both valued at $100K. And the mode is the most common value, $50K. Summarizing:

| Statistic | Value |
|-----------|-------|
| Mean      | $219K |
| Median    | $100K |
| Mode      | $50K  |
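These three statistics are easy to verify. A minimal Python sketch, listing each house value once per house from the table above, reproduces the arithmetic:

```python
# Each house value appears once per house: 1 at $1,000K, 2 at $300K,
# 1 at $200K, 2 at $100K, 3 at $50K, and 1 at $40K.
from statistics import mean, median, mode

values = [1000, 300, 300, 200, 100, 100, 50, 50, 50, 40]  # $K

print(f"Mean:   ${mean(values):.0f}K")    # $219K
print(f"Median: ${median(values):.0f}K")  # $100K
print(f"Mode:   ${mode(values):.0f}K")    # $50K
```

Note that the median here averages the fifth and sixth values of the sorted list, which happen to be equal.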

Now consider the definition of average, quoted from Google:

“A number expressing the central or typical value of data, in particular the mode, median, or (most commonly) the mean, which is calculated by dividing the sum of the values in the set by their number.”

So the average value of houses in this cluster is either $219K, $100K, or $50K. If you are a statistician, you will reject the data above because the sample size is too small. But what if you are in real estate and want to demonstrate the “steal” price for a property in the cluster? What do you quote as the average value of a cluster house?

Next consider a very large population and sampling to get the distribution of intelligence, a very controversial factor called the intelligence quotient, or IQ. English statistician Sir Francis Galton (1822-1911), a cousin of Charles Darwin, made the first attempt to create a standardized test for “measuring” a person’s intelligence and was a pioneer in the field of psychometrics, the measuring of knowledge, abilities, attitudes, and personality traits. In 1905, Alfred Binet and colleagues published the Binet-Simon test, which Lewis Terman later revised into the Stanford-Binet Intelligence Scales. And in World War I, the U.S. military used “intelligence testing” in expanding the officer corps from 9,000 to 200,000, testing 1.75 million men in total.

Theoretically, if you could measure the intelligence of all human beings with a completely unbiased measuring device (test) and plotted the results, the graphic representation would look something like this:

This is what statisticians call a normal distribution, the familiar bell curve, which plots the value of every element of the population. The center value (or average) is the mean, the median, and the mode: in a normal distribution all three are the same, and in a plot of human intelligence that value is 100.

There is another critical value associated with the representation of the data, called the standard deviation. It is a measure of the spread of the data, used to calculate areas under the curve and to estimate how much of the population falls within a range of values. For the plot of human intelligence, the standard deviation is 15. This means that 68.26 percent of all people evaluated will score between 85 and 115, that is, within one standard deviation of the mean. For completeness, roughly half the population will score between 90 and 110, which is considered “average” or normal.
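These percentages follow directly from the shape of the normal curve. A short sketch, assuming the stated mean of 100 and standard deviation of 15, computes the fraction of the population between two scores using the Gaussian cumulative distribution function:

```python
# Fraction of a Normal(100, 15) population scoring between two values,
# using the error function from the standard library.
from math import erf, sqrt

MU, SIGMA = 100.0, 15.0  # the IQ scale described above

def cdf(x):
    """Gaussian cumulative distribution, shifted to the IQ scale."""
    return 0.5 * (1.0 + erf((x - MU) / (SIGMA * sqrt(2.0))))

def fraction_between(lo, hi):
    """Fraction of the population scoring between lo and hi."""
    return cdf(hi) - cdf(lo)

print(round(fraction_between(85, 115), 3))  # 0.683, i.e. 68.26 percent
print(round(fraction_between(90, 110), 3))  # 0.495, i.e. roughly half
```

The same function gives about 95.4 percent within two standard deviations, scores of 70 to 130.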

If this seems too complicated, it gets more so. There are a dozen or more tests used to measure IQ, all generally scaled to a normal distribution with a mean of 100 and a standard deviation of 15. This leaves room for significant academic debate in the fields of psychometrics and quantitative psychology. While it is complicated, it is important to you.

Recall that in World War I the U.S. military used intelligence testing to assess inductees. It was used as a metric to sort people. According to Alan S. Kaufman in his 2009 book *IQ Testing 101*, the average (?) IQ for certain occupational groups is as follows:

| Occupational Group                | Average IQ |
|-----------------------------------|------------|
| Professional and Technical        | 112        |
| Managers and Administrators       | 104        |
| Clerical, Sales, Skilled, Foremen | 101        |
| Semi-skilled                      | 92         |
| Unskilled                         | 87         |

Is it right to classify and select people based on tests that result in a normal distribution, be it intelligence, motor skills, or other human attributes? More importantly, is it right to make decisions about people or groups of people based on statistical calculation? Clearly, in some instances the answer is yes, but when should it be no?

In the late 1800s, Francis Galton first used the term eugenics – the *science* of improving a human population by controlled breeding. The eugenics movement was popularized by Progressivism in the 1920s and 1930s. Galton promoted eugenics through selective breeding to eliminate undesirable characteristics, thereby improving humanity. Henry H. Goddard promoted IQ testing for public schools, immigration, and the courts of law. He wanted to eliminate the undesirable trait of “feeblemindedness” from the population, believing it was an inheritable trait, purely nature and no nurture. As a result, different states adopted sterilization laws and over 64,000 people were sterilized. A very crude calculation shows that to be equivalent to 199,000 people today. Eugenics was an idea created in the United States, grown in California, and exported to Nazi Germany.

In the normal distribution described above, if you eliminate everyone more than two standard deviations below the mean, you shift the mean and the median upward, while the mode stays at the peak. If the distribution is of intelligence, this equates to getting rid of the “feebleminded.” Is the idea ethically right, and will it work? Did the elected officials, and the voters who put them in office in the 1920s and 1930s, know enough about statistics to make a proper and correct decision, or were they influenced by assertions supposedly drawn from “scientific statistics”?
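The effect of that truncation can be seen in a hedged simulation: draw a large sample from a Normal(100, 15) distribution (an assumption matching the IQ scale above), drop everyone scoring more than two standard deviations below the mean, and compare the summary statistics.

```python
# Simulate cutting off the lower tail of a normal distribution and
# observe how the summary statistics shift.
import random
from statistics import mean, median

random.seed(1)  # fixed seed so the run is reproducible
scores = [random.gauss(100, 15) for _ in range(100_000)]

cutoff = 100 - 2 * 15  # two standard deviations below the mean: 70
survivors = [s for s in scores if s >= cutoff]

print(f"before: mean {mean(scores):.1f}, median {median(scores):.1f}")
print(f"after:  mean {mean(survivors):.1f}, median {median(survivors):.1f}")
# Both the mean and the median shift upward after truncation; the peak
# of the curve stays near 100, so the mode is essentially unchanged.
```

The shift is modest in this sketch because only about 2.3 percent of the population lies below the cutoff, but the point stands: removing part of a population changes its statistics.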

Till next time….