The most decisive conceptual event of twentieth century physics has been the discovery that the world is not deterministic. Causality, long the bastion of metaphysics, was toppled, or at least tilted: the past does not determine exactly what happens next… A space was cleared for chance.
– Ian Hacking, The Taming of Chance, 1990
In doing empirical psychology, the researcher attempts either to apply a new theory or to extend the applicability of an existing one to a new range of phenomena. The researcher proposes a concrete hypothesis by adapting the abstract general theory to the specific empirical situation under study. Does the researcher’s hypothesis provide a better explanation of the data than the generally accepted alternative explanation? This question is usually evaluated statistically, by investigating whether the pattern of empirical results differs significantly from what would be expected if the hypothesis were not true.
It’s often the case that the psychological researcher is exploring new territory: the kind of data s/he collects hasn’t previously been investigated scientifically. In that case the generally-accepted alternative is known as the “null hypothesis.” Usually the null hypothesis doesn’t take the form of a precise prediction about how the results will turn out; rather, it states that the results will not deviate from what one might expect to find by chance alone, unaffected by the theoretical forces which the researcher claims will affect the results in some predicted way. But “by chance alone” doesn’t mean unalloyed randomness; rather, it means that the results are expected to conform to the statistical distribution typically found in similar kinds of data sets. This is the normal distribution, better known as the bell curve, in which most subjects cluster around the arithmetic mean while the rest tail away toward the right and left of the mean. So if the mean for one group of subjects differs significantly from that of another group as predicted by the researcher’s hypothesis, taking into account the observed amount of random variation in the bell curve, then the null hypothesis is rejected: statistically it’s very likely that something other than randomness is affecting the results.
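The logic described above can be made concrete with a small simulation. This is a minimal sketch, not drawn from the text: the two groups of scores are invented for illustration, and the test shown is a permutation test, one standard way of asking how often chance alone would produce a group difference as large as the one observed.

```python
import random
import statistics

random.seed(0)

# Hypothetical scores for two groups of subjects (illustrative data only).
group_a = [52, 55, 58, 61, 49, 57, 60, 54]
group_b = [48, 50, 47, 53, 45, 51, 49, 46]

observed = statistics.mean(group_a) - statistics.mean(group_b)

# Under the null hypothesis the group labels are interchangeable:
# shuffle the pooled data many times and count how often chance alone
# yields a mean difference at least as large as the observed one.
pooled = group_a + group_b
n = len(group_a)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.3f}, p = {p_value:.4f}")
# A small p-value means results this extreme are unlikely by chance alone,
# so the null hypothesis is rejected.
```

The point of the shuffling step is exactly the essay's "by chance alone": it builds, from the data themselves, the distribution of differences one would expect if no theoretical force separated the groups.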
Why is it that, for so many measures of human performance, randomness takes the shape of the normal distribution? In grad school and in subsequent practice I don’t recall that anyone ever really asked this question, let alone answered it satisfactorily. Here are some quotes from famous statisticians related to the issue, as cited by Ian Hacking in his fascinating book on the history of statistical thinking.
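The standard mathematical answer to the question above is the central limit theorem: any quantity that is the sum of many small, independent influences tends toward the bell curve, whatever the distribution of each influence. A short sketch, with invented data, makes the point: here each "score" is nothing but a hundred coin flips added together, yet the scores fall into the familiar normal proportions.

```python
import random
import statistics

random.seed(1)

# Each score is the sum of 100 tiny independent effects, each a coin flip.
scores = [sum(random.choice((0, 1)) for _ in range(100)) for _ in range(5000)]

mu = statistics.mean(scores)
sigma = statistics.stdev(scores)

# Fraction of scores within one and two standard deviations of the mean.
within_1 = sum(abs(s - mu) <= sigma for s in scores) / len(scores)
within_2 = sum(abs(s - mu) <= 2 * sigma for s in scores) / len(scores)
print(f"mean={mu:.1f}, sd={sigma:.1f}, "
      f"within 1 sd: {within_1:.0%}, within 2 sd: {within_2:.0%}")
# For a normal distribution these proportions are roughly 68% and 95%.
```

Nothing about a single coin flip is bell-shaped; the regularity Galton marveled at below emerges only in the aggregate, which is precisely what made chance look like a "law" to the nineteenth-century writers quoted here.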
In a given state of society, a certain number of persons must put an end to their own life. This is the general law; and the special question as to who shall commit the crime depends of course upon special laws; which, however, in their total action, must obey the large social law to which they are all subordinate. And the power of the larger law is so irresistible, that neither the love of life nor the fear of another world can avail anything towards even checking its operation. (T.H. Buckle, 1857)
The irrational approval given to the so-called Calculus of Chances is enough to convince all men of sense how injurious to science has been this absence of control. Strange indeed would be the degeneration if the science of Calculation, the field in which the fundamental dogma of the invariability of Law first took its rise, were it to end its long course of progress in speculations that involve the hypotheses of the entire absence of Law. (August Comte, 1851)
‘By chance’ — that is the most ancient nobility of the world, and this I restored to all things: I delivered them from their bondage under purpose. (Friedrich Nietzsche in Zarathustra, 1884)
Collective tendencies have a reality of their own; they are forces as real as cosmic forces, though of another sort; they, likewise, affect the individual from without, though through other channels. The proof that the reality of collective tendencies is no less than that of cosmic forces is that this reality is demonstrated in the same way, by the uniformity of effects. (Emile Durkheim, 1897)
I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by ‘the law of error.’ A savage, if he could understand it, would worship it as a god. It reigns with severity in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. Let a large sample of chaotic elements be taken and marshalled in order of their magnitudes, and then, however wildly irregular they appeared, an unexpected and most beautiful form of regularity proves to have been present all along. (Francis Galton, 1886)
Galton turning over two different problems in his mind reached the conception of correlation: A is not the sole cause of B, but it contributes to the production of B; there may be other, many or few, causes at work, some of which we do not know and may never know… This measure of partial correlation was the germ of the broad category — that of correlation, which was to replace not only in the minds of many of us the old categories of causation, but deeply to influence our outlook on the universe. The concept of causation — unlimitedly profitable to the physicist — began to crumble to pieces. In no case was B simply and wholly caused by A, nor indeed by C, D, E, and F as well! It was really impossible to go on increasing the number of contributory causes until they might involve all the factors of the universe… Henceforward the philosophical view of the universe was to be that of a correlated system of variates, approaching but by no means reaching perfect correlation, i.e. absolute causality. (Karl Pearson, 1914)
Chance itself pours in at every avenue of sense: it is of all things the most obtrusive. That it is absolute is the most manifest of all intellectual perceptions. That it is a being, living and conscious, is what all the dullness that belongs to ratiocination’s self can scarce muster the hardihood to deny. (C.S. Peirce, 1893)