I found this interesting:
Quote: Assessing Randomness
But how do we interpret the randomness of events occurring around us? By what criteria do we decide whether our experiences are just coincidences or represent a true pattern at work? Cohen (1960) summed up our deficiencies by saying that, based on his experimental results, “nothing is so alien to the human mind as the idea of randomness.” The problem appears to be two-fold. The first is that “the very nature of randomness assures us that combing random data will yield some patterns” and that “if the data set is large enough, coincidences are sure to appear” (Martin, 1998). More generally, this can be summed up by Ramsey theory (Graham and Spencer, 1990), which grew out of Frank P. Ramsey’s mathematical proof that “Complete disorder is an impossibility… [e]very large set of numbers, points or objects necessarily contains a highly regular pattern.” If humans are pattern seekers, and randomness necessarily contains patterns, then we’ve arrived at our first stumbling block.
The second prong in our failure to detect randomness is the method by which the human mind assesses it. In 1937, the Zenith Corporation unwittingly provided a simple glimpse into this perception (Goodfellow, 1938). During a series of radio broadcasts, psychics appeared on one of its programs and telepathically “transmitted” a randomly generated five-digit binary sequence. Listeners were asked to record the sequence and send it in to the company to determine whether “people are sensitive to psychic transmissions” (Griffiths and Tenenbaum, 2001). Although there was no true correlation between the listeners’ sequences and those “transmitted,” the listeners were found to have a predilection for certain “random” sequences over others. For example, the top three sequences sent in were 00101, 00110, and 01101, which were submitted about ten times as often as sequences such as 00000 or 00001. Importantly, the responses indicated that listeners believed alternations of digits (e.g. 0101) were much more representative of randomness than long strings of the same digit. More simply, the listeners perceived randomness as a change (alternation) from the previous digit.
Falk and Konold (1997), in a similar vein, conducted an experiment in which subjects were asked to assess the randomness of long strings of randomly generated binary digits. An ideally random sequence has a probability of alternation of 0.5; that is, the digits within the sequence alternate about half the time. They found, however, that “sequences with overalternations are perceived [by the subjects] as more random than their [mathematically-assessed] randomness warrants.” They go on to suggest that this human predilection for perceiving randomness in alternations is attributable to the core method humans use to assess randomness: difficulty of encoding (memorizing). This is related to the idea of compressibility of data – an “ideally” random sequence is incompressible to a simpler form because the information encoded contains no “patterns.” Therefore, a sequence with easy-to-memorize patterns (long strings of non-alternating digits) is perceived as non-random, although, as noted earlier, clumping of data into “runs” is a natural feature of randomness. Tellingly, Falk and Konold also noted that the time required for subjects to memorize a given sequence correlated directly with the randomness assigned to it by other subjects. That is, the sequences rated most random were also the most difficult to memorize.
— from Digital Bits Skeptic
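
The article's first claim, that combing random data will always turn up patterns, is easy to check numerically. Here is a quick Python sketch of my own (not from the article; the run-length threshold of 5 is an arbitrary choice for illustration):

```python
import random

def longest_run(bits):
    """Length of the longest streak of identical digits in the sequence."""
    best = cur = 1
    for prev, nxt in zip(bits, bits[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

random.seed(0)
trials = 1000
n = 100  # coin flips per trial

# Generate pure noise and measure the longest run in each trial.
runs = [longest_run([random.randint(0, 1) for _ in range(n)])
        for _ in range(trials)]

# Streaks that "look like a pattern" appear by chance alone: in 100 fair
# flips, a run of 5+ identical digits shows up in the vast majority of trials.
share = sum(r >= 5 for r in runs) / trials
print(f"trials containing a run of >= 5: {share:.0%}")
```

So if you flip a coin 100 times, you should positively expect a streak of five heads or tails somewhere in there, even though most of us would flag it as meaningful.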
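
The Zenith result is worth restating statistically: under a fair generator, every specific five-digit binary sequence is exactly as likely as every other, so the listeners' tenfold preference for strings like 00101 over 00000 reflects a bias in perception, not in probability. A small sketch of that point (my own, illustrative only):

```python
from itertools import product

# All 2**5 = 32 five-digit binary sequences, each equally likely under
# a fair random generator.
sequences = ["".join(bits) for bits in product("01", repeat=5)]
prob = 1 / len(sequences)

def alternation_rate(seq):
    """Fraction of adjacent digit pairs that differ (the cue listeners
    apparently used as a proxy for randomness)."""
    changes = sum(a != b for a, b in zip(seq, seq[1:]))
    return changes / (len(seq) - 1)

# A popular submission, a maximally alternating string, and a long run
# all have identical probability; only their alternation rates differ.
for s in ("00101", "01010", "00000"):
    print(s, f"P = {prob:.4f}", f"alternation = {alternation_rate(s):.2f}")
```

The probability column is constant; the only thing that varies is the alternation rate, which is evidently what the listeners were responding to.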
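
Falk and Konold's two measures, probability of alternation and difficulty of encoding, can both be demonstrated directly. In the sketch below I use zlib's DEFLATE as a rough, informal stand-in for compressibility (my assumption, not the paper's method, and not the formal algorithmic-complexity notion):

```python
import random
import zlib

# 1. A fair random binary sequence alternates about half the time.
random.seed(1)
bits = [random.randint(0, 1) for _ in range(100_000)]
p_alt = sum(a != b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)
print(f"probability of alternation: {p_alt:.3f}")  # close to 0.5

# 2. Patterned sequences compress far better than random ones.
patterned = b"0" * 10_000        # one long run
alternating = b"01" * 5_000      # strict alternation
random_str = "".join(str(b) for b in bits[:10_000]).encode()

for name, data in [("long run", patterned),
                   ("strict alternation", alternating),
                   ("random", random_str)]:
    print(f"{name}: {len(data)} bytes -> {len(zlib.compress(data))} compressed")
```

Notice that a strictly alternating string compresses just as well as a long run of zeros: by the encoding criterion, over-alternation is no more "random" than repetition, which is exactly the bias the study describes.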