Can nature create codes and specified complexity?

Is there an example of nature creating codes or specified complexity? Creationists say that codes can only come from minds, and since DNA contains a code, it must have been created by a mind.

Is this true? If false, can you explain how unguided, unintelligent nature could create coded specified information? Just as a note to show where I'm coming from, I am an Evolutionist.

But I have to admit, this appears to be a very, very strong argument in favor of special creation by an intelligence.

If you think of the laws of nature as codes, then we've found many instances of such codes that are, in some sense, not man-made. Of course, you could say that they are of "divine creation" or some such thing, but then you'd probably be ready to label everything seemingly intelligent or interesting as such. So let's not go there! These laws can be seen as codes, albeit not intentionally created ones.

An Introduction to Intelligent Design

Intelligent design — often called “ID” — is a scientific theory that holds that the emergence of some features of the universe and living things is best explained by an intelligent cause rather than an undirected process such as natural selection. ID theorists argue that design can be inferred by studying the informational properties of natural objects to determine if they bear the type of information that in our experience arises from an intelligent cause.

Proponents of neo-Darwinian evolution contend that the information in life arose via purposeless, blind, and unguided processes. ID proponents argue that this information arose via purposeful, intelligently guided processes. Both claims are scientifically testable using the standard methods of science. But ID theorists say that when we use the scientific method to explore nature, the evidence points away from unguided material causes, and reveals intelligent design.

For many, the debate over the origins of life has been settled. It is often assumed that Charles Darwin rendered unnecessary any arguments that the complexity of life needed to be explained by something outside of nature. Darwin and his disciples have been confidently shoveling dirt over the opposition for generations. However, new arguments for intelligent design have arisen: the discovery of complex specified information in biological life has become both intelligent design’s greatest strength and naturalistic evolution’s greatest weakness.

2. The Fine-Tuning of the Universe

The term “Big Bang” conjures images of an explosion, and usually when we think of an explosion we imagine a highly chaotic, stochastic event that destroys any order that is present rather than creating or preserving order. The Big Bang was not that kind of an “explosion.” It’s much better understood as a “finely tuned expansion event,” where all the matter and energy in the universe were expanding from an unimaginably high energy state. However, matching that energy was control and guidance through natural laws that were designed to produce a habitable universe, a home for life.

Consider some of the finely tuned factors that make our universe possible:

  • If the strong nuclear force were slightly more powerful, then there would be no hydrogen, an essential element of life. If it were slightly weaker, then hydrogen would be the only element in existence.
  • If the weak nuclear force were slightly different, then either there would not be enough helium to generate heavy elements in stars, or stars would burn out too quickly and supernova explosions could not scatter heavy elements across the universe.
  • If the electromagnetic force were slightly stronger or weaker, atomic bonds, and thus complex molecules, could not form.
  • If the value of the gravitational constant were slightly larger, one consequence would be that stars would become too hot and burn out too quickly. If it were smaller, stars would never burn at all and heavy elements would not be produced.

The finely tuned laws and constants of the universe are an example of specified complexity in nature. They are complex in that their values and settings are highly unlikely. They are specified in that they match the specific requirements needed for life.

The following gives a sense of the degree of fine-tuning that must go into some of these values to yield a life-friendly universe:

  • Gravitational constant: 1 part in 10^34
  • Electromagnetic force versus force of gravity: 1 part in 10^37
  • Cosmological constant: 1 part in 10^120
  • Mass density of universe: 1 part in 10^59
  • Expansion rate of universe: 1 part in 10^55
  • Initial entropy: 1 part in 10^ (10^123)
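If, as fine-tuning arguments typically do, one treats these figures as independent probabilities of the form 1 in 10^k, the combined odds follow by adding exponents rather than multiplying the raw numbers (which would overflow ordinary floating point). A minimal sketch, using only the figures quoted above:

```python
# A minimal sketch: combining the fine-tuning figures quoted in the list above.
# Assumes (as such arguments do) that each factor is an independent probability
# of the form 1 in 10^k; multiplying probabilities means adding exponents.
exponents = [34, 37, 120, 59, 55]  # gravitational, EM vs gravity, cosmological
                                   # constant, mass density, expansion rate.
                                   # Initial entropy (10^(10^123)) is omitted:
                                   # it dwarfs any ordinary representation.

combined_exponent = sum(exponents)
print(f"combined odds: 1 in 10^{combined_exponent}")  # combined odds: 1 in 10^305
```

Working in exponents is the standard way to handle numbers far beyond floating-point range; whether the independence assumption itself is justified is part of the debate over these arguments.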

The last item in the list, the initial entropy of the universe, shows an astounding degree of fine-tuning. What all these values share is an incredible, astronomically precise, purposeful care and planning that went into the crafting of the laws and constants of the universe, gesturing unmistakably to intelligent design. As Nobel laureate in physics Charles Townes stated:

Intelligent design, as one sees it from a scientific point of view, seems to be quite real. This is a very special universe: it’s remarkable that it came out just this way. If the laws of physics weren’t just the way they are, we couldn’t be here at all. The sun couldn’t be there, the laws of gravity and nuclear laws and magnetic theory, quantum mechanics, and so on have to be just the way they are for us to be here.

Some scientists respond, “Well, there must be an enormous number of universes and each one is a little different. This one just happened to turn out right.” That’s a postulate, and it’s a pretty fantastic postulate — it assumes there really are an enormous number of universes and that the laws could be different for each of them. One would like to get a look at the universe-generating machine responsible for this abundance. The other possibility is that our universe was planned, and that’s why it has come out so specially.

3.1 Evidence of Intelligent Design in Biology

Despite the renewed interest in design among physicists and cosmologists, most biologists are still reluctant to consider such notions. Indeed, since the late-nineteenth century, most biologists have rejected the idea that biological organisms manifest evidence of intelligent design. While many acknowledge the appearance of design in biological systems, they insist that purely naturalistic mechanisms such as natural selection acting on random variations can fully account for the appearance of design in living things.

3.2 Molecular Machines

Nevertheless, the interest in design has begun to spread to biology. For example, in 1998 the leading journal Cell featured a special issue on “Macromolecular Machines”. Molecular machines are incredibly complex devices that all cells use to process information, build proteins, and move materials back and forth across their membranes. Bruce Alberts, President of the National Academy of Sciences, introduced this issue with an article entitled “The Cell as a Collection of Protein Machines”. In it, he stated:

We have always underestimated cells. … The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. … Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. 29

Alberts notes that molecular machines strongly resemble machines designed by human engineers, although as an orthodox neo-Darwinian he denies any role for actual, as opposed to apparent, design in the origin of these systems.

In recent years, however, a formidable challenge to this view has arisen within biology. In his book Darwin’s Black Box (1996), Lehigh University biochemist Michael Behe shows that neo-Darwinists have failed to explain the origin of complex molecular machines in living systems. For example, Behe looks at the ion-powered rotary engines that turn the whip-like flagella of certain bacteria. 30 He shows that the intricate machinery in this molecular motor — including a rotor, a stator, O-rings, bushings, and a drive shaft — requires the coordinated interaction of some forty complex protein parts. Yet the absence of any one of these proteins results in the complete loss of motor function. To assert that such an “irreducibly complex” engine emerged gradually in a Darwinian fashion strains credulity. According to Darwinian theory, natural selection selects functionally advantageous systems. 31 Yet motor function only ensues after all the necessary parts have independently self-assembled — an astronomically improbable event. Thus, Behe insists that Darwinian mechanisms cannot account for the origin of molecular motors and other “irreducibly complex systems” that require the coordinated interaction of multiple independent protein parts.

To emphasize his point, Behe has conducted a literature search of relevant technical journals. 32 He has found a complete absence of gradualistic Darwinian explanations for the origin of the systems and motors that he discusses. Behe concludes that neo-Darwinists have not explained, or in most cases even attempted to explain, how the appearance of design in “irreducibly complex” systems arose naturalistically. Instead, he notes that we know of only one cause sufficient to produce functionally integrated, irreducibly complex systems, namely, intelligent design. Indeed, whenever we encounter irreducibly complex systems and we know how they arose, they were invariably designed by an intelligent agent. Thus, Behe concludes (on strong uniformitarian grounds) that the molecular machines and complex systems we observe in cells must also have had an intelligent source. In brief, molecular motors appear designed because they were designed.

3.3 The Complex Specificity of Cellular Components

As Dembski has shown elsewhere, 33 Behe’s notion of “irreducible complexity” constitutes a special case of the “complexity” and “specification” criteria that enable us to detect intelligent design. Yet a more direct application of Dembski’s criteria to biology can be made by analyzing proteins, the macromolecular components of the molecular machines that Behe examines inside the cell. In addition to building motors and other biological structures, proteins perform the vital biochemical functions — information processing, metabolic regulation, signal transduction — necessary to maintain and create cellular life.

During the 1950s, scientists quickly realized that proteins possess another remarkable property. In addition to their complexity, proteins also exhibit specificity, both as one-dimensional arrays and as three-dimensional structures. Although proteins are built from rather simple chemical building blocks known as amino acids, their function — whether as enzymes, signal transducers, or structural components in the cell — depends crucially upon the complex but specific sequencing of these building blocks. 36 Molecular biologists such as Francis Crick quickly likened this feature of proteins to a linguistic text. Just as the meaning (or function) of an English text depends upon the sequential arrangement of letters in a text, so too does the function of a polypeptide (a sequence of amino acids) depend upon its specific sequencing. Moreover, in both cases, slight alterations in sequencing can quickly result in loss of function.

Biologists, from Darwin’s time to the late 1930s, assumed that proteins had simple, regular structures explicable by reference to mathematical laws. Beginning in the 1950s, however, biologists made a series of discoveries that caused this simplistic view of proteins to change. Molecular biologist Fred Sanger determined the sequence of constituents in the protein molecule insulin. Sanger’s work showed that proteins are made of long nonrepetitive sequences of amino acids, rather like an irregular arrangement of colored beads on a string. 34 Later in the 1950s, work by John Kendrew on the structure of the protein myoglobin showed that proteins also exhibit a surprising three-dimensional complexity. Far from the simple structures that biologists had imagined, Kendrew’s work revealed an extraordinarily complex and irregular three-dimensional shape: a twisting, turning, tangled chain of amino acids. As Kendrew explained in 1958, “the big surprise was that it was so irregular … the arrangement seems to be almost totally lacking in the kind of regularity one instinctively anticipates, and it is more complicated than has been predicted by any theory of protein structure.” 35

In the biological case, the specific sequencing of amino acids gives rise to specific three-dimensional structures. This structure or shape in turn (largely) determines what function, if any, the amino acid chain can perform within the cell. A functioning protein’s three-dimensional shape gives it a “hand-in-glove” fit with other molecules in the cell, enabling it to catalyze specific chemical reactions or to build specific structures within the cell. Due to this specificity, one protein cannot usually substitute for another any more than one tool can substitute for another. A topoisomerase can no more perform the job of a polymerase than a hatchet can perform the function of a soldering iron. Proteins can perform functions only by virtue of their three-dimensional specificity of fit with other equally specified and complex molecules within the cell. This three-dimensional specificity derives in turn from a one-dimensional specificity of sequencing in the arrangement of the amino acids that form proteins.

3.4 The Sequence Specificity of DNA

The discovery of the complexity and specificity of proteins raised an important question. How did such complex but specific structures arise in the cell? This question recurred with particular urgency after Sanger revealed his results in the early 1950s. Clearly, proteins were too complex and functionally specific to arise “by chance”. Moreover, given their irregularity, it seemed unlikely that a general chemical law or regularity governed their assembly. Instead, molecular biologists began to look for some source of information within the cell that could direct the construction of these highly specific structures. As Nobel Prize winner Jacques Monod would later recall, to explain the presence of the specific sequencing of proteins, “you absolutely needed a code.” 37

In 1953, James Watson and Francis Crick elucidated the structure of the DNA molecule. 38 The structure they discovered suggested a means by which information or “specificity” of sequencing might be encoded along the spine of DNA’s sugar-phosphate backbone. 39 Their model suggested that variations in the sequencing of the nucleotide bases might find expression in the sequencing of the amino acids that form proteins. Francis Crick proposed this idea in 1955, calling it the “sequence hypothesis”. 40

According to Crick’s hypothesis, the specific arrangement of the nucleotide bases on the DNA molecule generates the specific arrangement of amino acids in proteins. 41 The sequence hypothesis suggested that the nucleotide bases in DNA functioned like letters in an alphabet or characters in a machine code. Just as alphabetic letters in a written language may perform a communication function depending upon their sequencing, so too, Crick reasoned, the nucleotide bases in DNA may result in the production of a functional protein molecule depending upon their precise sequential arrangement. In both cases, function depends crucially upon sequencing. The nucleotide bases in DNA function in precisely the same way as symbols in a machine code or alphabetic characters in a book. In each case, the arrangement of the characters determines the function of the sequence as a whole. As Dawkins notes, “The machine code of the genes is uncannily computerlike.” Or, as software innovator Bill Gates explains, “DNA is like a computer program, but far, far more advanced than any software we’ve ever created.” In the case of a computer code, the specific arrangement of just two symbols (0 and 1) suffices to carry information. In the case of an English text, the twenty-six letters of the alphabet do the job. In the case of DNA, the complex but precise sequencing of the four nucleotide bases adenine, thymine, guanine, and cytosine (A, T, G, and C) stores and transmits genetic information, information that finds expression in the construction of specific proteins. Thus, the sequence hypothesis implied not only the complexity but also the functional specificity of DNA base sequencing.
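The comparison drawn in this section between binary code (two symbols), English text (twenty-six letters), and DNA (four bases) can be made quantitative with nothing more than standard combinatorics. A minimal sketch, for illustration only:

```python
# A minimal sketch of the storage capacity of the three "alphabets" compared
# in the text: binary (2 symbols), English (26 letters), DNA (4 bases).
# Uses only standard combinatorics; nothing here is biology-specific.
import math

def bits_per_symbol(alphabet_size: int) -> float:
    """Information capacity of one symbol, in bits (log2 of the alphabet size)."""
    return math.log2(alphabet_size)

def distinct_sequences(alphabet_size: int, length: int) -> int:
    """How many distinct sequences of a given length the alphabet allows."""
    return alphabet_size ** length

print(bits_per_symbol(2))        # binary: 1.0 bit per symbol
print(bits_per_symbol(4))        # DNA: 2.0 bits per base
print(bits_per_symbol(26))       # English: ~4.7 bits per letter
print(distinct_sequences(4, 3))  # 64 possible nucleotide triplets
```

On this measure a DNA base carries exactly twice the information of a binary digit, and three bases distinguish 64 alternatives, more than enough to specify 20 amino acids.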



Evolutionary theory, like any other branch of science, achieves progress by testing new ideas. Some of these ideas will go on to change what we thought we knew, others will be found incorrect, and some will stagnate as they fail to gather clear evidence for or against. Within evolutionary theory, many new causal factors have been suggested as necessary to explain how genetic diversity has arisen. Intelligent Design, for example, proposes that some types of genetic information cannot evolve through natural processes unless we admit a role for an intelligent designer. This proposition claims to be testable, but it rests on a definition of information that usually refers to creation by an intelligent agent. Meanwhile, many biologists find that they can understand exactly where life’s genetic information comes from (the local environment) by thinking in terms of more fundamental and well-established definitions of information that do not involve Intelligent Design. A related suggestion is that current evolutionary theory cannot explain how natural processes could produce a genetic information system in the first place. I agree that we are far from a full understanding, but I choose to outline some major themes in the scientific progress made since the deciphering of life’s genetic code, completed in 1966, to provide a context for the reader to judge for themselves whether it is time to conclude that this search has failed.

It would be remiss to finish an article in this journal without some comment on the theology of all this. If we accept the evolutionary explanations sketched above, then science is taking major steps towards understanding the mechanism by which life came into the universe. Some famous advocates of this science claim it presents a logical connection to an atheistic world-view. 48 Many others (myself included) perceive that any connection between evolution and spirituality is an act of faith, and faith in atheism is only one of many options. 49 For my part, I find excitement and challenge in the search to unravel this marvelous mystery. I choose to associate that inspiration with a loving, creator God whose universe I am exploring. I agree with Dawkins (and Darwin) that from a human standpoint, the suffering and death implicit in natural selection pose questions for my faith, and I am grateful that scientists and theologians are able to discuss such issues in forums such as this, 50 where I can read, learn and grow my relationship with God through an exploration of science.

Notes & References

The content of this post was originally published as part of a paper in the ASA’s academic journal, PSCF. It is republished here with permission.

Box 1. An Introduction to Biological Coding and the Central Dogma of Molecular Biology

A code is a system of rules for converting information from one representation into another. For example, Morse code describes the conversion of information represented in a simple alphabet of dots and dashes into another, more complex alphabet of letters, numbers and punctuation. The code itself is the system of rules that connects these two representations. Genetic coding involves much the same principles, and it is remarkably uniform throughout life: genetic information is stored in the form of nucleic acid (DNA and RNA), but organisms are built by (and to a large extent from) interacting networks of proteins. Proteins and nucleic acids are utterly different types of molecule; thus it is only by decoding genes into proteins that self-replicating organisms come into being, exposing genetic material to evolution. The decoding process occurs in two distinct stages: during transcription, local portions of the DNA double-helix are unwound to expose individual genes as templates from which temporary copies are made (transcribed) in the chemical sister language RNA. These messenger RNA molecules (mRNAs) are then translated into protein.
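The Morse example above can be sketched as a tiny rule set in Python. The table below is an excerpt of the real Morse alphabet (four letters only, chosen for illustration), just enough to show encoding and decoding as inverse conversions between representations:

```python
# A minimal sketch of the definition above: a code as a system of rules
# converting one representation into another. Excerpt of real Morse code only.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}
DECODE = {v: k for k, v in MORSE.items()}  # the inverse rule set

def encode(text: str) -> str:
    """Convert letters into dot-dash tokens, separated by spaces."""
    return " ".join(MORSE[ch] for ch in text)

def decode(signal: str) -> str:
    """Convert dot-dash tokens back into letters."""
    return "".join(DECODE[tok] for tok in signal.split(" "))

msg = encode("SOS")
print(msg)          # ... --- ...
print(decode(msg))  # SOS
```

The "code" is neither the dots nor the letters but the mapping connecting them, which is exactly the sense in which the genetic code is a code.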

The language-based terminology reflects the fact that both genes and proteins are essentially 1-dimensional arrays of chemical letters. However, the nucleic acid alphabet comprises just 4 chemical letters (the 4 nucleotides are often abbreviated to ‘A’, ‘C’, ‘G’ and ‘T’ – but see footnote 27), whereas proteins are built from 20 different amino acids. Clearly, no 1:1 mapping can connect nucleotides to amino acids. Instead, nucleotides are translated as non-overlapping triplets known as codons. With 4 chemical letters grouped into codons of length 3, there are 4 × 4 × 4 = 64 possible codons. Each of these 64 codons is assigned to exactly one of 21 meanings (20 amino acids and a ‘stop translation’ signal found at the end of every gene). The genetic code is quite simply the mapping of codons to amino acid meanings. One consequence of this mapping is that most of the amino acids are specified by more than one codon; this is commonly referred to as the redundancy of the code.
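The codon logic described above can be sketched in Python. The table below holds only a handful of the 64 assignments of the standard genetic code (the entries shown are real, but the full table is much larger), enough to illustrate triplet reading, stop signals, and redundancy:

```python
# A minimal sketch of the codon-to-amino-acid mapping described above.
# Only a few entries of the real 64-codon standard table are included.
CODON_TABLE = {
    "TTT": "Phe", "TTC": "Phe",                        # redundancy: 2 codons, 1 amino acid
    "ATG": "Met",                                      # also the usual start codon
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",       # the 'stop translation' signal
}

def translate(dna: str) -> list[str]:
    """Read non-overlapping triplets until a stop codon (or the sequence ends)."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        meaning = CODON_TABLE[dna[i:i + 3]]
        if meaning == "STOP":
            break
        protein.append(meaning)
    return protein

print(translate("ATGTTTGGATAA"))  # ['Met', 'Phe', 'Gly']
```

Note how the reading frame matters: the same letters grouped from a different starting point would hit entirely different codons, which is why the non-overlapping triplet convention is part of the code itself.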

Although the molecular machinery that produces genetic coding is complex (and indeed less than perfectly understood), the most essential elements for this discussion are the tRNAs and the ribosome. Each organism uses a set of slightly different tRNAs that each bind a specific amino acid at one end and recognize a specific codon or subset of codons at the other. As translation of a gene proceeds, appropriate tRNAs bind to successive codons, bringing the desired sequence of amino acids into close, linear proximity, where they are chemically linked to form a protein translation product. In this sense, tRNAs are adaptors and translators; between them, they represent the molecular basis of genetic coding. The ribosome is a much larger molecule, comprising both RNA and various proteins, which supervises the whole process of translation. It contains a tunnel through which the ribbon of messenger RNA feeds; somewhere near the center of the ribosome, a window exposes just enough genetic material for tRNAs to compete with each other to bind the exposed codons.


1. This definition appears, for example, within the classic text-book for undergraduates: Futuyma, D. J. Evolution. (2005, Sunderland, Massachusetts: Sinauer Associates)

2. For an accessible discussion of this topic, see Neil Shubin’s book Your Inner Fish (2009, Random House Digital)

3. For example, see the review by K. Omland and D. Funk. “Species level paraphyly and polyphyly.” Annual Reviews in Ecology, Evolution and Systematics (2003) 34: 397-423.

4. Christopher E. Bird, Brenden S. Holland, Brian W. Bowen, Robert J. Toonen. “Diversification of sympatric broadcast-spawning limpets (Cellana spp.) within the Hawaiian archipelago.” Molecular Ecology (2011) 20: 2128

5. For example, see Creation: Facts of Life. Chapter 2: Darwin and biologic change (2006, New Leaf Press, Green Forest, Arkansas). This text is freely available online here.

6. For an excellent review of the history by which evolutionary thought absorbed and dismantled these ideas to reach the “(Neo-)Darwinian Synthesis”, see Chapter 9, The Eclipse of Darwinism, within P.J. Bowler’s “Evolution: The History of an Idea” (1983, University of California Press, Berkeley and London)

7. P. Senter “Using creation science to demonstrate evolution: application of a creationist method for visualizing gaps in the fossil record to a phylogenetic study of coelurosaurian dinosaurs.” Journal of Evolutionary Biology (2010) 23:1732–1743. Click here for a more accessible overview of this article.

8. A good, recent summary is presented by Jerry Coyne’s book Why evolution is true. (2009, Viking Penguin, New York)

9. Natural mutations can sometimes have large effects, particularly in genetic regions that influence deep developmental pathways of multicellular organisms (i.e. the genes that control how other genes are switched on and off to build an adult organism from a single fertilized egg cell). However, these changes are generally deleterious to the organism, and are therefore unusual components of an evolutionary lineage. A deeper discussion of this type of mutation can be found in Carroll S. B. “Homeotic genes and the evolution of arthropods and chordates”. Nature (1995) 376: 479–85. I would draw the reader’s attention to the broader context: these sorts of mutations are limited to relatively few events on one small branch of the tree of life. In terms of general macro-evolution for life on our planet, biologists do not view these events as typical to the formation of new species.

10. Some of these suggestions for “skyhooks” and “cranes” that would lift the natural processes of evolution to produce higher levels of genetic change are discussed in Chapter 3 of Daniel Dennett’s book Darwin’s Dangerous Idea: evolution and the meaning of life (1995, Simon and Schuster, New York)

11. See for example Simon Conway Morris’ book Life’s solution: inevitable humans in a lonely universe (2003, Cambridge University Press, Cambridge).

12. See, for example, Stephen Jay Gould’s book Wonderful Life (1989, W.W. Norton, New York). Gould is extreme in his view, but is closer to the position of mainstream evolutionary science, as can be seen from reviews of the books in which Morris (footnote 11) argues for inevitable humans (e.g. the review by the National Center for Science Education)

13. For example, see the multi-authored The Deep Structure of Biology: is convergence sufficiently ubiquitous to give a directional signal?, (2008, Templeton Foundation Press, West Conshohocken PA)

14. For example, see Chapter 6 of William Dembski’s book Intelligent Design: The Bridge Between Science and Theology (1999, Inter Varsity Press)

15. “Intelligent Design as a Theory of Information,” William Dembski (1998). Web material copyrighted to William Dembski, available here.

16. Though nothing in evolutionary theory suggests that there must be an increase in the length or complexity of a DNA molecule over time: for example, many bacteria and viruses appear to have undergone extensive natural selection to reduce the size of their genetic material as a specific adaptation to make copies of themselves faster than their competitors. For a recent example, see: Nikoh N, Hosokawa T, Oshima K, Hattori M, Fukatsu T. “Reductive Evolution of Bacterial Genome in Insect Gut Environment.” Genome Biology and Evolution (2011)

17. R. Redon, S. Ishikawa, K. R. Fitch, L. Feuk, G. H. Perry, T. D. Andrews, H. Fiegler, M. H. Shapero, A. R. Carson, W. Chen, E. K. Cho, S. Dallaire, J. L. Freeman, J. R. González, M. Gratacòs, J. Huang, D. Kalaitzopoulos, D. Komura, J. R. MacDonald, C. R. Marshall, R. Mei, L. Montgomery, K. Nishimura, K. Okamura, F. Shen, M. J. Somerville, J. Tchinda, A. Valsesia, C. Woodwark, F. Yang, J. Zhang, T. Zerjal, J. Zhang, L. Armengol, D. F. Conrad, X. Estivill, C. Tyler-Smith, N. P. Carter, H. Aburatani, C. Lee, K. W. Jones, S. W. Scherer and M. E. Hurles “Global variation in copy number in the human genome” Nature (2006) 444: 444-454.

18. For an excellent, evolving review of the interesting topic of genome sizes, see Gregory, T.R. (2005). Animal Genome Size Database. Available here.

19. If you would like to consider the implications of combinatorial language in greater detail without any formal mathematics, try reading Jorge Luis Borges’ famous short story entitled “The library of Babel.” Available in English translation as pages 51-59 of “Labyrinths: selected stories and other writings” (1964, New Directions/Penguin, New York)

20. In fact, what is harder to deduce is which of the many routes is most likely, if you assign slightly different probabilities to each different type of step. This is why the past couple of decades have seen considerable research effort go into developing computer algorithms that estimate the most likely series of mutation-steps that separate two versions of genetic material. To understand the level of complexity here, consider some different routes by which a series of letter-mutations could transform the word “evolution” into “creation”, and then scale that challenge upwards to do something similar for two sentences, two paragraphs, two novels. A good, recent overview is given in: Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S., “MEGA5: Molecular Evolutionary Genetics Analysis using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods.” Mol Biol Evol (2011) 28:2731-9.
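To make the word example above concrete, here is a standard dynamic-programming sketch of Levenshtein edit distance, which counts each substitution, insertion, or deletion as one step. This is a deliberate simplification of what sequence-analysis software such as MEGA5 does, since real tools assign different weights to different kinds of step:

```python
# A minimal sketch: minimum number of single-letter mutation steps between two
# words, via the classic Levenshtein dynamic-programming recurrence.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))            # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                             # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute (or match)
        prev = cur
    return prev[-1]

print(levenshtein("evolution", "creation"))  # 5 single-letter steps
```

Five steps is the minimum; counting the many distinct five-step routes, and weighting them by per-step probabilities, is the harder problem the footnote describes.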

21. For an accessible, eloquent discussion of where this can lead, see Chapter 3 (“Accumulating Small Change” pp. 43-77), from Richard Dawkins’ book “The Blind Watchmaker” (1986, New York: W. W. Norton & Company).

22. On a different note, it is interesting to see how this same line of thought parallels theological examination of the famous biblical text that humanity was created in the image of God (Genesis 1:27). If each of us is built in the image of God, and each of us is different, then it follows that each of us is capable of developing a different relationship with God based on the unique perspective granted us. This observation provides a logical check to any theologies that assert necessary submission to a single, all-embracing interpretation of God’s revealed truth. Within the Gospels, Jesus’ personal encounters show a consistent emphasis on the unique point of connection between an individual’s perspective and God’s greater truth (e.g. compare John 3:1-7, John 4:1-29, Mark 10:17-22, Matthew 8:5-13, Luke 23:33-43), together with a consistent wariness towards group ideologies (e.g. Mark 12:18-27, Matthew 12:1-9, Matthew 15:1-11).

23. This reductionist description of evolution contains little that is new (scientifically) precisely because the aim of this essay is to explain how classic Neo-Darwinian orthodoxy addresses the issue of the origin of (new) genetic information. This view of evolution is probably best known through the popular works of writers such as Dawkins, and everything written here is in true alignment with insights expressed in books such as The Selfish Gene, The Blind Watchmaker and (most relevant to criticisms of reductionism) The Extended Phenotype. Behind these works lies an extensive primary research literature that has developed these ideas, before and after, with respect to genomics, genetics, biological development (“embryology”), animal behavior, morphology, life history strategies and so on. This reductionist view does not overlook the existence of phenotype as the filter through which the environment passes its information into DNA – this is why the Extended Phenotype is the most relevant popular work to discuss in this context – but as Dawkins explains so clearly in the Selfish Gene, environmental pressures that do not create a corresponding “match” within DNA are irrelevant to evolution precisely because heritability is one of the 3 tenets (variation, heritability and competition to reproduce) that lead to Darwin’s inescapable conclusion: heritable variations which increase the reproductive success of a lineage will, over time, accumulate.

24. For a fascinating and accessible discussion of the incorrect ideas that paved the way for these discoveries, see: B. Hayes: “The Invention of the Genetic Code,” American Scientist (1998) 86: 8 – 14

25. Watson J.D. and Crick F.H.C. “A Structure for Deoxyribose Nucleic Acid” Nature (1953) 171: 737-738

26. Frisch, L., (ed.) “The Genetic Code”, Cold Spring Harbor Symposia on Quantitative Biology (1966): 1-747.

27. More accurately, “A”, “T”, “G” and “C” refer to the four bases used in genetic coding. Bases are part of a whole nucleotide – the base must be added to a molecule of ribose and a phosphate to form a nucleotide. The ribose-phosphate construction is used as a universal scaffolding with which to join together sequences of bases. This technical differentiation becomes important to the origin of genetic information because bases are relatively easy to produce under prebiotic conditions, full nucleotides much less so. This and other subtleties are described further in a later section, explained well in Robert Shapiro’s work (footnote 42).

28. This key insight brought Christian Anfinsen a Nobel prize in 1972, and a brief overview is found in his classic paper: Anfinsen C.B. “Principles that govern the folding of protein chains” Science (1973) 181: 223–230

29. Crick, F. H. C. “The origin of the genetic code”, J. Mol. Biol. (1968) 38: 367-379.

30. Given that there are really only 64 different rules for converting genetic information into proteins, and an individual protein can be several hundred amino acids in length, most genes use each of these rules many times over.
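A quick pigeonhole check of this claim (using an assumed, typical protein length of 300 amino acids, a round number not taken from the text):

```python
# A gene encoding a 300-amino-acid protein uses 300 codons drawn
# from a repertoire of only 64, so on average each coding rule is
# reused almost five times within a single gene.
CODONS = 4 ** 3        # 4 bases in each of 3 codon positions = 64
protein_length = 300   # assumed typical protein length, in amino acids

average_reuse = protein_length / CODONS
print(f"average uses per codon: {average_reuse:.1f}")  # ~4.7
```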

31. For a much more thorough and technical version of this section including several hundred references to the primary scientific literature, see Freeland S.J. (2009) “Terrestrial Amino Acids and their evolution” in Amino Acids, Peptides and Proteins within Organic Chemistry, Vol. 1 (ed. A. B. Hughes), Wiley VCH.

32. Knight, R. D., S. J. Freeland, Landweber L. F. “Rewiring the keyboard: evolvability of the genetic code”, Nature Reviews Genetics (2001) 2: 49-58.

33. For a brief overview, see Atkins JF, Gesteland R. “The 22nd amino acid.” Science (2002) 296: 1409-10.

34. For an accessible overview of this topic, see Freeland S.J. and Hurst L.D. “Evolution Encoded,” Scientific American (2004) 290: 84-91. A more technical and more recent treatment of this topic can be found in Novozhilov AS, Koonin EV. “Exceptional error minimization in putative primordial genetic codes.” Biology Direct (2009) 4:44.

35. For a detailed review, see Koonin EV, Novozhilov AS. “Origin and evolution of the genetic code: the universal enigma.” IUBMB Life (2009) 61:99-111.

36. More than fifty models are considered in Trifonov, E.N. “Consensus temporal order of amino acids and evolution of the triplet code.” Gene (2000) 261:139-51.

37. Compare the similarities in two recent reviews: Higgs, P. G. & Pudritz, R. E. (2009) “A thermodynamic basis for prebiotic amino acid synthesis and the nature of the first genetic code.” Astrobiology 9: 483-90; Cleaves, H.J. (2010) “The origin of the biologically coded amino acids” J. Theor. Biol. 263: 490-498.

38. Philip GK, Freeland SJ. “Did evolution select a nonrandom “alphabet” of amino acids?” Astrobiology (2011)11:235-40.

39. The current status of data here is reviewed in Yarus, M., Widmann J.J. and Knight R. “RNA-amino acid binding: a stereochemical era for the genetic code.” J Mol Evol. (2009) 69:406-29.

40. For an introduction and references to more detailed material, see Cech TR “The ribosome is a ribozyme” Science (2000) 289:878-9

41. A brief review of data here is found in S.J. Freeland, R.D. Knight, L.F. Landweber “Do Proteins Predate DNA?” Science (1999) 286: 690-692.

42. Readers who are interested in this particular sub-topic are encouraged to read Robert Shapiro’s article “A Simpler Origin for Life”, Scientific American, June 2007 pp. 47-53. Shapiro’s passionate emphasis represents the best traditions of scientific skepticism, ruthlessly pointing out some very real problems with all current attempts to explain how a non-living universe could have produced RNA. In particular, enthusiasm for the RNA-world has become so widespread and fashionable that even high-profile scientific publications which explicitly seek to demonstrate pre-biotic origins for the RNA world continue to ignore well-understood and long-standing criticisms. For example, one recent, high-profile paper claims to demonstrate prebiotic plausibility for synthesis of nucleotides (Powner MW, Gerland B, Sutherland JD. “Synthesis of activated pyrimidine ribonucleotides in prebiotically plausible conditions.” Nature (2009) 459:239-42.) This interesting work shows that exactly the right purified solution of linear organic molecules can cyclize under the right conditions to produce activated nucleotides. However, it entirely misses Shapiro’s “garbage bag” point – that one of the biggest challenges for understanding the evolution of an RNA-world is to understand how building blocks form into oligonucleotides when they are coming from any sort of messy molecular organic broth (rather than a purified solution of exactly the right reactants under exactly the right conditions.) There is no chemical reason why nucleotides should form and stick to one another rather than to other chemicals produced in the same broth – such as amino acids, alcohols, esters etc. Of further note, the chemistry reported in this Nature paper bears no resemblance to the reactions by which living organisms have been making nucleotides for more than 3 billion years.
Maybe early life changed its metabolic pathways beyond recognition – but as yet we have absolutely no evidence for this whatsoever. Prebiotically possible and prebiotically plausible are subtly different concepts.

43. For example, see the comment by Steve Benner on page 52 of ref. 42. For a more detailed treatment, see Kim HJ, Ricardo A, Illangkoon HI, Kim MJ, Carrigan MA, Frye F, Benner SA., “Synthesis of carbohydrates in mineral-guided prebiotic cycles.”, Journal of the American Chemical Society (2011) 133:9457-68.

44. A very readable overview of this topic can be found in the first chapter of Nick Lane’s recent book “Life Ascending: The Ten Great Inventions of Evolution” (2009, W.W. Norton, New York)

45. One recent example is given by Mielke RE, Russell MJ, Wilson PR, McGlynn SE, Coleman M, Kidd R, Kanik I “Design, fabrication, and test of a hydrothermal reactor for origin-of-life experiments.” Astrobiology (2010) 10:799-810.

46. Although Cairns-Smith’s ideas date back to the mid-1960s, they are most accessibly presented in his later book: “Seven Clues to the Origin of Life.” (1985, Cambridge University Press, New York)

47. For a broad introduction to this progress, as of 2001, see “Life’s Rocky Start” by Robert M. Hazen, Scientific American (2001) 284: pp. 77-85

48. For example, see Chapter 4 (“God’s utility function” Pages 95-135) of Richard Dawkins book River out of Eden (Basic Books/Perseus, New York, 1995)

49. For example, see the letter(s) and signatories of the Clergy Letter Project.

50. For example, the excellent pair of articles: Junghyung Kim, “Naturalistic versus Eschatological Theologies of Evolution,” and Keith Miller, “And God saw that it was good” – both within Perspectives on Science and Christian Faith (2011) 63(2).

Stephen Freeland


How do we detect design?

People detect intelligent design all the time. For example, if we find arrowheads on a desert island, we can assume they were made by someone, even if we cannot see the designer. 1

There is an obvious difference between writing by an intelligent person, e.g. Shakespeare's plays, and a random letter sequence like WDLMNLTDTJBKWIRZREZLMQCOP. 2 There is also an obvious difference between Shakespeare and a repetitive sequence like ABCDABCDABCD. The latter is an example of order, which must be distinguished from Shakespeare, which is an example of specified complexity.

We can also tell the difference between messages written in sand and the results of wave and wind action. The carved heads of the U.S. presidents on Mt Rushmore are clearly different from erosional features. Again, this is specified complexity. Erosion produces either irregular shapes or highly ordered shapes like sand dunes, but not presidents' heads or writing.

Another example is the SETI program (Search for Extraterrestrial Intelligence). This would be pointless if there were no way of determining whether a certain type of signal from outer space would be proof of an intelligent sender. The criterion is, again, a signal with a high level of specified complexity; this would prove that there was an intelligent sender, even if we had no other idea of the sender's nature. But neither a random nor a repetitive sequence would be proof. Natural processes produce radio noise from outer space, while pulsars produce regular signals. Actually, pulsars were first mistaken for signals by people eager to believe in extraterrestrials, but this is because they mistook order for complexity. So evolutionists (as nearly all SETI proponents are) are prepared to use high specified complexity as proof of intelligence, when it suits their ideology. This shows once more how one's biases and assumptions affect one's interpretations of any data. See God and the Extraterrestrials for more SETI/UFO fallacies. 3

Life fits the design criterion

Life is also characterized by high specified complexity. The leading evolutionary origin-of-life researcher, Leslie Orgel, confirmed this:

Living things are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. 4

Unfortunately, a materialist like Orgel here refuses to make the connection between specified complexity and design, even though this is the precise criterion of design.

To elaborate, a crystal is a repetitive arrangement of atoms, so is ordered. Such ordered structures usually have the lowest energy, so will form spontaneously at low enough temperatures. And the information of the crystals is already present in their building blocks; for example, directional forces between atoms. But proteins and DNA, the most important large molecules of life, are not ordered (in the sense of repetitive), but have high specified complexity. Without specification external to the system, i.e., the programmed machinery of living things or the intelligent direction of an organic chemist, there is no natural tendency to form such complex specified arrangements at all. When their building blocks are combined (and even this requires special conditions 5 ), a random sequence is the result. The difference between a crystal and DNA is like the difference between a book containing nothing but ABCD repeated and a book of Shakespeare. However, this doesn't stop many evolutionists (ignorant of Orgel's distinction) claiming that crystals prove that specified complexity can arise naturally; they merely prove that order can arise naturally, which no creationist contests. 6

Intelligent Design: The Design Inference

In this section we will take a look at the evidence for design. William Dembski is an associate research professor at Baylor University and a senior fellow of the Discovery Institute's Center for the Renewal of Science and Culture. In his book Intelligent Design, Dembski quotes Princeton theologian Charles Hodge: There are in the animal and vegetable worlds innumerable instances of at least apparent contrivance (evidence of the mental) which have excited the admiration of men in all ages. There are three ways of accounting for them. The first looks to an intelligent agent… In the external world there is always and everywhere indisputable evidence of the activity of two kinds of force: the one physical, the other mental. The physical belongs to matter and is due to the properties with which it has been endowed; the other is the… mind of God.

The second method of accounting for contrivances in nature admits that they were foreseen and purposed by God, and that He endowed matter with forces which He foresaw and intended should produce such results. But here His agency stops. He never interferes to guide the operation of physical causes…

The third method is that which refers them to the blind operation of natural causes. This is the doctrine of the Materialists. (Dembski 1999)

The problem with science is its commitment to empiricism. Everything has to be measured, analyzed and accounted for. How do you measure God, or what God has done, or might do in the future? Because of this, science and an intelligent designer part ways.

There are very definite advantages to severing the world from God. Thomas Huxley, for instance, found great comfort in not having to account for his sins to a creator. Naturalism promises to free humanity from the weight of sin by dissolving the very concept of sin. (Dembski 1999)

The fact remains that intelligent causes have played, are playing and will continue to play an important role in science. Entire industries, economic and scientific, depend crucially on such notions as intelligence, intentionality and information. Included here are forensic science, intellectual property law, insurance claims investigation, cryptography, random number generation, archaeology and the search for extraterrestrial intelligence (SETI). (Dembski 1999)

Can distinctions be made between physical and intelligent causes? Are these distinctions reliable enough to denote marks of intelligence that signal the activity of an intelligent cause? Finding a reliable criterion for detecting the activity of intelligent causes has to date constituted the key obstacle facing Hodge's first method… determining the mind of God. (Dembski 1999)

If we prescribe in advance that science must be limited to strictly natural causes, then science will necessarily be incapable of investigating God's interaction with the world. But if we permit science to investigate intelligent causes (as it already does, for example, in forensic science), then God's interaction with the world, insofar as it manifests the characteristic features of intelligent causation, becomes a legitimate domain for scientific investigation. (Dembski 1999)

Design as a scientific theory

Scientists are beginning to realize that design can be rigorously formulated as a scientific theory. What has kept design outside the scientific mainstream these last hundred and forty years is the absence of precise methods for distinguishing intelligently caused objects from unintelligently caused ones.

What has emerged is a new program for scientific research known as intelligent design. Within biology, intelligent design is a theory of biological origins and development. Its fundamental claim is that intelligent causes are necessary to explain the complex, information-rich structures of biology and that these causes are empirically detectable. There exist well-defined methods that, on the basis of observational features of the world, are capable of reliably distinguishing intelligent causes from undirected natural causes. Such methods are found in already existing sciences such as those mentioned earlier.

Whenever these methods detect intelligent causation, the underlying entity they uncover is information. Information becomes a reliable indicator of intelligent causation as well as a proper object for scientific investigation. Intelligent design is therefore not the study of intelligent causes per se but of informational pathways induced by intelligent causes. Intelligent design presupposes neither a creator nor miracles. Intelligent design is theologically minimalist. It detects intelligence without speculating about the nature of the intelligence. (Dembski 1999) Intelligent design does not try to get into the mind of a designer and figure out what a designer is thinking. The designer's thought processes lie outside the scope of intelligent design. As a scientific research program, intelligent design investigates the effects of intelligence and not intelligence as such. (Dembski 2004)

There's a joke that clarifies the difference between intelligent design and creation. Scientists come to God and claim they can do everything God can do. "Like what?" asks God. "Like creating human beings," say the scientists. "Show me," says God. The scientists say, "Well, we start with some dust and then" - God interrupts: "Wait a second. Get your own dust." Creation asks for the ultimate resting place of explanation: the source of being of the world. Intelligent design, by contrast, inquires not into the ultimate source of matter and energy but into the cause of their present arrangements. (Dembski 2004) Scientific creationism's reliance on narrowly held prior assumptions undercuts its status as a scientific theory. Intelligent design's reliance on widely accepted scientific principles, on the other hand, ensures its legitimacy as a scientific theory. (Dembski 2004)

What will science look like once intelligent causes are readmitted to full scientific status? The worry is that intelligent design will stultify scientific inquiry. Suppose Paley was right about the mammalian eye exhibiting sure marks of intelligent causation. How would this recognition help us understand the eye any better as scientists? Actually, it would help quite a bit. It would put a stop to all those unsubstantiated just-so stories that evolutionists spin out in trying to account for the eye through a gradual succession of undirected natural causes. It would preclude certain types of scientific explanations. This is a contribution to science. Science then becomes a process whereby one intelligence determines what another intelligence has done. (Dembski 1999)

The Designer
The physical world of science is silent about the revelation of Christ in Scripture. Nothing prevents the physical world from independently testifying to the God revealed in the Scripture. Now intelligent design does just this - it puts our native intellect to work and thereby confirms that a designer of remarkable talents is responsible for the physical world. How this designer connects with the God of Scripture is then left for theology to determine. (Dembski 1999)

Why should anyone want to reinstate design into science? Chance and necessity have proven too thin an explanatory soup on which to nourish a robust science. In fact, by dogmatically excluding design from science, scientists are themselves stifling scientific inquiry. Richard Dawkins begins his book The Blind Watchmaker by stating, "Biology is the study of complicated things that give the appearance of having been designed for a purpose." In What Mad Pursuit, Francis Crick, Nobel laureate and co-discoverer of the structure of DNA, writes, "Biologists must constantly keep in mind that what they see was not designed, but rather evolved." (Dembski 1999)

The Complexity-Specification Criterion
Whenever design is inferred, three things must be established: contingency, complexity and specification. Contingency ensures that the object in question is not the result of an automatic and therefore unintelligent process that had no choice in its production. Complexity ensures that the object is not so simple that it can readily be explained by chance. Finally, specification ensures that the object exhibits the type of pattern characteristic of intelligence.

Contingency is further understood as the property of an object, event or structure that is irreducible to any underlying physical necessity. The sequencing of DNA bases, for example, is irreducible to the bonding affinities between the bases. (Dembski 1999)

The Explanatory Filter
William Dembski has devised what he calls the "explanatory filter" to determine whether design is present or not.

First to be assessed is whether the situation or object is contingent. If not, the situation is attributable to necessity. To say something is necessary is to say that it has to happen and that it can happen in one and only one way. Consider a biological structure which results from necessity: it would have to form as reliably as water freezes when its temperature is suitably lowered. (Dembski 2004) The opposite of necessity is contingency. For something to be contingent is to say that it can happen in more than one way. Contingency presupposes a range of possibilities, such as the possible results of spinning a roulette wheel. To get a handle on those possibilities, scientists typically assign them probabilities. (Dembski 2004) Either contingency is a blind, purposeless contingency - which is chance (whether pure chance or chance constrained by necessity) - or it is a guided, purposeful contingency - which is intelligent causation. (Dembski 2002)

Second, if something is determined to be contingent, the next question is: is it complex? If complexity is absent, the situation is attributable to chance.

Third, if something is determined to be complex, is it specified? If it is not specified, the situation is attributable to chance. If, however, specificity is established, the situation is determined to be designed. According to the complexity-specification criterion, once the improbabilities become too vast and the specifications too tight, chance is eliminated and design is implicated. (Dembski 1999)

Whenever this criterion attributes design, it does so correctly. In every instance where the complexity-specification criterion attributes design and where the underlying causal story is known, it turns out design actually is present. It has the same logical status as concluding that all ravens are black given that all ravens observed to date have been found to be black. (Dembski 1999)

William Dembski in his book The Design Revolution provides us with an example from the movie Contact that illustrates how intelligent design can be detected.

After years of receiving apparently meaningless "random" signals, the Contact researchers discovered a pattern of beats and pauses that corresponds to the sequence of all the prime numbers between 2 and 101. That grabbed their attention, and they immediately detected intelligent design. When a sequence begins with two beats and then a pause, three beats and then a pause, and continues through each prime number all the way to 101 beats, researchers must infer the presence of an extraterrestrial intelligence.

Here's why. Nothing in the laws of physics requires radio signals to take one form or another, so the prime sequence is contingent rather than necessary. Also, the prime sequence is a long sequence and therefore complex. Finally, it was not just complex, but it also exhibited an independently given pattern or specification. (It was not just any old sequence of numbers but a mathematically significant one - the prime numbers.) (Dembski 2004)
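The pattern described above is easy to make concrete. The sketch below (a toy illustration of our own, not from Dembski) generates the prime numbers from 2 to 101 and renders them as the beat-and-pause signal the passage describes.

```python
# Sketch of the "Contact" signal: the primes from 2 to 101, rendered
# as groups of beats ('.') separated by pauses (spaces). Any simple
# trial-division primality test suffices at this scale.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [n for n in range(2, 102) if is_prime(n)]
signal = " ".join("." * p for p in primes)  # ".. ... ..... ......." ...

print(len(primes))             # 26 primes between 2 and 101
print(primes[0], primes[-1])   # 2 101
```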

A second application of the Explanatory Filter is seen in the workings of a safe's combination lock. The safe's lock is marked with a hundred numbers ranging from 00 to 99, and five turns in alternating directions are required to open the lock. We assume that one and only one sequence of numbers opens the lock (e.g., 34-98-25-09-71). There are thus 10 billion possible combinations, of which precisely one opens the lock.

Feeding this situation into the Explanatory Filter, we note first that no regularity or law of nature requires the combination lock to turn to the combination that opens it; the opening of the bank's safe is therefore contingent. Second, random twirling of the combination lock's dial is exceedingly unlikely to open the lock. This makes the opening of the safe complex. Is the opening of the safe specified? If not, the opening of the safe could be attributed to chance. Since only one of the 10 billion possibilities opens the lock, the opening of the safe is also specified. This moves the problem to the area of design. Any sane bank worker would instantly recognize: somebody knew, and chose to design the lock to open using the prescribed numbers in proper rotation. (Dembski 2004)
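The arithmetic behind the 10 billion figure is simply 100 raised to the power 5, one factor of 100 per turn of the dial; a minimal sketch:

```python
# Combination-lock arithmetic from the example above: a dial with 100
# positions (00-99) chosen five times in sequence gives 100**5 equally
# likely combinations, exactly one of which opens the lock.
positions = 100
turns = 5

combinations = positions ** turns
print(combinations)       # 10000000000 (10 billion)
print(1 / combinations)   # probability that one random try opens the safe
```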

Notice the word "chose" in the preceding sentence. With natural selection there is the concept of choice. To "select" is to choose. In ascribing the power to choose to unintelligent natural forces, Darwin perpetrated the greatest intellectual swindle in the history of ideas. Nature has no power to choose. All natural selection does is narrow the variability of incidental change by weeding out the less fit. It acts on the spur of the moment, based solely on what the environment at the present time deems fit, and thus without any prevision of future possibilities. This blind process, when coupled with another blind process, namely incidental change, is supposed to produce designs that exceed the capacity of any designers in our experience. No wonder Daniel Dennett, in Darwin's Dangerous Idea, credits Darwin with "the single best idea anyone has ever had." Getting design without a designer is a good trick indeed. With advances in technology as well as the information and life sciences, the Darwinian gig is now up. It's time to lay aside the tricks - the smokescreens and the hand-waving, the just-so stories and the stonewalling, the bluster and the bluffing - and to explain scientifically what people have known all along, namely, why you can't get design without a designer. That's where intelligent design comes in. (Dembski 2004)

Why the Criterion Works
What makes intelligent agents detectable? The principal characteristic of intelligent agency is choice. Intelligence consists in choosing between alternatives. How do we recognize that an intelligent agent has made a choice? A random ink blot is unspecified; a message written with ink on paper is specified. The exact message recorded may not be specified in advance, but the characteristics of written language will nonetheless specify it. This is how we detect intelligent agency.

A psychologist who observes a rat making no erroneous turns and in short order exiting a maze will be convinced that the rat has indeed learned how to exit the maze and that this was not dumb luck. The more complex the maze and the more specific the turns, the more evidence the psychologist has that the rat did not accomplish this feat by chance. This general scheme for recognizing intelligent agency is but a thinly disguised form of the complexity-specification criterion. In general, to recognize intelligent agency we must observe an actualization of one among several competing possibilities, note which possibilities were ruled out and then be able to specify the possibility that was actualized. (Dembski 1999)

Therefore there exists a reliable criterion for detecting design. This criterion detects design strictly from observational features of the world. Moreover, it belongs to probability and complexity theory, not to metaphysics and theology. And although it cannot achieve logical demonstration, it does achieve statistical justification so compelling as to demand assent. When this criterion is applied to biology, it detects design. In particular, it shows that Michael Behe's irreducibly complex biochemical systems are designed. (Dembski 1999)

Information can be both complex and specified. Information that is both complex and specified will be called complex specified information, or CSI. The sixteen-digit number on your VISA card is an example of CSI. The complexity of this number ensures that a would-be thief cannot randomly pick a number and have it turn out to be a valid VISA number.

Algorithms (mathematical procedures for solving problems) and natural laws are in principle incapable of explaining the origin of information. They can explain the flow of information. Indeed, algorithms and natural laws are ideally suited for transmitting already existing information. What they cannot do, however, is originate information. Instead of explaining the origin of CSI, algorithms and natural laws shift the problem elsewhere - in fact, to a place where the origin of CSI will be at least as difficult to explain as before. (Dembski 1999)

Take for example a computer algorithm that performs addition. The algorithm has a correctness proof, so it performs its additions correctly. Given the input data 2 + 2, can the algorithm output anything other than 4? Computer algorithms are wholly deterministic. They allow for no contingency (no other options) and thus cannot generate new information. Without contingency, laws cannot generate information, to say nothing of complex specified information. Time, chance and natural processes have their limitations.

If not by means of laws, how then does contingency - and hence information - arise? There are two possibilities. Either the contingency is a blind, purposeless contingency, which is chance, or it is a guided, purposeful contingency, which is intelligent causation.

Can chance generate Complex Specified Information? (CSI)
Chance can generate complex unspecified information, and chance can generate noncomplex specified information. What chance cannot generate is information that is both complex and specified.

A typist randomly typing a long sequence of letters will generate complex unspecified information: the precise sequence of letters typed constitutes a highly improbable event that conforms to no independent pattern. Even though a meaningful word might appear here and there, random typing cannot produce an extended meaningful text, that is, information that is both complex and specified.

Why can't this happen by chance? When the improbabilities become too vast and the specifications too tight, chance is eliminated and design is implicated. Just where the probabilistic cutoff lies can be debated, but that there is a cutoff beyond which chance becomes an unacceptable explanation is clear. The universe will experience heat death before random typing at a keyboard produces a Shakespearean sonnet.
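The scale of the improbability is straightforward to estimate. A sketch under two illustrative assumptions not taken from the text: a 27-symbol keyboard (26 letters plus a space) and a sonnet of roughly 600 characters:

```python
import math

ALPHABET = 27      # assumed: 26 letters plus a space
SONNET_LEN = 600   # assumed: rough character count of a Shakespearean sonnet

# Probability that uniform random typing reproduces one specific 600-character text:
log10_p = -SONNET_LEN * math.log10(ALPHABET)
bits = SONNET_LEN * math.log2(ALPHABET)

print(f"p is about 10^{log10_p:.0f}")     # ~10^-859
print(f"equivalent to {bits:.0f} bits")   # ~2853 bits
```

Even this single example sits hundreds of orders of magnitude beyond the 500-bit universal probability bound the text discusses next.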
(Dembski, 1999)

Any output of specified complexity requires a prior input of specified complexity. In the case of evolutionary algorithms, they can yield specified complexity only if they themselves are carefully front-loaded with the right information and thus carefully adapted to the problem at hand. Evolutionary algorithms therefore do not generate or create specified complexity, but merely harness already existing specified complexity. There is only one known generator of specified complexity, and that is intelligence. (Dembski 2002)

The Probability Factor
The French mathematician Emile Borel proposed 1 in 10 to the 50th power as a universal probability bound below which chance could definitely be precluded. Borel's probability bound translates into 166 bits of information. William Dembski, in his book The Design Inference, describes a more stringent probability bound which takes into consideration the number of elementary particles in the observable universe, the duration of the observable universe until its heat death, and the Planck time. A probability bound of 1 in 10 to the 150th power results, which translates into 500 bits of information. Dembski chooses this more stringent value. If we now define CSI as any specified information whose complexity exceeds 500 bits, it follows immediately that chance cannot generate CSI. (Dembski 1999) Any specified event of probability less than 1 in 10 to the 150th power will remain improbable even after all conceivable probabilistic resources from the observable universe have been factored in. It thus becomes a universal probability bound. (Dembski 2004)
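Dembski's 10^150 figure is the product of three cosmological estimates, and the bit equivalents follow directly from logarithms. A sketch reproducing the arithmetic (the three factors are the ones standardly cited from The Design Inference):

```python
import math

particles = 10 ** 80      # elementary particles in the observable universe
transitions = 10 ** 45    # maximum physical state changes per second (Planck time)
seconds = 10 ** 25        # generous upper bound on the duration of the universe

resources = particles * transitions * seconds
print(f"probabilistic resources: 10^{round(math.log10(resources))}")  # 10^150
print(f"Dembski bound: {math.log2(resources):.0f} bits")              # 498, rounded up to 500
print(f"Borel bound:   {math.log2(10 ** 50):.0f} bits")               # 166
```

The exact value is about 498.3 bits; Dembski rounds up to 500 for a cleaner threshold.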

The view that the specific sequence of the nucleotides in the DNA molecule of the first organism came about by a purely random process in the early history of the earth cannot stand: CSI cries out for explanation, and pure chance won't do it. Richard Dawkins makes this point eloquently: "We can accept a certain amount of luck in our explanations, but not too much…this ration has, as its upper limit, the number of eligible planets in the universe … We therefore have at our disposal, if we want to use it, odds of 1 in 100 billion-billion as an upper limit to spend on our theory of the origin of life. Suppose we want to suggest, for instance, that life began when both DNA and its protein-based replication machinery spontaneously chanced to come into existence. We can allow ourselves the luxury of such an extravagant theory, provided that the odds against this coincidence occurring on a planet do not exceed 100 billion-billion to one." (Dembski 2004)

Unlimited Probabilistic Resources
Probabilistic resources refer to the number of opportunities an event has to occur. Unlimited probabilistic resources include not only probabilities that may be calculated in the present scientific context, but resources that go beyond what is presently known. Evolutionists resort to this move when their backs are to the wall: they appeal to resources that are not within our purview at present, looking to some future set of conditions that might help their position. It is important to deal with the here and now, and the reality of the present. If present methods of applying probabilities to an occurrence such as the origin of life yield a probability of effectively zero, why proceed into the unknown unless out of sheer desperation?

William Dembski illustrates the following concept. What if the known universe is but one of many possible universes, each of which is as real as the known universe but causally inaccessible to it? If so, are not the probabilistic resources needed to eliminate chance vastly increased and is not the validity of 10 to the 150th power as a universal probability bound thrown into question? This line of reasoning has gained widespread currency among scientists and philosophers in recent years. Is it not illegitimate to rescue chance by invoking probabilistic resources from outside the known universe? Should there not be independent evidence to invoke a resource? (Dembski 2002)

Was Arthur Rubinstein a great pianist, or was it just that whenever he sat at the piano, he happened by chance to put his fingers on the right keys to produce beautiful music? It could happen by chance, and there is some possible world where everything is exactly as it is in this world except that the counterpart to Arthur Rubinstein cannot read music and happens to be incredibly lucky whenever he sits at the piano.

Perhaps Shakespeare was an imbecile who just by chance happened to string together a long sequence of apt phrases. Unlimited probabilistic resources ensure that we will never know. (Dembski 2002) Are not the probabilities on our side that Rubinstein and Shakespeare were a consummate pianist and writer, respectively?

How can we know for sure that one is listening to Arthur Rubinstein the musical genius and not a lucky poseur? The mark of Rubinstein's skill (design) is that he was following a pre-specified concert program, in this instance playing a particular piece listed in the program note for note. His performance exhibited specified complexity. Specified complexity is how we eliminate bizarre possibilities in which chance is made to account for things that we would ordinarily attribute to design. (Dembski 2002)

There is an advantage to science in limiting probabilistic resources. Limited probabilistic resources open possibilities for knowledge and discovery that would otherwise be closed. Limits enable us to detect design where otherwise it would elude us. Limits also protect us from the unwarranted confidence in natural causes that unlimited probabilistic resources invariably seem to engender. (Dembski 2002)

The Law of Conservation of Information
If chance has no chance of producing complex specified information, what about natural causes? Natural causes are incapable of generating CSI. Dembski calls this result the law of conservation of information or LCI.

Dembski, who notes a precedent in Peter Medawar's book The Limits of Science, proposes several corollaries:

(1) The CSI in a closed system of natural causes remains constant or decreases.

(2) CSI cannot be generated spontaneously, originate endogenously, or organize itself.

(3) The CSI in a closed system of natural causes either has been in the system eternally or was at some point added exogenously (implying that the system, though now closed, was not always closed).

(4) In particular any closed system of natural causes that is also of finite duration received whatever CSI it contains before it became a closed system.

To explain the origin of information in a closed system requires what is called a reductive explanation. Richard Dawkins, Daniel Dennett and many scientists and philosophers are convinced that proper scientific explanations must be reductive, moving from the complex to the simple. The law of conservation of information (LCI) cannot be explained reductively. To explain an instance of CSI requires at least as much CSI as we started with. A pencil-making machine is more complicated than the pencils it makes. (Dembski 1999)

The most interesting application of the law of conservation of information is the reproduction of organisms. In reproduction one organism transmits its CSI to the next generation. Most evolutionists would argue that the Darwinian mechanism of mutation and selection introduces novel CSI into an organism, supplementing the CSI of the parents with CSI from the environment. However, there is a feature of CSI that counts decisively against generating CSI from the environment via mutation and selection: CSI is holistic. To say that CSI is holistic means that individual items of information cannot simply be added together to form a new item of complex specified information. CSI requires not only having the right collection of parts but also having the parts in proper relation. Adding random information to an already present body of information will distort or reduce the information already present. Even if two coherent bodies of information are combined, the result, unless specified in some way, will not be useful to the organism. A sentence with its words scrambled is nonsensical and contains no information. Likewise, two sentences that have no relationship with one another do not add to the information already present. The specification that identifies METHINKS IT IS LIKE A WEASEL and the specification that identifies IN THE BEGINNING GOD CREATED do not form a joint, juxtaposed line of information. CSI is not obtained by merely aggregating component parts or by arbitrarily stitching items of information together. (Dembski 1999)

The best thing that can happen to a book on a library shelf is that it remains as it was when originally published and thus preserves the CSI inherent in its text. Over time, however, what usually happens is that the book gets old, pages fall apart, and the information on the pages disintegrates. The Law of Conservation of Information is therefore more like a law of thermodynamics governing entropy than a conservation law governing energy, with the focus on degradation rather than conservation. The Law of Conservation of Information says that natural causes can at best preserve CSI, may degrade it, but cannot generate it. Natural causes are ideally suited as conduits for CSI. It is in this sense, then, that natural causes can be said to "produce CSI." But natural causes never produce things de novo or ex nihilo. When natural causes produce things, they do so by reworking other things. (Dembski 2002)

A classic example whereby information is degraded over time is seen in an experiment by Spiegelman in 1967. The experiment allowed a molecular replicating system to proceed in a test tube without any cellular organization around it.

The replicating molecules (the nucleic acid templates) require an energy source, building blocks (i.e., nucleotide bases), and an enzyme to help the polymerization process that is involved in self-copying of the templates. Then away it goes, making more copies of the specific nucleotide sequences that define the initial templates. But the interesting result was that these initial templates did not stay the same: they were not accurately copied. They got shorter and shorter until they reached the minimal size compatible with retaining self-copying properties. And as they got shorter, the copying process went faster. So what happened was natural selection in a test tube: the shorter templates that copied themselves faster became more numerous, while the larger ones were gradually eliminated. This looks like Darwinian evolution in a test tube. But this evolution went one way: toward greater simplicity. Actual evolution tends to go toward greater complexity, species becoming more elaborate in their structure and behavior, though the process can also go in reverse toward simplicity. But DNA on its own can go nowhere but toward greater simplicity. In order for the evolution of complexity to occur, DNA has to be within a cellular context; the whole system evolves as a reproducing unit. (Dembski 2002)
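The dynamic described here can be caricatured in a few lines. This is a toy model built on loudly invented assumptions (replication probability inversely proportional to template length, random loss of a few bases per copy, a fixed population cap), not a simulation of Spiegelman's actual chemistry:

```python
import random

random.seed(1)

MIN_LEN = 50                 # assumed: shortest length that can still self-copy
population = [220] * 50      # start with 50 templates of length 220 (arbitrary units)

for generation in range(200):
    offspring = []
    for length in population:
        # Shorter templates copy faster: replication probability ~ 1/length.
        if random.random() < MIN_LEN / length:
            # Copying is sloppy: a child may lose up to 5 bases, but never below MIN_LEN.
            offspring.append(max(MIN_LEN, length - random.randint(0, 5)))
    # Keep only the newest 500 templates; older, longer lineages fall out of the pool.
    population = (population + offspring)[-500:]

print(f"shortest: {min(population)}, mean length: {sum(population) / len(population):.0f}")
```

Run it and the mean template length drifts steadily downward toward the minimum: a selection pressure for replication speed alone rewards simplicity, which is the one-way trend the text describes.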

Application to Evolutionary Biology
How does all this apply to evolutionary biology? Complex specified information (CSI) is abundant in the universe. Natural causes are able to shift it around and possibly express it in biological systems. What we wish to know, however, is how the CSI was first introduced into the organisms we see around us. In reference to the origin of life we want to know the informational pathway that takes the CSI inherent in a lifeless universe and translates it into the first organism. There are only so many options. CSI in an organism consists of CSI acquired at birth together with whatever CSI is acquired during the course of its life. CSI acquired at birth derives from inheritance with modification (mutation). Modification occurs by chance. CSI acquired after birth involves selection along with infusion or the direct introduction of novel information from outside the organism. Therefore inheritance with modification, selection and infusion - these three account for the CSI inherent in biological systems.

Modification includes - to name but a few - point mutations, base deletions, genetic crossover, transpositions, and recombination generally. Given the law of conservation of information, it follows that inheritance with modification by itself is incapable of explaining the increased complexity of CSI that organisms have exhibited in the course of natural history. Inheritance with modification therefore needs to be supplemented. The candidate for this supplementation is selection. Selection, it is claimed, can introduce new information into a population. Nonetheless, this view places undue restrictions on the flow of biological information, restrictions that biological systems routinely violate.

For example, take Michael Behe's bacterial flagellum. How does a bacterium without a flagellum evolve one by the processes so far discussed? We have already outlined the complexity of the flagellum. How does selection account for it? Selection cannot accumulate proteins, holding them in reserve until, with the passing of many generations, they are finally available to form a complete flagellum. Neither the environment nor the bacterial cell contains a prescribed plan or blueprint of the flagellum. Selection can only build on partial function, gradually, generation after generation. But a flagellum without its full complement of protein parts doesn't function at all. Consequently, if selection and inheritance with modification are going to produce the flagellum, they have to do it in one generation. The CSI of a flagellum far exceeds 500 bits. Selection will simply deselect any bacterium that lacks a functioning flagellum, and a 500-bit novelty is far beyond any chance of occurring.

There remains only one source for the CSI in biological systems - infusion. Infusion becomes problematic once we start tracing backwards the informational pathways of infused information. Plasmid exchange is well known in bacteria, and allows bacterial cells to acquire antibiotic resistance. Plasmids are small circular pieces of DNA that can be passed from one bacterial cell to another. Problems begin when we ask: where did the bacterium that released the plasmid in turn derive it? There is a regress here, and this regress always terminates in something non-organismal. If the plasmid is cumulatively complex, then the general evolutionary methods might apply. However, if the plasmid is irreducibly complex, whence could it have arisen? Because organisms have a finite trajectory back in time, biotic infusion must ultimately give way to abiotic infusion, and endogenous (intracellular) information must ultimately derive from exogenous (extracellular) information.

Two final questions arise. (1) How is abiotically infused CSI transmitted to an organism? And, (2) where does this information reside prior to being transmitted? The obvious alternative is and must be a theological one. The information in biological systems can be traced back to the direct intervention of God. (Dembski 1999)

As Michael Behe's irreducibly complex biochemical systems readily yield to design, so too does the fine-tuning of the universe. The complexity-specification criterion demonstrates that design pervades cosmology and biology. Moreover, it is transcendent design, not reducible to the physical world. Indeed, no intelligent agent who is strictly physical could have presided over the origin of the universe or the origin of life.

Just as physicists reject perpetual motion machines because of what they know about the inherent constraints on energy and matter, so too design theorists reject any naturalistic reduction of specified complexity because of what they know about the inherent constraints on natural causes. (Dembski 1999)

Evolutionary biologists assert that design theorists have failed to take into account indirect Darwinian pathways by which the bacterial flagellum might have evolved through a series of intermediate systems that changed function and structure over time in ways that we do not yet understand. There is no convincing evidence for such pathways. Can the debate end with evolutionary biologists chiding design theorists for not working hard enough to discover those (unknown) indirect Darwinian pathways that lead to the emergence of irreducibly and minimally complex biological structures like the bacterial flagellum? Science must form its conclusions on the basis of available evidence, not on the possibility of future evidence. (Dembski 1999)

The Darwinian Extrapolation
According to Darwinian theory, organisms possess unlimited plasticity to diversify across all boundaries; moreover, natural selection is said to have the capability of exploiting that plasticity and thereby delivering the spectacular diversity of living forms that we see.

Such a theory, however, necessarily commits an extrapolation. And as with all extrapolations, there is always the worry that what we are confident about in a limited domain may not hold more generally outside that domain. In the early days of Newtonian mechanics, physicists thought Newton's laws gave a total account of the constitution and dynamics of the universe. Maxwell, Einstein, and Heisenberg each showed that the proper domain of Newtonian mechanics was far more constricted. It is therefore fair to ask whether the Darwinian mechanism may not face similar limitations. With many extrapolations there is enough of a relationship between inputs and outputs that the extrapolation is experimentally accessible. It then becomes possible to confirm or disconfirm the extrapolation. This is not true for the Darwinian extrapolation. There are too many historical contingencies and too many missing data to form an accurate picture of precisely what happened. It is not presently possible to determine how the Darwinian mechanism actually transformed, say, a reptile into a mammal over the course of natural history. (Dembski 2002)

Let's now ask: Is intelligent design refutable? Is Darwinism refutable? Yes to the first question, no to the second. Intelligent design could in principle be readily refuted. Specified complexity in general, and irreducible complexity in biology, are, within the theory of intelligent design, key markers of intelligent agency. If it could be shown that biological systems that are wonderfully complex, elegant and integrated - such as the bacterial flagellum - could have been formed by a gradual Darwinian process, then intelligent design would be refuted on the general grounds that one does not invoke intelligent causes when undirected natural causes will do.

By contrast, Darwinism seems effectively irrefutable. The problem is that Darwinists raise the standard for refutability too high. It is certainly possible to show that no Darwinian pathway could reasonably be expected to lead to an irreducibly complex biological structure. But Darwinists want something stronger, namely, to show that no conceivable Darwinian pathway could have led to that structure. Such a demonstration requires an exhaustive search of all conceptual possibilities and is effectively impossible to carry out. (Dembski 1999) What an odd set of circumstances. The methodology which has the more convincing and overwhelming evidence is ignored, and the methodology that has little or no evidence is in vogue and irrefutable.

Let us turn to another aspect of testability - explanatory power. Underlying explanatory power is a view of explanation known as inference to the best explanation, in which a "best explanation" always presupposes at least two competing explanations. Obviously, a "best explanation" is one that comes out on top in a competition with other explanations. Design theorists claim an edge in explanatory power over natural selection. Darwinists, of course, see the matter differently.

What is the problem of having a design-theoretical tool chest added into a Darwinian tool chest? Much as some tools just sit there never to be used, design then has the option of just sitting there and possibly becoming superfluous. What is the fear of having a broad tool-chest? (Dembski 1999)

Is there any hope for the evolutionist in exploring, with an unlimited amount of time, indirect Darwinian pathways which have yet to be discovered? For the sake of clarification, an indirect Darwinian pathway is one by which a complex specified biological structure could be produced by a Darwinian naturalistic mechanism that has yet to present itself as a measurable entity to science.

William Dembski provides us with an illustration. Johnny is certain that there are leprechauns hiding in his room. Imagine this child were so ardent and convincing that he set all of Scotland Yard onto the task of searching meticulously, tirelessly, decade after decade, for these supposed leprechauns, for any solid evidence at all of their prior habitation of the bedroom. Driven by gold fever for the leprechauns' treasure, postulating new ways of catching a glimpse of a leprechaun - a hair, a fingerprint, any clue at all - the search continues. After many decades, what should one say to the aging parents of the now aging boy? Would it be logical to shake your finger at the parents and tell them, "Absence of evidence is not evidence of absence. Step aside and let the experts get back to work"? That would be absurd. And yet that, essentially, is what evolutionary biologists are telling us concerning the utterly fruitless search for credible indirect Darwinian pathways to account for irreducible complexity. (Dembski 1999)

Crossing the bridge - Meeting the Designer
What if the designing intelligence responsible for biological complexity cannot be confined to physical objects? Why should this burst the bounds of science? In answering this criticism, let us first of all be clear that intelligent design does not require miracles (as does scientific creationism) in the sense of violations of natural law. Just as humans do not perform miracles every time they act as intelligent agents, so too there is no reason to assume that for a designer to act as an intelligent agent requires a violation of natural laws. How much more effective could science be if it includes intelligent causes? Intelligent causes can work with natural causes and help them to accomplish things that undirected natural causes cannot. Undirected natural causes can explain how ink gets applied to paper to form a random inkblot but cannot explain an arrangement of ink on paper that spells a meaningful message. Whether an intelligent cause is located within or outside nature is a separate question from whether an intelligent cause has acted within nature. Design has no prior commitment against naturalism or for supernaturalism unless one opens that door. Consequently science can offer no principled grounds for excluding design or relegating it to the sphere of religion automatically.

Decisions on this issue should be based upon which process has the greater explanatory power, undirected natural causes or intelligent causes. Does the designer need to be defined? Cannot the designing agent be a regulative principle - a conceptually useful device for making sense of certain facts of biology - without assigning the designer any weight in reality? The status of the designer can then be taken up by philosophy and theology. The fact that the designing intelligence responsible for life can't be put under the microscope poses no obstacle to science. We learn of this intelligence as we learn of any other intelligence - not by studying it directly but through its effects. (Dembski 2004)

All of us have identified the effects of embodied designers. Our fellow human beings constitute our best example of such designers. A designer's embodiment is of no evidential significance for determining whether something was designed in the first place. We don't get into the mind of designers and thereby attribute design. Rather, we look at effects in the physical world that exhibit clear marks of intelligence and from those marks infer a designing intelligence. (Dembski 2004)

There is no principled way to argue that the work of embodied designers is detectable whereas the work of un-embodied designers isn't. Even if an un-embodied intelligence is responsible for the design displayed in some phenomenon, a science committed to the Naturalized Explanatory Filter (a filter which excludes God and thus design) will never discover it. A science that on a priori grounds refuses to consider the possibility of un-embodied designers artificially limits what it can discover. (Dembski 2004) What happens when a God is implicated in design? The Explanatory Filter doesn't consider design but becomes naturalized and takes the process back to square one with a decision between contingency and necessity. (Dembski 2002)

The Burden Of Proof
Dembski often lectures in university campuses about intelligent design. Often, he will say, a biologist in the audience will get up during the question-and-answer time to inform him that just because he doesn't know how complex biological systems might have formed by the Darwinian mechanism doesn't mean it didn't happen that way. He will then point out that the problem isn't that he personally doesn't know how such systems might have formed but that the biologist who raised the objection doesn't know how such systems might have formed - and that despite having a fabulous education in biology, a well-funded research laboratory, decades to put it all to use, security and prestige in the form of a tenured academic appointment, and the full backing of the biological community, which has also been desperately but unsuccessfully trying to discover how such systems are formed for more than one hundred years, still doesn't know. (Dembski 2004)

Many scientists have expressed their lack of knowledge of how any biochemical or cellular system could have evolved. Here are a few:

James Shapiro, a molecular biologist at the University of Chicago in the National Review, September 16, 1996, conceded that there are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations.

David Ray Griffin is a philosopher of religion with an interest in biological origins. He writes in his book Religion and Scientific Naturalism: There are, I am assured, evolutionists who have described how the transitions in question could have occurred. When I ask in which books I can find these discussions, however, I either get no answer or else some titles that, upon examination, do not in fact contain the promised accounts. That such accounts exist seems to be something that is widely known, but I have yet to encounter someone who knows where they exist. It is up to the Darwinists to fill in the details. (Dembski 2004)

Let us look to the evolution of the eye as an example where we find a lack of information between evolutionary jumps. Darwinists, for instance, explain the human eye as having evolved from a light sensitive spot that successively became more complicated as increasing visual acuity conferred increased reproductive capacity on an organism. In such a just-so story, all the historical and biological details in the eye's construction are lost. How did a spot become innervated and thereby light-sensitive? How did a lens form within a pinhole camera? What changes in embryological development are required to go from a light-sensitive sheet to a light-sensitive cup? None of these questions receives an answer in purely Darwinian terms. Darwinian just-so stories have no more scientific content than Rudyard Kipling's original just-so stories about how the elephant got its trunk or the giraffe its neck. (Dembski 2002)

Are not the Darwinists applying blind faith to their theory? Listen to the remark by Harvard biologist Richard Lewontin in The New York Review of Books:

We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism (i.e., naturalism). It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counterintuitive, no matter how mystifying to the uninitiated. (Dembski 2002)

This raises another question: what is the responsibility of teachers in their classrooms? Teachers who are persuaded of intelligent design and yet are directed by the system to teach evolution should teach Darwinian evolution and the evidence that supports it. At the same time, however, they should candidly report problems with the theory, notably that its mechanism of transformation cannot account for the specified complexity we observe in biology.

Dembski - Major Points
1. Materialists search for contrivance through the activity of natural processes rather than contrivance as the mental process of an intelligence.
2. The option of God's interaction with the world could be a legitimate domain for scientific investigation.
3. Intelligently caused events can be distinguished from unintelligently caused events.
4. Detection involves discovering information.
5. Excluding intelligent design from science stifles scientific inquiry.
6. The Explanatory Filter identifies information and thus a designer.
7. Choice is an important feature of intelligence.
8. Natural selection has no power to choose. It has no eye on the past or the future; it is blind.
9. Information can be both complex and specified.
10. Complex specified information cannot arise by chance.
11. There exists a "probability bound" which chance cannot overcome.
12. Existing information cannot increase on its own, but can remain stable for a period of time or be lost, as understood by the Law of Conservation of Information.
13. The origin of original information is a mystery to modern science.
14. The existence of a molecular machine such as the bacterial flagellum is beyond explanation by natural processes.
15. Darwin uses an extrapolation to make his case, but an extrapolation that has limited or no data to confirm it.
16. Darwinism, unlike Intelligent Design, is not subject to refutation.
17. Decisions should be made about origins, based upon which proposal has the best explanatory power.


Meta 139: Dembski on "Explaining Specified Complexity"
[email protected] William Grassie
Meta 139. 1999/09/13. Approximately 1883 words.

Below is a column entitled "Explaining Specified Complexity" by William Dembski at Baylor University
in Texas. Dembski discusses whether evolutionary algorithms can generate actual "specified complexity"
in nature, as opposed to merely the appearances thereof (i.e., unspecified or randomly generated
complexity). Dembski believes these problems in probability make plausible a concept of intelligence
involved in evolution. Your comments are welcome on [email protected]

Michael Polanyi Center Baylor University Waco, Texas 76798

In his recent book The Fifth Miracle, Paul Davies suggests that any laws capable of explaining the origin of life must be radically different from scientific laws known to date. The problem, as he sees it, with currently known scientific laws, like the laws of chemistry and physics, is that they are not up to explaining the key feature of life that needs to be explained. That feature is specified complexity. Life is both complex and specified. The basic intuition here is straightforward. A single letter of the alphabet is specified without being complex (i.e., it conforms to an independently given pattern but is simple). A long sequence of random letters is complex without being specified (i.e., it requires a complicated instruction-set to characterize but conforms to no independently given pattern). A Shakespearean sonnet is both complex and specified.

Now, as Davies rightly notes, contingency can explain complexity but not specification. For instance, the exact time sequence of radioactive emissions from a chunk of uranium will be contingent, complex, but not specified. On the other hand, as Davies also rightly notes, laws can explain specification but not complexity. For instance, the formation of a salt crystal follows well-defined laws and produces an independently known repetitive pattern, and is therefore specified; but that pattern will also be simple, not complex. The problem is to explain something like the genetic code, which is both complex and specified. As Davies puts it: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity" (p. 112).

How does the scientific community explain specified complexity? Usually via an evolutionary algorithm. By an evolutionary algorithm I mean any algorithm that generates contingency via some chance process and then sifts the so-generated contingency via some law-like process. The Darwinian mutation-selection mechanism, neural nets, and genetic algorithms all fall within this broad definition of evolutionary algorithms. Now the problem with invoking evolutionary algorithms to explain specified complexity at the origin of life is the absence of any identifiable evolutionary algorithm that might account for it. Once life has started and self-replication has begun, the Darwinian mechanism is usually invoked to explain the specified complexity of living things.

But what is the relevant evolutionary algorithm that drives chemical evolution? No convincing answer has been given to date. To be sure, one can hope that an evolutionary algorithm that generates specified complexity at the origin of life exists and remains to be discovered. Manfred Eigen, for instance, writes, "Our task is to find an algorithm, a natural law that leads to the origin of information," where by "information" I understand him to mean specified complexity. But if some evolutionary algorithm can be found to account for the origin of life, it would not be a radically new law in Davies's sense. Rather, it would be a special case of a known process.

I submit that the problem of explaining specified complexity is even worse than Davies makes out in The Fifth Miracle. Not only have we yet to explain specified complexity at the origin of life, but evolutionary algorithms fail to explain it in the subsequent history of life as well. Given the growing popularity of evolutionary algorithms, such a claim may seem ill-conceived. But consider a well known example by Richard Dawkins (The Blind Watchmaker, pp. 47-48) in which he purports to show how a cumulative selection process acting on chance can generate specified complexity. He starts with the following target sequence, a putative instance of specified complexity:

(He considers only capital Roman letters and spaces, here represented by bullets; thus there are 27 possibilities at each location in the symbol string.)

If we tried to attain this target sequence by pure chance (for example, by randomly shaking out scrabble pieces), the probability of getting it on the first try would be around 10^-40, and correspondingly it would take on average about 10^40 tries to stand a better than even chance of getting it. Thus, if we depended on pure chance to attain this target sequence, we would in all likelihood be unsuccessful. As a problem for pure chance, attaining Dawkins's target sequence is an exercise in generating specified complexity, and it becomes clear that pure chance simply is not up to the task.
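The pure-chance figure can be checked directly. The sketch below assumes 27 equally likely symbols (the 26 capital letters plus a space) at each of 28 positions, as described above:

```python
# Probability of hitting a fixed 28-character target by pure chance,
# assuming 27 equally likely symbols at each of the 28 positions.
positions = 28
alphabet = 27

p = (1 / alphabet) ** positions          # chance of success on a single try
expected_tries = alphabet ** positions   # average number of tries needed, 1/p

print(f"p = {p:.3e}")                    # on the order of 10^-40
print(f"expected tries ~ {expected_tries:.3e}")
```

The result, roughly 8 x 10^-41, matches the "around 10^-40" figure quoted in the text.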

But consider next Dawkins's reframing of the problem. In place of pure chance, he considers the following evolutionary algorithm: (i) Start out with a randomly selected sequence of 28 capital Roman letters and spaces, e.g.,

(note that the length of Dawkins's target sequence, METHINKS*IT*IS*LIKE*A*WEASEL, comprises exactly 28 letters and spaces); (ii) randomly alter all the letters and spaces in this initial randomly generated sequence; (iii) whenever an alteration happens to match a corresponding letter in the target sequence, leave it and randomly alter only those remaining letters that still differ from the target sequence.
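The three steps above can be sketched in a few lines. Note this follows the locking variant as described in the text (step iii freezes any position that already matches); Dawkins's own program in The Blind Watchmaker selects among mutated offspring rather than locking letters. The use of "*" for the space character is an assumption for illustration:

```python
import random
import string

# 27-symbol alphabet: capital Roman letters plus "*" standing in for space.
ALPHABET = string.ascii_uppercase + "*"
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

def weasel(seed=None):
    """Locking variant: start from a random string, then repeatedly
    re-randomize only the positions that do not yet match the target."""
    rng = random.Random(seed)
    current = [rng.choice(ALPHABET) for _ in TARGET]   # step (i)
    steps = 0
    while "".join(current) != TARGET:
        steps += 1
        for i, ch in enumerate(TARGET):
            if current[i] != ch:                       # steps (ii)/(iii)
                current[i] = rng.choice(ALPHABET)
    return "".join(current), steps

final, steps = weasel(seed=1)
print(final, "reached in", steps, "generations")
```

Because each unmatched position is locked in as soon as it hits, the algorithm converges on the target with probability 1 in a small number of generations, never the astronomical number pure chance would require.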

In very short order this algorithm converges to Dawkins's target sequence. In The Blind Watchmaker, Dawkins (p. 48) provides the following computer simulation of this algorithm:

Thus, Dawkins's simulation converges on the target sequence in 43 steps. In place of 10^40 tries on average for pure chance to generate the target sequence, it now takes on average only 40 tries to generate it via an evolutionary algorithm.

Although Dawkins uses this example to illustrate the power of evolutionary algorithms, the example in fact illustrates the inability of evolutionary algorithms to generate specified complexity. We can see this by posing the following question: Given Dawkins's evolutionary algorithm, what besides the target sequence can this algorithm attain? Think of it this way. Dawkins's evolutionary algorithm is chugging along; what are the possible terminal points of this algorithm? Clearly, the algorithm is always going to converge on the target sequence (with probability 1, for that matter). An evolutionary algorithm acts as a probability amplifier. Whereas it would take pure chance on average 10^40 tries to attain Dawkins's target sequence, his evolutionary algorithm on average gets it for you in the logarithm of that number, that is, on average in only 40 tries (and with virtual certainty in a few hundred tries).

But a probability amplifier is also a complexity attenuator. For something to be complex, there must be many live possibilities that could take its place. Increasingly numerous live possibilities correspond to increasing improbability of any one of these possibilities. To illustrate the connection between complexity and probability, consider a combination lock. The more possible combinations of the lock, the more complex the mechanism and correspondingly the more improbable that the mechanism can be opened by chance. Complexity and probability therefore vary inversely: the greater the complexity, the smaller the probability.

It follows that Dawkins's evolutionary algorithm, by vastly increasing the probability of getting the target sequence, vastly decreases the complexity inherent in that sequence. As the sole possibility that Dawkins's evolutionary algorithm can attain, the target sequence in fact has minimal complexity (i.e., the probability is 1 and the complexity, as measured by the usual information measure, is 0). In general, then, evolutionary algorithms generate not true complexity but only the appearance of complexity. And since they cannot generate complexity, they cannot generate specified complexity either.

This conclusion may seem counterintuitive, especially given all the marvelous properties that evolutionary algorithms do possess. But the conclusion holds. What's more, it is consistent with the "no free lunch" (NFL) theorems of David Wolpert and William Macready, which place significant restrictions on the range of problems genetic algorithms can solve.

The claim that evolutionary algorithms can only generate the appearance of specified complexity is reminiscent of a claim by Richard Dawkins. On the opening page of his The Blind Watchmaker he states, "Biology is the study of complicated things that give the appearance of having been designed for a purpose." Just as the Darwinian mechanism does not generate actual design but only its appearance, so too the Darwinian mechanism does not generate actual specified complexity but only its appearance.

But this raises the obvious question, whether there might not be a fundamental connection between intelligence or design on the one hand and specified complexity on the other. In fact there is. There's only one known source for producing actual specified complexity, and that's intelligence. In every case where we know the causal history responsible for an instance of specified complexity, an intelligent agent was involved. Most human artifacts, from Shakespearean sonnets to Dürer woodcuts to Cray supercomputers, are specified and complex. For a signal from outer space to convince astronomers that extraterrestrial life is real, it too will have to be complex and specified, thus indicating that the extraterrestrial is not only alive but also intelligent (hence the search for extraterrestrial intelligence, SETI).

Thus, to claim that laws, even radically new ones, can produce specified complexity is in my view to commit a category mistake. It is to attribute to laws something they are intrinsically incapable of delivering; indeed, all our evidence points to intelligence as the sole source for specified complexity. Even so, in arguing that evolutionary algorithms cannot generate specified complexity and in noting that specified complexity is reliably correlated with intelligence, I have not refuted Darwinism or denied the capacity of evolutionary algorithms to solve interesting problems. In the case of Darwinism, what I have established is that the Darwinian mechanism cannot generate actual specified complexity. What I have not established is that living things exhibit actual specified complexity. That is a separate question.

Does Davies's original problem of finding radically new laws to generate specified complexity thus turn into the slightly modified problem of finding radically new laws that generate apparent, but not actual, specified complexity in nature? If so, then the scientific community faces a logically prior question, namely, whether nature exhibits actual specified complexity. Only after we have confirmed that nature does not exhibit actual specified complexity can it be safe to dispense with design and focus all our attentions on natural laws and how they might explain the appearance of specified complexity in nature.

Does nature exhibit actual specified complexity? This is the million dollar question. Michael Behe's notion of irreducible complexity is purported to be a case of actual specified complexity and to be exhibited in real biochemical systems (cf. his book Darwin's Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out.

Meta is an edited and moderated listserver and news service dedicated to promoting the constructive engagement of science and religion. Subscriptions are free. For more information, including archives and submission guidelines, go to .


Permission is granted to reproduce this e-mail and distribute it without restriction with the inclusion of the following credit line: This is another posting from the Meta-List . Copyright 1997, 1998, 1999. William Grassie.
