The Lure of Complexity (Part 1)

Stephen L. Talbott

From In Context #6 (Fall 2001)

In April, 1999, the prestigious journal, Science, informed its readers that “shortfalls in reductionism are increasingly apparent.... The much-used axiom that scientists ‘know more and more about less and less’ may have an element of truth.... Another problem is oversimplification. Witness the ‘gene-for’ syndrome (as in ‘gene for intelligence’ or ‘gene for sexual preference’), in which genes that contribute to human traits are instead taken to specify that trait” (Gallagher and Appenzeller 1999, p. 79).

These remarks occur in a special issue of Science devoted to complex systems. A news article in that issue carries the point about genes further:

The expression of individual genes is not being regulated by one, two, or five proteins but by dozens, says Shirley Tilghman, a molecular biologist at Princeton University. Some regulate specific genes; others work more broadly. Some sit on DNA all the time, while others bind temporarily. The complexity is becoming mind numbing, says Tilghman.

“When we get to a certain network complexity,” adds Adam Arkin, a physical chemist at Lawrence Berkeley National Laboratory, “we completely fail to understand how it works” (Service 1999, p. 81).

In recent years the study of complex systems, or complexity, has been widely proclaimed a scientific revolution. The revolution lends new currency to the idea of holism, and has popularized terms such as “self-organization,” “complexity,” and “chaos.” Many might take the aspirations of the complexity theorists as a fulfillment of the hope, often expressed in our Nature Institute publications, for a new and revitalized science. But it is a live question whether the current developments are indeed a renewal of science or instead represent a retrenchment and strengthening of the most serious limitations of traditional science.

In any case, we think readers of In Context will want to know something about this ongoing “revolution.” Unfortunately, a summary is not easy. There is no consensus definition of complexity studies, and its researchers seem to understand what they are doing more in terms of a style of theorizing than a specific subject matter. Indeed, the subject matter is often taken to be scarcely distinguishable from “everything,” which is perhaps why the disciplines at issue have so far yielded a richer harvest of vague hunches than of concrete results.

Vagueness, however, has not made for shyness. Rarely, if ever, have the advocates of a new science been so effective at advertising the fundamental, “paradigm-shifting” importance of their own work before they had much to show for it. In addition to a new holism, the advertisements promise a rejection of reductionism, the discovery of almost mystical-sounding “emergent” and “self-organizing” properties of physical systems, and the overcoming of narrow specialization. Here I present a brief sketch of the new work, with this caveat: In what follows you will find a strange mixture of high aspirations and the crassest dismissal of nature you could possibly imagine. I try to present a sympathetic description, but you should not think that the views summarized here are those of the researchers at The Nature Institute. These views are, however, powerfully symptomatic of the scientific thinking of our day, and we would all do well to come to terms with such thinking. First, then, three “classic” pictures invoked in many complexity studies:

First Picture. If you drop grains of sand onto the middle of a table, you will eventually form a pile reaching all the way to the table’s edges. As you continue dropping the grains, some of the avalanches they provoke will send little sand cascades off the table. But, over time (and up to a point), the pile will continue to grow, with the sides getting steeper, and with some of the avalanches getting larger and larger. During the later stages the pile becomes susceptible to catastrophic collapse; for all you know, the next grain of sand may (and likely will) have only a tiny, local effect — but it may also trigger an avalanche that sends much of the pile cascading onto the floor. Nothing about the local collection of grains near the point of the next grain’s impact can tell you whether a catastrophic shift will occur. The necessary information is distributed throughout the pile as a whole.
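
This picture is typically studied through an idealized model: a grid of cells, each holding a grain count, where any cell that grows too tall “topples” onto its neighbors. The following minimal sketch in Python follows that scheme; the grid size, the toppling threshold of four, and the reporting cutoff are illustrative choices of mine, not parameters from Bak’s work.

    # An idealized sandpile: grains are dropped onto the middle of a grid
    # "table"; any cell holding four or more grains topples, passing one
    # grain to each of its four neighbors. Grains pushed past the edge are
    # lost, like sand cascading onto the floor.
    SIZE = 21

    grid = [[0] * SIZE for _ in range(SIZE)]

    def relax(grid):
        """Topple until the pile is stable; return the avalanche size."""
        avalanche = 0
        unstable = True
        while unstable:
            unstable = False
            for i in range(SIZE):
                for j in range(SIZE):
                    if grid[i][j] >= 4:
                        grid[i][j] -= 4
                        avalanche += 1
                        unstable = True
                        for ni, nj in ((i + 1, j), (i - 1, j),
                                       (i, j + 1), (i, j - 1)):
                            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                                grid[ni][nj] += 1  # edge grains simply vanish
        return avalanche

    center = SIZE // 2
    for grain in range(20000):
        grid[center][center] += 1      # drop one grain onto the middle
        size = relax(grid)
        if size > 100:                 # most drops do little; a few do not
            print(f"grain {grain}: avalanche of {size} topplings")

Run long enough, the pile settles into what Bak calls the “critical state”: avalanches of every size occur, and nothing local to the drop point predicts which size comes next.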

Second Picture. You and an acquaintance are in prison, being separately interrogated about a crime the two of you may or may not have committed. The prosecutor gives you this choice: if you deny the crime and your acquaintance implicates you, you will get life in prison and he will go free. If you both deny the crime, you will receive a minimum sentence. If you both confess, you will receive a medium sentence. The same choice is offered to your acquaintance, so if he denies the crime and you implicate him, he will be the one sentenced to life and you will go free.

This is known as the Prisoner’s Dilemma. The scenario is truly devilish, for even if you and your partner previously agreed to maintain silence (thereby assuring yourselves of a light sentence), you both also know that the other may be tempted to get off scot-free by confessing. So holding to your agreement could very possibly land you in prison for life. Can you risk that? Wouldn’t it be better to confess, knowing that you just might gain your freedom, while at worst you would be slapped with a medium sentence? And one further question: is evolution an iterative playing of the Prisoner’s Dilemma game, through which one organism continually seeks an advantage over the others?
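
The dilemma is easy to make concrete. Here is a minimal sketch of the iterated game in Python; the particular sentence lengths and the two strategies shown (tit-for-tat and unconditional confession) are illustrative assumptions of mine, not part of the story above.

    # Iterated Prisoner's Dilemma. Payoffs are years in prison (lower is
    # better), arranged to match the scenario: mutual denial earns a light
    # sentence, mutual confession a medium one, and a betrayed denier gets
    # life while the betrayer goes free.
    SENTENCES = {  # (my move, other's move) -> my years in prison
        ("deny", "deny"): 1,
        ("confess", "confess"): 5,
        ("deny", "confess"): 20,
        ("confess", "deny"): 0,
    }

    def tit_for_tat(opponent_history):
        """Deny at first, then mirror whatever the other did last time."""
        return opponent_history[-1] if opponent_history else "deny"

    def always_confess(opponent_history):
        return "confess"

    def play(strategy_a, strategy_b, rounds=20):
        years_a = years_b = 0
        seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            years_a += SENTENCES[(move_a, move_b)]
            years_b += SENTENCES[(move_b, move_a)]
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return years_a, years_b

    print(play(tit_for_tat, tit_for_tat))     # (20, 20): steady cooperation
    print(play(tit_for_tat, always_confess))  # (115, 95): betrayal pays once

In repeated play, strategies that cooperate but retaliate, such as tit-for-tat, famously fare well over the long run, which is what gives the closing question above its bite.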

Third Picture. Imagine a pot with numerous “symbol strings” floating around in it. A symbol string is, in the simplest case, just an ordered group of zeroes and ones — for example, here are three strings:

011
101011
11100

Imagine further that these strings randomly “collide” with one another and that some of the collisions result, according to a set of “grammar rules,” in the transformation of one of the strings. For example, a rule might say:

If part of one colliding string consists of 011, and if part of the other string is 100, then the latter sequence of digits is changed to 11010.

You may, if you like, think of the first string as an “enzyme” that facilitates, or catalyzes, the transformation of the second string. The assumption is that the pot contains an adequate provision of zeroes and ones to supply any additional digits required for a catalytic reaction.

It is easy to simulate a given initial pot of strings and a given set of grammar rules by using a computer. The program simply selects pairs of strings at random and “collides” them by applying the grammar rules. In this way, the pot of strings can evolve. For example, given the right initial conditions, you might find that you get an “autocatalytic set” — that is, a set of symbol strings that proves stable, continually producing more of the very same strings it itself consists of. Such a set is self-regenerating, and is thought by some to provide crucial insight into life’s development from a primordial “soup pot” containing molecular “strings” of atoms.
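
For the curious, here is one minimal way such a simulation might look in Python. The single grammar rule is the one given above; the pot’s starting contents and the number of collisions are illustrative assumptions of mine.

    import random

    # Each rule reads: if the catalyst string contains `trigger` and the
    # substrate string contains `target`, rewrite that target as `product`.
    GRAMMAR = [
        ("011", "100", "11010"),  # the example rule from the text
    ]

    pot = ["011", "101011", "11100", "0100", "110011"]

    def collide(catalyst, substrate):
        """Apply the first matching rule, transforming the substrate."""
        for trigger, target, product in GRAMMAR:
            if trigger in catalyst and target in substrate:
                return substrate.replace(target, product, 1)
        return substrate  # no rule applies; the collision is inert

    for step in range(100):
        a, b = random.sample(range(len(pot)), 2)  # two strings "collide"
        pot[b] = collide(pot[a], pot[b])

    print(pot)

With a richer rule set, one could then check whether some collection of strings keeps regenerating itself, which is just the “autocatalytic set” described above.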

Complex Themes

Each of these “pictures” has figured in the work of complexity theorists over the past few decades. We can use them to help us grasp several fundamental characteristics of the new work, as it is seen by its practitioners:


Unprecedented Generality. “The convergence of chemistry, physics, biology, and engineering is upon us,” according to Stanford University biologist Lucy Shapiro (quoted in Service 1999, p. 80). Complexity theorists are looking for the underlying laws governing such diverse phenomena as the fragile edge along the crest of a sand dune, the collective action of networks of neurons in the brain, ecologies of living organisms, and the behavior of financial markets. These theorists commonly express a yearning for “deep” truths — deep because possessed of the greatest possible generality.

For example, the Santa Fe Institute’s Stuart Kauffman is intrigued by the similarities between an E. coli bacterium and the IBM corporation. “Organisms, artifacts, and organizations are all evolved structures... What are the laws governing the emergence and coevolution of such structures?” (Kauffman 1995, p. 246). Referring to the pot of symbol strings and their “grammars,” Kauffman reflects,

Somehow the string images we have discussed press themselves on me. The swirl of transformations of ideologies, fashions begetting fashions begetting fashions, cuisines begetting cuisines, legal codes and precedents begetting the further creation of law, seem similar in as yet unclear ways to model grammar worlds.... (Kauffman 1995, p. 298)

Similarly reaching across disparate domains, the influential philosopher Daniel Dennett asks why trees in the forest expend so much energy growing tall. He answers: “For the very same reason that huge arrays of garish signs compete for our attention along commercial strips.... Each tree is looking out for itself and trying to get as much sunlight as possible.” Invoking the Prisoner’s Dilemma, he goes on:

If only those redwoods could get together and agree on some sensible zoning restrictions and stop competing with each other for sunlight, they could avoid the trouble of building those ridiculous and expensive trunks, stay low and thrifty shrubs, and get just as much sunlight as before!

But, like the prisoners, the trees cannot get together, and therefore “defection from any cooperative ‘agreement’ is bound to pay off if ever or whenever it occurs.” Such agreements would be “evolutionarily unenforceable” (Dennett 1995, pp. 253-55).

This drive toward generality — toward principles that can be applied to the development of cuisines and laws and brains and redwoods and commercial street signs — leads, as we will see, to most of the other key themes in complexity theory.

Maximum Abstraction. “A general theory of complex systems,” says Danish scientist Per Bak, “must necessarily be abstract.” Bak, who pioneered the investigation of sandpile models, believes that a general theory of life “cannot have any specific reference to actual species. The model may, perhaps, not even refer to basic chemical processes, or to the DNA molecules that are integral parts of any life form that we know.” After all, he wonders, what might life forms on Mars be like?

We must learn to free ourselves from seeing things the way they are! A radical scientific view indeed! If, following traditional scientific methods, we concentrate on an accurate description of the details, we lose perspective. A theory of life is likely to be a theory of process, not a detailed account of utterly accidental details of that process, such as the emergence of humans. (Bak 1996, p. 10)

The demand for abstraction is a demand for sharp-edged, unambiguous, precise terms, rid as far as possible of qualitative or phenomenal content. Numbers and the terms of logic are perhaps the primary abstractions, and Bak observes further that theories “must be statistical” — like the laws governing sandpile avalanches. John Holland, the University of Michigan theorist and “father of genetic algorithms,” speaks a great deal about the necessity for the scientist to “strip away details,” noting that “numbers go about as far as we can go in shearing away detail”:

When we talk of numbers, nothing is left of shape, or color, or mass, or anything else that identifies an object, except the very fact of its existence. (Holland 1998, pp. 23-24).

The quest for generality dictates this resort to abstraction. To arrive at generalizations regarding phenomena, we have to strip away all the differences between the phenomena, looking only for what they have in common. This stripping away makes it possible to assign different things to the same class (for example, street signs and redwoods), and once we have done this we can, without ambiguity, count and measure the members of the classes we have formed and reason mathematically about them (for example, formulating laws about their height).

[Illustration: Parts/Whole, by Martina Muller]

Holism. As mentioned above, no information about local regions of the sandpile can tell you whether the next grain added to the pile will trigger a catastrophic collapse. The necessary information is distributed throughout the whole of the pile. It is a matter of the interlinked balances of force upon every grain in the pile, the shape of every grain, and so on. Therefore, the theorists of complexity say, understanding must proceed on a holistic basis.

“The whole is greater than the sum of its parts,” says Kauffman, repeating a common refrain (Kauffman 1995, p. 24). As a news item in Science reports, “understanding how parts of a biological system — genes or molecules — interact is just as important as understanding the parts themselves. It’s a realization that’s beginning to spread” (Service 1999, p. 80). The editors of Science, in their special issue devoted to complexity, note that “we have taken a ‘complex system’ to be one whose properties are not fully explained by an understanding of its component parts” (Gallagher and Appenzeller 1999). In the same spirit, Kauffman complains that

we have lost an earlier image of cells and organisms as self-creating wholes. The entire explanatory burden is placed on the “genetic instructions” in DNA — master molecule of life — which in turn is crafted by natural selection. From there it is a short step to the notion of organisms as arbitrary, tinkered-together contraptions.

He adds: “Life has, I think, an inalienable wholeness” (Kauffman 1995, pp. 274-75).

Emergence. The difficult and rather obscure notion of emergence is a close companion to holism. If the whole is greater than the sum of its parts, then (as these theorists seem to view the matter) somewhere along the way from parts to whole something in addition to the parts must have emerged. Holland tells us that emergence “occurs only when the activities of the parts do not simply sum to give activity of the whole.” He also says that “the hallmark of emergence is this sense of much coming from little.”

Holland’s examples of emergent phenomena may help to explain this. He speaks of ant colonies where, “despite the limited repertoire of the individual agents — the ants — the colony exhibits a remarkable flexibility in probing and exploiting its surroundings. Somehow the simple laws of the agents generate an emergent behavior far beyond their individual capacities. It is noteworthy that this emergent behavior occurs without direction by a central executive.”

In the same way, he speaks of collections of neurons, the immune system, the Internet, and the global economy as systems where the emergent “behavior of the whole is much more complex than the behavior of the parts.” Likewise, the complex dynamics of the solar system and galaxy would hardly have been foreseeable if we had merely been given Newton’s laws of motion to contemplate, and are therefore emergent (Holland 1998, pp. 1-12). In a similar vein, Bak remarks that “the emergence of the [complex avalanche dynamics] of the sandpile could not have been anticipated from the properties of the individual grains” (Bak 1996, p. 51).
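
A standard illustration of this “much coming from little” (my example, not one of Holland’s) is Conway’s Game of Life, sketched below in Python. A single dumb local rule governs every cell, yet a five-cell “glider” crawls across the grid, a coherent behavior mentioned nowhere in the rule.

    from collections import Counter

    def step(live):
        """One generation of Life: live is a set of (x, y) cells."""
        # Count, for every location, how many live neighbors it has.
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # The whole rule: survive with 2 or 3 neighbors, be born with 3.
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(8):  # two full glider periods
        cells = step(cells)
    # The same five-cell shape reappears, shifted diagonally by two cells:
    # it has "moved," though nothing in the rule says "move."
    print(sorted(cells))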

All this makes clear that the holism we spoke of above does not refer to wholes independent of, or antecedent to, the parts. The term “emergence” testifies to a bottom-up conception of the whole: it is not that the whole generates, and manifests itself through, its parts, but rather that the parts, by interacting, generate the complex behavior of the whole that “emerges.” It is hardly clear, from the current literature, what this emergent whole is thought to be, beyond the sum of its parts.

Non-Reductionism. Science magazine introduced its special issue on complex systems with the heading, “Beyond Reductionism.” The claim to have escaped reductionism is common (though not universal) among investigators concerned with complexity. The idea is that if higher-level properties really do emerge in complex systems, yielding wholes that are more than the sum of their parts, then explanations of these systems must refer to the higher-level properties. Not everything can be “reduced” to descriptions of lower-level parts. As Bak puts it, when the growing sandpile reaches the state where it is subject to catastrophic collapse, the pile itself “is the functional unit, not the single grains of sand. No reductionist approach makes sense.” To predict a catastrophic avalanche in traditional, reductionist terms,

one would have to measure everything everywhere [in the pile] with absolute accuracy, which is impossible. Then one would have to perform an accurate computation based on this information, which is equally impossible. (Bak 1996, pp. 60-61)

These researchers therefore accept, for example, that there can be a legitimate science of economics, whose explanations need not be reducible — certainly not in any practical sense — to the motions of atoms. Humans and societies and commercial activities have all emerged in the course of evolution, and in order to understand them we have to speak directly of their emergent features — things like rational agents, markets, prices, interest rates, and so on — not just the lower-level entities from which they emerged. Depending on what we are trying to explain, we must resort to different levels of explanation, or description — to use a phrase that often turns up.

Self-organization. References to self-organization abound in the literature on complex systems. The sandpile, says Bak, has “organized itself” into the “critical state” where it is susceptible to unpredictable avalanches of all sizes. Kauffman’s pot of grammar-obeying symbol strings spontaneously organizes itself into a self-regenerating “autocatalytic set,” suggesting to him that an oceanic soup of primordial molecules could do the same — and this principle of self-organization, he believes, underwrites the entire evolutionary drama:

I propose that much of the order in organisms may not be the result of selection at all, but of the spontaneous order of self-organized systems. Order, vast and generative, not fought for against the entropic tides but freely available, undergirds all subsequent biological evolution. (Kauffman 1995, p. 25)

Kauffman has practically made a mantra out of the phrase, “order for free.” Others are more modest; they do not say “for free” but only “somehow.” Speaking of the “spontaneous self-organization” through which individuals form economies, cells form organisms, birds form flocks, and atoms form molecules, Mitchell Waldrop observes:

In every case, groups of agents seeking mutual accommodation and self-consistency somehow manage to transcend themselves, acquiring collective properties such as life, thought, and purpose that they might never have possessed individually. (Waldrop 1992, p. 11)

Again, this notion of self-organization is integral to the others we have discussed. If a new and coherent whole emerges bottom-up from interacting parts, then, somehow, it appears that the parts have transcended themselves and “self-organized” so as to produce the whole.

Reliance on Models and Algorithms. The drive toward simplicity dictating the goals of generality and abstraction is also evident in an extreme reliance upon models. Holland (1998, p. 24) observes that “shearing away detail is the very essence of model building. Whatever else we require, a model must be simpler than the thing modeled.” We are a long way here from Goethe’s contention that the phenomenon, rightly and fully understood, is the theory, and that there is no need for an intervening model. Similarly, Bak writes,

The beauty of the model can be measured as the range between its own simplicity and the complexity of the phenomena that it describes, that is, by the degree to which it has allowed us to condense our descriptions of the real world. (Bak 1996, p. 44)

The model offering this condensed description is, of course, a mechanical one, and today this means more and more that the description is algorithmic, or recipe-like, in the way that computer programs are algorithmic. More likely than not, in fact, the model just is a computer simulation. Daniel Dennett sees three key features in all algorithmic explanations:

Substrate neutrality. It doesn’t matter what sort of material apparatus executes the algorithm as long as the logical structure of the recipe is preserved.

Underlying mindlessness. A dumb mechanism can do the job.

Guaranteed results. Follow the recipe and the result is assured.

You can think of these three principles as representing the movements toward abstraction, mechanism, and logical purity, respectively — which are actually a single movement. (Talbott 2000)
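
Euclid’s procedure for finding the greatest common divisor, annotated below in Python, can serve as a miniature of all three features; the choice of example is mine, not Dennett’s.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm as a miniature of Dennett's three features."""
        # Substrate neutrality: the same recipe works in silicon, on paper,
        # or with heaps of pebbles; only the logical structure matters.
        # Underlying mindlessness: each step is a rote rule application;
        # nothing here "understands" divisibility.
        while b != 0:
            a, b = b, a % b
        # Guaranteed results: follow the recipe and the answer is assured.
        return a

    print(gcd(1071, 462))  # 21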


Looking Ahead

Those are some of the key themes and intellectual commitments guiding the work on complex systems, as voiced by a number of the pioneers in the field. In the next issue of In Context I will attempt an assessment of these themes and commitments. Here I would like merely to suggest one question that seems to me fundamental for any such assessment:

Are the rather obscure appeals to “emergence,” “self-organization,” and “holism” simply the result of reintroducing, magically and without sufficient justification, some of the richness of the original phenomena — richness that was “sheared away” in the drive toward generality and abstraction?

After all, if the complexity theorist’s explanations are to explain real phenomena, then somehow the qualitative phenomena that were sacrificed to abstraction and mechanical modeling have to be regained at the end of the explanatory process. But is saying that they just happened to “emerge” a satisfactory way to get them back into the picture? Or should we instead pursue a qualitative science that refuses to sacrifice the phenomena to abstraction in the first place?

References

Bak, Per (1996). How Nature Works: The Science of Self-Organized Criticality. New York: Springer-Verlag.

Dennett, Daniel C. (1995). Darwin’s Dangerous Idea: Evolution and the Meanings of Life. New York: Simon and Schuster.

Gallagher, Richard and Tim Appenzeller (1999). “Beyond Reductionism.” Science, vol. 284 (April 2), p. 79.

Holland, John H. (1998). Emergence: From Chaos to Order. Reading MA: Addison-Wesley.

Kauffman, Stuart (1995). At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Oxford: Oxford University Press.

Service, Robert F. (1999). “Exploring the Systems of Life.” Science, vol. 284 (April 2), pp. 80-83.

Talbott, Steve (2000). “The Ghostly Machine.” In Context (newsletter of The Nature Institute) #4 (Fall), pp. 2-3, 20.

Waldrop, M. Mitchell (1992). Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon and Schuster.

This is the first part of a two-part essay. The second part, “The Lure of Complexity (Part 2)” appeared in In Context #7.