This morning Larval Subjects put up a post about emergence, using for illustrative purposes a deceptively simple video game that Dennett discusses in his book Darwin’s Dangerous Idea. “The Game of Life” consists of an array of on/off cells in a video display, an initial configuration of ons and offs, and a simple if/then algorithm by which the initial configuration is transformed iteratively into subsequent configurations. The printed narrative displayed during the demo says that the game “demonstrates how complexity can arise out of simple, low-level rules.”
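For concreteness, here's a minimal sketch of that low-level rule, assuming the standard version of Conway's Life that Dennett discusses: a dead cell switches on when exactly three of its eight neighbors are on, and a live cell stays on only when two or three neighbors are on. (The sparse-set representation and the helper name `step` are my own choices, not anything from the demo.)

```python
from collections import Counter

def step(live_cells):
    """One iteration of Conway's rules: birth on 3 live neighbors,
    survival on 2 or 3, death otherwise."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is on in the next frame iff it has exactly 3 live neighbors,
    # or it is currently on and has exactly 2.
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider": five on-cells that appear to crawl across the grid.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell down and right
```

Run on a glider, four applications of `step` reproduce the original five-cell shape one cell down and to the right. That translation is all the "motion" there is.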
The algorithm running the game is completely deterministic: every subsequent iteration of the video display can be predicted precisely from the initial configuration. At the lower level of systemic organization the algorithm assigns an on/off state to each of the cells, and that's it. What features of the game, then, are "emergent"? Jaegwon Kim identifies five "central doctrines" of emergentism:
- Systems with a higher level of complexity emerge from the coming together of lower-level entities in new structural configurations.
- Higher-level systems exhibit emergent properties that arise from the lower-level properties and relations of their constituent parts.
- Emergent properties are not predictable from information about lower-level conditions.
- Emergent properties are neither explainable in terms of nor reducible to the lower-level conditions.
- Emergent properties have novel causal powers of their own.
The emergent properties of the Game of Life are its "repeating patterns of information," as the demo's narrative phrases it. From the individual on/off settings alone one can anticipate neither the complex clusters and dispersions of ons and offs that illuminate the screen at any given time, nor the changes in those patterns as successive iterations are displayed. While most of the multicellular patterns flicker abstractly on the screen, some bear uncanny resemblances to familiar objects moving through the simulated world, performing recognizable functions. Names have been assigned to some of the more compelling patterns: gliders, eaters, puffers, guns, trains, rakes, spaceships.
I detect the emergent properties when I watch the game go through its iterations. To what extent are they properties of the game itself? Certainly at the lower level the individual cells do light up or go dark. Certainly in the aggregate the lights form patterns. But what about those higher-level clusters of cells that appear to move across the screen over time: do they really move? They seem to eat other objects or fire weapons or propel themselves across the screen: are they really doing so?
The seemingly mobile and purposive objects that emerge from running the game aren't physical objects being tracked by a camera or a computerized eye. I'd say that they're optical illusions, imposed by our perceptual systems on the higher-level emergent optical outputs generated by the program. The illusion takes advantage of the human perceptual system's ability to impose higher-order structure on sensory input so as to extract meaningful information from a visual array. So: at time t I see an illuminated rectangle of dimensions L×H located at position (X, Y) on the grid; at time t+1 I see an illuminated L×H rectangle located at position (X, Y+1). My visual perception system interprets this information as evidence that the original rectangle moved a little to the right. Inside the game's algorithm, though, what happened is that the leftmost column of cells in the illuminated rectangle switched from on to off, while the column of cells just to the right of the rectangle switched from off to on. This isn't the same rectangle moving to the right; it's two separate rectangles displayed sequentially.
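To make the point concrete, here is a toy sketch of those two frames. The frames are hand-written for illustration (they aren't generated by the Life rule); the comparison loop shows that nothing translates between them, cells only switch:

```python
# Two hand-authored frames illustrating the "moving rectangle" illusion.
# No cell moves between them: one column of cells switches off and
# another switches on, yet the eye reports a rectangle sliding right.

frame_t = [
    "..XXX....",
    "..XXX....",
]
frame_t1 = [
    "...XXX...",
    "...XXX...",
]

for label, frame in (("t  ", frame_t), ("t+1", frame_t1)):
    print(label, " | ".join(frame))

# Compare the frames cell by cell: every difference is a switch
# at a fixed address, not a translation of a persisting object.
for r, (row_a, row_b) in enumerate(zip(frame_t, frame_t1)):
    for c, (a, b) in enumerate(zip(row_a, row_b)):
        if a != b:
            change = "off -> on" if b == "X" else "on -> off"
            print(f"cell ({r}, {c}): {change}")
```

Every difference the comparison finds is a state change at a stationary cell; the "motion" exists only in the viewer's interpretation of the sequence.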
One could use this video game to demonstrate why solipsism might be true: our retinas work more or less like the video game, with our brains interpreting the changing patterns of digitized retinal-cell activation as discrete, moving, even intentional objects. This is the premise behind virtual-reality paranoia tales like Total Recall, eXistenZ, and The Matrix: vision is a solipsistic illusion disguising some other hidden reality, or perhaps the absence of reality.
A realist, on the other hand, observes that the video game’s illusion works because it exploits visual and cognitive mechanisms for extracting information from the environment about real objects actually in motion. As supporting evidence for the reality of what we see, we know that we can move our heads to track the movement of an object appearing in our visual field — a flying bird, say — in such a way that the image recorded by our visual system does not move: we keep the bird constantly centered in our field of vision. We aren’t fooled by the static shot captured by our eyes into thinking that the bird is suspended mid-flight. Why not? Because we’re moving our heads, and also because the other objects in the visual field surrounding the bird appear to be in motion relative to the bird — as if everything other than the bird is moving backward. This is the sort of observation — a sort of ecological phenomenology of visual perception — that J.J. Gibson offered to empirical psychology back in the 60s and 70s, keeping the field’s nascent cognitivism from getting too solipsistic.
Similarly, a fiction writer can write a bunch of sentences and from those textual fragments a fictional character will emerge. It’s an illusion: the character isn’t real; the reader assembles from the author’s sentences a simulated person who looks, acts, speaks, and thinks in particular and consistent ways. We could argue that, because the fiction-writer’s trick works, we should regard the way in which we perceive and understand others who populate the real world as similarly fictional, and that all we encounter are the solipsistic projections we impose on them. But, as with the video game, the fictional character works because we’ve learned to extract information about real people populating our environment by attending to and understanding meaningful sentences uttered by and about them.
So: should we regard simulations of objects and people, these trompe-l'oeil with emergent properties that depend on our ability to assemble and interpret information in self-deceptive ways, as real and autonomous objects? Or are the arrays of on/off cells and word strings really real, whereas the emergent objects and people that we assemble from the raw sensory input are unreal? Or are these emergent objects and people real to us, subjectively and intersubjectively, but not real in themselves?