[This post was stimulated by Levi Bryant’s recent post entitled I Guess My Ontology Ain’t So Flat. I wrote a series of preliminary responses to that thread on a related 4-year-old Ktismatics post called Eclipse as Object, beginning with comment 15. Now I’m summarizing more generally my views on the subject.]
The reality of a rainbow is effectively the same as anything else that can be perceived visually. Light reflects off the surfaces — raindrops suspended in air, the fur of the cat, the mountain range, the branches and needles of a pine tree — onto the surface of the eye. There are cells in the retina that respond to light within specific frequency bands; there are other cells that respond to the contours of contrast demarcating edges between differences in luminance; others detect changes in light intensity over very short time intervals. The light causes chemical changes to occur in the retinal cells; these chemical changes are passed synaptically along to other cells in the eye, the raw sensory information being sequentially pre-processed before being sent on for final processing in the brain. A lot of signal consolidation occurs in each eye, the signal being reinforced by redundant information while noise is eliminated, so that the information from 100 million retinal cells can be channeled through the 1.7 million cells of the optic nerve to the visual cortex. In the brain the discrete chemical signals of visual information from both eyes are assembled into larger perceptual units that combine information about the light detected in the environment: edges and expanses, colors and intensities. A 3-D perceptual array is assembled that constitutes the brain’s best guess about how this information maps onto the ambient 3-D environmental array of objects, spaces, and motion.
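The consolidation step described above — redundant information reinforced, noise discarded, so that far fewer fibers can carry the informative signal — can be illustrated with a toy model of lateral inhibition, where each cell's response is suppressed by its neighbors. This is my own minimal sketch, not anything from the post: a one-dimensional "retina" viewing a step edge in luminance.

```python
# A toy sketch (my illustration, not from the post) of how lateral
# inhibition compresses redundant luminance information: uniform
# expanses produce near-zero responses, while edges survive.

def center_surround(signal):
    """Each cell's value minus the mean of its two neighbors."""
    return [
        signal[i] - (signal[i - 1] + signal[i + 1]) / 2
        for i in range(1, len(signal) - 1)
    ]

# A step edge in luminance: a dark expanse, then a bright one.
luminance = [1.0] * 10 + [5.0] * 10

responses = center_surround(luminance)

# Only the cells straddling the edge respond; the redundant interior
# is silent, so far fewer "optic nerve fibers" are needed downstream.
active = [(i + 1, r) for i, r in enumerate(responses) if r != 0]
print(active)  # responses cluster at the luminance boundary
```

The numbers and the one-dimensional setup are arbitrary; the point is only that contrast edges, not uniform regions, carry the signal forward.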
Direct or Indirect Perception? The details of how all this works at the level of cells, synapses, and neural networks are still being worked out. Still, visual perception has been the subject of scientific study for more than a hundred years, and its general contours are well established by data. Among neuroscientists who generally agree about the findings, there is an ongoing debate about whether visual perception is “direct.” This debate hinges on two broad questions:
(1) Bottom-Up or Top-Down? Does the brain operate bottom-up, automatically and instinctively, in assembling optical signals into a perceived environment; or does the brain make top-down inferences about how to reassemble the optic information based on experiential knowledge and memory and expectation? There is no longer any doubt that vision involves both bottom-up and top-down processing of information. I look at the smear of dark green mottled with black on the mountainside and I see a forest. I could walk up the incline to confirm my visual hypothesis, watching the patterns of color articulate themselves into discrete trees as I approach. Yet when I look from a distance at a mountainside I’ve never observed before, I can immediately see the forest without even seeing the trees or consciously thinking about forests. It’s possible that my neural system is hard-wired via evolution to detect forests with no top-down inferences required. There is also no question that even bottom-up vision entails the extraction, transmission, processing, and assembly of light frequencies and intensities, re-presenting the invariants of the ambient optic array into percepts of the environment. In other words, even if I perceive directly I never see anything as itself; I always see only the light reflected from surfaces. Even a bottom-up percept entails a series of transformations or re-presentations of the raw light input, though the representation is constructed neurochemically rather than conceptually or linguistically.
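One standard way to formalize the interplay of top-down expectation and bottom-up evidence is Bayesian inference. The following is a hedged toy model of my own, not the author’s framework, and the hypotheses and probabilities are invented for illustration: a prior from experience (“distant dark-green smears are usually forests”) is combined with the likelihood of the incoming optic data under each hypothesis.

```python
# A toy Bayesian sketch (my own model, not the post's): top-down
# priors from experience combine with bottom-up sensory likelihoods.

def posterior(prior, likelihood):
    """Normalize prior * likelihood over the competing hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Hypothetical hypotheses about the dark-green smear on the mountainside.
prior = {"forest": 0.7, "rock shadow": 0.3}        # experiential expectation
likelihood = {"forest": 0.9, "rock shadow": 0.2}   # fit to the optic data

post = posterior(prior, likelihood)
print(post)  # the "forest" hypothesis dominates
```

Walking up the incline would simply supply sharper likelihoods, updating the same posterior — which is one way of reading the “visual hypothesis” language above.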
(2) The World Itself or a Simulation? Does the brain assemble a visual simulation of the optical environmental array, a simulation that is “watched” by the observer inside the head; or does the observer watch the environment itself by means of the elaborate on-board neurochemical and electrical apparatus of the visual perception system? Certainly the environment doesn’t “look like” what we see: light within a certain bandwidth isn’t intrinsically green. The tree also reflects light at many frequencies undetectable by the retinal cells, so the perceived tree is a stripped-down version of the optical information afforded by the tree itself.
But if the visual system preserves environmental information about light frequency and intensity and edges such that an observer’s perception maps reliably onto that environmental information, isn’t it plausible to contend that the observer sees the thing that’s reflecting the light — a pine tree, say — rather than just a simulation of that tree? It’s a tricky problem, not easily decided by data. If I look through a telescope at a pine tree on the mountainside, am I still looking at the tree? If I attach a telescopic lens to my videocamera, feed the video image into my computer, and watch the video of the tree on my computer screen, am I still looking at the tree? The organic and the mechanical re-presentations both preserve light and edge information generated by the tree itself. There is a short but measurable delay in watching the video of the tree compared to looking directly at the mountainside — but there is also a short but measurable delay in the neural system’s processing of light information that hits the retina. And what about sound: even if we could process auditory information instantaneously (which we cannot), there is a delay in the world itself between the buzz-saw’s cutting of the tree we’re watching and the arrival at our ears of the sound waves the saw generates. Does this sound delay mean that we’re hearing not the saw but only the sound waves in the air immediately surrounding our ears?
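The propagation delays at stake here are easy to make concrete. The distance below is hypothetical and the figures are rough, but the arithmetic shows why we see the saw bite into the tree well before we hear it — and why, by the delay argument, light-borne perception would be “indirect” too, just on a much shorter timescale.

```python
# Rough illustrative figures; the saw's distance is hypothetical.

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
SPEED_OF_LIGHT = 3.0e8   # m/s

def delay(distance_m, speed_m_s):
    """Propagation time for a signal crossing the given distance."""
    return distance_m / speed_m_s

saw_distance = 500.0  # meters to the buzz-saw on the mountainside

sound_delay = delay(saw_distance, SPEED_OF_SOUND)  # on the order of seconds
light_delay = delay(saw_distance, SPEED_OF_LIGHT)  # on the order of microseconds

print(f"sound arrives {sound_delay:.2f} s after the light")
```

At half a kilometer the acoustic lag is nearly a second and a half, while the optical lag is a couple of microseconds — both nonzero, which is the point of the comparison with neural processing delays.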
Percept as Object. Is the visual percept of a pine tree the same thing as the pine tree? No: the percept is the result of a series of neurochemical and electrical transformations of light reflected off the surfaces of the tree. But is the visual percept of a tree a discrete object, distinct from the tree? I don’t know; it depends on what an “object” is. A visual percept is the continually-updated processing of light information generated by a structured array of neural cells. So does that make the percept an energy flow rather than a material thing? But the living tree is itself the continually-updated processing of cellular activity, and most of us are prepared to regard a tree as an object. It’s theoretically possible to capture the perceptual output at a specific point in time, following a discrete refreshing of the signal — sort of like a freeze-frame from a movie, or a chopped-down tree. Is that frozen percept, an output extracted from the process that generated it, an object? Sure: its properties and structure, the informational array it embodies, exist in their own right.
But a percept of the tree is a percept of the tree. Perception preserves specific invariant properties of the environment — the light reflected from surfaces. From this optical information the perceptual system reconstructs a 3-D assemblage of the environment — things and their positions relative to each other, their movements and the spaces between them — that generated the patterns of light detected by the retina. Vision is for navigating safely through the environment. From the perceiver’s point of view, the more accurate the visual reconstruction of the environment the better, especially when it comes to identifying environmental affordances that are particularly salient to the organism: sources of danger, sources of food, places to hide or to find shelter, mating opportunities. To objectify the percept in isolation from the thing perceived and from the perceiver is to isolate the percept from its function, from the processes that generate it, from the informational invariants it preserves through these processes, and from its internal and external relations. This sort of objectification can be done, but it demands that the observer perform an intentional work of abstraction.