Knowledge/Speculation

There are things about the real world that humans do not and cannot know. Purportedly the continental antirealist strain of philosophy has restricted itself to what humans can know and how they know it, turning philosophy into epistemology and hermeneutics and phenomenology. The new breed of continental realists speculate about what the real world might be like outside of human awareness.

Scientific realists explore the real by means of human knowledge. They assume that humans can know something about the real, even if that knowledge is distorted and incomplete. In the absence of knowledge, speculation is the only recourse. The scientist values speculation because it opens up new possibilities for seeking knowledge. The scientist wants to put speculation to the test, incrementally replacing imagination and ignorance with knowledge. For a scientist to ask “but how do you know?” isn’t to substitute epistemology for ontology. The scientist isn’t asking how humans acquire knowledge; the scientist wants to know that your speculation has some basis in the real.

There will always be aspects of the real that are beyond the reach of human knowledge. The more we know, the more we realize that we don’t know, and so the future of speculation is assured. From a scientific realist perspective, the first big mistake is to regard some aspect of the real as permanently insulated from human knowledge and thus permanently consigned to the realm of speculation. An even bigger mistake is to substitute speculation for knowledge as the basis for engaging reality in general: that’s the way of the rationalist, the idealist, the mystic, the fideist.

The scientist’s question is a refined version of what any curious child wants to know. “The kitty will find a good home,” the father assures the crying child as they leave the stray at the pound. “But how do you know?” Well, you don’t really know: you hope, you count on the odds, you speculate. The only way you can know is to come back to the pound in a month, find out what happened to the kitty, go interview the kitty’s new owners, inspect the kitty visually. Of course even then you aren’t 100% certain that the kitty has found a good home. But at least you’ve replaced some of your speculations with knowledge.

Some Thoughts on Phase Space

Hey, it’s just a blog, right?

Citing my favorite source, Wikipedia, phase space is

“a space in which all possible states of a system are represented, with each possible state of the system corresponding to one unique point in the phase space.”

A particular system might occupy only a fraction of its possible states during its existence. The phase space can be described as an array of probabilities that the system will actually occupy any particular state. So if an object is to include the entirety of its phase space, then the object contains not just its actual state in a given moment in time, nor even all the states it occupies during its lifetime, but all the possible states it could, with probability > 0, achieve during its lifetime.

This is a rather abstract description of an object, including a whole array of potentialities that are never actualized. So, e.g., the chair I’m sitting in would include in its objecthood all the possible physical places it could occupy in the universe. The probabilities are highest for its someday occupying some other space in the living room, but the probability is greater than zero that someday it might find itself sitting someplace in upstate New York. The chair’s potential to be pretty much anywhere on earth at some point during its lifetime could be regarded as an important aspect of the chair that doesn’t participate in its interactions with me or with the other stuff in the room it currently occupies. But I don’t see how the chair’s potential to be elsewhere is withdrawn from its current interactions here and now. The chair is indifferent to being moved; it resists only in a purely mechanical sense of being stationary and, as an inanimate object, incapable of autonomous movement. But if the moving men came and put the chair in a truck, the chair would cooperate. Potential doesn’t withdraw from its own actualization. Rather, it’s a matter of probabilities, which seem neutral rather than withdrawn.
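
To make that concrete, here’s a minimal sketch in Python (my own toy, with invented states and probabilities, not anything from the Wikipedia article): the chair’s phase space as a mapping from possible states to the probability of ever occupying them.

```python
import random

# Toy phase space for the chair: every possible state paired with the
# probability that the chair will actually occupy it during its lifetime.
# State names and numbers are invented for illustration only.
chair_phase_space = {
    "current_spot_in_living_room": 0.70,
    "other_spot_in_living_room": 0.25,
    "somewhere_in_upstate_new_york": 0.05,
}

# The probabilities over all possible states should sum to 1.
assert abs(sum(chair_phase_space.values()) - 1.0) < 1e-9

# On this reading the chair's objecthood includes every state with p > 0,
# not just the state it happens to occupy right now.
possible_states = [s for s, p in chair_phase_space.items() if p > 0]

# Sample one actualized outcome from the space of potentials.
states, weights = zip(*chair_phase_space.items())
print(random.choices(states, weights=weights, k=1)[0])
```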

The quarter sitting on my nightstand is currently in the heads-up position, but it contains within itself the potential to be tails-up. If I flip the quarter it’s not going to resist coming up tails.

The probability approaches 100% that I will still be an embodied living human being when I post these thoughts on the blog. The probability approaches 100% that I won’t still be an ELHB 60 years from now. The possible trajectories I could have taken in the past are withdrawn save for one: the actual path that I took. The further into the future I project my potential existence, the greater the likelihood that one day I will visit Ulan Bator or any other remote location on earth that’s part of my low-probability geographical phase space. Some day, though, all my futures will join my pasts in being fully used up, fully withdrawn from actualization.

Some Thoughts on Difference

Just thinking out loud…

“Difference” implies “different-from.” Something that is is different from what it is not. Doesn’t it follow that the discrete uniqueness of an object, its essence as “different,” is defined in relation to everything else from which it differs? This isn’t just a language game, where the word for an object is defined as its difference from all other words. Nor is it just an epistemological matter, whereby an observer recognizes a discrete object relative to its surroundings. If something isn’t different from other stuff, then it’s the same as the other stuff, no?

If a reality is entirely uniform and stable, then any sort of change that emerges in this reality is differentiating. If a reality is entirely chaotic, random, noisy, unstable, then any sort of stability that emerges in this reality is differentiating. If a reality is comprised entirely of discrete things, stable yet distinct from one another, then any sort of unique pattern is differentiating. In any case, difference is different-from.

If difference is that which distinguishes a thing from the rest of the reality it occupies, then the uniqueness of a discrete thing is the combination of differences it contains. This n-dimensional differential vector might manifest itself in a variety of ways relative to other things in its larger reality. So, for example, a distinct genetic pattern will generate an organism that exhibits various kinds of distinct phenotypic differences in the ways it interacts with its environment. Whether one regards genotype or phenotype or both as the definitive “difference that makes a difference,” the essence of the discrete organism is still embedded in the vector of differences-from, which are intrinsically relational.
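
As a toy way of picturing that n-dimensional differential vector (my own sketch, with made-up features and numbers), each thing can be represented as a feature vector, and its “difference” computed as its deviation from everything else it shares a reality with:

```python
import numpy as np

# Invented feature vectors for a few "things" sharing one reality.
# Columns are arbitrary dimensions of comparison.
things = {
    "thing_a": np.array([1.0, 0.2, 3.5]),
    "thing_b": np.array([0.9, 0.3, 3.4]),
    "thing_c": np.array([5.0, 2.0, 0.1]),
}

def difference_vector(name, population):
    """A thing's 'difference' as its deviation from the rest of its reality."""
    others = np.array([v for k, v in population.items() if k != name])
    return population[name] - others.mean(axis=0)

# thing_c's uniqueness only shows up relative to what it is not.
print(difference_vector("thing_c", things))
```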

Suppose the essence of some discrete thing withdraws from all relations. If difference is always relational, then difference makes no difference to this discrete thing: it could hypothetically be identical to anything or everything else in its reality and still be a discrete thing in its withdrawn essence. Conversely, a thing’s difference-from other things can be multiple and extreme yet still not make any real difference for establishing its distinct reality. The only alternative I can think of is to propose a kind of difference that isn’t difference-from. The interior of a discrete thing into which its non-relational essence retreats: it would have to be a place outside of the reality in which relations occur, wouldn’t it?

Object-Reality Interdependence

Developmental systems theory emphasizes the interrelationships of organisms and environments. Genes aren’t regarded as predetermined trajectories that will inevitably unfold unless some environmental glitch gets in the way. Rather, genes and environments interact in complex ways, leading to any number of outcomes. The same genotype can manifest itself in very different ways phenotypically depending on variations in the local situation. Individual organisms don’t develop in isolation as autonomous entities; they must occupy a particular niche within their species, their local community, their family. Environmental variables like climate fluctuation, food scarcity, and population densities of one’s own species as well as predator/prey species can exert profound effects on the individual organism’s life course. But organisms don’t just adapt to their environments. Organisms actively shape their environments, building nests, laying down trails, pollinating plants, affecting the populations of predator/prey animals in the vicinity. While many kinds of organisms might share a common space, they don’t really share a common environment. Features of the world that afford safety and nourishment for one species might present a threat to another species while being met with complete indifference by a third species. In short, neither the organism nor the environment can be considered in isolation; they are interdependent.
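
A minimal sketch of that gene-environment interaction, assuming an entirely invented reaction norm: the same genotype maps onto different phenotypes depending on the environmental variable it happens to meet.

```python
# Toy "reaction norm": phenotype depends jointly on genotype and environment.
# Genotype labels, coefficients, and temperatures are invented; the point is
# only that a single genotype yields different phenotypes in different
# local situations.
REACTION_NORMS = {
    "genotype_A": lambda temp_c: 10.0 + 0.8 * temp_c,  # highly plastic
    "genotype_B": lambda temp_c: 25.0 + 0.1 * temp_c,  # relatively canalized
}

def phenotype(genotype, temp_c):
    """Trait value (say, plant height in cm) for a genotype in an environment."""
    return REACTION_NORMS[genotype](temp_c)

for temp in (5, 15, 30):  # three different local situations
    print(temp, phenotype("genotype_A", temp), phenotype("genotype_B", temp))
```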

The same sorts of insights hold for objects and realities. Objects aren’t hard-programmed to become that which their component parts and primal causative forces predetermine them to be. They follow idiosyncratic trajectories, based to a large degree on local differences in the texture of the reality they occupy. Each object is different from every other object. Still, objects do cluster themselves into categories that reflect real shared similarities with one another and real systematic differences from other kinds of objects. While all objects occupy the same material universe, they don’t all occupy the same reality. Some objects are affected by differentiating forces to which other objects remain invulnerable. Similarly, different kinds of objects can effect different kinds of differences in their surroundings. If an object is a “difference that makes a difference,” then a reality for that object consists of those sorts of differentiating forces in which it participates. A real object and the reality it occupies are interdependent.

Moby Dick Sub-Reality

Certainly we can agree on a few things. Fictional characters are real in the sense of being the subject matter of real books, the focus of real human conversations and literary analyses and flights of imagination. But fictional characters aren’t real people. Fictional worlds too might be real, but they aren’t the real world. So where does that leave us?

In a recent post I acknowledged that I had written a particular fictional character in a very sketchy way, leaving the reader plenty of leeway to imagine what this character is “really” like. But creating characters wasn’t my main concern in writing the novel. Mostly I was trying to open up a window onto an alternate reality. The fictional characters serve as proxies, stand-ins for real people who might occupy this alternate reality. The characters also function as lures, attempting to draw readers into this alternate reality.

I could go into some detail describing the dimensions and contours of the reality I tried to open up in that book. The characters and events occupy pretty much the same time and space as the real material world we live in. What’s important in establishing the alterity of that particular fictional reality are the strands of meaning that link the characters together, that motivate their actions, and that shape the imaginary trajectories they trace through the world.

There is no reason why real people, occupying the real world, couldn’t find their lives shaped by these same forces. They may in fact be so shaped, at least in part, without their consciously being aware of it. This possible overlap between fictional and ordinary realities isn’t true just of my book. Anyone could become entangled in obsessive vengeance, even if he’s not the captain of a ship and the object of his passion isn’t a great white whale.

Still, you and I aren’t characters in Moby Dick — that fictional story does not include us. Ahab isn’t real in our world, but by the same token we aren’t real in his. This isn’t to say that, as people, Ahab and you are of equal standing: you’re not. For one thing, Ahab is a lot more famous and influential in our world than you are; for another, Ahab has no material human existence in the real world and he never did.

There are strands of meaning and motivation that link fictional with nonfictional worlds. In understanding megalomania, Ahab presents an excellent case study. Of course we understand that he’s a fictional character. But Ahab is entwined in strands of meaning that affect us just as firmly as he is wrapped up in the harpooneer’s rope.

Sure, ultimately there is only one reality, even if it turns out that we occupy only one among countless universes in the multiverse. In our universe everything came out of the Big Bang, eventually including Herman Melville, his books, the characters who populate them, and the abstract themes that link them to us still. But isn’t it useful for certain purposes to partition the one reality into many?

The fictional reality created inside Moby Dick involves certain characters doing various things in certain places that have a direct correspondence to the material world in which Melville lived. Real people alive at the time the book was written, real places not directly mentioned in the book, and everyone who has ever read the book do not exist inside that fictional reality. We live in a world that can be partitioned into a sub-reality consisting entirely of every novel ever written. Melville, though no longer alive in the real world, occupies a place in this sub-reality as an author. The original manuscript of Moby Dick may well be lost, but millions of physical copies of the book exist in various languages, as do online versions that can be downloaded onto computers. From the perspective of our sub-reality we can disregard all the physical and virtual copies, focusing on the single abstract object called “Moby Dick the novel.” Likewise we can disregard all the copies of Ahab residing in all the copies of the book, focusing on the single, abstract, never-alive but fictionally-real sea captain. And the theme of megalomania, though it’s never written in so many words in the book, emerges from it, linking Ahab to other fictional characters and to those of us who choose to occupy the sub-reality of all novels ever written.
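
If it helps, that partitioning can be caricatured as a type/token data model (my own toy, not anything from Melville scholarship): one abstract work, one abstract Ahab, and any number of physical or virtual copies hanging off them.

```python
from dataclasses import dataclass, field

@dataclass
class AbstractWork:
    """The single abstract object, e.g. 'Moby Dick the novel'."""
    title: str
    author: str
    characters: list = field(default_factory=list)  # abstract, never-alive
    copies: list = field(default_factory=list)      # physical and virtual tokens

moby_dick = AbstractWork("Moby Dick", "Herman Melville",
                         characters=["Ahab", "Ishmael", "Queequeg"])
moby_dick.copies.extend(["1851 first edition", "paperback reprint", "ebook file"])

# Within the sub-reality of novels we can disregard the tokens and refer
# only to the one abstract work and its one abstract Ahab.
print(len(moby_dick.copies), "copies;", len(moby_dick.characters), "abstract characters")
```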

In some other sub-reality, consisting of all printed documents, all those hard copies of Moby Dick do count as real. And in another sub-reality the megalomaniacal theme is real even to those who have never heard of Moby Dick.

Right-Brain Psychoanalysis

Last night I attended a presentation on learning styles at my daughter’s school. The speaker, an educational psychologist, pushed the left brain-right brain asymmetry as the source of two different cognitive styles: auditory-sequential and visual-spatial. I’m left-handed, so presumably my brain has more cross-wiring than right-handed people’s brains. Even so, characterizing the left hemisphere as “auditory” is misleading.

Briefly, the argument is this: Empirical evidence consistently demonstrates that (for right-handers especially) language is processed mostly in the left hemisphere. This is true for both spoken and written language. Language is processed sequentially, and sequence is a function of time. There is some evidence suggesting that the left hemisphere is more sensitive than the right in detecting short time intervals. What the right hemisphere adds to linguistic processing is the awareness of affect, attitude, interpersonal context: connotation rather than denotation, holistic rather than sequential. Many of the relevant connotational cues are visual: body language, facial expression. And there is independent evidence supporting right-hemispheric dominance in processing visual-spatial information.

However, other connotational cues are auditory: tone of voice and inflection, so-called “melodic speech,” which is also predominantly a right-hemispheric function in most people. The right brain is also presumably better at conjuring up mental images of what a string of language is talking about: the objects, events, and scenes being described, the array of signifieds toward which the linguistic signifiers point. The right brain is also better at divergent thinking: coming up with alternative ways of imagining or thinking about or representing something, which I believe implies the ability to generate alternative linguistic descriptions of something.

So now I find myself thinking about implications for psychotherapy and analysis. Language is the dominant medium for pretty much all techniques, suggesting a left-hemispheric bias. Cognitive-behavioral praxis involves a systematic parsing of thoughts and behaviors in an attempt to identify mismatches: irrational perceptions and attitudes and beliefs, inappropriate behavioral-linguistic responses. Treatment involves breaking into the sequence that links environmental cue, thought, and action, then consciously attempting to restructure this sequence in a more rational way.

In contrast, psychoanalytic technique deals primarily with the unexpressed, the repressed, the unformulated. As the person speaks, the analyst looks for clues to what is not being said: slips, tone of voice, facial tics, bodily movements. Through free association the client begins producing linguistic strings that haven’t been structured consciously into appropriate and rational discourse. Guided imagery encourages the client to picture memories or events or situations in the mind’s eye. Progress is made by bringing more and more unconscious material into awareness, playing with it, integrating it with conscious but discrepant thoughts, and eventually letting it settle into a holistic scheme of coherent personal meaning.

In short, doesn’t it seem that cognitive-behavioral therapy is a left-hemispheric praxis whereas psychoanalysis emphasizes right-brain activity?

Still, psychoanalysis focuses on linguistic expression. In part this is an artifact of analysis being an interpersonal process: it’s hard to know what someone else is thinking without their putting it into words. Also, though, there is a presumption that consciousness is inherently linguistic. Thought and language seem inextricably linked, such that thought is a kind of unspoken linguistic process and language is thought made accessible to others. Thoughts which cannot be expressed verbally aren’t really thoughts, it is argued. Further, analysis has historically depended on the analyst’s ability to interpret the client, and interpretation is always verbally communicated.

But what about images, pictures, physical structures? To create visual-spatial things requires conscious attention, contemplation, imagination, and manipulation. Collage, haphazard rearrangement of components, even demolition: these activities both embody and generate meaning, even if that meaning cannot be put into words. Must the analyst insist that the client drag the right-brain stuff across the corpus callosum into left-brain language processing? Why not just let the client express the non-linguistic stuff non-linguistically, through image, movement, intonation, manipulation? The explicitly analytic role of the analyst is regarded as less important than the client’s self-analysis. And even if the client never explicitly formulates his or her insights in words, the changes in perception, affect, energy, desire, proactivity, freedom of expression, personal integration, and so on are the most important outcomes.

On the other hand, perhaps because I’m left-handed I value bilateral integration. Being able to express divergent and holistic thoughts and images verbally seems like a good thing. And being able to deal with images and structures and intonations and affects without having to talk about them also seems like a good thing.

Concert Al Fresco

Running toward home along the South Boulder Creek Trail, I saw a girl standing astride her bicycle stopped along the edge of the path. Approaching nearer I heard her voice — I figured she must be talking on a hands-free cell phone. As I got a little closer I realized she was singing, and quite loudly too. Singing into the phone? No. When I got near enough to see her animated expression and subtle hand gestures, I realized she was performing for the prairie dogs. I don’t know if she noticed the thumbs-up I gave her as I passed by.

[Photo from natures-desktop.com]

Who Is She Really?

“When Mrs. Dervain reached her hand out to me I thought she was extending a common kindness.”

That’s the first line of a novel I wrote. Mrs. Dervain is a central character, but I purposely revealed very little about her. I wanted to see what sort of person readers inferred her to be — likable or not, physical characteristics, and so on — based on the minimal information provided in her words, gestures, and actions. Those few people who have read the book seem to find Mrs. Dervain fascinating, even though they tend to ascribe very different characteristics to her.

I began writing this novel, abandoned it for maybe a year and a half, then came back to it. The other two main characters had prominent roles in the earlier fragment; Mrs. Dervain I introduced as a new character. I had passed the halfway point in the writing when I reread part of the older manuscript. It included an extended section featuring another woman character. I wondered: what if I turn this other woman character into Mrs. Dervain? With only the slightest forethought or planning I created a big piece of Mrs. Dervain’s back story simply by assigning her name to this earlier character.

Immediately I began seeing Mrs. Dervain in a different light. Now that she had been merged with this earlier character her gestures and remarks seemed to resonate more deeply, revealing greater complexity in motivation and attitude. But her deepened character resulted from an arbitrary, even capricious move on my part. Had this earlier textual fragment involved a very different story line, merging the woman character into Mrs. Dervain would have turned her into a quite different person.

A while back I wrote a post about “cyranoids” — people who, in conversation, speak words fed to them via an earpiece from someone else. The cyranoids’ interlocutors invariably ascribe a whole and integrated personality to a flesh-and-blood individual who is voicing the thoughts of two, three, even ten different people.

A reader could decide that I, the writer, should have the last say in asserting what Mrs. Dervain is really like. But I was just writing the words, making it up as I went along. Editing tidied up some loose ends and eliminated some inconsistencies, but this was just surface polishing. I don’t know Mrs. Dervain any better than any other reader of the book.

Are Illusions Real?

Empirical psychologists frequently rely on deception and error in order to infer how cognitive processes work. When intersubjective agreement about some phenomenon is total, it’s impossible to distinguish between the nature of that phenomenon and the way in which the human subject perceives it. The research psychologist tries to open up a split. Optical illusions are familiar cases. In one well-known example, two perfectly parallel lines appear to bow apart from each other in the middle. The figure is constructed in such a way as to deceive the human perceptual system. Or researchers can construct deceptive problems for subjects to solve. If subjects tend to make particular kinds of errors on tasks for which the right solutions are well-defined a priori, then the errors can be attributed to quirks of human subjectivity that caused the subjects to misapprehend the nature of the problem.

In one study, the researcher displays four brands of a particular product on a table, arrayed from left to right. The researcher asks the subjects to choose which brand they regard as best and why. Subjects make their choices and offer their rationales. In fact, all four displayed products are identical. Empirically, it turns out that, on average, subjects prefer objects on the right side of the display to those on the left. In explaining the basis for their choices, the subjects describe (nonexistent) differences in quality without ever showing any conscious awareness of what must actually have motivated their choices. The subjects perceive differences in the individual objects, but these differences are illusory. They’re actually responding to an unconscious preference tied to the objects’ positions in the array, a bias that seems characteristic of human subjectivity.
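
Here’s a hedged sketch of how that position effect might be simulated; the display labels, bias weights, and sample size are all invented, and the study described here sounds like the classic Nisbett and Wilson consumer-preference experiment, though it isn’t named.

```python
import random
from collections import Counter

# Toy simulation of the four-identical-products display. The bias weights and
# sample size are invented; the only point is that a modest unconscious pull
# toward the right side produces a systematic skew in "brand" preference.
POSITIONS = ["far_left", "center_left", "center_right", "far_right"]
BIAS_WEIGHTS = [1.0, 1.2, 1.5, 2.0]  # rightmost positions favored

def run_experiment(n_subjects=1000, seed=0):
    rng = random.Random(seed)
    choices = rng.choices(POSITIONS, weights=BIAS_WEIGHTS, k=n_subjects)
    return Counter(choices)

print(run_experiment())
# All four products are identical, so any reliable skew toward the right
# has to come from the subjects, not from the objects on the table.
```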

From the researcher’s point of view, subjects’ errors in task performance and misattributions in accounting for their own behaviors are real enough. Errors and self-deceptions are counted, categorized, analyzed statistically, interpreted theoretically. Again, though, what motivates this sort of work is to use these errors as a means of distinguishing perceptual-cognitive processes from the external phenomena they’re processing. The errors are real in the paradoxical sense that they present real evidence of human limitations in discerning external reality accurately.

The physical sciences make progress by identifying and controlling for observational error caused by limitations and biases in human perceptual-cognitive capabilities. To the human eye, the moon and the sun appear to be just about the same size. But it turns out this is an illusion resulting from intrinsic human limitations in judging distance between the eye and the observed object, especially when the distances are enormous. The apparent size-equivalence of sun and moon to the naked eye is real enough, in the sense that it’s a real illusion pointing to real limitations in humans’ ability to perceive the external world accurately. What interests the astronomer, though, are the actual sizes and distances of the sun and moon.
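
For what it’s worth, the arithmetic behind that coincidence is easy to check with round published figures (the diameters and distances below are approximate):

```python
import math

# Approximate mean diameters and distances, in kilometers.
MOON_DIAMETER, MOON_DISTANCE = 3_474, 384_400
SUN_DIAMETER, SUN_DISTANCE = 1_391_400, 149_600_000

def angular_size_degrees(diameter, distance):
    """Apparent angular diameter as seen from Earth."""
    return math.degrees(2 * math.atan(diameter / (2 * distance)))

print(round(angular_size_degrees(MOON_DIAMETER, MOON_DISTANCE), 2))  # ~0.52
print(round(angular_size_degrees(SUN_DIAMETER, SUN_DISTANCE), 2))    # ~0.53
```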

I grasp the “object-oriented” contention that my perception of the size of the sun is just as real as the sun itself. Call me old-fashioned, but I can’t help finding this ontological equivalence rather dissatisfying when trying to distinguish illusion from fact, subjective from objective, the apparently real from the really real.

Prawn Basterds

*SPOILER ALERTS*  The Jews’ spaceship stalls in the airspace just above Warsaw. When it becomes apparent that the ship is no longer capable of transporting the Jews back to their Promised Land, the humans move the Jews, whom they disdainfully refer to as Prawns, into a big ghetto called The District. To accommodate rapid population increases among the Prawns, many more Districts are established throughout Europe. The Jews start to escape from the Districts, disguising themselves as humans by eating non-kosher food and learning to speak European languages. The Nazis decide they’ve had enough: they start implementing the Ultimate Solution to resolve “the Prawn problem.” Prawn Hunters scour the countryside rounding up the renegades.

Brad Pitt is a Nazi. One day, while clearing all the Jews from District 9 and sending them to “resettlement camps,” Brad comes across a secret moyel hideout and, while inspecting the “weapons,” accidentally circumcises himself. Brad considers cutting off the offending member but can’t make himself do it. Pretty soon the “infection” starts to spread: horrified, Brad finds himself gradually transforming from human into Jew. After a while, though, he realizes that he kind of likes his new Jewishness. Pocketing the moyel’s knife, Brad starts roaming the countryside killing Nazis. The ones he lets go he circumcises.

Eventually Brad joins up with a ragtag group of American Jews. Together they form a vigilante posse known as the Prawn Basterds. One day they come across a Jew in hiding who has figured out a way to get the Jewish spaceship, still stalled over Warsaw, operational again. It turns out that the ship’s movie projector somehow jettisoned the fourth reel, which fell into a Warsaw junk heap where it was torn to bits by scavengers. Without the fourth reel, which contains all the footage describing the return voyage, the ship came to a standstill. Word reaches Brad that Louis B. Mayer and David O. Selznick have painstakingly reconstructed the lost footage. The Prawn Basterds undertake the perilous voyage to Hollywood, where they secure the precious fourth-reel remake. They head back to Europe.

By the time they reach Warsaw the legendary Prawn Basterds have been decimated, leaving only Brad and one Jew, whose name is CHRISTopher, to fulfill the mission. The two of them become trapped in a movie theater with hundreds of Nazis, including Goebbels, Goering, Himmler, and Hitler. Brad, who is able to use his prawnized member to operate a super-sophisticated Jewish weapon, burns the theater to the ground, killing everyone inside, regrettably including CHRISTopher. Somehow Brad manages to escape; he recedes heartbroken into the war-torn Warsaw squalor.

Three days later, Brad casts his eyes skyward in astonishment to watch CHRISTopher miraculously rising from the smouldering pile of rubble. He carries under his arm the canister containing the Fourth Reel, which somehow survived the theatrical holocaust. He ascends bodily to the space ship, puts the reel into the projector, turns it on, and sits down at the controls. Brad, awestruck, watches the great airship rumble to life and veer off toward the southeast. Is CHRISTopher escaping, or will he return with a rescue party to ferry the rest of the Prawns home? The movie ends well-positioned for a sequel.

I hope I didn’t leave anything out…

Bullying

Bullies play a critical role in coming-of-age movies, embodying the fear we all have of one another. The bully never goes away; eventually he must be confronted. It’s never a matter of brute force that overwhelms the bully, but a combination of wits, leverage, and teamwork. Most important is bravery — not bravado, but rather a willingness to confront one’s fears, risking humiliation in order to attain some new measure of autonomy and self-assurance on the road to adulthood.

Teachers have watched these movies, surely. Why, then, when the schoolyard bullies have been neutralized, do they have to fill the void?

Adorno, in his essay “Taboos on the Teaching Profession,” denounces the stereotypical teacher as a “classroom tyrant,” a “caricature of despotism” whose power only parodies that of other educated professionals. In knowing more than his charges and in wielding power over those obligated to obey him, the teacher is “not fair, not a good sport.” To be good, a teacher must set aside these advantages accruing solely to his function:

“Success as an academic teacher is due to the absence of every kind of calculated influence, to the renunciation of persuasion.”

So here’s the story. Our daughter Kenzie is a junior in high school. As a sophomore she took Advanced Placement American History, which entailed a huge amount of work. She got an A in the class and passed the AP exam “above expectations,” earning university credit. This year every class in which she’s enrolled is either AP or IB (International Baccalaureate, if anything even tougher than AP). Three days into the semester she decided that the IB World History class, which by all accounts imposes an even greater burden than the American History class did, was just too much on top of everything else. She decided therefore to switch into a regular section of the history class. Anne and I supported this decision.

On Monday Kenzie, fairly certain of her decision but a little nervous about rocking the boat, goes to school trying to reorganize her schedule. She stops in to discuss her rationale with her teacher from last year’s history class, a tough old broad who is an excellent teacher and whom our daughter respects a great deal. The teacher listens patiently and agrees with Kenzie’s decision. Two other history teachers, overhearing the conversation, start talking to one another. “What’s with these kids? Did they lose half of their brain cells over the summer?”

Next Kenzie has to get a signature from her current World History teacher in order to get out of his class. She tracks him down in his office. The guy isn’t going to make it easy for her. “It’s hard for me not to take this personally,” he tells Kenzie. Kenzie describes her overloaded schedule. “But art?” he asks disdainfully. Kenzie is an artist first and foremost; every other class is optional, but not art. Apparently the other two eavesdroppers had “discussed” Kenzie’s case with this guy based on what they’d overheard. Clearly they had come to the conclusion that this girl is a slacker, taking art just to keep the grade-point average high without doing any real work.

Next Kenzie goes to the counseling office to find out whether any sections of regular-intensity World History have any empty seats left. Luck is with her: there’s an opening in the 6th period section. The counselor tells Kenzie that she has to go back to see her current teacher again, the one who takes it personally and who hates art. Why? Because they had given her the wrong form to be signed. Kenzie balks: “I don’t want to go back there.” At that moment this history teacher passes through the counseling office. Here, sign this, the counselor tells him. He looks at Kenzie and stands there, not signing. “You approved it,” Kenzie reminds him. “I didn’t approve it. I don’t approve. I’ll go along with it, but I don’t approve.” He signs and walks off.

Eventually Kenzie got it all worked out, having passed through this rite of passage bruised but not crushed. But I ask you, is this bullshit really necessary?

The Adaptive Unconscious

I just finished reading Strangers to Ourselves. It’s the second book by that title I’ve read. The first, written by Lacanian psychoanalyst Julia Kristeva, deals with the place of the stranger through the history of Western culture. The book I just read is by Timothy Wilson, a psychology professor at the U. of Virginia, where I went to grad school. Tim focuses largely on humans’ limited ability to gain conscious access to the unconscious. He’s not an analyst or a therapist but a researcher in social psych, so he brings a different sort of information and interpretive framework to the conscious/unconscious division.

Based on a count of receptor cells and their neural connections, neuroscientists estimate that the human sensory system takes in more than 11 million pieces of information per second. Based on studies of processing speed on tasks like reading and detecting different flashes of light, cognitive psychologists estimate that people can process consciously about 40 pieces of information per second. What happens to the other 10,999,960? It’s processed unconsciously.

That’s how we acquire most of what we learn about the environment, people, language, routine behaviors, and social interaction. We acquire this kind of knowledge not by assembling a series of discrete facts or events — the kinds of things consciousness is good at attending to — but by mastering complex patterns. The unconscious is particularly good at dealing with patterns, not through conscious calculation of algorithms but through intricate neural networks that compare already-stored arrays of information with new arrays continuously presented to it through broad-band environmental tracking systems. The 11 million pieces of sensory input aren’t all lined up in a row, waiting for our perceptual systems to structure them. The sensory systems are broad-band matrices that are able to detect structure that already exists in the ambient environmental array.
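
A rough sketch of the kind of comparison being described, if we caricature a stored array and an incoming array as numeric vectors and match them by similarity rather than by an explicit rule (the patterns and numbers are invented):

```python
import numpy as np

# Caricature of unconscious pattern matching: compare a new sensory array
# against already-stored arrays and return whichever stored pattern it most
# resembles. The patterns and the new input are invented numbers.
stored_patterns = {
    "familiar_face": np.array([0.9, 0.1, 0.8, 0.2]),
    "approaching_car": np.array([0.2, 0.9, 0.1, 0.7]),
}

def best_match(new_array, patterns):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(patterns, key=lambda name: cosine(new_array, patterns[name]))

incoming = np.array([0.85, 0.15, 0.75, 0.25])
print(best_match(incoming, stored_patterns))  # -> "familiar_face"
```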

Consciousness is useful when we want to pay particular attention to something: catching a ball, cooking dinner, reading a blog post. A lot of other stuff is happening around us that we’re not consciously attending to — traffic sounds outside, the breeze from the fan, small movements of the other people in the room. Still, we’re aware of the details of our environment even when we’re not focusing our attention on them. It’s adaptive to be in a constant state of awareness in case something happens that calls for us to react. It’s not adaptive, though, to pay conscious attention to all the little details, because then we lose focus on the main task at hand.

We can call much of this unconsciously-compiled information into conscious awareness pretty much on demand. The accessible stuff is mostly content: names of childhood neighbors, how to order a meal at McDonald’s, the color of pumpkins. It’s nearly impossible to retrieve unconscious processes: how we know where a baseball hit over our head is likely to land, why we take an immediate liking to certain people, why we suddenly feel apprehensive or giddy, how we usually come across to other people.

It turns out that introspecting about unconscious processes isn’t a very useful retrieval method. These processes didn’t start out in consciousness only to be repressed or forgotten; they never appeared in consciousness in the first place. Human cognition is more adaptive when most of it takes place in background mode, out of our awareness. There just aren’t very many direct neural pathways hauling this stuff up from the sensory and emotional and pattern-matching activities going on in our brains. “Look out, not in” is an appropriate rubric. Often it’s more reliable to observe our own behavior in particular situations and try to reverse-engineer what might have motivated it. This is how other people infer things about our goals and motivations and biases — by watching and evaluating behavior. Consequently it’s also informative to ask other people what they see in us — if we can bear hearing the truth. Or we can invent situational scenarios and imagine how we would likely react. We’re unconsciously equipped to communicate with other people, so talking can be a productive way of getting the unconscious material out of our heads and into words. Writing works too, as a sort of simulated conversational medium. Still:

“Although it may feel as though we are discovering important truths about ourselves when we introspect, we are not gaining direct access to the adaptive unconscious. Introspection is more like literary criticism in which we are the text to be understood. Just as there is no single truth that lies within a literary text, but many truths, so there are many truths about a person that can be constructed. The analogy I favor is introspection as personal narrative, whereby people construct stories about their lives.”

– Timothy Wilson, Strangers to Ourselves, p. 162

Hammer in the Head

An artifact is an object that’s been intentionally designed and built by humans. From a purely material standpoint an artifact is neither more nor less real than a naturally-occurring object. Usually, though, humans recognize the difference between nature and artifice. People tend to use artifacts for purposes intentionally built into them by the artificer; e.g., when I want to hammer something I look for a hammer. The hammer emits information signaling its designed-in utility, and this information is received by the human would-be hammerer. But since necessity is the mother of invention, I could also pick up a rock I happen to find out in the field and use it for hammering.

The rock isn’t an artifact, but it affords hammering. Is the rock’s hammer-ness an emergent property of the rock itself, or is it a property of the way I perceive the rock? Do I pick up information emitted by the rock that wasn’t designed into it, or does my intentional mental state actively construct hammer-ness, which I then impose on the rock? It would seem that both operations are in play. The rock is a material object that conveys higher-order information to humans about its utility for hammering. It’s certain that found objects like rocks were the first human tools — that’s why they call it the Stone Age. Hard, heavy, but not too heavy: the same information is conveyed by the naturally-occurring rock as by the specially-designed hammer. The history of human artifice entails the progressive shaping of naturally-occurring materials in ways that enhance their natural utility. Tool use and tool construction progress in parallel. This all seems non-controversial enough.

The found rock is a hammer by happenstance; that thing in the toolbox called “hammer” was designed and built for hammering. The rock was a rock even before I picked it up and used it to pound something; the hammer wasn’t a hammer until it got made into a hammer. But does the rock convey its hammer-ness to every thing and creature it encounters, regardless of whether they ever intentionally want to hammer something? Or is the idea of hammer-ness an abstract artifact in its own right, a thought about a particular kind of intentional agency that was invented by humans sometime in prehistory, such that the rock’s hammer-ness didn’t exist until the idea of hammer-ness was imposed on it?

My cat doesn’t get it: the rock and the hammer are just two hard and heavy physical objects occupying space in his environment. Even for me, the rock’s hammer-ness doesn’t come to mind until I need to pound something and don’t have a hammer handy. Why do I think about using the rock for hammering rather than some tuft of grass or the cat? Because the rock possesses the physical properties of hardness and heaviness that work best for hammering. These properties exist in the rock independent of my thinking about them. But when I need something to pound with, I receive the rock’s already-existing hardness and heaviness as information about the rock’s hammer-ness.

My cat never intentionally thinks about hammering anything, and so he never gets the message from the rock. On the other hand, my cat can use his paw to swat things, in effect wielding his paw as a hammer. Early humans probably used their fists for pounding before they ever started using rocks.

Before I picked it up, the rock might have been resting in roughly the same place for ten million years. Did it acquire its hammerish properties only recently, after a hammer-wielding species evolved on the planet? No: the rock’s hardness and heaviness — features that make it useful for hammering — already existed in the rock before anybody thought of using it as a hammer. The rock has always had hammerish properties, ever since erosion pried it loose from the mountainside, turning it into a separate object, and gravity and the mountainside used the rock to hammer the pebbles, earth, plants, and small creatures it encountered as it rolled down to its resting place.

My fist, the rock, the hammer: the information about these three objects’ hardness and heaviness can be quantified and written down on a piece of paper. This hammer-ness information originates in these objects, is already part of them. What’s in my head but not in my cat’s head is the ability to receive and interpret that information relative to my intention to hammer something. When I pick up a rock with the intention of pounding something with it, I recognize information already embedded in the rock and interpret it with respect to my own intentionality.
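
Here’s a minimal sketch of that picture, with invented numbers: the hardness and mass belong to the objects; what differs between me and my cat is whether there’s an intention against which those properties get read as hammer-ness.

```python
from dataclasses import dataclass

@dataclass
class Thing:
    name: str
    hardness: float  # arbitrary units; all numbers here are invented
    mass_kg: float

# The properties live in the objects whether or not anyone intends to hammer.
fist = Thing("fist", hardness=2.0, mass_kg=0.5)
rock = Thing("rock", hardness=8.0, mass_kg=1.2)
hammer = Thing("hammer", hardness=9.0, mass_kg=0.6)

def hammer_ness(thing, intends_to_hammer):
    """A crude reading of already-existing properties as hammer-ness.
    Without the intention, the information never gets interpreted."""
    if not intends_to_hammer:
        return None  # the cat never receives the message
    return thing.hardness * min(thing.mass_kg, 3.0)

for t in (fist, rock, hammer):
    print(t.name, hammer_ness(t, True), hammer_ness(t, False))
```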

All of this is well known and generally accepted. At the same time we see the renewed enthusiasm for a “flat ontology,” where the rock and the artifactual hammer are equivalent as objects and where the cascading rock’s crushing of objects in its path is equivalent to my intentionally picking up the rock in order to pound something with it. Maybe that’s why I’m inclined more toward psychology than to ontology. Intentionality, hammer-ness in the head, extraction of hammerish affordances in found objects, conscious design and construction of artifacts: each of these distinctly human activities emerged from its counterpart in the non-human world. But the separation of the distinctly human from the prehuman still strikes me as remarkable. And while humans still occasionally face the risk of being crushed by rocks tumbling down mountainsides, it’s largely a human-crafted environment in which we spend most of our lives and in which we exercise our distinctly human tricks.

What appeals to me is to think about psychology not exclusively according to empirical and therapeutic/analytic paradigms, but also in ontological terms. I have far less background and experience to operate at this level, but the prospect energizes me.