On Not Knowing

We don’t really know what we want. We want what our genes tell us to want. We want what the marketplace wants to sell us. We want only what other people want. We want to be what other people want. We want to have sex with our mothers and to kill our fathers. We want what God wants through us. We want to be repressed. We don’t want anything. We want nothing. One thing’s for sure: what we want isn’t what we think we want.

We don’t really know who we are. We don’t know who anyone else is. We don’t know the world. We don’t know what we mean. We don’t know what anyone else means. We don’t know the point. We don’t know what we don’t know.

There are new gnostics among us, speaking the secret languages of hidden realms. They can translate what our subconscious is saying through us, or what our genes are saying, or what our society is saying. The gnostics can interpret what the war is really about, or the movie, or the empty spaces between the buildings. The way they speak, with facility and assurance but without obvious references to anything we already understand: is it truth or delirium? It’s another thing we don’t know.

We read that truths are propositions spoken in languages that float free of the world, and we wonder how such truths can be relied on. We read that truths are unveilings of the things themselves, but we find that even a thing fully exposed is just a thing. We read that truths are created and not discovered, and we wonder how this is any different from someone saying that the Holocaust never happened. We read that truth has been subsumed under interpretation, but we wonder what to call that thing that must be interpreted. We read that truths contain their own falsifications, and find ourselves wanting to take the easy way out.

These doubts don’t come naturally to us, and they certainly don’t help us concentrate our energies. Let’s step back into the world we left behind — it’s still there, full of people who speak a language we understand, doing things we know how to do. We’ve got skills: let’s go make some money.

There are more truths than we’re prepared to acknowledge.

The Refinements of Desire

People are different from one another; they live in different contexts; they pursue different trajectories. The world itself seems big enough and new enough to absorb unlimited infusions of excellence and differentiation. Why then do people seem to settle into the same kinds of routines, and why does the world seem so… mundane?

Partly it’s because people aren’t naturally motivated to achieve either difference or excellence. As biological organisms we’re all genetically motivated to pursue a standard set of desires: food, safety, reproduction, pleasure, affiliation, competence, recognition. Repetition isn’t a death instinct; it’s staying with what works, with procedures that reliably satisfy. From birth we’re socialized into our culture, becoming familiar with the socially acceptable pathways for fulfilling our desires. This matching of biological desires with social channels of fulfillment works pretty well for most people. In fact, it’s surprising that human individuality or culture ever diversified or advanced much beyond our species’ primal circumstances.

At times change is motivated by inadequate or blocked channels for fulfilling desire: natural disasters, tyrannical social structures, individuals with excessive hunger. But there must also be a kind of refinement that the self injects into the basic desires, turning them into pursuits of knowledge and the creation of art and the extension of hospitality to powerless strangers. Otherwise you have to assert that these “higher” motivations drop down on us from some ideal world, infused into selves from outside themselves. Though that too is possible.

Are acts of discovery and creation the expression of desire or its fulfillment? Is it enough to paint a painting, or must the painting persist in the world after its completion and be seen by others? The doing, the separation of creation from self, its entry into the world, its injection into others’ awareness — each has its satisfactions. These refined transformative desires to discover and create probably aren’t the spontaneous unimpeded outflow of subpersonal instincts. The self has to add something to the flow of desire as it passes through. It’s not necessarily a sublimation or an ascetic denial of pleasure; rather, the self contributes its own desires to the flow coming up from underneath, shaping and intensifying it.

The self can block the flows of desire, or it can pass the desires through unalloyed, seeking their satisfaction through the most straightforward means and the simplest outlets. Or the self can notice the desires as they pass through, hold onto them awhile, reflect on them, subject them to experimentation, play with them, work on them, recreate them. Now the world looks different too. Some of the transformed desires find satisfaction where before there had been none. Other transformed desires find no fulfillments in the world, and so the self must create them too. Still others are blocked by the contours of the world and the forces that shape it. Then there is either change or frustration.

Assembling a Postmodern Self

I’d guess that the descriptions of self that I’ve been putting forward in recent posts do not resonate with postmodernists. Social learning, cognitive development, empirical research, the cultural ideal, evolutionary determinism — these are artifacts of a stale modernity, and haven’t we gotten past all that by now? Okay, imagine you’re a postmodernist inventing a theory of personality. What might this postmodern construct look like?

First, you’d say that there’s no way to know what the self is really like. Our knowledge is always biased by the sociohistorical context in which we’re embedded, so we can never get an objective reading of what selves are. Even knowing myself is impossible: I’m too close to myself to be objective.

You’d say that the postmodern self is multiple. We’ve long been aware that we present different versions of ourselves to parents, lovers, neighbors, coworkers. We used to believe that these were merely facades, that beneath the multifaceted surface could be found a unified self. Now we’ve come to recognize that the unified self is a modernist myth. We really are different people in different contexts.

The postmodern self is fragmented and alienated. With the demise of the mythical true self, we realize that there’s nothing holding our multiple selves together. We are nothing in and of ourselves; each of us is an unstable nexus in a vast network of instincts, social groupings, activity patterns and cultural values in which we’re embedded. As individuals we’re in danger of disintegrating, or of being lost in the hall of mirrors. We can’t distinguish ourselves from the images of ourselves reflected back to us by ourselves, by others, by the marketplace and the media. We’re ironically self-aware, never wanting to commit to any particular version of ourselves. We recognize that everyone else is also decentralized: we’re never sure we’re really connecting with anyone. True person-to-person connection is impossible when we can’t locate either the self or the other. Though we are multiple and everywhere, we’re also hollow and alone.

The postmodern self is a narrative. We used to think about ourselves in terms of propositions: permanent features that describe us — intelligent, shy, aggressive, open-minded, and so on. We’ve come to recognize that these propositions aren’t absolute, that they’re contingent on time and circumstance. We are our lives as they unfold in space and time. Each of us is a story we tell ourselves. Each of us is a thread in a much larger and longer story spoken by the language of the world.

We are preparing to move beyond the obsession with self. At some point we have to let go of our angst and acknowledge the passing of the modern self, with its self-knowledge, its secure identity, its consistency, its autonomy. This idea of the self was an illusion, spawned by the rational individualism of modern science and the marketplace. So we’re multiple, decentralized, besieged by unconscious desires and media messages — let’s stop lamenting what we’ve lost and get on with our lives. Let’s stop being so self-absorbed and narcissistic. Let’s accept ourselves as part of forces larger than ourselves and beyond our control. Let’s create ourselves as quirky and amusing short stories. Let’s try to find a little comfort and joy from all the other multiply fragmented characters who are also charting their own quirky and amusing paths through this multiply fragmented world.

Isn’t that the kind of self we’re ready to believe in? Are there other ways of elaborating on this postmodern construct of the self?

Evolutionary Psychology

The last post emphasized the role of genetics in shaping our attitudes and behaviors. Mostly we think about ways in which our genes predispose us individually in one direction as opposed to another: blue eyes or brown, high or low intelligence, introvert or extravert, high or low impulse control. Then of course there are the genes we all share, genes that make us uniquely human: hairless bipeds with opposable thumbs and really good language skills. Evolutionary psychologists have been looking at this second category of genetic inheritance. What are the invariant characteristics of the human mind that offered our forebears an adaptive edge in the environment within which the human species evolved? More precisely, what genetic mutations made it possible for the human genome to thrive?

Obviously this sort of work cannot be done experimentally. Some of it is ethological, comparing humans with other primates. Some is done with computer simulations, introducing hypothetical mutations into an otherwise stable gene pool to see what happens to population distributions after many generations have passed. Some of it takes the form of pure thought experiment. All of it tries to account for empirical findings among human populations obtained by the usual psychological research methods.
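The simulation approach mentioned above can be sketched in a few lines. This is a toy illustration, not a model from any actual study: a haploid population in which a hypothetical mutant allele carries a small fitness advantage, with each generation resampled from the last so that both selection and drift operate. All parameter values are invented for illustration.

```python
import random

def simulate(pop_size=1000, generations=200, fitness_advantage=0.05, seed=1):
    """Toy Wright-Fisher-style simulation: track the frequency of a
    hypothetical mutant allele carrying a small selective advantage."""
    random.seed(seed)
    freq = 1 / pop_size  # the mutation starts as a single copy
    for _ in range(generations):
        # selection: mutants are over-sampled in proportion to their fitness
        weighted = freq * (1 + fitness_advantage)
        p = weighted / (weighted + (1 - freq))
        # drift: the next generation is a random (binomial) sample
        mutants = sum(random.random() < p for _ in range(pop_size))
        freq = mutants / pop_size
        if freq in (0.0, 1.0):  # allele lost from, or fixed in, the pool
            break
    return freq
```

Running this repeatedly with different seeds shows the pattern evolutionary psychologists appeal to: most single-copy mutations vanish by chance, but advantageous ones that survive the early generations tend to spread through the population.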

Here’s an example from Richard Dawkins. If I have a child, half of its DNA is mine, half is the mother’s. Say there’s a fire. I can either save myself or my child. I might have all sorts of brave or cowardly thoughts running through my head at the moment of decision, but my genes have already decided: one complete copy of my DNA (me) is worth more than half a copy (my child), so my instinct is to save myself and let my child die. What if instead of one child I have three? Now it’s a choice between one full cohort of my genes versus 3 half-cohorts, which is the equivalent of 1.5 copies of the full DNA string. Now I’m instinctively drawn to sacrificing myself for the sake of the kids. But I’m only the father, and fathers can never be as certain as mothers that the children carry their DNA — so maybe apply a bit of a discount to the value per child. Note that this hypothesized calculation takes place beneath the level of conscious awareness. Fathers who sacrificed themselves for only one child wouldn’t have passed on as many copies of their genes as the fathers who saved themselves — so over time the prevalence of this self-sacrificial gene would have gradually dwindled in the gene pool.
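Dawkins’s back-of-the-envelope calculation can be written out explicitly. The function below is my own sketch of the arithmetic, not anything from Dawkins: it totals the expected full-genome equivalents preserved under each choice, with an optional paternity-confidence discount for fathers.

```python
def copies_preserved(save_self, n_children, relatedness=0.5,
                     paternity_confidence=1.0):
    """Expected full-genome equivalents preserved by a choice.

    Saving myself preserves 1.0 copies of my DNA; saving the children
    preserves n_children * 0.5, discounted by how confident a father
    can be that the children actually carry his genes.
    """
    if save_self:
        return 1.0
    return n_children * relatedness * paternity_confidence

# One child: 1.0 (me) vs 0.5 (child) -> the genes "vote" for saving myself.
# Three children: 1.0 vs 1.5 -> now self-sacrifice wins.
# A paternity discount can flip the three-child case back again.
```

With a paternity confidence of, say, 0.6, three children are worth only 0.9 full copies, and the instinctive calculus tips back toward self-preservation, which is the point of Dawkins’s discount.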

Mating preferences. Anatomical symmetry is associated with robust health, so your genes would associate sexual attractiveness with symmetry. Physical strength and stature would be attractive in men, who gather food for mother and dependent children. Cleverness might be even more important, since our advantage as a species isn’t so much physical as mental — crafting tools, figuring out good places to look for edible plants, anticipating migration patterns of game herds, etc. A woman with a narrow waist is probably not currently pregnant, so she’s worth inseminating. Wider hips would suggest both that the woman has reached sexual maturity and that her pelvis is wide enough for a newborn to pass through, and larger breasts suggest more lactation potential — hence the hourglass figure should be naturally attractive sexually to men. Women too would need to be clever in conserving food stores, using older children to help rear the younger ones, etc. People of high status and people with lots of friends might be able to call in favors during times of duress, so successful and popular people should be more attractive sexually. Men who demonstrate loyalty and nurturing abilities are more likely to stay with the woman after children are born, providing for them, keeping them alive — so this sort of one-woman man would be an attractive sexual partner.

The genes are all about propagating themselves to the next generation. A newborn is totally dependent on others for survival, so those adults who find children appealing are more likely to keep their children — and their genes — alive. Similarly, children who don’t make themselves overly obnoxious to their parents are less likely to be deserted — so a child who carries genes that stimulate her affection-seeking and obedience to adults’ rules would give her a survival edge over the unresponsive and rebellious child. Once the child reaches puberty the game changes. Now the child can reproduce and carry the genes along, and can also provide for herself rather than being dependent on adults. Consequently it’s more important for the adolescent to make herself attractive as a potential mate — physically, emotionally, socially — than to continue to placate her parents.

The Nurture Assumption

[This post is based mostly on The Nurture Assumption, 1998 by Judith Harris.]

Nature versus nurture? We know how to answer such questions: a little of both. As a parent I wonder how much impact my crappy parenting is likely to have on my daughter’s development. We assume there’s a big parenting impact, but even our own experiences aren’t so clear-cut. My father isn’t a particularly warm person, and neither am I. Did I learn to be that way from him, or did I inherit his coolness genes? Similarly, my daughter doesn’t much like hugging, and neither do I. Nature or nurture?

Since most of us are raised by our biological parents, it’s hard to parse apart the influences. All is not lost, though, when we live in a society where it’s possible to identify identical twins who were separated in infancy and raised in different homes. The gist is this: as adults, twins raised apart are quite similar to one another on all sorts of personality measures. So what about twins raised together, in the same home, by the same parents — how similar must they be? In fact, says Harris, they are no more alike than identical twins raised apart. Behavioral genetic studies continue to show that the family home has few, if any, lasting effects on the people who grew up in it.

In the sixties psychologists studied the effects of three contrasting parenting styles: authoritarian, permissive, and authoritative — or what Harris calls too hard, too soft, and just right. The “just right” combines the love and approval of the “too soft” with the setting of enforceable limits of the “too hard.” In short, Just Right parents are exactly what end-of-the-twentieth-century middle-class Americans of European descent think that parents ought to be. There is modest empirical support for the conclusion that children of Just Right parents get along better in social situations and do better in school. However, in Asian-American families the Too Hard parenting style works just as well as the Just Right. In Asian cultures, strict parenting is the cultural norm; in America it’s not. American parents who employ the Too Hard method either are out of touch with cultural expectations — i.e., the parents are social misfits or prone to aggression — or else their kids don’t respond well to the Just Right style — i.e., they’re temperamentally resistant to ordinary techniques of socialization. Or both: inappropriately aggressive adults pass their genes on to the next generation, producing inappropriately aggressive children.

In recent posts we discussed findings showing that parents who are attuned to their infants establish a more effective learning environment for their children. The child senses her similarity to her parent, making it easier to take the parent’s perspective in joint attention tasks. Also, consistent emotional attunement provides a more secure base from which the child can explore the environment. In fact, the attentive, supportive, empathic attitudes and behaviors these parents exhibit are precisely the kind that we would expect to see their children exhibit. Maybe these parents’ children simply inherit their parents’ orientation genetically rather than learning it from them. Again, the twins-reared-apart studies show no measurable lasting effects of parenting techniques.

Children raised in broken homes are more likely to have difficulties in maintaining stable marriage-type relationships. Again Harris looks at studies of twins and siblings.

The analysis churned out by the researchers’ computer was boringly similar to those of other behavioral genetic studies: about half of the variation in the risk of divorce could be attributed to genetic influences — to genes shared with twins or parents. The other half was due to environmental causes. But none of the variation could be blamed on the home the twins grew up in. Any similarities in their marital histories could be fully accounted for by the genes they share. Their shared experiences — experienced at the same age, since they were twins — of parental harmony or conflict, of parental togetherness or apartness, had no detectable effect… Don’t look for a divorce gene. Look instead for traits that increase the risk of almost any kind of unfavorable outcome in life. Traits that make people harder to get along with — aggressiveness, insensitivity to the feelings of others. Traits that increase the chances they will make unwise choices — impulsiveness, a tendency to be easily bored.
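The variance partition behind these behavioral-genetic claims is typically estimated from twin correlations. Here is a minimal sketch using Falconer’s classic approximation; the input correlations are invented round numbers chosen to reproduce the “half genes, no shared home” pattern, not figures from any study Harris cites.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's approximation: partition trait variance into
    A (additive genetic), C (shared/family environment), and
    E (nonshared environment) from the correlations between
    identical (MZ) and fraternal (DZ) twin pairs."""
    a2 = 2 * (r_mz - r_dz)   # heritability
    c2 = r_mz - a2           # shared (family) environment
    e2 = 1 - r_mz            # everything else, incl. measurement error
    return a2, c2, e2

# e.g. r_MZ = 0.50, r_DZ = 0.25 -> A = 0.5, C = 0.0, E = 0.5:
# half the variation genetic, none attributable to the shared home.
```

When identical twins correlate at exactly twice the fraternal-twin rate, the shared-home component drops out entirely, which is the shape of result Harris reports across outcomes from personality to divorce risk.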

Constructing the Self

[My primary source for this post is The Construction of the Self, 1999 by Susan Harter]

The young child becomes able to recognize her own reflection in a mirror at about the same time that she begins saying things like “that’s mine” or “look at me” or “my name is Anna.” Also at around the same time, the child begins using language to represent parental rules and expectations and her ability to meet them. By about age 3 1/2 the child becomes able to construct a verbal self-narrative that includes both self-evaluations (“I am a good girl”) and remembered events (“Mommy yelled at me yesterday”). Self-description enables the child to develop a stable sense of self, but it also drives a wedge between self as lived and self as verbally represented, which can be distorted to conform to others’ expectations and to one’s own fantasies.

In general, young children feel pretty good about themselves. As they develop they get better at doing things that matter to them. They tend to confound actual with desired competency, leading them to overestimate themselves. Beginning in middle childhood, the perceived gap between actual and ideal self widens. Increasingly children compare themselves with their peers, and this source of evaluation comes to compete with (though not to usurp) parental evaluations. In adolescence the child becomes increasingly introspective and morbidly preoccupied with what others think of her. The adolescent realizes that she presents herself differently, and is evaluated by others differently, as she passes from one social context to another. By observing and internalizing multiple perspectives on the self, the adolescent develops a more accurate understanding of her strengths and weaknesses at the same time as she develops higher standards for herself. Self-concept becomes unstable, self-contradictory, multiple: “Which one is the real me?” By late adolescence self-esteem tends to go back up, as the individual exercises greater autonomy, chooses to elicit social support from those who hold her in high regard, and becomes more adept at balancing multiple social roles.

Children evaluate themselves largely in terms of competencies valued by themselves and significant others (parents, close friends, peer group). In Western cultures those valued competencies are, in order: physical appearance, scholastic competence, social acceptance, behavioral conduct, and athletic competence. Perception of one’s competency in these domains is more important than the person’s actual competency.

More physically attractive infants get more positive attention from adults. In middle childhood kids generally think they look pretty good. The importance of appearance to self-worth increases through adolescence. Boys don’t tend to change their ratings of their own attractiveness as they get older; girls, on the other hand, show a continual deterioration throughout adolescence in their own perceived attractiveness. Girls also regard physical attractiveness as more important than do boys. Not surprisingly, girls’ perceived self-worth deteriorates through the adolescent years. This is especially true for stereotypically feminine girls; self-esteem among more androgynous girls is less closely related to physical appearance and doesn’t decline significantly over adolescence.

Kids with higher levels of approval and support from significant others have higher self-worth. Through evaluations of changes over time, child development researchers have constructed a causal model for predicting adolescent self-worth and mood:

  • Physical appearance, likability by peers, and athletic competence lead to peer approval and support.
  • Scholastic competence and behavioral conduct lead to parental approval and support.
  • Approval and support from peers and parents lead to self-worth, hopefulness, and cheerfulness.

Depression and anger are associated with low self-worth. Adolescent depression tends to be caused by the same factors that cause low self-worth: dissatisfaction with one’s physical appearance, competence, or social interactions. Rejection from and conflict with peers is a primary source of depression, anger, and low self-worth. Parental conflict and rejection is much less strongly associated with adolescent anger and depression.

The Specular Image and the Social Self

In 1949 Jacques Lacan wrote an essay called “The Mirror Stage,” in which he outlined a theory of how children develop a sense of self. The idea derives from the young child’s ability to recognize his own image in a mirror.

This event can take place, as we have known since Baldwin, from the age of six months, and its repetition has made me reflect upon the startling spectacle of the infant in front of the mirror. Unable as yet to walk, or even to stand up, held tightly as he is by some support, human or artificial, he nevertheless overcomes, in a flutter of jubilant activity, the obstructions of his support and, fixing his attitude in a slightly leaning-forward position, in order to hold it in its gaze, brings back an instantaneous aspect of the image.

Lacan offers a nicely nuanced empirical observation. Problem is, it’s not true. Researchers have consistently found that a child doesn’t recognize his reflected image until 18 to 24 months (see yesterday’s post) — after standing, after walking, after the beginning of language acquisition. But we continue. Lacan says that the child identifies with his own reflection.

This jubilant assumption of his specular image by the child at the infancy stage, still sunk in his motor incapacity and nurseling dependence, would seem to exhibit in an exemplary situation the symbolic matrix in which the I is precipitated in a primordial form, before it is objectified in the dialectic of identification with the other, and before language restores to it, in the universal, its function as subject.

Again Lacan’s developmental sequence is off. A few posts ago we saw how as early as 9 months the child enters into the referential triangle of self and adult jointly orienting themselves toward the world. Learning within the triangle, the child already participates in the “dialectic of identification with the other,” seeing himself as being similar to the other in intentionality and orientation toward the world. Thus the child can follow the adult’s pointing finger to an object, and even the verbal instruction to look at the named object, long before the child can recognize his own reflection.

The important point, says Lacan, is that this specular self-image situates the agency of the ego, before its social determination, in a fictional direction. It is the “ideal-I,” a sense of the self as a whole and integrated being rather than a chaotic assembly of body parts and the seemingly random motion that animates them. But it’s an exterior, two-dimensional view, an “image,” an imaginary unity. Whatever the child subsequently learns about himself through social interaction and language will never replace the specular image. He ends up internally doubled, the socially-constructed I forever alienated from the specular I, the I of reality always lacking in comparison with the ideal-I that precedes it. Subsequently the child attempts to construct a unified and autonomous self to match the ideal-I. But it’s futile, resulting in an ego that is a rigid, dead, hollow superstructure, like a fortress or a mannequin, accompanied by the paranoiac fear of total self-dissolution.

But empirical findings summarized by Tomasello strongly suggest that the self-image first emerges in social interaction. In early infancy the child learns to take the other’s perspective in jointly attending to the world. If the other points to the child, then the child begins to see himself as something in the world that the other can recognize. It’s likely that this prior social self-pointing gives the child the self-objectification necessary to recognize himself in the mirror.

So, Lacan locates the origin of neurosis in the sense of loss: a primal self-integration and plenitude that’s been lost, perhaps stolen, in social and linguistic interaction with others. The self then becomes motivated both to recover the lost sense of self and to compensate for the hole in the self where the integrated self used to be. But if the sense of self emerges first from social-linguistic interaction, then this specular, imaginary, fictional, ideal sense of self, if it exists at all, would have developed after and as an artifact of the socially constructed sense of self. The cascade of effects for Lacanian psychopathology would seem profound.

Complexity is Built on Scaffolding

It is a point so obvious that it is seldom, if ever, mentioned. If children did not have available to them adult instruction through language, pictures, and other symbolic media, they would know the same amount about dinosaurs as did Plato and Aristotle, namely, zero.

– Tomasello, The Cultural Origins of Human Cognition, 1999

What we know depends almost entirely on the accumulated knowledge of the cultures we live in, transmitted mostly through language. Even our own experiential knowledge is shaped largely by conceptual categories we acquire through language, categories we probably wouldn’t have come up with on our own. Kids learn quickly, but not automatically. People don’t always agree, so the child is exposed to alternative explanations. The child may misunderstand or disagree with what someone else has to say, so she may seek clarification or argue in an effort to refine her understanding. The child’s knowledge is also subjected to verbal critique by adults who (presumably) know better. Knowledge acquisition isn’t just linear and cumulative; it also demands evaluation and alignment of perspectives.

Children reach 3 to 5 years of age before they realize that others have different ideas and beliefs from their own. Still, the process for acquiring this sort of knowledge about other minds is similar to acquiring knowledge about the world: joint attention, contextual framing, dialogue, exposure to alternative views, clarification, argumentation, critique. Exposure to conflicting ideas of other young children seems particularly important in understanding that others have minds similar to, yet different from, their own.

Understanding other minds requires the child to take the perspective of the other, simulating in her own mind the thoughts the other might be thinking. Between 5 and 7 years children begin monitoring and managing the impressions they make on others: “She thinks that I think X.” This observation requires perspective-taking at a second remove: my perspective on her perspective on my thinking. I remember my daughter asking her friend at around age 7: “Do you know how you sound when you say that?” This is third remove: my perspective on her perspective on my perspective on her thinking.

The child also begins to develop the ability to hold multiple perspectives on something in mind at the same time. For example, analogy and metaphor depend on grasping the literal meaning of a verbal expression and simultaneously applying it in a figurative context. The child also becomes reflexively self-aware, observing her own perspective, describing it to herself, consciously trying to make it more systematic.

It’s possible that a child develops a sense of self by seeing and imitating how others see her. It seems more likely, based on the natural progression of cognitive development, that the child develops a sense of others by simulating from within the self what it might be like to be the other. Then, from within the simulated other’s perspective, the child can begin to see herself as others see her. Finally, the child can self-reflect, seeing herself as if she were an other. The child’s understanding — of others, of culture, of self — advances from simple to complex, always building on joint attention and contextual framing within the basic referential triangle of self, other, and world. Learning depends on scaffolding — just like Odile has been telling us.

The Veil

[I previously posted one of our brochures from when we sold our house in Colorado. Here’s the first one in the series — photo by Anne Doyle. For me it evokes heterotopia and apokalypto and the eternal return.]

Imagine a part of the world where houses are indistinguishable one from another. (I have been to such places.) Every house the same architectural style, the same color. All the houses on a block run together: it’s not clear where one house ends and the next begins. No street addresses.

Encountering the indistinguishable exteriors, an onlooker is tempted to infer that the occupants of these houses likewise are indistinguishable one from another.

This would be a mistake.

Inside, each house explodes in a riot of diversity. Strange food preparation rituals bring forth delicacies unknown in the bazaars. Harem girls sigh behind perfumed silken curtains, while eunuchs play games of chance for stakes meted out in drams, essences, human souls. Someone writes a history of times that never were in a language that has never been spoken. To one entering such a home no personal favor can be denied, for this visitor has been inside and can never forget.

When the Jews entered the land of promise, their God instructed them to build a tabernacle. Rare and beautiful decorations were to be placed in the tabernacle: purple and scarlet cloth, acacia wood and porpoise skins, altars for burnt offerings and peace offerings, anointing oils and fragrant incense, onyx stones and golden rings. Within the tabernacle God would establish His dwelling place. In His home were placed the Ark of the Covenant, containing the stone tablets brought down from the mountain. The Ark was crowned by the mercy seat, from above which Yahweh had spoken to His people during their long desert sojourn. Ark, tablets, mercy seat – all were to be placed behind a veil, separating the Holy of Holies from the place of meeting. None but the high priest could enter behind the veil and live, so overwhelming was the presence of the Most High.

From outside, the Holy of Holies revealed none of its mysteries. All power and glory were concealed from view, behind the veil.

* * * *

5695 Aurora Place, Boulder

A modest exterior cloaks the mystery residing within.

  • Architectural features and color schemes accentuate the horizontal axis, subtly disguising what is actually the tallest structure on the street.
  • The bumpy texture of the stucco exterior creates a virtually nonreflective surface, a principle embodied in “stealth” technologies for surveillance aircraft and in acoustically optimized sound stages.
  • Taupe — drawing from the palette of the Flatirons and surrounding grasslands, the home camouflages itself without resorting to neutrality.
  • Wood shakes, grandfathered into compliance with code, naturalize the two-level roof line.
  • Positioned at the “back of the sac,” the home is veiled from the onlooking gaze of pedestrians and motorists traversing the through street.
  • All interior spaces are up, back, to the side — concealed from street-level view.
  • Trees shield the front porch from the passing parade; you can see without being seen.
  • Open space in the back — even voyeurs equipped with high-powered telescopes positioned on the other side would find it impossible to verify the presence of nudes in the hot tub.

Language and Contextual Framing

Invoking language as an evolutionary cause of human cognition is like invoking money as an evolutionary cause of economic activity… Just as money is a symbolically embodied social institution that arose historically from previously existing economic activities, so language arose historically from previously existing social-communicative activities.

Yesterday’s post discussed Tomasello’s “referential triangle” of infant and adult jointly attending to objects in the world. It is on this social foundation that the child learns to use language. Language is a social act: participants in the referential triangle invoke socially shared symbols to construe phenomena that are the objects of joint attention. In order to use language effectively, the child must do more than manipulate the linguistic symbols themselves. She must also be able to see herself from the adult’s perspective in the joint attentional scene in which language is being used.

In learning by imitating an adult, the child effectively substitutes herself for the adult. But when the adult uses a new word in reference to an object, imitation doesn’t work. That’s because the adult uses the word to direct the child’s attention toward some aspect of the world. If the child imitates the adult’s language use, she ends up speaking the new word to herself. What’s needed is “role-reversal imitation”: the child must direct the word toward the adult in the same way the adult directed the word toward the child. Besides substituting herself for the adult in using the word, the child must also substitute the adult for herself as the target of the intentional act of speaking.

The joint attentional scene is the child’s learning laboratory for acquiring language. Children who spend more time with their mothers in joint attentional activities between 12 and 18 months of age have larger vocabularies at 18 months. Vocabulary growth is even stronger if the mother describes in language what the child is already attending to rather than using words to redirect the child’s attention. This maternal tracking of the child’s activities has scaffolding value in very early language acquisition, but it fades in importance as the child becomes more adept at determining communicative intentions in more ambiguous and varied learning contexts. Children quickly learn to use words appropriate to the contextual frame in which the conversation is embedded; e.g., by calling the same piece of real estate the shore or the coast or the beach, or by referring to a particular object as wet or blue or mine. A child can overlay a given scene with any number of alternative contextual frames, choosing language accordingly.

The point is not just that linguistic symbols provide handy tags for human concepts or even that they influence or determine the shape of those concepts, though they do both of these things. The point is that the intersubjectivity of human linguistic symbols — and their perspectival nature as one offshoot of this intersubjectivity — means that linguistic symbols do not represent the world more or less directly, in the manner of perceptual or sensory-motor representations, but rather are used by people to induce others to construe certain conceptual/perceptual situations — to attend to them — in one way rather than another. The users of linguistic symbols are thus implicitly aware that any given experiential scene may be construed from many different perspectives simultaneously, and this breaks these symbols away from the sensory-motor world of objects in space, and puts them instead into the realm of the human ability to view the world in whatever way is convenient for the communicative purpose at hand.

Tomasello’s empirical evidence strongly suggests that joint orienting of interpretive horizons isn’t just a hermeneutical device for bridging cultural gaps in understanding one another. Joint orientation is the foundational context for infants’ language acquisition. Language users don’t just see the world from a single perspective; they can frame the same situation in many different ways, depending on conversational context. This capacity for contextual flexibility and the ability to take the other’s perspective in the joint attentional scene are the skills necessary for learning to understand each other in conversation. These are skills we all acquired in early childhood when we were first learning to use language. Consequently there’s hope that the adult therapeutic client can draw on these basic skills in becoming a more effective interpreter of others’ behaviors and intentions — as well as his own.

The Referential Triangle

What is the self? Philosophers, theologians, and therapists offer various perspectives. I’m going to summarize some of the empirical findings related to self, beginning with Michael Tomasello’s book The Cultural Origins of Human Cognition.

There’s a 99% overlap in the DNA sequences of humans and chimpanzees. There’s just one major difference, says Tomasello, and that is the fact that human beings “identify” with conspecifics more deeply than do other primates. The young child comes to recognize herself as an “intentional agent,” with goals and strategies for attaining her desires. Soon thereafter, the child experiences herself as a “mental agent,” having thoughts and beliefs that differ from other people and from the rest of the world. The child thereby comes to recognize that others are also intentional and mental agents like herself. This is the big difference from other apes, who individually are intentional and mental agents but who don’t seem to realize that their fellow apes are too. In interacting with the physical world, humans understand cause-effect relationships at a much deeper level than do other primates. Because of the unique human ability to understand intentionality and causality, people can make tools, learn from one another, and cooperate in performing complex tasks.

At first the human infant’s developmental trajectory isn’t much different from that of other apes. Then comes “the nine-month revolution,” which begins the cascade of developmental achievements that definitively mark humans as unique. At six months an infant will interact with objects in the world, and she will interact one-on-one with another person. At around nine months the infant begins attending jointly to objects and people, forming a referential triangle of child, adult, and the object or event to which they share attention… In short, it is at this age that infants for the first time begin to “tune in” to the attention and behavior of adults toward outside entities.

Joint attention triggers a series of related achievements. By 12 months a child can follow the adult’s point or gaze toward an object or event — even adult chimpanzees can’t do that. By 15 months the child can direct the adult’s attention by pointing. The young child develops an awareness of intentionality in herself and in the adult, but it’s not clear whether self-awareness or other-awareness comes first. The sense of both self and other as intentional beings seems to emerge simultaneously. Tomasello elaborates on the importance of joint attention:

Human beings are designed [sic] to work in a certain kind of social environment, and without it developing youngsters (assuming some way to keep them alive) would not develop normally either socially or cognitively. That certain kind of social environment we call culture, and it is simply the species-typical and species-unique “ontogenic niche” for human development.

Through joint attention the child enters into the “habitus” of the people among whom she grows up — the kinds of living arrangements, routine activities and normal social practices that comprise the child’s “raw materials” for learning. The adult human inducts the young child into the habitus by drawing her attention to its components, demonstrating routine behaviors of the culture, and helping her perform some of the ordinary childhood activities. Tomasello believes that joint attention also facilitates the development of self-awareness during this same revolutionary developmental interval:

The idea is this. As infants begin to follow into and direct the attention of others to outside entities at nine to twelve months of age, it happens on occasion that the other person whose attention an infant is monitoring focuses on the infant herself. The infant then monitors that person’s attention to her in a way that was not possible previously… From this point on the infant’s face-to-face interactions with others — which appear on the surface to be continuous with her face-to-face interactions from early infancy — are radically transformed. She now knows she is interacting with an intentional agent who perceives her and intends things toward her. When the infant did not understand that others perceive and intend things toward an outside world, there could be no question of how they perceived and intended things toward me… By something like this same process infants at this age also become able to monitor adults’ emotional attitudes toward them as well — a kind of social referencing of others’ attitudes toward the self. This new understanding of how others feel about me opens up the possibility for the development of shyness, self-consciousness, and a sense of self-esteem. Evidence for this is the fact that within a few months after the social-cognitive revolution, infants begin showing the first signs of shyness and coyness in front of other persons and mirrors.

The empirical evidence supports the idea that a child simultaneously develops an awareness of causality, of others’ intentionality, and of the self. Joint attention in the referential triangle of child, adult, and object is the spark that sets off the developmental explosion. This developmental synchrony sets the stage for language acquisition…

Becoming a Really Good Other

I’ve been assuming that a therapeutic relationship is tilted toward the client. The therapist establishes a context of caring and trust for the client; topics of conversation stem from the client’s experiences; the process is intended to enhance the client’s life. And yet I have a sense that to focus too much attention on the client is to locate both the problem and the solution inside the self. If I believe that problems stem from misaligned interpretations between the client and others, then the solution seems to lie in interpretive realignment. This doesn’t mean that the client has to change his outlook; it does mean that the client has to become a more astute interpreter of his own and others’ words and actions. The therapist’s job, then, is to enable the client to loosen up his rigid interpretations so as to be able to see and create alternative interpretations.

Zeddies talks about the importance of the therapeutic relationship in making change possible.

The view that therapist and patient coconstruct meaning and understanding reflects the idea that the material that is recognized as meaningful, how it is discussed, and the understandings reached all emerge from the therapeutic relationship and dialogue… The clinical focus is expanded from that of trying to locate meaning inside the patient to include a thoroughgoing exploration of the relentlessly expanding and contracting relational process between therapist and patient. In this view, there is no strict division between inside and outside, here-and-now and then-and-there, fantasy and reality, intrapsychic and interpersonal.

Zeddies proposes that the relationship is a milieu that facilitates not just understanding of self and other but also the creation of new experience. This seems reasonable. If the focus of therapy is to enhance awareness of misaligned interpretations between people in relationship, what better way to explore the challenges of realignment than in relationship?

The implication is that the therapist’s job isn’t just to realign his own interpretive horizons so as to be able to see what the client sees. The therapist also must be the other in the conversation, manifesting an interpretive outlook that is different from the client’s. If the client is to become a more astute interpreter, he must learn to see what the other sees. In therapy the therapist is the other. To bring the client along in the process, the therapist needs to be self-aware of his otherness. He should be able to communicate his interpretations not as an expert dispenser of truth but as an other who knows how to communicate his otherness. He should be able to respond to the client’s questions without defensiveness, knowing that the client needs to learn how to ask such questions of the other. Only through the openness of the therapist-as-other can the client gain understanding, realign his own interpretive horizons, and reduce the frustration and alienation of chronically irresolvable misalignment with the other.

The therapeutic relationship can remain tilted in the direction of the client’s problems, experiences, understanding, loosening, realignment. But the therapist can’t just align his interpretive perspective with the client’s in order to offer care, safe support, empathy, and interpretation. The therapist also has to be an idealized other in the relationship. He must be able to recognize how the client responds to him as other, to see the client as the other sees him, to be able to explicate this other perspective with patience and openness. Only by retaining his otherness while also seeing the relationship as tilted toward the client can the therapist offer an opportunity for the client to get out of his own head and to negotiate a creative relationship. The therapist need not become the independent observer nor the client’s double; rather, the therapist needs to become a really good other.

Bicycle Traces

As I was walking along the beach yesterday I came across a little girl sitting on her bicycle. All of a sudden she tipped over. The first thing she did when she hit the ground was to turn her head and look over to the side. I followed her gaze. There, sitting on a bench, was a woman who was probably the girl’s mother. She smiled at the girl. The girl smiled back, picked up her bicycle, and hopped back on. No words were spoken.

You can imagine other reactions. Probably the most typical one is the mom with a worried look on her face running up to the little girl to see if she’s alright. Or perhaps a quick lecture on how to keep from falling over again. Maybe scoffing at the kid for falling. Or the mom might not have been paying attention, and so she wouldn’t have met the little girl’s gaze.

I mentioned this event to Anne. She said she had witnessed a nearly identical circumstance earlier in the day. A little boy riding his bike, training wheels still on, took a corner and fell off. His mom was with him. The kid got up, whimpering a little, and glanced at his mom. The mom, a stern look on her face, smacked him upside the head. The kid righted his bike and climbed back on. No words were spoken.

A little while after seeing the girl fall off her bike I heard a kid crying. A little girl was holding her dad’s hand as they headed toward the beach. The dad, grim-looking, held a tricycle in his other hand. I wonder what their story was.

The Father of Logos

Socrates engages in a dialogue with Phaedrus about the propriety and impropriety of writing:

Socrates: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific (pharmakon) both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. 
The specific (pharmakon) which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality…

I cannot help feeling, Phaedrus, that writing is unfortunately like painting; for the creations of the painter have the attitude of life, and yet if you ask them a question they preserve a solemn silence. And the same may be said of speeches. You would imagine that they had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves.

Phaedrus: That again is most true.

Socrates: Is there not another kind of word or speech far better than this, and having far greater power — a son of the same family, but lawfully begotten?

Phaedrus: Whom do you mean, and what is his origin?

Socrates: I mean an intelligent word graven in the soul of the learner, which can defend itself, and knows when to speak and when to be silent.

Phaedrus: You mean the living word of knowledge which has a soul, and of which the written word is properly no more than an image?

Socrates: Yes, of course that is what I mean.

* * * *

In “Plato’s Pharmacy” (1972) Derrida (psycho)analyzes Socrates’ mythic tale. Theuth presents his invention to Thamus as a gift offered by a vassal to his lord. He extols its value as a mnemonic aid. The king doesn’t know how to write, but he doesn’t need to: he can speak. He rejects and demeans Theuth’s gift, saying that its purported benefit is actually its greatest flaw. The cure (pharmakon) is really a poison (pharmakon).

The lord is the father who speaks the word (logos), says Derrida:

Not that logos is the father, either. But the origin of logos is its father. One could say anachronously that the “speaking subject” is the father of his speech… Logos is a son, then, a son that would be destroyed in his very presence without a present attendance of his father. Without his father, he would be nothing but, in fact, writing. At least that is what is said by the one who says: it is the father’s thesis.

In speech the word comes forth from the speaker like a son from a father. The spoken logos depends on the father’s wisdom and memory as the son depends on the father. But writing cuts the logos off from the speaker, the son from the father. You could even say that writing depends on the absence of the father — in effect, writing is patricide. Logos the son, now orphaned, is free: he no longer needs to rely on the father to be brought forth as speech. The son no longer needs the father’s memory — he no longer needs to remember the father — because he has absorbed the memories of the father into himself. Logos the son becomes autonomous.

Socrates agrees with mythical king Thamus about the inferiority of written words: “if they are maltreated or abused, they have no parent to protect them.” There’s a hidden threat in Socrates’ speech, like a mafia don offering his protection from a violence that he himself might inflict. But Socrates is also expressing his own fear and vulnerability. Writing is a poison, reaching back into the king’s memory and erasing it, killing the king from inside himself:

From the position of the holder of the scepter, the desire of writing is indicated, designated, and denounced as a desire for orphanhood and patricidal subversion. Isn’t this pharmakon then a criminal thing, a poisoned present?

Derrida sees in Socrates’ discourse the mythic origin of the “metaphysics of presence” that has dominated Western thought ever since.

In contrast to writing, living logos is alive in that it has a living father (whereas the orphan is already half dead), a father that is present, standing near it, behind it, within it, sustaining it with his rectitude, attending it in person in his own name. Living logos, for its part, recognizes its debt, lives off that recognition, and forbids itself, thinks it can forbid itself patricide… For only the “living” discourse, only a spoken word (and not a speech’s theme, object, or subject) can have a father… the logoi are the children. Alive enough to protest on occasion and to let themselves be questioned; capable, too, in contrast to written things, of responding when their father is there. They are their father’s responsible presence.

Already half dead, says Derrida. For Socrates, writing is cut off from the speaker, from the life that animates the writing. Yet the writing still speaks and remembers even in its father’s absence, even after his death, even after the son kills the father by emptying him of his words and his memories. Speech ends and memory fails, but the written word and the archive can go on forever. Half-dead eternal killer, never really present but not absent, writing is the speaker’s uncanny double. For writing has no essence or value of its own, whether positive or negative. It plays within the simulacrum.

Socrates recounted this myth of writing in a conversation with his student Phaedrus. The king, the father of speech, has thus asserted his authority over the father of writing. Plato, another of Socrates’ students and the father of the metaphysics of presence, is also the one who kills his master and father by writing down his logoi. So the metaphysics of presence from the beginning already contains its own death.

In Derrida’s interpretation, writing is no longer an expression of the author. A text is an autonomous thing, capable of speaking for itself without remaining under its father’s protection. But, Derrida insinuates, once you make the move of detaching the writing from the author, why stop there? What about speech? The speaker is the father of logos, but no one would know the father unless the son reveals him. Why not entertain the possibility that logoi reveal themselves from the beginning, that the speaker comes into being through the words that he speaks, that the father issues forth from the son?