24 March 2013

Wherein I Recall My Prior Life as a Mad Scientist

Filed under: Ktismata, Psychology, Reflections — ktismatics @ 8:10 am

If the glass is half full, that means it’s also half empty.

After finishing my doctorate I did a postdoc in an AI lab. These were the early, heady days of expert systems, a technology predicated on making explicit the tacit knowledge of human experts, converting the heuristics of human decision-making into conceptual objects and rules for manipulating them that could be run on computers. Our core group consisted of cognitive psychologists and computer scientists, and in building systems we would collaborate with “domain experts” in medicine, business, law, engineering, and other practical disciplines. A standard division of labor was established: the domain experts provided the expertise; the psychologists did the “knowledge engineering,” which consisted of making explicit what the experts knew and how they used that knowledge; the computer scientists designed and built the computer systems encoding the engineered expert knowledge.

Early on I came to a sobering realization: human experts aren’t nearly as good as computers at using knowledge. Humans have limited processing capacity, and so they can’t remember very many things at once, can’t pay attention to very many features of the task in front of them, can’t deal with very many variables at the same time. To compensate for their limitations, humans take various short-cuts and work-arounds in solving complex problems. Computers have limitations too, especially in their ability to acquire new knowledge, but in their ability to process lots of information they vastly outperform humans. Equipped with knowledge already learned by human experts, computers can manipulate this knowledge more efficiently, and more accurately, than can the human experts.

I remember giving a talk in DC to a gathering of all the AI postdocs funded under the same national grant program, working in labs at MIT, Harvard, Stanford, U. of Minnesota, UC San Diego, maybe others (my memory has degraded since then). Most of the talks were about AI work in progress. I talked about the differences between human and computer decision-making. Instead of fancy slides I drew overheads by hand with a black marker. I drew out a simple binary decision tree that went maybe 7 layers deep, pointing out ways in which knowledge and logic interact in actual decision-making tasks, describing how computers are not vulnerable to the same sorts of biases as humans in working through even a fairly simple decision. I remember a colleague from my university telling me afterward that he thought my talk sucked. But I also remember discussing the implications of my presentation with the national head of the grant program, one of the pioneering figures in expert systems. It turned out that his group was moving away from having computers imitate human heuristic knowledge toward more reliance on what computers are best at: manipulating numerical information via quantitative algorithms.
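The point of the hand-drawn tree can be sketched in a few lines of code. This is a reconstruction of the general idea, not the talk itself: all the values are randomly generated, and the "human-style" strategy (commit one branch at a time on a shallow estimate) is a stand-in for whatever heuristic shortcut a limited-capacity decision-maker might use. A computer can afford to walk every path of even a 7-level tree and take the true optimum; the shortcut can only ever tie it.

```python
import random

random.seed(1)

DEPTH = 7  # roughly the depth of the tree I drew by hand

def build_tree(depth):
    """Leaves hold payoffs in [0, 1]; internal nodes hold two subtrees."""
    if depth == 0:
        return random.uniform(0, 1)
    return (build_tree(depth - 1), build_tree(depth - 1))

def exhaustive_best(node):
    """What the computer does: walk every path and take the true optimum."""
    if not isinstance(node, tuple):
        return node
    return max(exhaustive_best(branch) for branch in node)

def sample_leaf(node):
    """Beyond the lookahead horizon, guess a subtree's value from one random path."""
    while isinstance(node, tuple):
        node = random.choice(node)
    return node

def estimate(node, horizon):
    """A value estimate that only looks `horizon` levels ahead."""
    if not isinstance(node, tuple):
        return node
    if horizon == 0:
        return sample_leaf(node)
    return max(estimate(branch, horizon - 1) for branch in node)

def humanlike_best(node, lookahead=2):
    """Human-style shortcut: commit one branch at a time on shallow estimates."""
    while isinstance(node, tuple):
        left, right = node
        node = left if estimate(left, lookahead - 1) >= estimate(right, lookahead - 1) else right
    return node

tree = build_tree(DEPTH)
# The exhaustive search never does worse than the shortcut.
print(exhaustive_best(tree) >= humanlike_best(tree))
```

The asymmetry only grows with depth: the shortcut's cost stays roughly constant while the exhaustive search doubles per level, which is exactly the trade a machine can afford and a human can't.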

While I did some work on a pediatric cardiology expert system, I spent most of my time as a postdoc doing knowledge engineering on two other projects. One was a system for designing so-called fractional factorial experiments, where the domain expert was a statistics professor in the business school. The other was a system for making credit decisions, the domain expert being a professional credit analyst in the insurance industry. In both cases, through conversation and observation, I was gradually able to identify the information the experts looked for in the “task domain” and the ways in which they used this information to render decisions. As had been the case in other domains, these experts used short-cuts and rules of thumb to compensate for human processing limitations. I put together alternative “inference engines” for both of these task domains, with decision-making processes predicated on the heavy number-crunching capacity of computers. I also went ahead and did the programming on both of these systems.
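The contrast between the two styles of inference engine can be illustrated with a toy credit decision. Everything here is invented for the sake of the example: the features, thresholds, and weights reconstruct nothing from the actual systems, only the difference in shape between expert rules of thumb and an algorithm that weighs every variable at once.

```python
# A hypothetical applicant with made-up features.
applicant = {"debt_ratio": 0.35, "years_in_business": 12, "late_payments": 1}

def heuristic_decision(a):
    """Expert-style rules of thumb: a few salient cues, checked in order."""
    if a["late_payments"] > 2:
        return "decline"
    if a["debt_ratio"] > 0.5:
        return "decline"
    if a["years_in_business"] >= 10:
        return "approve"
    return "refer"  # punt the hard cases to a human

def algorithmic_decision(a, threshold=0.6):
    """Number-crunching style: weigh all the variables at once and score them."""
    score = (0.4 * (1 - a["debt_ratio"])
             + 0.3 * min(a["years_in_business"], 20) / 20
             + 0.3 * max(0.0, 1 - 0.25 * a["late_payments"]))
    return "approve" if score >= threshold else "decline"

print(heuristic_decision(applicant))    # approve
print(algorithmic_decision(applicant))  # approve
```

On easy cases the two agree; the divergence shows up on borderline applicants, where the rules of thumb discard information that the weighted score still uses.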

The results should have been predictable. Both the experimental design system and the credit rating system were excellent at performing their respective tasks. Where it was possible to evaluate their decisions in comparison with the “right” answers, the computer systems outperformed the human experts. The human experts acknowledged their machinic doubles’ excellence, even at times conceding their superiority. But they didn’t trust these hybrid expert systems, which drew on the experts’ own knowledge but processed it algorithmically rather than heuristically. They couldn’t understand how these systems thought, how they arrived at their decisions. The systems’ reasoning procedures, more efficient, more consistent, and arguably more accurate than their own, were too opaque, too alien for the human experts to grasp. I concluded that the only way systems like the ones I built would ever be used in real-world decision-making would be if the human experts weren’t sitting around looking over the expert systems’ shoulders second-guessing their decisions. You would need lower-level human technicians to feed the computer systems with data, to read the output, and to enact the systems’ decisions without constantly grousing about robots ruling the world and all the rest of the tedious all-too-human resentment my systems seemed to provoke.

16 March 2013

The Brain’s Glass is Half Full

Filed under: Ktismata, Psychology — ktismatics @ 6:06 am

My brain doesn’t have to understand its own workings in order to work. Even a frog can see a fly, hop toward it, and catch it mid-flight with its tongue, all without knowing how its neuromuscular apparatus accomplishes these feats. I don’t know through introspection how I see and run and catch a ball, how I feel warmth or hunger or sexual arousal, how I understand spoken language or remember the name of my elementary school. Why should I expect my ability to decide and to take intentional action to be any more accessible to introspection than any of these other neurological functions?

Humans are at least partially aware of their own limitations. I don’t have much body fur, but if I turn on the heat inside and put on a coat when I go out I can survive in a cold climate. I can’t outrun a zebra, but if I get in my Jeep and drive after it I can overtake it. I have a hard time remembering a 9-digit number, and even then my memory degrades rapidly, but if I write the 9 digits down I can retrieve them when I need them. Humans build and use tools largely to compensate for their mental and physical limitations: this ability is paradigmatic of human intentionality.

Cognitive psychology as an empirical subdiscipline emerged in the late 60s not from philosophical idealism but from behaviorism, which regarded all behavior as an automatic stimulus-response mechanism unmediated by thought. Cognitive psychology presented empirical evidence supporting the alternative contention that there is a black box intervening between S and R, processing inputs and preparing outputs. Neurologists are exploring more directly how the black box works. But explanation won’t change functionality. When Copernicus figured out that the earth rotates on its axis and revolves around the sun, and when Galileo confirmed the heliocentric system observationally, people didn’t suddenly spin off the surface of the world and float into space, nor did they suddenly stop seeing the sun rise in the east and set in the west. If a satisfactory empirical explanation of intentionality is achieved, that won’t mean that people will suddenly stop intending or realize that they’d never in their lives actually intended anything.

13 March 2013

Intentionality as Adaptive Mutation

Filed under: Ktismata, Psychology — ktismatics @ 5:18 pm

[This post follows Why Life? and Reducing the Intentionality Problem, my prior posts on Terrence Deacon’s 2012 book Incomplete Nature.]

I don’t know why or how life evolved from nonlife. Other self-organizing systems, like heat convection or atmospheric currents, are dissipative structures that accelerate the production of entropy in far-from-equilibrium conditions. Organisms do it too, maintaining their negentropic functions by using free energy from their environment, thereby accelerating overall entropy. Maybe that’s what organisms are for: to accelerate the inevitable heat death of the universe. Certainly humans are highly efficient entropy production systems, using not just their own bodily metabolisms but the artifacts they create to suck free energy out of the universe, replacing it with waste, exhaust, and other entropic byproducts.

Regardless of how and why they came into existence, organisms do maintain and reproduce themselves. Organisms that through random mutations achieve incrementally better abilities to obtain access to free energy and to metabolize that energy are more likely to survive and to reproduce. A bacterium doesn’t have to have intentions motivating it to waggle its flagella in search of sunlight and nourishment. A bacterium is a self-organizing system: it spontaneously perpetuates its own equilibrium by means of genetically encoded drives that are sensitive to indicators of environmental energy sources. Presumably it’s cause-effect all the way down.

Suppose the environmental sources of metabolic energy — food — available to an organism are uncertain, quantities are limited, and access is difficult. If following its genetic program the organism pursues an unfruitful path toward food, it will die. Suppose this organism carries a set of mutations that permits it to evaluate the relative likelihood of finding food by pursuing different uncertain trajectories. Suppose the organism is further mutated such that it is able to identify and work around obstacles standing between itself and the food source. These mutations would be adaptive, enhancing the organism’s survival odds, if the extra energy expended in the exercise of its mutated food-finding abilities is more than offset by increased access to sources of energy replenishment.
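The adaptive condition amounts to a back-of-the-envelope energy budget. With made-up numbers purely for illustration: the mutation pays off only if the extra energy it harvests exceeds the extra energy it burns.

```python
def net_energy(harvested, metabolic_cost):
    """Energy available for survival and reproduction after paying running costs."""
    return harvested - metabolic_cost

# Baseline organism: follows its fixed genetic program.
baseline = net_energy(harvested=100, metabolic_cost=80)   # net 20

# Mutant: evaluating routes and working around obstacles costs extra
# energy, but raises the expected harvest.
mutant = net_energy(harvested=140, metabolic_cost=95)     # net 45

adaptive = mutant > baseline
print(adaptive)  # True: the mutation is worth its running cost
```

Flip the numbers — a brain too expensive for the extra food it finds — and the same inequality marks the mutation as maladaptive.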

This whole mutated apparatus is still following straight cause-effect, motivated by genetic instincts attuned to environmental affordances. There is still no need to invoke intentionality. Even if through more mutations this organism became aware of its own enhanced food-finding capabilities, the self-awareness does not imply or require intentionality. I’m aware that I’m presently digesting my supper, but that doesn’t imply that digestion is the result of my intentions.

What if some further mutation occurred in which the organism does achieve intentionality? This mutant creature plans for its next meal even when it has no immediate need to replenish its energy stores, even when there are no signs of food being present in the organism’s immediate environment. Would this mutation prove adaptive? The same conditions are in effect: if intentionality works, and if the exercise of intentionality more than replaces the calories it burns up, then it should enhance the organism’s survival. Is intentionality a straight-ahead cause-effect mechanism? I think it would be better to regard it as a mechanism that anticipates cause-effect based on prior experience — a temporal feed-forward loop. Intentionality is predicated on the anticipated desirable future effects of causal mechanisms that the organism itself puts into operation: if I cause myself to go to the watering hole, this action will probably result in my finding some food there; if my speed covering the distance to the watering hole causes two hours to elapse, then as a result I will probably be hungry by the time I arrive there.
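The feed-forward loop can be sketched minimally: the organism holds a model, built from prior experience, mapping actions to anticipated effects, and selects the action whose anticipated effect it most desires — evaluation happens before the cause is set in motion, not after. The actions, effects, and desirability values below are all invented for illustration.

```python
# Prior experience: which action has led to which effect.
learned_model = {
    "go_to_watering_hole": "find_food_in_two_hours",
    "stay_put": "stay_hungry",
    "wander_randomly": "maybe_find_food",
}

# How desirable each anticipated effect is to the organism.
desirability = {
    "find_food_in_two_hours": 0.9,
    "maybe_find_food": 0.4,
    "stay_hungry": 0.0,
}

def intend(model, values):
    """Feed-forward: rank actions by the value of their anticipated effects,
    then commit to the best one before any of those effects occur."""
    return max(model, key=lambda action: values[model[action]])

print(intend(learned_model, desirability))  # go_to_watering_hole
```

The key feature is that the causal chain runs through the model first: the anticipated future effect selects the present action, which is what distinguishes this loop from a straight stimulus-response mechanism.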

Another mutation: the organism becomes aware of other organisms’ techniques for finding food, whether those techniques are intentional or not. This organism observes a creature locomoting in some direction and infers that the creature is on the trail of some food source; it then follows the creature in search of its own food. It observes a creature evading complicated obstacles to obtain food; it imitates the other creature’s behaviors and secures its own food. This organism would need the sort of intentionality that enables it to infer that the other creature’s motivated behavior is relevant to its own motivations and therefore worth imitating as a cause that will likely generate a desired effect. Adaptive? Same rules apply. Cause-effect? The feed-forward loop of intentionality is augmented by a feedback loop of observing and imitating others’ behaviors.

In short, intentionality can be built incrementally on unintentional survival mechanisms without transcending cause-effect, and intentionality offers survival benefits if it isn’t too much of an energy drain to operate.

7 March 2013

Limitations to the Cleverness of Squirrels

Filed under: Psychology, Reflections — ktismatics @ 1:21 pm

Recently we replaced the old bird feeder, which had been gnawed beyond functionality by the squirrels, with a new supposedly squirrel-proof model. The design is fairly ingenious. Like ordinary feeders, the cylindrical tube containing the seeds has holes drilled into its sides with pegs mounted under the holes, allowing birds to perch while extracting seeds through the holes with their beaks. This feeder has a separate shell surrounding the cylinder, spring-mounted so that when a creature heavier than a bird climbs onto the feeder the shell sags down under the creature’s weight, closing the holes and thus denying access to the seeds within.

But squirrels are nothing if not persistent: if there is a design flaw they will eventually discover it. The base of this feeder is attached not to the outer shell but to the inner cylinder. Consequently, a squirrel standing with its back feet on the base puts no weight on the spring-loaded shell and thus the holes remain open. A squirrel figuring out this trick can stand there as long as it likes gorging on seeds.

Two squirrels live in our back yard. One of them has figured out how to outwit the squirrel-proof feeder; so far the other one has not. It took several days for the successful one to zero in on the invariants of the trick. After a few days of frustration it began to bounce up and down on the feeder, causing the gravity-activated shell to bounce too. When in the low-gravity “up” position the shell would slide up and the holes would re-open momentarily. During this brief interval of low relative gravity the squirrel would stick its paw into one of the holes and try to pull out a seed before the gravity of the downward bounce closed the aperture again. Eventually the squirrel discovered that crawling down onto the feeder, spinning 180 degrees vertically so that its head is facing up, and then resting its weight on the feeder’s base is a successful behavior sequence for keeping the holes open and the food accessible. It isn’t necessary for the successful squirrel to acquire explicit understanding of the cause-effect relationships involved; the squirrel need only recognize that its behavior has achieved the desired result. The other squirrel, the one that hasn’t yet succeeded, seems equally motivated, repeatedly climbing onto the feeder, gnawing at the lid and the wire mesh with which the gravity-activated shell is surrounded. I suspect that eventually the failing squirrel too will succeed.

Squirrels are clever. They’re good at figuring out complicated behavior sequences that give them access to food. Once they figure out the trick they remember it, performing the maneuver more quickly and efficiently over repeated sessions. What squirrels aren’t very good at is learning by imitation. You’d think that the failing squirrel would learn the trick by watching the successful one. But this requires the failing squirrel to realize that: (1) the successful squirrel’s behavior is motivated, even if that motivation is unconscious to the squirrel; (2) the failing squirrel shares the same motivation as the successful squirrel; and so (3) it would be a good idea to imitate the successful squirrel’s motivated behavior.

Squirrels are independent experiential learners. However, squirrels do not occupy joint attentional scenes with their fellow squirrels, and so they’re poor imitative learners. Humans are very good imitators. I can imagine two seed-loving humans living in the back yard. One of them struggles to figure out how to outwit the feeder, the other sits under the tree and waits. Once the experimental innovator succeeds, the observer watches, imitates, and succeeds too, without all the fuss and frustration of learning the trick by trial and error. I once took an MBA course in organizational innovation at the university where I got my doctorate. “Be a quick second,” was the key advice proffered by the professor.
