24 March 2013

Wherein I Recall My Prior Life as a Mad Scientist

Filed under: Ktismata, Psychology, Reflections — ktismatics @ 8:10 am

If the glass is half full, that means it’s also half empty.

After finishing my doctorate I did a postdoc in an AI lab. These were the early, heady days of expert systems, a technology predicated on making explicit the tacit knowledge of human experts, converting the heuristics of human decision-making into conceptual objects and rules for manipulating them that could be run on computers. Our core group consisted of cognitive psychologists and computer scientists, and in building systems we would collaborate with “domain experts” in medicine, business, law, engineering, and other practical disciplines. A standard division of labor was established: the domain experts provided the expertise; the psychologists did the “knowledge engineering,” which consisted of making explicit what the experts knew and how they used that knowledge; the computer scientists designed and built the computer systems encoding the engineered expert knowledge.

Early on I came to a sobering realization: human experts aren’t nearly as good as computers at using knowledge. Humans have limited processing capacity, and so they can’t remember very many things at once, can’t pay attention to very many features of the task in front of them, can’t deal with very many variables at the same time. To compensate for their limitations, humans take various short-cuts and work-arounds in solving complex problems. Computers have limitations too, especially in their ability to acquire new knowledge, but in their ability to process lots of information they vastly outperform humans. Equipped with knowledge already learned by human experts, computers can manipulate this knowledge more efficiently, and more accurately, than can the human experts.
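The kind of expert system described above can be caricatured in a few lines of code: a set of facts and a set of if-then rules, applied exhaustively until nothing new can be inferred. This is only a minimal sketch of forward chaining; the rules and facts below are invented for illustration and come from no actual system in the lab.

```python
# Minimal forward-chaining inference: apply if-then rules to a set of
# facts until no rule adds anything new. Rules and facts are invented.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules in the style of a medical expert system.
rules = [
    ({"fever", "rash"}, "suspect-measles"),
    ({"suspect-measles", "unvaccinated"}, "order-serology"),
]
derived = forward_chain({"fever", "rash", "unvaccinated"}, rules)
```

The point of the caricature: once the expert's heuristics are captured as rules, the machine applies every rule, every cycle, without fatigue or forgetting.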

I remember giving a talk in DC to a gathering of all the AI postdocs funded under the same national grant program, working in labs at MIT, Harvard, Stanford, U. of Minnesota, UC San Diego, maybe others (my memory has degraded since then). Most of the talks were about AI work in progress. I talked about the differences between human and computer decision-making. Instead of fancy slides I drew overheads by hand with a black marker. I drew out a simple binary decision tree that went maybe 7 layers deep, pointing out ways in which knowledge and logic interact in actual decision-making tasks, describing how computers are not vulnerable to the same sorts of biases as humans in working through even a fairly simple decision. I remember one of the colleagues at my university telling me afterward that he thought my talk sucked. But I also remember discussing the implications of my presentation with the overall head of the nationwide grant program, one of the pioneering figures in expert systems. It turned out that his group was moving away from having computers imitate human heuristic knowledge toward greater reliance on what computers do best: manipulating numerical information via quantitative algorithms.
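That hand-drawn overhead can be approximated in code: a binary tree of yes/no questions that a machine walks mechanically, never skipping a branch or anchoring on an early answer the way a human might. The tree structure and questions below are made up for illustration, not reconstructed from the actual talk.

```python
# A binary decision tree as nested tuples: (question, yes_branch, no_branch);
# leaves are plain strings. The machine evaluates every test on the path
# with no fatigue or order effects. Tree content is invented.
def decide(tree, answers):
    while isinstance(tree, tuple):
        question, yes_branch, no_branch = tree
        tree = yes_branch if answers[question] else no_branch
    return tree

tree = ("income-stable",
        ("debt-low", "approve", ("collateral", "approve", "review")),
        ("cosigner", "review", "decline"))
outcome = decide(tree, {"income-stable": True, "debt-low": False,
                        "collateral": True})
```

A tree 7 layers deep has up to 128 leaves; a human walking it under time pressure will shortcut, while the loop above simply follows every branch it is given.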

While I did some work on a pediatric cardiology expert system, I spent most of my time as a postdoc doing knowledge engineering on two other projects. One was a system for designing so-called fractional factorial experiments, where the domain expert was a statistics professor in the business school. The other was a system for making credit decisions, the domain expert being a professional credit analyst in the insurance industry. In both cases, through conversation and observation, I was gradually able to identify the information the experts looked for in the “task domain” and the ways in which they used this information to render decisions. As had been the case in other domains, these experts used short-cuts and rules of thumb to compensate for human processing limitations. I put together alternative “inference engines” for both of these task domains, with decision-making processes predicated on the heavy number-crunching capacity of computers. I also went ahead and did the programming on both of these systems.
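The contrast between the experts' rules of thumb and the number-crunching inference engines can be sketched abstractly. The post doesn't specify how either system actually worked, so everything below — the features, the thresholds, the weights — is hypothetical; the sketch only illustrates the general difference between a heuristic short-cut and a weighted score over all the variables at once.

```python
# Contrast (with invented features and weights): an expert's rule of thumb
# looks at one or two salient cues; a numeric engine weighs everything.
def heuristic_rating(applicant):
    # Short-cut reasoning: a single disqualifier, then a single qualifier.
    if applicant["late_payments"] > 2:
        return "decline"
    return "approve" if applicant["years_employed"] >= 3 else "review"

def algorithmic_score(applicant, weights):
    # Linear score over every variable; no working-memory limit.
    return sum(weights[k] * applicant[k] for k in weights)

applicant = {"late_payments": 1, "years_employed": 5, "debt_ratio": 0.3}
weights = {"late_payments": -2.0, "years_employed": 0.5, "debt_ratio": -4.0}
```

The heuristic is transparent but ignores most of the data; the score uses all of it but, as the next paragraph recounts, its reasoning is opaque to the very experts whose knowledge it encodes.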

The results should have been predictable. Both the experimental design system and the credit rating system were excellent at performing their respective tasks. Where it was possible to evaluate their decisions in comparison with the “right” answers, the computer systems outperformed the human experts. The human experts acknowledged their machinic doubles’ excellence, even at times conceding their superiority. But they didn’t trust these hybrid expert systems, which drew on their own human knowledge but processed it algorithmically rather than heuristically. They couldn’t understand how these systems thought, how they arrived at their decisions. The systems’ reasoning procedures, more efficient, more consistent, and arguably more accurate than their own, were too opaque, too alien for the human experts to grasp. I concluded that the only way systems like the ones I built would ever be used in real-world decision-making would be if the human experts weren’t sitting around looking over the expert systems’ shoulders second-guessing their decisions. You would need lower-level human technicians to feed the computer systems with data, to read the output, and to enact the systems’ decisions without constantly grousing about robots ruling the world and all the rest of the tedious all-too-human resentment my systems seemed to provoke.



  1. Sometime along the way, humans have started to accept the superiority of the machines. I don’t think we think about it all that much though, perhaps not enough anyway.


    Comment by ponnvandu — 24 March 2013 @ 3:07 pm

  2. This is very interesting. I don’t know why they would be annoyed at the speeding up of credit approval since it is decided on the basis of a human decision process, just vastly faster of course. Searle’s Chinese Room comes into this. What the machine can’t do is process the information that the proposed loan is to the nephew of a man that I play golf with, at whose house I frequently dine and vice versa.


    Comment by ombhurbhuva — 24 March 2013 @ 3:58 pm

  3. As Sam observes, attitudes have changed since then. At the time even the bosses preferred the human decision-makers, but they were operations people who had worked their way up. The bosses’ bosses probably would have liked my stuff if I’d had access to them: they’re the ones who would eventually acknowledge the superior return on investment possible from automating white-collar jobs, replacing humans with machines. If the workers owned the means of production, then the automated workforce might be getting the same pay for a 12-hour workweek; instead the huge increases in worker productivity accrue to the owners, while former loan officers and accountants are unemployed or cleaning out airplane toilets like my old high school chum, a CPA and MBA who can’t get a job in his profession.


    Comment by ktismatics — 24 March 2013 @ 4:12 pm

  4. Have you read “The Robot Will See You Now” in last month’s Atlantic? Your post reminded me of that article, where the point is made without apology: machines will be able to process symptoms for diagnoses and treatment more quickly and with less bias. The implication drawn by the author was that the future should pair machines with doctors, working together… Do you see something like that happening in the medical world? Machines helping to save costs and eliminate bias, while still utilizing the best in human decision making?


    Comment by erdman31 — 24 March 2013 @ 4:14 pm

  5. I didn’t see the article, Erdman, but if I remember I’ll have a look on my next trip to the library. But this semi-automation of medical decision-making is already happening, as is the offloading of previously high-level doctoring work to technicians who get paid much less than the physicians. What’s notable about medicine is that, while labor costs continue to decline through automation and task downgrading, prices for healthcare keep going up faster than the cost of living. I’ve probably complained about this trend before.


    Comment by ktismatics — 24 March 2013 @ 4:45 pm

  6. “What the machine can’t do is process the information that the proposed loan is to the nephew of a man that I play golf with, at whose house I frequently dine and vice versa.”

    This sounds like George Bailey from the good old Building & Loan facing down a smirking cigar-chomping Mr. Potter. Last week I actually found a place to stick my It’s a Wonderful Life blogrant, written 6 Christmases ago I see, into the novel I’m writing now. The mad scientist phase is tangentially covered too as I set about redeeming my past life in fiction. That project didn’t work so well for Briony in Atonement; it probably won’t for me either. In my case though I’m not atoning for past sins; it’s past futility that stirs my regret — le temps perdu.


    Comment by ktismatics — 25 March 2013 @ 6:10 am

  7. I’m imagining Slavoj Zizek on It’s a Wonderful Life:

    You know who is the capitalist running dog, Henry Potter who was once Harry Potter or George Bailey who is dealing in sub-prime mortgages giving a loaf and salt and a bottle of wine with every sale. And why is Potter a cripple, is this Lord Chatterly and Bailey the gamekeeper and so on and so on. I want to look at this very carefully because it represents the socialism that dare not speak its name, a chicken in every pot. Blood will be got from those stones or breeze blocks….


    Comment by ombhurbhuva — 25 March 2013 @ 2:27 pm
