Loose Onto-Scientific Ends

Rather than either writing separate posts here or comments elsewhere, I’m making a few brief observations about issues that arose in our extended recent engagement with Pete here at Ktismatics and in his ongoing debate with Levi. Most of my issues are scientific rather than metaphysical.

1.  I agree with Pete that science, considered as a collective endeavor of discovery, progresses incrementally toward increasing objectivity, continually (and sometimes radically) purifying its methods of investigation, and thereby its findings, of subjective and intersubjective biases. Nevertheless, the topics subjected to science’s objectifying purification aren’t always selected based on the greatest potential for increasing objective knowledge. Money and power often decide what is going to be studied, how it will be studied, and to whom the results will be made known. Often these “impure” forces are intentionally hidden from view or unintentionally ignored, even by the scientists themselves.

2.  Levi asserts that every object exceeds the actual state it happens to manifest at any given time and place. I can’t believe that for Levi the “real water” inside a closed container is solid and liquid and gas at the same time, and the “real me” isn’t only here or there but everywhere at once. Levi doesn’t typically equate the object’s excess with all the potential other states in the object’s repertoire that it isn’t currently manifesting. Rather, he says the object has a “virtual proper being,” a dynamic capacity that generates its potential and actual states. I agree. Levi insists, though, that no object ever directly encounters the virtual proper being of another object; rather, an object only encounters the qualities made manifest in the states the other object assumes. I don’t believe this is true. Science doesn’t just document the states that water occupies and the conditions under which it occupies them. Rather, science investigates the “virtual proper being” of water — its molecular structure, covalent bonds, potential energy, vibrations, and so on that cause it to manifest itself in various states.

3.  Almost always the states which an object assumes or the properties it manifests are a joint function of the object’s virtual proper being, its interactions with other objects, and the energy forces operating on them. The solid/liquid/gaseous state of the water in the container is a function of the molecular structure of water, the pressure inside the container, and the heat inside the container. To me this suggests that reality isn’t composed exclusively of objects, but of objects and the energy forces and fields in which they’re embedded.

4.  As others have observed, it seems bizarre to contend that objects never directly encounter each other. It’s argued that objects encounter one another only inside some composite object that includes both objects. So when two billiard balls collide, they form a two-ball system in which the two balls affect one another’s movements. A simpler explanation — one you might find in a middle-school science text — is that the two objects directly interact, affecting each other by a transfer of energy. The trajectories and speeds of the two balls change, but the total kinetic energy is (very nearly) conserved, as is the total momentum of the two-ball system. In all sorts of inter-object “translations,” science goes about its business of clarifying what stays the same, what changes, how, and why.

5.  On the conflation of epistemology and ontology, if you’re going to make a statement about what things are, you’re still making a statement. What claims are you making in this statement, and on what grounds do you justify them? Of course not all objects are subjects, and not all interactions between objects involve one of them trying to understand the other. But ontological claims are produced by subjects claiming to understand other objects. An ontologist’s stated understandings are specific manifestations of his/her virtual proper being interacting with the objects and energy forces to be understood. What is it about the ontologist that generates his/her particular ontological understandings of the world? What is the relationship between the ontologist’s statements about the world and the aspects of the world referenced in those statements? Answering this sort of question is a kind of epistemological investigation that can’t be waved off.

6.  Scientific knowledge is “produced” in the sense that scientists perform work to produce their systematic observations of the world, the analyses of their observations, and the statements in which they embed the knowledge they’ve discovered. But scientific statements refer to features in the world that aren’t created by the scientist. “The water in the jar is frozen” is a statement produced by the speaker; the jar, the water, and the frozen state already existed prior to the production of the statement. A microscope produces a magnified image of an object, but it’s still an image of the object. A yardstick might be used to produce a quantitative measurement of an object, but it’s still etc. etc. Granted, knowledge about an object isn’t identical to the object itself. And the cumulative body of knowledge is increased and refined and preserved over time, and that takes work. But not all work produces things; some kinds of work disassemble things, sort things, discover things, describe things. To insist that scientific investigation produces knowledge is to conflate ontology with epistemology, creation with discovery. It’s also a strong form of correlationism, denying the existence of real objects independent of the observer, such that observation creates the things it observes.

There are probably other loose ends I’d like to lay out on the table, even if I can’t tie them up into neat little bows. Maybe I’ll add new ones to the list as I think of them. But I think I’m going to stop writing new posts for a while. I feel like I’ve been blogging my ass off for the past 2+ months, generally to excellent effect as far as I’m concerned. I’ve put on my empirical hat for the first time in years and it still fits pretty well, and it’s been oddly satisfying to be among the few who wear this sort of hat around these parts. I could surely keep up the blogging pace, since my interest hasn’t really flagged. But I’ve got a novel to write, and I want to immerse myself in that mindset, that alternate reality, for a while.

Probably Won’t Watch the Cup Final

I know that it’s Spain versus the Netherlands, but that’s about all I know about it. As far as I can recall I’ve never played the game. It seems to work a lot like hockey, but with penalty shots instead of a penalty box. I could probably get into it as a spectator sport, but I’d rather not. I believe that I’ve watched extended portions of only two soccer games in my life. Recently I tuned in to the US-Ghana match after the Americans had tied the score. The announcers kept telling me what an exciting game it was, but to me it just looked like an extended exercise in collective frustration punctuated by one brilliant individual move executed by one of the Ghanaians.

I also watched the second half of the 2006 World Cup final, France versus Italy. I didn’t even realize that that was the most recent championship game until Italy was eliminated in the first round a couple of weeks ago. Reigning champions ousted, said the news story. So I looked it up and, sure enough, the Cup is contested only every four years. Who knew?

Four years ago we were living in Antibes, which is a town on the southern coast of France about 50 miles from the Italian border. All the cafés in the old town had set up televisions outside so the al-fresco diners could enjoy the game, and big crowds of flâneurs stopped to watch as well. A lot of Italians take weekend jaunts into France, so the crowd in Antibes wasn’t entirely behind Les Bleus. Had this been an American scene, everybody would have been getting drunk and rowdy. Instead it was the usual subdued and affable soirée scene, with people sipping their wine rather than swilling it down, but nevertheless intensely engaged in watching the game. At one of the cafés a drunken young French mec was standing next to the tv set directing obscene remarks and gestures toward the referees and the Italian players, but it was all in good fun. When Zidane performed his notorious head-butt and got ejected near the end of the game, the crowd murmured among themselves, more puzzled than incensed. The tie-breaking penalty kicks seemed like kind of a let-down after the intense play, in which France was clearly the superior team, except for that great Italian goalkeeper.

When Italy finally prevailed a few of the Italians cheered but mostly the French either wandered off or continued their after-dinner conversations. The mood was a sort of fatigued melancholy tempered by the easy camaraderie of the weekend nighttime promenade. By now it was pretty late so I walked home. Through the early morning hours I heard the occasional Italian celebratory car honking its way through the streets. No riots, no burning cars, no drunken hooligans throwing bottles. Maybe things were different on the Italian side of the border.

*   *   *

UPDATE: By the time I first thought about checking the score, the game was over. ¡Viva España! — I suppose.

Free College for Everyone!

Yesterday I came across this table showing that, between 1975 and 2007, the percentage of all US college faculty members who are tenured or tenure-track declined from 45% to 25%, while part-timers increased from 24% to 41%. This is a trend of which I’ve been made aware by several bloggers, including Shahar, who put up a post today citing these same statistics. What surprised me is that the total number of faculty jobs increased by 116% over that same 33-year interval. According to this Dept. of Education source, the number of college students increased by only 63% over that time period. What the heck? Have colleges been massively over-hiring for the last few decades?

Of course we have to take into consideration the fact that so many of these faculty jobs are part-time adjunct positions. How big is the “part” in part time? I have no ready access to the relevant data, but let’s assume that on average the adjuncts teach a half-time course load. Grad student teachers are included in the statistics too: assume that they teach quarter-time. So now, recalculating based on those adjustments, the total number of faculty FTEs increased 94% from 1975 to 2007. That’s still disproportionately high relative to student enrollment increases.
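For the statistically curious, here’s that recalculation in runnable form. A minimal Python sketch: the FTE weights are the assumptions stated above, but the headcounts (in thousands) are my own reconstruction, chosen to be consistent with the percentages reported earlier rather than copied from the actual AAUP tables.

```python
# Back-of-envelope FTE recalculation. Headcounts below (in thousands) are
# reconstructed to match the reported shares (part-timers 24% of 1975 staff,
# 41% of 2007 staff; ~116% total headcount growth) -- they are NOT the
# actual AAUP figures. The weights are the assumptions stated in the post.

def faculty_fte(full_time, part_time, grad, w_pt=0.5, w_grad=0.25):
    """Convert faculty headcounts into full-time-equivalent teaching capacity."""
    return full_time + w_pt * part_time + w_grad * grad

fte_1975 = faculty_fte(full_time=440, part_time=185, grad=145)
fte_2007 = faculty_fte(full_time=690, part_time=685, grad=290)

growth = fte_2007 / fte_1975 - 1
print(f"FTE growth, 1975-2007: {growth:.0%}")  # ~94%, vs. 63% enrollment growth
```

Plug in the real headcounts and the growth figure will move around a bit, but the half-time and quarter-time weights are doing most of the work.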

Here’s a thing: Looking at full-timers only — tenured, tenure-track, and non-tenure-track — the number of faculty jobs increased by 56% from 1975 to 2007. This is only slightly below the 63% growth rate for student enrollment. Suppose the colleges made up for the gap with adjuncts: they’d have had 356K of them on the payroll in 2007, each working an average of half-time. In fact the number of 2007 part-timers was 685K. By implication, the American college system could be operating at a teacher-to-student ratio proportionate to 1975’s by eliminating 320K part-time faculty jobs.

Let’s assume that an adjunct earns an average of $15K per year teaching college courses. Multiply that figure by 320K redundancies and the American colleges would have saved $4.8 billion in 2007. There were about 14.8 million FTE college students that year, so the savings would have been $324 per student. Even if we tack on the usual university-imposed 100% markup for overhead, that’s not much savings, considering that colleges charge around $15K per student per year.

Now about those full-time faculty jobs: According to Table 13 in this AAUP report, full-time college faculty earn an average annual salary of $80K; factor in the benefits and the total compensation package comes to $103K per year. That’s pretty good dough, placing it in the top quarter of US jobs earnings-wise. And according to Table A from the same source, the pay has gotten considerably better over the years: since 1980, full-time college faculty salaries have increased 38% faster than the American cost of living. Suppose, then, that instead of getting rid of 320K half-time faculty at $15K each, the colleges eliminated 160K full-time faculty at $100K each. Now the savings would be $16 billion. Again, double that figure for overhead markup, and it’s a price reduction of over $2,100 per student per year.
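Since I’m throwing numbers around, here’s the arithmetic for both firing scenarios as a quick sanity check, with the 100% overhead markup and the 14.8 million FTE students carried over as the assumptions stated above:

```python
# Sanity-checking the two savings scenarios from the post.
STUDENTS_FTE = 14.8e6  # FTE college students, 2007

def savings_per_student(jobs_cut, pay_per_job, overhead_markup=1.0):
    """Direct salary savings, grossed up by overhead, spread across students."""
    return jobs_cut * pay_per_job * (1 + overhead_markup) / STUDENTS_FTE

# Scenario 1: drop 320K half-time adjuncts at $15K apiece
print(f"${savings_per_student(320e3, 15e3):,.0f} per student per year")   # ~$649

# Scenario 2: drop 160K full-timers at $100K apiece
print(f"${savings_per_student(160e3, 100e3):,.0f} per student per year")  # ~$2,162
```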

So why have faculty jobs increased so much faster than college student enrollment over the past 35 years? Smaller class sizes? It’s worth noting that empirical studies consistently show that increasing class size has no measurable effect on student learning. Maybe a wider variety of course offerings attracts more students overall but fewer students per class. Smaller teaching loads for full-timers? More professors on grant money buying their way out of teaching? More professors moving up to administrative jobs? The academicians’ guild making room for the newly-minted PhDs coming out of the pipeline every year, even if it means part-time work for crappy pay and no benefits?

How about this:

(1)  Go back to 1975 staffing levels — this would increase class sizes by an average of 63%.
(2)  Make all faculty jobs full-time, tenure track positions.
(3)  Set the average faculty compensation package at $80K per year.

Under this scenario the savings would be around $4800 per student per year, without incurring measurably adverse effects on student learning. In the US the government pays about $7K per public university student per year, so tuition would drop to $2K per year. I suspect we could find at least another $2K’s worth of administrative fat to trim. It’s a tuition-free university education for everyone!

Propositional versus Experiential Realism

In his recent post on The World and the Real, Pete Wolfendale of Deontologistics summarizes certain key themes which he elaborates in greater detail in his “Essay on Transcendental Realism” (PDF link embedded in Pete’s post). He begins by asserting that only sapient beings can achieve a progressively more accurate understanding of reality. A sapient being is self-aware: it recognizes that it makes mistakes in its perceptions or beliefs about the world, and consequently that the world might be something other than the sapient being’s subjective experience of it. A sapient being is also self-corrective: it can take deliberate steps to compensate for its perceptual limitations or interpretive mistakes in understanding the world, thereby incrementally closing the gap between subjective understanding and objective reality. Guided by mutually honored norms of objectivity and rationality, we self-aware and self-correcting beings can work together toward achieving a progressively more accurate understanding of the world. This normative, rational pursuit of objectivity takes the form of an ever-growing set of propositions or truth claims about reality. These propositions are continually subjected to a work of purification that replaces misperceived and erroneous truth claims with more and more accurate ones.

Pete says that, in doing ontology, grasping how we come to know the world is more foundational than grasping the structure and content of the world itself:

“The important point is that Brandom is right to claim that our understanding of truth is more primitive than our understanding of existence. Our understanding of true claims (or ‘facts’) is more primitive than our understanding of things (objects or entities).”

A truth claim is a proposition that makes reference to purported facts about the world, which in turn make reference to things in the world. The sequence by which we understand any proposition is: semantic meaning of the proposition → understanding of how the nouns and verbs and adjectives in the proposition are interconnected → understanding of the proposition’s truth claims about the interconnections among objects and properties and forces in the world. Or as Pete says:

“Just as there is the pointing, the direction pointed in, and the thing pointed at, in representation there is the act of representing (assertion), the content of representation (proposition), and the object represented (things within the world, and in the limit, the world itself).”

So if you tell me “The cat is on the mat,” I have to understand how this sentence hangs together as a meaningful bit of language, then how “cat” and “mat” and “on” fit together syntactically in this sentence, then I follow the trajectory of the linguistic “pointing” out to how the actual cat and mat in the world are positioned relative to each other. Okay, I can see the sequence here in linguistic processing. But why would Brandom — and Pete — assert that, for humans, understanding semantics is “more primitive” than understanding the world? Pete further claims this:

“Our representation of the world as a whole is just the totality of propositions that we take to be true.”

According to this (debatable) formulation, presumably we have a large and interconnected set of propositions about the world stored in memory, which we then retrieve as needed. It’s obvious that the world itself isn’t made up of propositions; rather, says Pete, our representation consists of a set of linguistic pointers to things in the world. Our access to the world, both individually and in conversation, is mediated by propositions about the world. Presumably, then, we can’t make sense of the things in the world without first retrieving from our mental representations those propositions that point to the things in the world.

What makes Pete a realist is his assertion that propositions and the words from which they’re composed really do point to corresponding things in the world, rather than just pointing to other words and propositions inside the representational-linguistic matrix. So when people talk about the world they really are talking about the world. From my standpoint Pete’s interpretation of the relationship between language and reality is a big improvement over Saussurian structuralism, where words point only to other words inside the representational-linguistic matrix, and over Lacanian post-structuralism, where language cuts us off from direct access to the real. On the other hand, in Pete’s formulation language still precedes and mediates our access to the real.

Pete’s theory is embedded in and responsive to the philosophical traditions in ontology and epistemology. I’m more familiar with the empirical psychological literature. The main issue I’d like to address is Brandom’s assertion that “our understanding of truth is more primitive than our understanding of existence,” that propositions about reality precede and mediate our engagement with reality. Briefly, I’d like to point to some empirical evidence to the contrary.

*   *   *

All non-human primates and many other mammal species make use of non-linguistic representation of the world. For example, they remember where the best sources of food are found, they can take detours and shortcuts navigating through their territories, they follow the movements of objects even when completely occluded behind other objects (“object permanence” in Piagetian parlance), they categorize objects based on perceptual similarities, they predict the behavior of conspecifics based on emotional state or direction of locomotion, they use strategies to compete with groupmates for resources. Non-human primates in their natural habitats invent tools, learn important behaviors from their mothers, and understand kinship and dominance relationships among conspecifics that don’t involve themselves. They can be taught by humans to make same-different categorizations of objects; e.g., to distinguish pairs of objects that are identical to each other from pairs that are different. However, it takes many repeated trials for mature chimpanzees to learn this skill. By contrast, even very young children can make these object categorizations with ease. Similarly, two-year-old children can infer cause-effect relations between objects in the world; e.g., they understand immediately that an object being pushed through a horizontal tube with a hole in the bottom of it will fall through the hole. Adult chimpanzees, by contrast, don’t get it and must learn through extensive trial and error.

It seems then, based on a wide variety of evidence, that direct representation of the world is more “primitive” than, as well as a likely precondition for, propositional representation of the world. Similarly, non-representational direct responses to the world — e.g., sunflower blossoms that follow the sun’s arc across the sky — are more primitive stepping-stones toward representation. Representation allows the organism to respond to features of the world that aren’t immediately present to the organism; language further extends representational non-immediacy to greater levels of abstraction. And, without going into empirical evidence on the nature of cognitive representation in the human brain, I think that there are distinct advantages to retaining the direct representational content and structures on which linguistic representations are developmentally predicated. Understanding what things in the world are like, what sorts of features they might exhibit, how they can interconnect, how they might work together in cause-effect chains: this sort of general knowledge about the world can be an invaluable aid in coming to grips with new experiences and in formulating propositions for describing our discoveries to others.

Empirical studies of infant language development also cast doubt on Brandom’s assertion of the primacy of propositional truths about reality. The body of evidence strongly supports a developmental sequence that goes like this: (1) following someone else’s pointing at an object in the world; (2) pointing in order to attract another’s attention to something; (3) understanding that someone else’s spoken word corresponds to the object being pointed at; (4) understanding the spoken word and looking toward the object corresponding to that word; (5) speaking the word corresponding to the object being pointed at; (6)  speaking the word referring to the object without pointing at it. Empirical evidence thus supports the inference that, in infant humans, achieving joint intersubjective attention toward specific objects in the world precedes the ability to understand or to use linguistic representations for thinking about and pointing to objects.

In sum, I don’t believe that the empirical evidence supports Brandom’s — or at least my understanding of Brandom’s — assertion that, for humans, truth propositions about the world precede direct understanding of the world. The sequence by which an adult human understands a proposition — verbal representation, words “pointing” into the world, object pointed to — doesn’t correspond to the developmental sequence, either in individuals or in the species, of acquiring the requisite knowledge for understanding propositions. Experiential encounters with the real precede and give shape to propositional descriptions of the real.

I think I’ll stop here, and follow up with a separate post addressing Pete’s insistence on the importance of sapience and the norm of objectivity in moving toward a more accurate understanding of the world.  UPDATE: I think we handled it in the discussion on this post.

Inattentional Blindness to Visual Objects

“It is a well-known phenomenon that we do not notice anything in our surroundings while being absorbed in the inspection of something: focusing our attention on a certain object may happen to such an extent that we cannot perceive other objects placed in the peripheral parts of our visual field, although the light rays they emit arrive completely at the visual sphere of the cerebral cortex.” – Rezső Bálint, 1907

You’re looking for a seat in a crowded theater; after scanning for awhile without success you eventually find an empty chair and sit down; the next day your friends ask you why you ignored them at the theater the night before, even though they were waving at you and you were looking right at them — what’s up with that? Neurologists tell us that our eyes pick up the information in the environment’s optical array and transmit the appropriate signals to the visual cortex, but somehow those waving friends in the theater don’t register in conscious perceptual awareness. It seems that the richness of our visual representation of the environment doesn’t match the richness of our visual experience of it. And so we fail to notice what’s right in front of our eyes.

Attention clearly has a lot to do with it. You can give someone a task that requires him to pay attention to some specific features of a dynamic scene, then introduce some unexpected event into the scene that’s unrelated to the task, and the person won’t notice that the event happened. Meanwhile, the intrusion of the strange event is obvious to other observers of the scene who aren’t engaged in the attention-demanding task. Even dramatic changes in the attended-to portions of the scene — like swapping the heads of two people in a photograph — go unnoticed if the change coincides precisely with the person blinking his eyes. Here’s a fun little study: the experimenter asks a passer-by for directions; as directions are being given, two other people carrying a wooden door walk between the experimenter and the passer-by; blocked from view, the experimenter changes places with someone else, who continues the conversation. Passers-by typically continue giving directions without noticing that they’re now talking to a different person.

“Inattentional blindness” impedes various aspects of visual perception: shape, size, color, motion. People do tend to notice certain kinds of unexpected and unattended-to visual objects: smiling faces, for example, or their own names printed out. Curiously, if the subject’s name is misspelled by only one letter, he tends not to notice the name at all. It seems that certain objects are more meaningful to observers than others, and that at some preconscious level the meaning draws visual attention to the object carrying the meaning. Analogous studies of auditory stimuli yield parallel findings of “inattentional deafness.”

Here’s another example: I don’t know if the newfangled TV screens minimize glare, but mine doesn’t. If I’m watching a movie, I can selectively ignore the reflections bouncing onto the screen from the room and the window in which the TV is located, reflections that cover the entire field of view in which the movie is showing. If I’m not paying attention, just passing by, I can observe that the reflected light from the room might actually be brighter than the projected light of the movie. With attention it’s possible to ignore visual stimuli that completely overlap spatially with the object of visual attention. I can shift my attention from the movie to the reflection, but it’s hard to attend to both at the same time, even though the information for both is continuously and simultaneously on display in the environment and is being picked up by my visual sensory detection systems. This seems like Hitchcock territory: someone is watching a murder on a televised crime movie while, in the very same room, an actual murder is taking place, the murderer silently strangling the victim. The real murder is being reflected onto the TV screen without being noticed by the TV viewer, whose attention is fully captivated by the fake murder taking place in the movie.

A variety of studies have been conducted exploring inattentional blindness in simultaneous overlapping events, though to my knowledge the unnoticed murder experiment has never been systematically staged and documented in the scientific literature. Here’s one from 1975 that reminds me of Minority Report: The subject is shown a video in which two different ball games are being played simultaneously by two different groups of players. The subject is instructed to press a lever whenever a player in one of the games passes the ball to another player. About 30 seconds into the video a woman carrying an open umbrella walks onto the screen, traverses the entire visual field from right to left, then 4 seconds later walks out of the screen. The ball games continue for another 25 seconds, then the video ends. Fewer than a quarter of the subjects said that they’d noticed the umbrella-toting woman, even after they’d been asked specifically about her. When subjects simply watched the screen without being assigned the lever-pressing task, 100 percent of them noticed the umbrella-woman.

These sorts of experiments and findings went out of fashion for a couple of decades, not because they were debunked but because other research interests captured scientists’ attention. In recent years, however, the ball-playing experimental protocol has been varied systematically to see whether specific visual cues either exacerbate or override attentional blindness. E.g., what if the woman did a little dance while she was onscreen? what if a small boy carried the umbrella? what if a guy in a gorilla suit sans umbrella walked into the scene, stopping halfway across the screen to thump his chest? does the color of the costume worn by the anomalous passing figure make a difference?

Maybe the theoretical explanation is wrong. Maybe subjects actually noticed the anomalous walker but immediately forgot it to avoid getting distracted from the lever-pressing task. In other words, maybe what appears to be inattentional blindness is really a behavioral artifact of inattentional amnesia. A couple of variants in the experimental protocol were introduced to investigate the amnesia hypothesis. For example, what if the video ends with the umbrella-woman having traversed only halfway across the screen, such that the anomalous figure is still “visible” in the subject’s working memory of the scene? It turns out that subjects who report noticing the umbrella-woman often smile or laugh when she appears on the screen. Do subjects who don’t report seeing the umbrella when asked at the end of the video demonstrate similar visceral responses while she is on-screen?

By now you might be thinking: so what? This is pretty old-school experimental psychology, of passing interest at best to anyone but those nerds who specialize in this sort of fairly tedious empiricism. What strikes me though is the texture and complexity and nuance. With a sweep of the hand one can flatten the distinctions between a woman carrying an umbrella, a video projection of the umbrella-woman, a retinal image of the umbrella woman, and a conscious percept of the umbrella-woman. They’re all equally objects, we’re told; each of them, like all objects, is split between the sensual interactive component and the withdrawn essence; the relationship between their sensual components is one of translation, as is the case with all objects. How trivial and uninformative are such broad abstractions, it seems to me. A new theory, it is said, should be judged by the amount of work it creates. Here’s a potentially empirical question — which of these two theories is likely to generate more work: (1) a flat ontology of objects that all share the same characteristics and interactions, posited a priori; or (2) a multi-layered topography of objects whose diverse characteristics and interactions await discovery and explanation?

The kinds of studies I’ve mentioned here illustrate the failure of naïve realism: how we perceive the world isn’t necessarily how the world really is. But these studies also point to a way out: if we can understand the ways in which we misperceive, maybe we can compensate for our limitations, try again, and do better the next time.

[Many of the studies noted in this post are summarized in DJ Simons and CF Chabris, “Gorillas in our midst: sustained inattentional blindness for dynamic events” — PDF]

Anarchism and Education

“He that learns because he desires to learn will listen to the instructions he receives and apprehend their meaning. He that teaches because he desires to teach will discharge his occupation with enthusiasm and energy. But the moment political institution undertakes to assign to every man his place, the functions of all will be discharged with supiness [i.e., supineness, lethargy] and indifference.” – William Godwin, Enquiry Concerning Political Justice, 1793

“[On] the (pretended) will of the people, the Church will no longer call itself Church; it will call itself School. What matters it? On the benches of this School will be seated not children only; there will be found the eternal minor, the pupil confessedly forever incompetent to pass his examinations, rise to the knowledge of his teachers, and dispense with their discipline – the people. The State will no longer call itself Monarchy; it will call itself Republic: but it will be none the less the State – that is, a tutelage officially and regularly established by a minority of competent men, men of virtuous genius or talent, who will watch and guide the conduct of this great, incorrigible, and terrible child, the people. The professors of the School and the functionaries of the State will call themselves republicans; but they will be none the less tutors, shepherds, and the people will remain what they have been hitherto from all eternity, a flock. Beware of shearers, for where there is a flock there necessarily must be shepherds also to shear and devour it. The people, in this system, will be the perpetual scholar and pupil. In spite of its sovereignty, wholly fictitious, it will continue to serve as the instrument of thoughts, wills, and consequently interests not its own.” – Mikhail Bakunin, God and the State, 1871

“The goal of elementary pedagogy is a very modest one: it is for a small child, under his own steam, to poke interestingly into whatever goes on and be able, by observation, questions, and practical imitation, to get something out of it in his own terms. In our society this happens pretty well at home up to age four, but after that it becomes forbiddingly difficult.” – Paul Goodman, Compulsory Miseducation, 1964

“In Britain, at five years old, most children cannot wait to get into school. At fifteen, most cannot wait to get out.” – Colin Ward, Anarchy in Action, 1973

The Students Make the Teacher

In my last post I wrote about the difficulty of distinguishing effective from ineffective teachers. Training, certification, experience, adherence to best practices, student achievement — none of these variables stands up to empirical validation. In a recent discussion on Jon Cogburn’s Blog we agreed that college students’ evaluations of their teachers aren’t the way to do it either.

Maybe we’re looking at the whole teaching thing backward. A good teacher doesn’t make good students; good students make the teacher good.

It’s not like this is some profound insight I’ve had. I suspect that most teachers would agree that they do a better job and enjoy themselves more when they get to work with a classroom full of smart, interested, open-minded, curious, creative, outspoken, serious students. If the teachers don’t have that good fortune, then they teach to the handful of bright and engaged and enthusiastic students who shine forth in an otherwise mediocre class. Tenured university profs typically choose to teach the upper-level classes, which are populated by students who have already demonstrated their scholarship and commitment in the lower-level classes that winnowed out their slow and indifferent classmates.

Studies typically find that the highest-rated teachers teach in schools where the students’ average aptitude scores are higher. It’s usually assumed either that these schools attract better teachers, or that the students would be high achievers regardless of the quality of their teachers. Maybe though it’s the bright students bringing out the best in their teachers.

It’s been shown that students’ ratings of teachers correlate with students’ grades in the class. Often this result is interpreted as kiss-ass teachers inflating their students’ grades as a crass tactic for bumping up their own “customer satisfaction” scores. But what if good students are generally happier with their teachers than are bad students? Our daughter, a junior in high school, offers personal testimony supporting this hypothesis. She says that in her classes the students who are doing poorly blame the teacher, whereas the students who are doing well blame the low-performing students’ own inattentiveness and laziness. She contends that this effect is robust across classes: kids with higher GPAs generally rate their teachers highly, while kids with low GPAs don’t. This contention could be put to the empirical test — maybe it already has: do students’ overall GPAs predict their teacher ratings for the following school year? I bet yes.

Students don’t just make the teachers; they make the schools. For the past several months our mailbox has been flooded with brochures from universities and colleges trying to persuade our daughter to apply. I asked her how she’d characterize these mailings: “propaganda” was her immediate reply. How so? The schools all pitch the same features to the prospective students. Part of it is the attractiveness of the campus, but mostly it’s the attractiveness of the students. Glossy photos of energetic, ethnically diverse students, mostly in groups, smile out at you on every page. There’s the occasional nod to faculty excellence — so many Nobel laureates, highly ranked research programs — but mostly what’s pitched is the possibility of actually getting to know your professors. These schools are selling an attractive lifestyle, and that lifestyle is a communal one.

The best students compete for acceptance by the best schools, but how do the students identify the “best”? For the select few — Harvard, Stanford, MIT — the reputation for scholastic excellence is universally acknowledged. For most schools though it’s the measurable quality of the students: their SAT scores, their high school GPAs and class ranks. These metrics are readily available for all colleges/universities on the Internets. Somewhere back in history the first cadre of excellent students was attracted by the quality of the faculty at some previously ordinary school, but by now the excellence is self-perpetuating: this year’s good students go where last year’s good students went. The profs too compete for the opportunity to work at these high-reputation colleges. Part of it is the attractiveness of associating with prestigious colleagues, but I’m sure it’s also the appeal of teaching those bright, creative, high-performing, highly motivated students who enroll year after year.

Teacher Effectiveness Evaluations are Crap

Last week the state of Colorado passed legislation, introduced by the Democrats, that makes tenure for primary and secondary school teachers contingent on annual performance evaluations. The evaluations are twofold: subjective assessments performed by school principals, and student improvement on standardized achievement tests. The public rationale is straightforward: crappy teachers, instead of being rewarded with long-term job security and annual pay raises, ought to be let go and replaced by good teachers. Of course there’s also a union-busting strategy in play here; however, maybe the traditional labor contracts, which reward years on the job over excellence, are taking a toll on student learning. Are there valid ways of distinguishing good teachers from bad ones? And does better teaching result in better student learning?

So here’s a study from one of the most influential teacher-evaluation gurus: Daniel Goldhaber, an economist at the U. of Washington. It’s called “Can Teacher Quality be Effectively Assessed? National Board Certification as a Signal of Effective Teaching,” which can be downloaded near the bottom of this link. Goldhaber and Anthony looked at data on North Carolina 3rd-5th graders and their teachers across 3 school years. Included in the data set were students’ scores on statewide standardized achievement tests which are administered to every student every year. These repeated measures made it possible to evaluate the rate of individual students’ improvement from year to year, as well as the average per-student improvement for each teacher.

Of course when you measure anything you find differences: statistically, some teachers appear to be a lot more effective than others. But could these measured differences in student outcomes be attributed to differences in teacher characteristics?

Many schools push and reward their already-certified teachers to obtain national certification through the National Board for Professional Teaching Standards. To qualify, teachers have to put in something like 200 extra hours of training in teaching effectiveness, submit samples of their students’ work to a national evaluation board, and undergo on-site evaluation by trained observers. Only about half of the teachers who seek the NBPTS certification pass the evaluation. Are NBPTS-certified teachers more effective than those who aren’t so certified?

In a word, no. The study found that NBPTS-certified teachers achieved statistically significantly (p<.01) better student test results than other teachers, but this difference was minuscule (0.1 standard deviation, for you statistics nerds). The study included data from thousands of individuals, and with huge data sets like these, even trivial differences show up as statistically significant. Paradoxically, based on student test data teachers were more effective before than after receiving their national certification.
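To see the large-sample effect for yourself, here’s a little simulation (made-up scores, not the North Carolina data) in which a difference of 0.1 standard deviation sails well past p<.01 once the groups number in the thousands:

```python
# Simulated illustration: a trivially small effect (d = 0.1) becomes
# "highly significant" when the samples are large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000  # students per group
certified = rng.normal(loc=0.1, scale=1.0, size=n)  # scores shifted by d = 0.1
others = rng.normal(loc=0.0, scale=1.0, size=n)

result = stats.ttest_ind(certified, others)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2g}")  # p well below .01
```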

Goldhaber and Anthony did a lot more slicing and dicing of the data looking for more robust differences between teachers. As far as I can discern, they didn’t find very much. I suspect that the dreaded Bonferroni effect kicked in: if you conduct a whole bunch of statistical analyses on the same data, about 5% of them will come up statistically significant at the .05 level merely because of random noise in the data.
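And here’s the multiple-comparisons worry in miniature, a simulation sketch in which every “finding” is pure noise:

```python
# Run a pile of t-tests on pure noise -- no real differences anywhere --
# and about 5% of them come up "significant" at the .05 level anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, hits = 1000, 0
for _ in range(n_tests):
    a = rng.normal(0, 1, 50)  # both groups drawn from the same distribution
    b = rng.normal(0, 1, 50)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        hits += 1

print(f"{hits / n_tests:.1%} of null comparisons significant by chance")  # ~5%
```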

But then finally we get to the Policy Implications section of the paper. Here’s how Goldhaber and Anthony summarize their findings:

“[T]his is the first large-scale study that appears to confirm the NBPTS assessment process is effectively identifying those teachers who contribute to relatively larger student learning gains. This finding is important both because it provides some indication of a positive return on the investment in NBPTS, and on a more fundamental level, it demonstrates that it is actually possible to identify teacher effectiveness through NBPTS-type assessments.”

Say what? I read the report, I looked at the data tables, and that’s not the implication I arrived at. The researchers then acknowledge that the NBPTS certification isn’t cheap: $2,300 for the assessment plus an average $4,200 annual pay increase for those who pass the evaluation. They conclude that it would cost about $7,300 per pupil to raise standardized test scores by 1 standard deviation — a result which, based on their own analyses, is almost certainly unattainable.

*   *   *

Okay, so let’s say that it’s not easy to explain why some teachers are effective while others aren’t. Is it at least possible to distinguish effective from ineffective teachers based on their students’ standardized results? Here’s a second study addressing that question: Goldhaber and Hansen, “Assessing the Potential of Using Value-Added Estimates of Teacher Job Performances for Making Tenure Decisions” (2009), downloadable at the top of this link.

Again using data from NC primary-school students and their teachers, the researchers report that, using 3 consecutive years of their students’ test results, half of the teachers’ outcomes were significantly different from average. But as we saw in the prior study, statistical significance doesn’t imply magnitude of difference. And here again the effect size is even punier than the national certification effects. Goldhaber and Hansen estimate that, if the lowest-performing 25% of the teachers were fired, overall test results for the students would go up an average of 0.03 standard deviation.

To give you some idea of how trivial that result is, we turn to Jacob Cohen, after whom this particular statistic is named: Cohen’s d. Cohen set the lower threshold for a “small” effect size at a d of 0.2. And what did Goldhaber and Hansen come up with? A Cohen’s d of 0.03. This isn’t even big enough to be small; in practical terms it’s indistinguishable from nothing.
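One way to appreciate just how invisible a 0.03 effect is: assuming normally distributed test scores, compute where the formerly average student would land after a gain of d standard deviations. A quick sketch:

```python
# Percentile movement for the average (50th-percentile) student,
# assuming normally distributed test scores.
from scipy.stats import norm

for d in (0.03, 0.1, 0.2, 0.5):
    print(f"d = {d:4}: 50th percentile -> {norm.cdf(d):.1%}")
```

A d of 0.03 nudges the average student from the 50th percentile to roughly the 51st.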

So now I skip ahead to the Policy Implications section. Will Goldhaber once again vastly inflate the potential impact of his trivial findings? He begins by asserting that teacher evaluations based on 3 years’ worth of student test performances “serve as better indicators of teacher quality than observable teacher attributes.” That sounds impressive until we remember from the prior study that observable teacher attributes were crap indicators in their own right. But what about these puny Cohen’s d numbers he estimated as the policy impact of firing the lowest-performing quarter of the teachers? Says Goldhaber:

“While these may appear to be quite small, new evidence (Hanushek, 2009) suggests that even these small impacts on the quality of the teacher workforce can have profound impacts on aggregate country growth rates.”

WTF? What is the nature of this purported “new evidence”? What are “aggregate country growth rates,” and what have they to do with teacher effectiveness and student test results? Alas, this is the last sentence of the Policy Implications section. Still, he offers this summary recommendation at the very end of the report:

“[T]he results presented here indicate that teacher effect estimates are far superior to observable teacher variables as predictors of student achievement, suggesting that these estimates are a reasonable metric to use as a factor in making substantive personnel decisions.”

Again, the numbers don’t justify the enthusiasm.

*   *   *

I conclude, based on these two studies, that if Goldhaber’s work represents the state of the art in evaluating teacher effectiveness, then the new Colorado law is ill-conceived in the extreme. The costs associated with putting teachers through fancy re-credentialing procedures and with firing and replacing presumably under-performing teachers can’t possibly result in meaningful improvements in student learning outcomes. On the other hand, using these poorly-validated means of axing teachers can save the state money, especially if doing so provides a quantitative rationale for dumping relatively high-paid tenured teachers and either replacing them with low-paid new teachers or not replacing them at all.

The fact that it’s so difficult to find meaningful distinctions between good and bad teachers would concern me if I were a teacher.  The situation is similar to that of psychotherapists and counselors, where level of training and years of experience have virtually no measurable impact on client outcomes. At least it’s been demonstrated that, for people suffering from psychological symptoms/disorders, any therapy is substantially better than none at all. Can the same be said for teaching? It’s been demonstrated that home-schooled kids do just as well or better on standardized tests compared with traditionally-schooled kids. Still, home schooling isn’t teacherless: the parent functions as a private tutor even if s/he doesn’t carry the recognized teaching certificate. It’s also been shown that increasing class size, even doubling it, has little to no effect on learning outcomes. As far as I can tell, the question of how best to enhance student learning remains wide open.

Ecology of Intelligence and Poverty

In a recent post I presented some demographic data pointing to college as a classist institution. The high and escalating price tag excludes kids who can’t afford it, preventing them from acquiring the sheepskin that grants access to higher-paying jobs. The ensuing discussion began to unpack the links between money, habitus, aptitude, education, and employment. This post looks at the relationship between parents’ socio-economic status, which is a composite measure of income and education, and kids’ intelligence/aptitude.

Empirical studies have consistently found that about half the differences in individuals’ intelligence can be attributed to genetic factors: bright parents tend to produce bright offspring. Parenting, in contrast, seems to exert surprisingly little effect on children’s intelligence. However, parents’ socio-economic status is empirically correlated with intelligence: on average, kids from more well-to-do families tend to score higher on standardized IQ and aptitude tests than do kids from poor families — see this graph. Can we infer that, on average, rich people are intrinsically brighter than poor people, resulting in brighter offspring who are better suited for higher education and more likely to benefit from it?

Psychologist Eric Turkheimer and his students/colleagues at the U. of Virginia (my doctoral alma mater) have conducted several studies exploring this question. Here I describe two of them, both involving statistical analyses of existing longitudinal databases collected from sets of twins.

In this study Turkheimer et al. evaluated the results of IQ tests taken at age 7 by identical (monozygotic) and fraternal (dizygotic) twins. The study estimated the proportion of differences in IQ accounted for by genotype (monozygotes share 100% of the same genes, dizygotes share 50%), shared environment (which consists largely of parenting — twins are raised by the same parents at the same time), and nonshared environment (other aspects of kids’ lives, including neighborhood, school, peers, parents’ social networks, etc. — and presumably also uncaused emergent intelligence if there is such a thing). Using some fancy multivariate structural modeling techniques, the researchers demonstrated that the relative importance of these three correlates of IQ varied as a function of socioeconomic status. For the highest-SES kids, genotype was by far the most powerful correlate of IQ, with neither shared nor nonshared environment adding any predictive power. For the lowest-SES kids, both shared and nonshared environment were strongly and separately correlated with IQ, whereas genotype was unrelated to IQ. In other words, for low-SES kids IQ is almost entirely a function of environment, while for high-SES kids IQ is almost entirely a function of heredity. The researchers conclude:

“Additive models of linear and independent contributions of genes and environment to variations in intelligence cannot do justice to the complexity of the development of intelligence in children… [T]he developmental forces at work in poor environments are qualitatively different from those at work in adequate ones.”
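For readers who want the mechanics, the simplest version of the twin-study logic is Falconer’s decomposition, which splits trait variance into genetic, shared-environment, and nonshared-environment components using the identical-twin and fraternal-twin correlations. To be clear, the study itself used fancier multivariate structural models, and the correlations below are hypothetical numbers I’ve picked just to echo the high-SES versus low-SES pattern:

```python
# Falconer's classic twin-study decomposition. The twin correlations below
# are hypothetical, chosen only to illustrate the reported SES pattern --
# they are not the study's actual estimates.

def falconer(r_mz, r_dz):
    """Split trait variance using identical- and fraternal-twin correlations."""
    a2 = 2 * (r_mz - r_dz)  # additive genetic share (heritability)
    c2 = 2 * r_dz - r_mz    # shared environment (e.g., parenting)
    e2 = 1 - r_mz           # nonshared environment, plus measurement error
    return a2, c2, e2

for label, r_mz, r_dz in [("high-SES-like", 0.80, 0.45),
                          ("low-SES-like ", 0.70, 0.65)]:
    a2, c2, e2 = falconer(r_mz, r_dz)
    print(f"{label}: genes {a2:.0%}, shared env {c2:.0%}, nonshared env {e2:.0%}")
```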

  • “Genotype by Environment Interaction in Adolescents’ Cognitive Aptitude” (2006) — link available here

This study by Harden, Turkheimer & Loehlin replicated the 2003 study, using eleventh-grade twins’ SAT test scores as the measure of intellectual aptitude. Findings were similar to the prior study of 7-year-olds: the strong statistical impact of genetics on SAT scores appears only in the high-SES kids. In this study the shared environment accounted for practically none of the variance: this is consistent with other studies showing that whatever influence parenting has on younger children dissolves rapidly in adolescence. Unbundling the SES effect, the researchers found that the effect of parents’ income is stronger than their level of education. That is, kids’ genetic predisposition for academic aptitude is expressed more fully in rich families than in poor ones. The researchers conclude that their findings support a “social context as enhancement” hypothesis about genetic-environmental interaction:

“[D]ifferences among ‘normal’ environments are largely irrelevant for differences among children’s intelligence. Below a certain threshold of environmental quality, intelligence increases sharply with better environments… In contrast, above a certain threshold of environmental quality, the reaction plane is essentially flat: for any given genotype, better environments do not predict an increase in intelligence…, and for any given environment, genetic differences are equally well expressed… Thus any gene-environment interaction disappears above a threshold of environmental quality.”

These findings and their interpretations are fascinating in their own right. They unfold within a broader ecological model of development that’s more conceptually and methodologically sophisticated than traditional schemes in which nature, nurture, and culture are treated as separate and independent components that can simply be added together. There are practical implications as well, but I’ll leave those either to discussion or to a separate post.

OOO and Correlationism

It may be that most of the first wave of object-oriented philosophy bloggists have moved on to other concerns, but since most of them are neither posting nor commenting it’s hard to know. I suspect they’re working on books and chapters and journal articles — activities that generate greater academic exchange value than blogging, which now seems dedicated mostly to announcements of new journal issues, conferences, etc. And then there’s me, still publicly trying to come to grips with OOO on my own terms, entirely divorced from academic philosophy. But I think I’ve gotten it figured out now, so after this post I can leave it alone until some new major development arises. In this post I’m clarifying, summarizing, and elaborating on what I wrote in my last post and comments thereto.

I begin with a brief imaginary conversation about what Levi Bryant identifies as OOO’s first Core Claim:

OOO: “Objects are radically withdrawn from all interaction with other objects.”
Me: “How do you know?”
OOO: “Ah, but now you’ve moved from ontology to epistemology, from what things are to how you know about them. Stick with ontology please.”

This conversation can go no further:

(1) Objects withdraw from all interaction, and
(2) knowing-about-something is a form of interaction between knower and thing; therefore,
(3) objects withdraw from being known; therefore,
(4) one cannot know that objects withdraw, because knowing is itself a form of interaction from which objects withdraw.

It seems that OOO wants to offer a description of objects without explaining how it came into awareness of this description. Is it sheer speculation? transcendence? revelation? Science isn’t boxed in like this. Scientists can tell you what they’ve discovered about objects AND how they discovered it. Of course it’s possible to critique science’s knowledge claims and methodologies, but at least science specifies its truth claims and makes them directly accessible to critique.

OOO purports to escape what Meillassoux termed correlationism, or “the thesis of the essential inseparability of the act of thinking from its content” (After Finitude, p. 36). How is this escape perpetrated? On the one hand there’s the assertion of the radical withdrawnness of objects from all interactions, which we just talked about. This idea of withdrawnness can be thought, just as the idea of objects being matter participating in pure and eternal form can be thought, or the objects’ withdrawn essences all living secretly together on Pluto can be thought. Anything can be thought; the question is whether the content of these thoughts can be validated outside of thought. By what means can anyone be assured that the content of what they’re thinking about is real? Here it seems to me that OOO’s assertion is only a negative, apophatic one: the essence of the real is that it cannot be known. Consider these two assertions: “We cannot know what objects are really like” and “The essence of real objects is that they cannot be known.” Is there any meaningful distinction between those two assertions? I don’t believe there is: with respect to the relationship between minds and the objects of thought, OOO’s radically apophatic ontology seems interchangeable with a radically agnostic epistemology.

According to OOO, objects encounter one another only indirectly and relationally. The qualities or properties of an object which another object encounters — its mass, chemical make-up, color, beauty, etc. — occur only as part of the inter-object relationship. These relational properties are not essential to the real object which, per Core Claim One, is radically withdrawn from all interactions with other objects. What Meillassoux calls the Correlation refers to one particular kind of object-object interaction, namely the interaction between a conscious mind and some other object which that mind is trying to understand. OOO thus reaffirms the Correlation — what the mind discovers about objects cannot be separated from the mind doing the discovering. Not only that, but OOO extends the Correlation to all object-object interactions: water’s encounter with the salt it dissolves cannot be separated from the water doing the dissolving, etc. This is what I would regard as strong Correlationism.

So where does that leave us? According to OOO, the Correlation is inescapable with respect to any and all properties that any object might manifest in its encounters with any other object. However, the Correlation is purportedly escaped by each and every object’s real essence, which never manifests itself inside correlational encounters with other objects. But how can I know that this withdrawn essence exists outside of the Correlation? To pose this question is, it is argued, to remain bound by the terms of the Correlation, by the relationship between real objects and my thinking about them. The real withdrawn essence is, regardless of how I think or know about it. And I’m sorry, but this just sounds too much like a statement of faith to me, some tenet I might have memorized from the Baltimore Catechism when I was a lad in Sunday school.

OOO is an evolving and complex set of ideas that goes far beyond the issues addressed in this post. I tentatively conclude, though, that the OOOlogists should consider withdrawing radical withdrawnness as OOO’s first Core Claim. Then they can either look for some other way out of the Correlation, or go ahead and build out the radically strong Correlationism that’s implicit in their inter-object relational ontology.

I’ll entertain questions or disputes or discussions, and I’m eminently willing to be schooled further on the failings of my conclusions. After that, though, I hope to exit the correlation between my mind and OOO for the foreseeable future as I pursue other concerns. Thank you, and good afternoon.

Withdrawal and Representation

I read Larval Subjects pretty regularly, so although I’m increasingly preoccupied by education-related issues lately I do keep up with object-oriented ontology as it develops. I’m not well versed in philosophy, so I interpret the OOO stuff in more empirical and psychological terms. Bearing those caveats in mind, I wanted to jot down my reactions to what Levi Bryant, in a recent post, termed “the two core claims of OOO”:

“First, OOO claims that objects are radically withdrawn from one another… OOO vigorously rejects the thesis that other objects are anything like they are perceived by us or any other object; and this for the precise reason that objects are withdrawn.”

I keep trying to understand why this theory should be deemed a form of realism. If there’s no reason to assert that an object is “anything like” the way it manifests itself in interactions with humans or with any other object in the universe, then what can philosophers possibly say about what objects “really” are? OOO is a sort of apophatic theory of objects: what we can say about real objects is only what they are not. The rationale for this core claim seems tautological: we can’t know what objects are really like because they are withdrawn. So if a scientist or a poet or a patch of dirt claims to have discovered something about what a rose is really like, the OOOlogist can discount this claim immediately: “Your claim is impossible because we already know that we cannot come to know anything about what objects are really like.” Whatever is discovered about an object is by definition disallowed as not really getting at the “real” object. Why is this realism again?

“Second… OOO argues that objects relate to one another through translation. Translation is a radically different relation than representation. If this is the case, then this is because there is no translation without transformation. Where representation is based on metaphors of mirroring where there is purported to be a resemblance between the reflection and the reflected, translation is a relation of difference. A translation is not a faithful representation of an original, but is rather a transformation of the original in terms of the system specific structure of the entity doing the translation.”

Again, I have a hard time seeing how this position can be regarded as realism. Further, I don’t understand why translation is radically different from representation. Levi says that representation, like a mirror image, is a relation of similarity, whereas translation is a relation of difference. But a mirror image is both similar to and different from the object it reflects, isn’t it? The object may be 3D while its mirror image is 2D; the object doesn’t occupy the same physical space as the mirror where its reflection appears; the mirror reverses the object along the axis perpendicular to its surface, a flip we ordinarily read as a left-right reversal; and so on. Still, despite all these differences, some important sameness is preserved in the mirror’s reflection of the object. A translation too is both different from and similar to the original. If I translate a phrase from French into English, the sounds of the words are very different, the relative positions of nouns and adjectives may be inverted, and so on, but the meaning remains more or less the same in both languages.
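
To make the “sameness within difference” concrete, here is a small sketch of my own (an illustration, not anything Levi offers): a mirror reflection is a rigid transformation that inverts handedness yet preserves every distance among the points it reflects.

```python
import numpy as np

# A mirror lying in the xy-plane flips the axis perpendicular to it (z);
# reflection is an "improper" rigid motion.
mirror = np.diag([1.0, 1.0, -1.0])

# Three points sketching a small asymmetric object.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 2.0, 1.0]])

reflected = points @ mirror.T

# Difference: orientation (handedness) is inverted.
print(np.linalg.det(mirror))   # -1.0, the signature of a reflection

# Sameness: every pairwise distance survives the reflection intact.
def pairwise_distances(p):
    return [np.linalg.norm(p[i] - p[j])
            for i in range(len(p)) for j in range(i + 1, len(p))]

print(np.allclose(pairwise_distances(points),
                  pairwise_distances(reflected)))   # True
```

The determinant of -1 is the difference; the preserved distances are the sameness that makes the reflection a reflection of this object rather than of something else.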

Now move on to human interactions with objects. If I see a red rose, the redness is a sort of perceptual translation of light waves reflected off the rose’s surface, transmitted to the sensory apparatus of my visual system and from there to my brain. The rose isn’t intrinsically red in the way I perceive it — my perception of the rose is different from the rose itself. However, the redness I perceive does correspond to or represent or translate something about the real rose, some array of information or stream of waveicles that exists independently of the eyes and brain by which I perceive the redness. Through the sequential and radical transformations from waveicle to rods and cones to nervous system to neural network to consciousness, a sameness persists between the rose and my perception of the rose. Like the mirror image, visual perception captures sameness within difference. If there weren’t a persistent sameness between the object and my perception of it, then we’d be back in the non-realism of a world in which a translation isn’t anything like that which it translates and a reflection isn’t anything like what it reflects.
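
As a toy model of that chain of transformations (my own illustration; it makes no claims about actual visual physiology), here is a pipeline in which every stage re-codes and loses information, yet the distinction that matters survives to the end:

```python
# Each stage is a lossy "translation," yet the rose/violet contrast persists.

def surface_to_light(dominant_wavelength_nm):
    # The surface translates incoming white light into a dominant wavelength.
    return float(dominant_wavelength_nm)

def light_to_receptor(wavelength_nm):
    # Crude stand-in for a long-wavelength cone: compresses to [0, 1].
    return max(0.0, min(1.0, (wavelength_nm - 500.0) / 200.0))

def receptor_to_percept(activation):
    # The nervous system re-codes the activation as a coarse category.
    return "red" if activation > 0.5 else "not red"

rose = receptor_to_percept(light_to_receptor(surface_to_light(650)))    # long wavelength
violet = receptor_to_percept(light_to_receptor(surface_to_light(420)))  # short wavelength
print(rose, violet)   # red not red
```

Nothing at the final stage resembles a wavelength, but the sameness (long versus short) has been carried through every transformation, which is all my argument requires.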

College as Classist Institution

This post summarizes a variety of data on the finances of higher education, mostly unsullied by my own editorializing.

According to this article, major public universities in the US spend around $15K per student annually, with the government paying a little less than half. The schools’ spending keeps rising faster than inflation, while the states’ share keeps shrinking. As a result, says this article, college tuition and fees have increased 500% since 1982. During that same interval, median family income increased 170%. That is, tuition and fees have risen roughly three times as fast as income. Not surprisingly, college student borrowing has more than doubled in the last decade.
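
The arithmetic behind “three times as fast,” spelled out (the 1982 dollar figure below is invented purely for illustration):

```python
tuition_increase = 5.00   # a 500% increase since 1982
income_increase = 1.70    # a 170% increase over the same interval

# Ratio of the two percentage increases:
print(tuition_increase / income_increase)   # ~2.94, i.e. roughly three times as fast

# In multiplier terms, a 500% increase means a sixfold price: tuition of, say,
# $2,000 in 1982 would now run $12,000.
print(2_000 * (1 + tuition_increase))       # 12000.0
```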

Is a college education worth it financially? Historically the answer seems to be an emphatic yes. The “wage premium” accruing to a college degree is substantial. On average, people holding a bachelor’s degree earn 78% more than those with a high school diploma, and 40% more than those with a 2-year associate degree. These income differentials have held quite steady for at least the past two decades. (From “Reducing Poverty by Aligning Policies,” by Anthony Carnevale, Georgetown U., downloadable here.) Over a lifetime, the baccalaureate is worth nearly $1 million in expected earnings beyond what a high school graduate earns (source here).
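
A quick back-of-the-envelope check on that $1 million (the baseline salary and career length are my assumptions, not Carnevale’s figures):

```python
hs_salary = 30_000   # assumed median high-school-graduate salary (illustrative)
premium = 0.78       # the bachelor's wage premium cited above
career_years = 40    # assumed working lifetime

print(hs_salary * premium * career_years)   # 936000, consistent with "nearly $1 million"
```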

Who takes advantage of the wage premium by going to college and earning the extra million? Preponderantly it’s those who already come from more well-to-do families. Looking just at the smartest kids, as indicated by the top quartile of SAT test scorers: 80% of smart kids from families with high socioeconomic status go on to a 4-year college, versus only 44% of smart kids from low-SES families. Thirty-one percent of the smartest but poorest kids don’t go to college at all, which is just about the same percentage as the dumbest (bottom quartile SAT) but richest kids. (From “Real Analysis of Real Education” by Carnevale, downloadable here.)

In brief, the rich get richer in part because they can afford to go to college, earn a degree, and earn more money as a result. The wealth gap in access to a college education is widening rapidly in the US.