Ktismatics

6 August 2008

Normalcy and Deviation

Filed under: First Lines, Psychology — ktismatics @ 3:52 pm

The most decisive conceptual event of twentieth century physics has been the discovery that the world is not deterministic. Causality, long the bastion of metaphysics, was toppled, or at least tilted: the past does not determine exactly what happens next… A space was cleared for chance.

– Ian Hacking, The Taming of Chance, 1990

In doing empirical psychology, the researcher attempts to extend the applicability of a new or existing theory to a new range of phenomena. The researcher proposes a concrete hypothesis by adapting the abstract general theory to the specific empirical situation under study. Does the researcher’s hypothesis provide a better explanation of the data than the generally-accepted alternative explanation? This question is usually evaluated statistically, by investigating whether the pattern of empirical results differs significantly from what would be expected if the hypothesis were not true.

It’s often the case that the psychological researcher is exploring new territory: the kind of data s/he collects hasn’t previously been investigated scientifically. In that case the generally-accepted alternative is known as the “null hypothesis.” Usually the null hypothesis doesn’t take the form of a precise prediction about how the results will turn out; rather, it states that the results will not deviate from what one might expect to find by chance alone, unaffected by the theoretical forces which the researcher claims will affect the results in some predicted way. But “by chance alone” doesn’t mean unalloyed randomness; rather, it means that the results are expected to conform to the statistical distribution typically found in similar kinds of data sets. This is the normal distribution, better known as the bell curve, in which most subjects cluster around the arithmetic mean while the rest tail away toward the right and left of the mean. So if the mean for one group of subjects differs significantly from that of another group as predicted by the researcher’s hypothesis, taking into account the observed amount of random variation in the bell curve, then the null hypothesis is rejected: statistically it’s very likely that something other than randomness is affecting the results.
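
Here is a minimal sketch of how such a comparison might look in practice, assuming Python with NumPy and SciPy; the group sizes, means, and spreads below are invented purely for illustration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical groups of subjects, each roughly normally distributed
# around its own mean (all of these numbers are made up for the example).
control = rng.normal(loc=100.0, scale=15.0, size=50)
treatment = rng.normal(loc=108.0, scale=15.0, size=50)

# Null hypothesis: the two group means differ only by chance variation.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"control mean   = {control.mean():.1f}")
print(f"treatment mean = {treatment.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# If p falls below the conventional 0.05 threshold, the null hypothesis is
# rejected: randomness alone is unlikely to have produced the difference.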

Why is it that, for so many measures of human performance, randomness takes the shape of the normal distribution? In grad school and in subsequent practice I don’t recall that anyone ever really asked this question, let alone answered it satisfactorily. Here are some quotes from famous statisticians and other thinkers related to the issue, as cited by Ian Hacking in his fascinating book on the history of statistical thinking.

In a given state of society, a certain number of persons must put an end to their own life. This is the general law; and the special question as to who shall commit the crime depends of course upon special laws; which, however, in their total action, must obey the large social law to which they are all subordinate. And the power of the larger law is so irresistible, that neither the love of life nor the fear of another world can avail anything towards even checking its operation. (T.H. Buckle, 1857)

The irrational approval given to the so-called Calculus of Chances is enough to convince all men of sense how injurious to science has been this absence of control. Strange indeed would be the degeneration if the science of Calculation, the field in which the fundamental dogma of the invariability of Law first took its rise, were it to end its long course of progress in speculations that involve the hypotheses of the entire absence of Law. (August Comte, 1851)

‘By chance’ — that is the most ancient nobility of the world, and this I restored to all things: I delivered them from their bondage under purpose. (Friedrich Nietzsche in Zarathustra, 1884)

Collective tendencies have a reality of their own; they are forces as real as cosmic forces, though of another sort; they, likewise, affect the individual from without, though through other channels. The proof that the reality of collective tendencies is no less than that of cosmic forces is that this reality is demonstrated in the same way, by the uniformity of effects. (Emile Durkheim, 1897)

I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by ‘the law of error.’ A savage, if he could understand it, would worship it as a god. It reigns with severity in complete self-effacement amidst the wildest confusion. The huger the mob the greater the anarchy the more perfect is its sway. Let a large sample of chaotic elements be taken and marshalled in order of their magnitudes, and then, however wildly irregular they appeared, an unexpected and most beautiful form of regularity proves to have been present all along. (Francis Galton, 1886)

Galton turning over two different problems in his mind reached the conception of correlation: A is not the sole cause of B, but it contributes to the production of B; there may be other, many or few, causes at work, some of which we do not know and may never know… This measure of partial correlation was the germ of the broad category — that of correlation, which was to replace not only in the minds of many of us the old categories of causation, but deeply to influence our outlook on the universe. The concept of causation — unlimitedly profitable to the physicist — began to crumble to pieces. In no case was B simply and wholly caused by A, nor indeed by C, D, E, and F as well! It was really impossible to go on increasing the number of contributory causes until they might involve all the factors of the universe… Henceforward the philosophical view of the universe was to be that of a correlated system of variates, approaching but by no means reaching perfect correlation, i.e. absolute causality. (Karl Pearson, 1914)

Chance itself pours in at every avenue of sense: it is of all things the most obtrusive. That it is absolute is the most manifest of all intellectual perceptions. That it is a being, living and conscious, is what all the dullness that belongs to ratiocination’s self can scarce muster the hardihood to deny. (C.S. Peirce, 1893)

19 Comments »

  1. Fascinating quotes. I’ve always felt a real sense of rebellion welling up whenever I see a bell curve and especially when that centre +/- 2SD is taken to define what is and is not NORMAL.

    Comment by samlcarr — 7 August 2008 @ 8:37 am

  2. Against what do you rebel, Sam: the statistical definition of normalcy, or your inexorable tendency to wind up within +/- 2 standard deviations of the mean?

    Comment by ktismatics — 7 August 2008 @ 10:35 am

  3. I guess both, and paradoxically it’s perhaps even worse when I fail to make ‘the grade’…

    But there really is a problem when this concept of normality/bell curves is allowed to rule in all sorts of odd situations. If one looks at a medication’s side effects for example, the fact that in clinical testing 95% of the volunteers did not get headaches from a medication in no way helps the person who does. The individual disappears and becomes completely insignificant in the face of any statistical analysis.

    Comment by samlcarr — 8 August 2008 @ 3:35 am

  4. Looking at your title, how comfortable are you with using statistics to define deviance compared to, say, a society or culture’s idea of what is normal and what is not? Scientists often mean one specific thing when they use these terms whereas your “normal Joe” probably understands it somewhat differently.

    Comment by samlcarr — 8 August 2008 @ 3:47 am

  5. “this concept of normality/bell curves is allowed to rule”

    This happens when the norm becomes normative, when “is” becomes “ought” and description becomes prescription. I’m enough of an empirical realist to think that the constructs being measured do describe some aspect of the real, but they’re also cognitive-linguistic constructs that embed the real in a web of intersubjective meaning. Inevitably, it seems, the web gets reified and mystified to the point where people regard it as directly representing the real, or as being identical to the real. E.g., I come to regard my score on an introvert-extravert questionnaire as a direct measurement of who I really am. This progression (or descent) from the real to the conceptual-linguistic to mystified socially constructed reality is an important trajectory to trace, and certainly not just in science.

    What’s uncertain about the bell curve is how it comes to take that characteristic shape. Psychologists tend to assume that it’s a matter of individuals each rolling his/her multi-sided dice and then compiling the results of many throws of the dice. The variation becomes a matter of individuals carrying differently weighted dice as well as the chance factor of actually rolling the dice. Sociologists are more likely to contend that the statistical mean taps into a socially-constructed expectation which pulls everyone toward the middle. The same data set and statistical analysis can be used to explore both the psychological and the sociological construals of normalcy. I suspect it’s what the neo-Marxists call “overdetermined,” don’t you? I.e., human nature and cultural norms often tend to reinforce each other. This overdetermination could lead one to comfort or to rebellion — perhaps the basis for constructing another personality variable?

    Comment by ktismatics — 8 August 2008 @ 7:15 am

    Do you think, e.g., that 1000 people rolling dice n number of times would result in a bell curve of some sort? Would such a curve have any meaning or connection to ‘the real’? Take that famous example of a positive correlation between ice cream consumption and muggings in Central Park… I have a feeling that we are inveterate simplifiers. We simply can’t avoid applying Occam’s razor. But as Hume pointed out long ago, what is the connection? Are we really sure that there is one and that our method will eventually ferret it out?

    Comment by samlcarr — 8 August 2008 @ 7:43 am

  7. The 1000 dice-rollers is the prototypical example of how a bell curve takes its shape. For a 2-dice game the mean will be around 7, which we know a priori to be the most likely combination (6 of the 36 combinations of 2 dice give a 7: 1&6, 2&5, 3&4, 4&3, 5&2, 6&1). The statistical distribution directly corresponds to the real of dice-rolling. Because psychologists learn the bell curve as a progression from dice-rolling, a series of independent random events that form a pattern, they tend to subscribe to the idea that the bell-curve distributions generated by aggregate research data are similar to dice-rolling experiments: independent random samples resulting from differences in innate characteristics of individuals. Sociologists probably think of it the other way around, as a way of quantifying collective phenomena that have already been observed qualitatively as societal norms, values, constraints, etc.
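
    Here is a minimal simulation of that prototype, assuming Python with NumPy; the 1000-subject sample and the random seed are arbitrary choices made purely for illustration:

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)

    # 1000 "subjects" each roll two six-sided dice once.
    sums = rng.integers(1, 7, size=(1000, 2)).sum(axis=1)

    # Tally the totals: 7 should be the most common outcome, with the counts
    # tailing away toward 2 and 12 on either side of the mean.
    for total, count in sorted(Counter(sums).items()):
        print(f"{total:2d}: {'#' * (count // 5)}")

    print("mean =", sums.mean())  # close to 7

    # Summing or averaging more independent factors per subject pushes the
    # resulting histogram ever closer to the familiar bell curve.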

    I particularly like the 2nd-to-last quote, from Pearson: all the variables collectively comprise an interconnected matrix of correlations that can never reveal metaphysical causes, nor can it account for all the variance in the world no matter how many variables get woven into the matrix. The urge to simplify per physical sciences can never be satisfied, say Pearson and Peirce — it’s the wrong paradigm for emergent phenomena like human thought and behavior. But it’s not meaningless, nor is it dissociated from the real. It’s not even something to lament, because the empirical paradigm of psychology describes aspects of the human world in ways that don’t overstep what can be accomplished. There might even be a relationship between ice cream consumption and muggings: it’s part of the complex fabric of real human life.

    The other thing I like about Pearson’s quote is its relationship to Meillassoux’s discussion of possibilities. Psychological researchers can give you the range of possible scores on any number of variables, and can assemble a battery of tests that measure a whole array of variables, but even this vast sum of possibilities doesn’t approach the total limit of the possible. It’s partly because there can never be an exhaustive list of variables — new ones can always be invented — and partly because even the sum total of a priori possibilities doesn’t prohibit the unexpected emergent possibility — that in mid-roll one of the dice will suddenly turn into an ice cream cone and the subject will eat it and mug somebody.

    Comment by ktismatics — 8 August 2008 @ 9:47 am

  8. So much for statistics! Indeed one expects a mean around 7 to emerge, and with many simulations that’s actually what happens, but then very often it doesn’t. With the ice cream one has a different problem as this ‘correlation’ apparently disappears in the winter, so the link there is expected to be warm weather, when more folks wander in parks, and more folks also eat ice cream – or so my professor told me, but then maybe there really is a link!
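
    A minimal sketch of that common-cause story, assuming Python with NumPy; every number below is invented purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented daily temperatures for a year (the common cause).
    temperature = rng.normal(loc=15.0, scale=10.0, size=365)

    # Ice cream sales and muggings each track temperature plus their own
    # independent noise -- neither one causes the other.
    ice_cream = 2.0 * temperature + rng.normal(scale=5.0, size=365)
    muggings = 0.5 * temperature + rng.normal(scale=5.0, size=365)

    # The raw correlation looks substantial...
    print(np.corrcoef(ice_cream, muggings)[0, 1])

    # ...but once temperature is regressed out of both variables, the
    # correlation of the residuals is close to zero: the link was the weather.
    def detrend(y, x):
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    print(np.corrcoef(detrend(ice_cream, temperature),
                      detrend(muggings, temperature))[0, 1])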

    On a different tack, Linnaeus did us all a great service by discovering taxonomy. I learned my biology in that delightful ‘middle age’ when taxonomists actually made up a significant proportion of the biology department but the ‘mashers’ (cell biology types) were slowly taking the reins. Now, with the advent and growing confidence of genomics, we are realizing that many apparently solid morphological relationships are actually quite vague and sometimes the result of parallel evolution. The problem is found to be most vexing in the bacteria-blue green ‘algae’ where it seems that completely different species have been happily trading genes ‘horizontally’. Another real problem is that we have very little understanding of stuff like development, and in multicellular organisms our ability to separate out genetic effects from cell interactions and the impact of the cell’s microenvironment is actually very limited.

    In any case quite generally with scientific experiments, one hopes that except for the specified ‘variables’ all else is in fact held constant, if not exactly equal. But Murphy’s Law will somehow intrude.

    Given that what we don’t know always far outweighs what we do know (or think we know) a little humility probably would make better scientists of us every time. So, though dated, I think Hume’s musings still do have a lot to teach us.

    Comment by samlcarr — 8 August 2008 @ 10:47 am

  9. “a little humility probably would make better scientists of us every time.”

    You continue to stress this scientific arrogance theme, Sam, and I don’t understand it. Working research scientists are more likely to get results published when those results falsify the current theory, so the stability of the status quo is always under assault. When a theory survives continual assault by scientific researchers, it’s bound to be a pretty strong one. As Meillassoux observed, it’s the philosophers who are arrogant in their persistent contention that they’re the ones who really know what the scientists are up to, who really understand that scientific knowledge-so-called is hopelessly confounded by Hume’s methodological solipsism. This move puts philosophers above scientists in the epistemological hierarchy. Evangelical Christians manifest the same arrogance when they contend that scientists discount creationism in all its forms because their rebellious hearts blind them to the revealed truth — of which the evangelicals are the true interpreters.

    Of course science doesn’t have access to all truth. The task Meillassoux outlines is to get continental philosophy out of its obsession with what science doesn’t know and to start working on how it is that science really does discover truths about the real world. Empirical psychology comes under the same criticism from various forms of structuralist psychology. Like language, the host of quantified psychological variables collectively constitute a correlational matrix of signifiers that have no direct connection to the real. The implication is that conscious empirical observation and cognitive cogitation, rather than giving us access to the real world, actually interpose a barrier between us and the Real. Lacanian analysis is explicit about this disjuncture between conscious cognition and the Real at the level of the individual psyche — as if we can glimpse the Real only through unintentional slips or irrational symptoms, as if only the unconscious gives us any connection to the Real. I think this overstates the limitations of consciousness in a way that parallels continental philosophy’s overstatement of the limitations of science.

    Comment by ktismatics — 9 August 2008 @ 6:48 am

  10. I think on philosophers and their interactions with science I am quite in agreement with you. But trying to argue that a statistical approach helps us to be more objective is something that I am not at all sure about. In practice we always and instinctively do calculate the odds. We always also behave and think as though cause and effect are real, that there are laws and that these are not inherently capricious.

    One problem is that when such assumptions are applied to our current state of knowledge and our currently in-vogue theories, we all too often forget that we are dealing with theories and ‘laws’ that would-could-should be overturned in the next cycle of experimental work.

    This does not mean that we abandon our method, but it does mean that we have to keep telling ourselves that it is all still a theory, perhaps one that looks unassailable at present, but that still does not push it into the realm of fact. I also think that in general there are things that we feel comfortable questioning and then there are things that we don’t. Take something like special relativity, Avogadro’s number, or the second law of thermodynamics: all are taught now as very basic science, yet will they still exist in a hundred years? I hope not.

    Newton, Einstein, and Darwin all contributed their theories and these then became paradigms. Newton’s laws are still used in practice because they work yet the theory of why they work has changed. One looks forward to the integration at some time in the future of the laws that govern the micro and the macro, but even when this is achieved, I’m quite sure we will find that a whole new set of questions will appear and we will then be wondering how dumb we were to have put up with the older theories for as long as we did AND much more to the point, our view of what is real will change. So, I am resistant to ever ‘settling down’ to thinking that science helps us to connect with what is real even though for all practical intents and purposes I behave as though in fact it does!

    Comment by samlcarr — 9 August 2008 @ 10:32 am

  11. “we all too often forget that we are dealing with theories and ‘laws’ that would-could-should be overturned in the next cycle of experimental work.”

    I responded to this objection on the last Meillassoux post I believe. Gravity is “just a theory” because it’s an abstraction from observations. Newton’s law of universal gravitation is “just a theory” because it’s a quantitative abstraction from measured observations under controlled situations. Newton’s theory of gravity changed not because it was wrong but because it was inadequate for accounting for situations farther removed from human experience; e.g., at the subatomic or galactic scale. One might want to reserve the term “fact” for the raw event, while a conceptual generalization from many similar raw events constitutes a “theory.” Science’s main enterprise is thinking up and testing theories that account for facts. Which is more useful, which provides more knowledge about the world: the fact that when I just dropped my pen it landed on the desk, or the theory that accounts for this fact?

    What would transform a scientific theory into a fact? Compiling more data wouldn’t do it, because even if gravity “worked” every time the theory of gravity remains qualitatively separated from the phenomena it explains. This is what Meillassoux wants to explore in his agenda for “speculative realism”: that a scientific law isn’t “only” a product of human thought confronting facts, but is somehow built into the universe itself. He’s holding forth the idea that mathematics are real, inherent in the world of facts. I personally think that there’s no need to assert the facticity of theory — that even if scientific theories are always couched in human observation and cognition, their demonstrable consistency and usefulness in describing facts is enough to justify the connection between fact and theory.

    A theological approach for turning theory into fact would be for God to reveal that in fact he uses gravitational laws to hold the material universe together, or that the theories are Real and the facts are just temporary material manifestations of the perfect realm of Theory. These ideas aren’t very popular these days among working scientists or philosophers.

    “I’m quite sure we will find that a whole new set of questions will appear and we will then be wondering how dumb we were to have put up with the older theories for as long as we did”

    This sounds like progress to me — or will you take the PoMo position and say that a hundred years from now scientific theories will be different from but not necessarily better than what we have today?

    Comment by ktismatics — 9 August 2008 @ 1:18 pm

  12. I don’t think that there is any need to get dogmatic about where things may or may not end up. What is real is that science is not some fixed entity but is (or should be) always in process. That unfortunately is not how we tend to think of it. We are always excited to learn about the new things that science is ‘discovering’ yet we continue to treat the whole thing as if it is a fixed body of facts.

    Getting back to your mention of theology and evangelicals, that whole world of thinking is embedded in modernity. The idea springs inexorably from a theorization of what God “must be”. Therefore we have rationality, logic, and propositional truth all flowing from this conceptualization, yet the raw data, i.e. the bible, does not ever conform to this sort of straitjacketing.

    What I do like about PoMo thinking is the conscious stepping away from an assertion of facts as objective and therefore beyond debate, and the consequent openness to the other guy’s set of possibilities. Here is something that is REAL in both theology and in science. My conception and understanding of a theory may not be at all similar to yours. Hence we have always had scientific debates even though at the time the prevailing paradigms may be well established and the internal debates are taking place on what could be considered a relatively level playing field. In theology, talking to three different Calvinists will land you with three quite different ideas of what predestination is…

    Comment by samlcarr — 11 August 2008 @ 12:52 am

  13. “What is real is that science is not some fixed entity but is (or should be) always in process.”

    This is relativism detached from knowledge, Sam — as if science were a generator of ideational trends that go in and out of fashion, that don’t make connection to the real phenomena they purport to investigate, that don’t make systematic progress in understanding those phenomena. These are the things that motivate most working scientists. By putting ‘discovering’ in irony quotes you’re implying that scientists are doing no such thing. Again, why? Don’t you think the law of universal gravitation was at least as much a discovery about the world as it was a flight of Newton’s imagination? Don’t you believe this law to be more important as a source of knowledge than any number of factual apples falling from factual trees? The “fixed body of facts” are what already exist in the material world. Scientific theories are valuable as knowledge to the extent that they explain raw facts, not just by compiling lists of raw facts.

    “yet the raw data, i.e. the bible, does not ever conform to this sort of straitjacketing.”

    Here I’d bracket ‘raw data’ in irony quotes. The Bible consists entirely of words. Words aren’t facts: they’re informal theories for describing or explaining facts. Do these words signify anything about the Real? Are Biblical words different from other kinds of words in this regard? How does one distinguish linguistic connections to the real world from the language’s subjective and intersubjective function in facilitating thought and communication? I.e., how and where does the system of signifiers that is language break out of human perception and cognition to make contact with the raw data it purports to describe? Traditional ideas about Biblical inspiration propose that the linguistic theories put forward in the Bible are true not because they reliably make connections with empirically verifiable data from the world, but because God is the source of those theories. This is how theory becomes fact in the traditional world: not through empirical validation but through the testimony of an unimpeachable authority. Hacking writes about this in his earlier book, The Emergence of Probability. He cites Aquinas:

    Since, then, the dialectical syllogism aims at producing opinions, the dialectician seeks only to proceed on the basis of best opinions, namely what is held by the many and especially by the wise.

    For Aquinas and others of the premodern era, reason was the best arbiter of dispute. If one couldn’t arrive at the reason for something on one’s own, one looked to probability. But for Aquinas “probability” wasn’t demonstrable empirical likelihood: he asserted that in demonstration one is not satisfied with the probability of a proposition. Instead, probability referred to the strength of supporting opinion. God’s opinion is always the strongest, and so what He says is completely probable. What the Calvinist theologians opine would be judged on the basis of their numbers and their wisdom — hence the authority of the church Councils, Popes, etc. for decreeing not just ecclesial practice but doctrinal truth.

    I think, Sam, that your understanding of scientific theories and paradigms is rather Thomistic, such that the weight of opinion of respected scientists determines the dominant “truth” rather than the cumulative weight of empirical probabilistic evidence. Per Meillassoux, isn’t it a form of arrogance for the neo-Ptolemaic reactionaries in philosophy and theology to assert that they know more about what scientists are really doing than do the scientists themselves? I’m sure you regard empirical scientists as arrogant when they critique the empirical paucity of evidence supporting the Bible and other theological truth claims, asserting that they know more about how religion works than do religious people. (I don’t think they are necessarily and invariably arrogant in this regard, but that’s another discussion.)

    Comment by ktismatics — 11 August 2008 @ 6:16 am

  14. I think we do agree on a lot of stuff. I’ve not read enough of Aquinas to know whether I really am a Thomist in PoMo clothing or not, but your point, I think, is pretty much what I intended to say. What gets left out of that theological discussion is where one gets one’s ideas about who/what God is. Starting out with some rather foolishly rational conjectures is what theology has long attempted to do, and look at the messes that result.

    Now, assuming that God has something to do with the purport of any of the Great religions’ source materials, one would want these source materials themselves to then inform one on what God is or is not, but that really is hoping for too much. I am actually very pleased that something like scientific study of the bible at last does seem to be under way. I think a healthy dose of scepticism is called for when trying to study the bible and all the more so if one suspects that God’s voice is going to speak through such an endeavor. The majority of even well self-educated readers of theology are fed stuff that is so circular/tautological and funnily also self-contradictory that only a feeling of mysterious rationality results and with that we have to be satisfied?

    But now we really are moving quite far off whatever it is that you are exploring here.

    I don’t think that you should link current theory or theorization within a present paradigm to an improving understanding of reality. We do our best with the best that we’ve got, but that’s not really any place to rest too comfortably. I guess I’m a skeptic, but I think it’s really more a sense of optimism that there will be radical changes in our understanding of ‘reality’ and that there should be more and more in the way of perspective-revolutionizing paradigm shifts as we go along, so let’s not get too stuck to what we now perceive as REAL.

    Comment by samlcarr — 11 August 2008 @ 6:58 am

  15. Well of course I don’t expect to convert you to empiricism any time soon, Sam. There’s no question that non-scientific factors impede the quest for knowledge about the world. When skepticism rejects even the sheer possibility of science systematically moving toward improved understanding of the world, as per your last paragraph, then it’s hard to know where to go. I presume you want more revolutionizing paradigm shifts because you think the current paradigms are shoddy, or that the way they come into existence and prominence is faulty. Since the current paradigms emerge from scientific practice, I’m guessing you think that the whole methodology of matching theory to the world is defective and in need of replacement. I suspect also that the explosion of revolutionary paradigms you anticipate will have something to do with dramatic divine revelation, but that’s pure surmise on my part. Anyhow, I’ll carry on.

    Comment by ktismatics — 11 August 2008 @ 8:29 am

  16. Perhaps I haven’t been very clear, but I am not in any sense against empiricism as such. An inappropriate understanding of probability is another matter altogether. Whatever theorizing we do has to stand or fall based purely on empirical factors, at least as far as science goes.

    I do think that given the nature of what the scientific method is, we should not be expecting at any one point to be able to say that we have figured it all out and that this “xyz” now tells us what the real really is.

    Where the other stuff about “dramatic divine revelation” enters into this discussion is a bit beyond me. I hope that I have not implied any such desire anywhere along the line. I really believe that Einstein was not humanity’s last great scientific genius and I do expect and even hope that someone will come along every now and then and think laterally enough to shake the scientific enterprise onto new paths. Invariably when that has happened in the past our response has been “of course, that’s so obvious, why didn’t I think of that”, and reality is then perceived differently. Incidentally, my putting ‘discovery’ into single quotes is because what is involved is rarely new data; more often than not it’s looking at existing data from a different perspective.

    So while I am all for empiricism I also want us to always be realistic about the reality that there’s a lot of stuff that we don’t know and so a lot of stuff still left to find out, and rather than being overly worried about what we now think of reality, let’s get on with the very empirical scientific task of finding out what needs still to be found.

    Comment by samlcarr — 12 August 2008 @ 12:19 am

  17. “An inappropriate understanding of probability is another matter altogether.”

    Yes, I’d like to write a separate post on this subject.

    “Whatever theorizing we do has to stand or fall based purely on empirical factors, at least as far as science goes.”

    Generally we’re in accord, with a couple of caveats. First — and I’m not disagreeing with you here, just clarifying — scientific theories don’t emerge directly from the empirical evidence; they’re ideas for explaining the evidence. Theories are made out of words and numbers and logico-mathematical operations. I believe these components of theory are human constructs that derive their meaning in part from their interrelation with other such constructs, as the idealist and structuralist philosophers argue. But I think the words, numbers and operations refer also to the world itself. Meillassoux thinks so too, and wants to mount an argument that the mathematical part of theories is built into the world itself. I’ve not heard an argument for this position yet, but to me numbers and equations are not unlike words and grammar: they refer to the world in itself in ways that human minds can comprehend but that aren’t built into that world.

    Karl Popper insisted that scientific method can falsify theories but can never verify them. In doing so he’s recognizing that theory is permanently separated from the empirical phenomena it tries to explain, that there can never be a perfect correspondence between the human logico-mathematical construct and the thing-in-itself. I agree. However, when a theory is falsified by data it leaves a vacuum that must be filled by a revision or replacement of the old theory that better accounts for the data. In this way the theories get better — more fully accountable to the empirical evidence — without ever transcending themselves into becoming knit to the fabric of the material universe itself.

    “I do think that given the nature of what the scientific method is, we should not be expecting at any one point to be able to say that we have figured it all out and that this “xyz” now tells us what the real really is.”

    You’ve repeated this position a few times now. As I said, this allies you with continental philosophy and Christian theology. The questions remain: if science makes progress in arriving at ever-more-comprehensive descriptions and explanations of how the material world operates, in what way can this progress be characterized? Is it solely a matter of pragmatism: it lets us manipulate the world more predictably, without getting any closer to understanding the world-in-itself? That’s generally what I’ve believed, but pragmatism turns science into a matter entirely evaluated in terms of human cognition and action. The method is built on testing cognition against information it acquires from the world, stripped as far as possible from biases of human perception and cognition. So there’s something more to the method than just enhancing the instrumental utility of the ideas.

    I’ve posted before about Donald Davidson’s triangle of self-world-other. He doesn’t want to transcend human being-in-the-world, but he also asserts that all three sides of the triangle are integral. We don’t know ourselves or one another except through our relations with the world we’re all part of. And I’ve also posted on Tomasello’s empirical investigations of language acquisition, the results of which support Davidson’s theoretical position. So within the context of the triangle, is it possible to separate out definitively what’s true about the world from what’s true about our understanding of the world? I doubt it, and I think Meillassoux doubts it. He wants a philosophy of scientific knowledge that’s closer to scientists’ own understanding of the knowledge they gain from the methods they employ. And it’s knowledge about the world that scientists are after.

    “Where the other stuff about “dramatic divine revelation” enters into this discussion is a bit beyond me.”

    Yes this was an unwarranted leap of faith on my part. However, Meillassoux says that some sort of divine revelation or fideism despite empirical evidence is all that remains if some more direct link between empiricism and the world in itself cannot be established. Maybe I’ll post on that too if I can summon the energy and discipline.

    Comment by ktismatics — 12 August 2008 @ 6:25 am

  18. Probably I’m just being a stick in the mud here, but it seems to me that you are looking for a 1:1 correspondence between theory and data, where it may not exist. A theory after all is something that we come up with. A theory is a mental construct that in experimental practice is more like a simulation and we can hope that our simulation utilizes and accounts for the data that we have available.

    There could simultaneously be more than one theory that seems to integrate the data equally well. If the rival theories cannot be reconciled then one is left with a classic dilemma. It is such dilemmas that drive us to think differently, to devise experiments that will settle the issue one way or the other or perhaps whose results make the apparent conflicts cease to exist; or else a new theory ‘pops out’ that altogether replaces the older ones.

    Popper’s assertions on the value of falsifiability have been questioned, and in any case they look to me like a rather silly way to get around Hume’s basic critique. All Popper is saying is that ‘you can’t ever prove anything’, which is something that scientists have always known. Being able to falsify a theory sounds good but it doesn’t take you very far.

    There is always some stubbornly uncooperative data or other. Looking harder at the data that refuses to “fit in” can be very helpful, as the successful integration of this data can be the key to discerning the better option, or at the very least will remind us that we still have a ways to go and that we had better not get complacent.

    Comment by samlcarr — 13 August 2008 @ 5:11 am

  19. “it seems to me that you are looking for a 1:1 correspondence between theory and data”

    It’s a much more modest undertaking than that, Sam. The position Meillassoux responds to is one that questions any connection between theory and data. The disjunction is a double one: first, theory is a mental construct rather than something integral to the phenomena being theorized about; second, data constitute the way the world presents itself to human perception rather than what the world is in itself. This double disconnect produces a radical skepticism about the truth value of any knowledge we might acquire about the world through experience and reflection, science being merely a more systematic way of going about it. Do we have to invoke something transcendent to bridge the double gap; e.g., a non-trickster God who verifies that the world really is how it presents itself to us (Descartes), or perceptual and cognitive categories that are what they are because they correspond directly to the categories inherent in the world (Kant)? Is it possible for philosophy to assert at least some non-zero correlation between empirically-derived ideas about the world and the world itself that doesn’t rely on divine revelation or fideism? If so, then maybe that correlation can gradually be improved; i.e., scientific knowledge can make progress.

    Comment by ktismatics — 13 August 2008 @ 5:39 am

