Wednesday, May 31, 2006

Mind over matter

I'm very slowly making my way through the most recent Skeptics' Circle, since lord knows I have nothing else to do besides get rejected (and pre-rejected) for jobs. In the course of this, I found this post dealing with a condition called "electrical sensitivity," in which people manifest physical symptoms as a supposed response to the presence of electromagnetic fields. Anyone who keeps up with the latest "secret killer" quackery can guess what those symptoms are: fatigue, nausea, dizziness, palpitations, difficulty concentrating. About the same symptoms as "aspartame poisoning" or any of a number of faddish, overdiagnosed food and chemical sensitivities. About the same symptoms that everyone suffers from sometimes, and that opportunists can therefore exploit.

A set of symptoms, incidentally, that tend to overlap with the symptoms of a somatization disorder.

The writer of the electrical sensitivity post is well aware that this disorder is bad science -- he's writing about it on a blog called Bad Science. And he does an admirable job bringing up important doubts about the validity of the (statistically very few) experiments that support the existence of electrical sensitivity -- it's really thorough and interesting, and I haven't even plunged into the 159 comments yet. But he's not quite willing to go so far as to say these people are somatizing. He writes: "I don't think people who report being hypersensitive to electrical fields are hypochondriacs: I think they have real and distressing symptoms, but I also think, in the light of the evidence above, that electromagnetic fields probably aren't causing those symptoms, and they may remain unexplained for the moment." And later: "Calling people who believe they experience symptoms because of electromagnetic fields 'hypochondriacs', weak-minded, or oversensitive: is offensive, unhelpful, and most damnably of all, untrue. Denying the reality of people's symptoms is similarly offensive and unhelpful. I will give no quarter on this."

Now, I'm all in favor of trying not to offend large swaths of the population, particularly if that means going easy on people who are sick. But I have the same issue with this as I've had with all the chronic fatigue syndrome or fibromyalgia sufferers who expressed indignation and rage at being told their disorders are psychosomatic. See, it's one thing when you're using "hypochondriac" as a catch-all dismissive term, or when a doctor says "it's all in your head" as a synonym for "I'm not going to treat you." This is unacceptable. But is it really helpful to completely rule out the possibility that these symptoms are psychosomatic? It's amazing the effect that the brain can have on the body -- in fact, the brain is part of the body, so actually it's not that surprising. Mental anguish can cause real physical symptoms, and they're no less real for originating in a part of the body that also governs emotions and memories. We don't do patients any favors by pretending that physical symptoms can't have psychological roots.

Decrying people who sneer "hypochondria" at genuinely suffering patients is one thing. But rejecting the possibility of somatization just because some people use "psychosomatic" and "fictional" interchangeably -- well, that's just not helping anyone. An illness can be simultaneously psychosomatic and very much physically real. That's what the "somatic" part means, after all, but people usually stop at the "psycho." I'm all for patient advocacy, but I don't think "ignoring explanations that might hurt the patient's feelings" qualifies as advocacy. We need to get rid of the stigma surrounding the word "psychosomatic," and the idea that it is synonymous with "fake" or "whiny" or "not physical." Because when you rule out that possibility, anyone who has a somatically real, psychologically-based illness will not only continue to suffer, but will probably go through numerous unnecessary, expensive, and often embarrassing tests attempting to find a physical reason for their pain. Until we're able to treat somatization as a potential explanation, we're going to have a lot of people suffering for nothing.

Monday, May 29, 2006

College students and science fatigue

I've been out of commission over the weekend, as I was traveling to Ithaca for my sister's college graduation. It seems appropriate, then, to return to blogging with a consideration of an article called "Why American College Students Hate Science", from the Times opinion section. In this piece, Brent Staples considers the education model of UMBC's Meyerhoff Scholars Program, which seeks to engage students in science through realistic hands-on laboratory research:
The students are encouraged to study in groups and taught to solve complex problems collectively, as teams of scientists do. Most important, they are quickly exposed to cutting-edge science in laboratory settings, which demystifies the profession and gives them early access to work that often leads to early publication in scientific journals. At the same time, however, the students are pushed to perform at the highest level. Those who earn C's, for example, are encouraged to repeat those courses so they can master basic concepts before moving on.
The article never really answers the question its title asks; Staples seems at one point to be arguing that viewing basic science classes as "weed-out" courses is hurtful to the discipline, but he changes horses partway through in order to discuss institutional racism. Early in the article, he writes that "large numbers of aspiring science majors, perhaps as many as half, are turned off by unimaginative teaching and migrate to other disciplines before graduating," which is a plausible claim but is presented without evidence. In short, while this is an interesting case history of one school's attempt to combat science fatigue, it doesn't present a coherent answer to the question of why college students hate science in the first place.

That said, I actually think it's the question, not the answer, that's important. And asking this question brings up a few more: Do American college students really hate science? Do they hate it disproportionately to how much they hate other disciplines? If so, what can we do about it? If they hate all college education, what's to be done about that? And should it change our plans for improving science education, if we find that students' educational antipathy is not discipline-bound?

PZ Myers recently sparked a discussion of the relative worth of the sciences and the humanities. The post began by refuting an uninformed claim about science being "overrated," but the comments delved deeply into the reception history of science, its place (versus humanities) on the academic hierarchy, and the spuriousness of the "science versus humanities" distinction in the first place. One image that came up over and over was of the English major with the barely-concealed hostility towards science and math, or the engineer who considers literature to have no value. Both of these descriptions are (on average) caricatures, but neither is a fabrication. There is a nearly inexplicable sense of mutual uselessness and incomprehensibility between science and humanities fields. This, to me, seems to be Thing One in why college students hate science: we haven't made them understand that accepting science doesn't mean rejecting literature, philosophy, or aesthetics.

And speaking of philosophy, Thing Two is that philosophy of science courses aren't required to earn a science degree. A physics major, for instance, could graduate knowing how to calculate acceleration due to gravity, but lacking the equally basic understanding that scientific theories must be falsifiable; they might know the effects of a particle accelerator but not of a scientific revolution. In other words, they can easily graduate knowing all of the what but none of the why. And while this might qualify them to contribute to scientific journals, it doesn't qualify them to contribute to public scientific understanding. And it certainly doesn't qualify them to want to. This problem is not unique to the sciences, by the way; the very real anti-science sentiment among many humanities scholars means that literary theory can't account for the influence of the current scientific zeitgeist, and whole aspects of the reading and writing individual's world experience are almost completely ignored.

Mind you, I think it's possible, even likely, that the academic sciences are suffering due to general student ennui. I've certainly noticed a tendency, among the students I've encountered, to consider a college education as a token labeled "Redeemable for One (1) Lucrative Job Immediately Upon Graduating." This could easily lead them to turn up their noses at the sciences, hard ones no less than social, as well as at art or English. Figuring out how to instill some kind of fundamental joy in learning might be step one. If what your students want out of college is essentially a professional certification, they may not find Human Evolutionary Genetics any more desirable than Early Modern Poetry.

But there do exist students who believe, or can be convinced to believe, in the fundamental value of education. And these students are ill-served by the unnecessarily strict divisions, often reinforced by campus geography, between the humanities and the sciences. They get the impression that in declaring a major, they are declaring a personality -- are they science and math types, or poetry and philosophy types? Sure, they are encouraged to sample from the other disciplines, but this is presented as Science People needing an English course to graduate, or Humanities People fulfilling a "natural sciences" requirement for Latin Honors. The idea that you don't need to pick a turf isn't seriously considered. So while scientists badly need to understand philosophy in order to be thoughtful researchers, and the humanities need to graduate beyond 19th-century pop psych if they want to regain relevance, there are no adequate models for stepping off those turfs onto neutral ground.

Sunday, May 21, 2006

What the cool kids are reading

For those of you not in the ultra-cool loop, Laura and I have started a joint blog. The inspirations were our twelve-year love triangle with formal logic, and our slightly more recent liberal rage. I'm also very proud of the site design, so go appreciate it, and make all my labor worthwhile. (It may look simple, but I guarantee it was difficult to pull off the CSS!)

The most extensive post so far is our response to Richard Cohen's critique of Colbert, in which we attempt a modified reductio ad absurdum -- we bring the reduction, Cohen brings the absurdity.

They blow up bathtubs so you don't have to

Imagine that you're flipping through the channels, late at night or maybe in that stretch of weekend afternoon when nothing's ever on. Maybe it's too early for the good cartoons and too late for "What Not to Wear" reruns, but dammit, you don't want to grade. So even though you usually stop pushing the "channel up" button once you hit Comedy Central, you decide to keep surfing, up into the 70s and (gasp!) the 80s. And there, on channel 81, a bathtub is blowing up. In fact, a bunch of dental hygienist's nightmares in lab coats are throwing tiny vials of alkaline metals into various tubs, and watching the resulting depth charge.

Such was my introduction to "Brainiac," quite possibly the best thing to come out of England since Douglas Adams. It occurred to me today that there might still be people out there who haven't seen this demented science program; while it might not be up in the nosebleed channels where you live, it does reside (in the US) on the gamer channel G4, which is more regularly host to video game cheat code shows and "Star Trek: TNG." If you only watch a handful of channels, like I do, it's entirely possible that it flew right under your radar, and that's a crime.

"Science Abuse" is the "Brainiac" tagline, and it's certainly their philosophy. This motto makes no pretenses of rigorous experimental protocol, and indeed the tests are often hideously flawed -- today, for instance, they tested male versus female color perception by having yellow-suited people throw tennis balls from in front of a yellow wall, and seeing which sex dodged the projectiles better. It doesn't take an external review committee to see that this is a test of third-grade playground acumen, not color perception. But "Brainiac" is for fun, not for peer review -- and it is fun. Their primary modus operandi is blowing things up (often trailers), and sometimes having hot girls rate the explosions. If you like guessing whether pouring potassium chlorate on candy will result in a fizz or a bang, the show delivers; if you've always wondered whether napalm will destroy a black box, they'll let you know. But there are also useful non-explosive segments, such as "Tall v. Small" (is it better to be a giant or a midget when, say, walking down Oxford Street?), "101 Uses for a Wee" (self-explanatory), and "Which Fruit Floats?". Plus you can enjoy watching hapless young Brits trying to ascertain which amusement park ride causes the most stress (the roller coaster), whether staring at breasts for half an hour is equivalent to a workout (yes), which sock material is the slipperiest (wool), and what you should use if you have to make a DIY bungee cord (underwear elastic). Some of their methodology wouldn't pass science fair muster, but it sure is fun to watch.

If by some chance you've missed "Brainiac," don't miss it anymore. This is what happens when you let nerds run free, and it's wonderful.

Saturday, May 20, 2006

Take your HoloCreations to the level of the ridiculous!

If you were deathly bored or confused by my quantum posts, perhaps you'd be more excited learning How You Can Use HOLOGRAPHIC CREATION to Easily Manifest Your Desires, even if you lack Visualizing Skills!

If you're a physicist who appreciated the attempt to break down the tricky but fascinating tenets of quantum, despite the fact that my lack of math knowledge was palpable, reward yourself by learning How a simple change in the way you think about time and space makes it easier to create what you want (Chapter 13), How to get yourself into the "Harmonic Zone" of Creation even when you don’t feel like it! (Chapter 12), and How to improve the lives of millions through taking your Extreme HoloCreations to the level of the ridiculous! (Chapter 11).

If you're a philosopher who's steamed that I tried to provide a clear, if simplified, lay analysis rather than waving my hands at the difficulty and ambiguity of subatomic mechanics, perhaps you'd prefer to Actually Learn how to Create Baby Holograms of your desires out of light and sound, and then learn how to send these Baby Holograms into the soil of the Universe where they will grow into Full-Size Holograms that you will meet in Physical Existence!

If you enjoy false dichotomies, you may be pleased to know that Either this is a universe with a purpose, or this is a universe without a purpose. This book is for those who know the universe has a purpose!

Yes, it appears that if you're unwise enough to send emails referring to such things as "superfluid vortices," Gmail will reward you with a sidebar ad entitled "Quantum Mechanics and You? How You Can Master Quantum Time To Gain Extreme Wealth And Success!" The word "quantum," here, is of course worse than meaningless, especially in the context of "mastering quantum time." For one thing, the difficulty of observing (let alone mastering!) quantum anything is part of the reason quantum mechanics is so fabulously slippery. For another, there's not really any such thing as "quantum time." Still, this could have been little more than a semantic quibble, just an objection to the uninitiated using technical terminology (in the same vein as "dammit, Scott Bakula was not making quantum leaps!"). But they're selling this snake oil, and it doesn't come cheap. You can get a Personal HoloCreation Consultation for $50 per half hour ($90 for a full hour), and then there's the book, priced at a very reasonable $28.19:
P.S. - I Create Reality: "Beyond Visualization" How to Manifest Your Heart's Desires With Holographic Creation will be priced at $49. I can't guarantee how long the introductory price of $28.19 will last, so act now!

P.P.S. -- Make the decision:
If you don't purchase this book, where will you be weeks from now? Probably right where you are now. Positive change comes from new ideas. Get the success you want through the motivation and specific instruction provided in this book.

Wouldn't you like to have increased energy, more focus for goals and dreams, and a new ability to create what you really want? To make your life different, you must start to do things differently. Start today.

Act now! Buy now! Your satisfaction is guaranteed. Click Here NOW

Note -
Increase your wealth immediately by promoting this book! I have an affiliate program which currently pays 60% commission for sales. This means you make more than I do for every sale you refer. I have some affiliates who sell more books than I do!
In other words, even if you have to spend 50 bucks on this miscarriage of science, you need it, because otherwise you'll never do anything with your life... oh, and by the way, join our pyramid scheme.

Orac at Respectful Insolence posted a while ago about "celebrity nutritionist" Don Lemmon, who has gotten filthy rich off of hawking "internal cleansers," "fat burners," and "glandular therapy complex" (which is just what it sounds like). In response, he got a slew of gland-addled gullibles pointing out that, in essence, "Don Lemmon is buffer than you, QED." (This is very close to one of my favorite fallacies, argumentum ad baculum -- nothing to do with Bakula this time.) And I'm sure someone could point out that this Quantum Time Master is better at Creating Baby Holograms than I am -- I certainly wouldn't argue. But whether or not the person embodies their own tenets isn't the issue. This isn't Bill Bennett's gambling problem. The issue is that these people use incomplete science as a false justification for their extortionist activities. They are literally trading on public ignorance.

It's one thing to misrepresent your activities as science-based for fun; it's quite another to do it for profit. I made a comment on Orac's post to the effect that nobody cares what nonsense you believe in your spare time; it's when you start raking in dough from the weaker-minded that science fans and rational thinkers get pissed. Just ask Penn and Teller. Is there jealousy at work here, as Lemmon's minions [m]indicate? Well, sure these pseudoscience barons are probably richer than we are; preying on the ignorant has always paid more than trying to forward the cause of knowledge. But it's not like we're actually coveting those dirty dollars. We just don't like seeing people kept ignorant just because it's lining someone else's pocket.

I don't blame Google, though. They could have linked the ad to emails about "not knowing what to do with my life" or "personal stagnation" or "too few holographic mindbabies." Instead they evidently linked it to phrases like "superfluid vortices," thus ensuring that the people who saw the ad would probably be immune to its pseudoscientific posturing. If only everyone were doing their part to make sure such travesties remain as good inside jokes for the knowledgeable, instead of money pits for the dumb.

Thursday, May 18, 2006

I wish I could quit you, chimpanzee

Lynne sent me a link to this New York Times article on human-chimpanzee species divergence, which has suddenly become hot property in the blogosphere -- way to ride the cutting edge, Lynne! Coming right on the heels of the completion of the Human Genome Project, biologists at the Broad Institute have theorized that human and chimpanzee genomes indicate not one, but two species splits. Though we haven't looked on the moon yet, we have been finding older and older fossil skulls with humanoid characteristics. But this recent comparison of human and chimp genomes, which seems to be set apart largely by its immense sample size, implies a gap of something like 1.6 million years between the age of the oldest humanoid fossil and the human/chimp species divergence:
The analysis, by David Reich, Nick Patterson and colleagues at the Broad Institute in Cambridge, Mass., sets up a serious conflict between the date of the split as indicated by fossil skulls, about 7 million years ago, and the much younger date implied by genetic analysis, as late as 5.4 million years ago.

The conflict can be resolved, Dr. Reich's team suggests in an article published in today's Nature, if there were in fact two splits between the human and chimp lineages, with the first being followed by interbreeding between the two populations and then a second split.

...

Hybrid populations often go extinct because the males are sterile, Dr. Reich pointed out, so hybrid females may have mated with male chimps to produce viable offspring. The human lineage finally re-emerged from this hybrid population, Dr. Reich suggests, explaining the younger genetic dates, while the very early fossils with humanlike features may come from the earlier period before the hybridization.
In other words, the phenotypic split doesn't line up with the genotypic split, and we've got primates with human-looking heads running around before genetic evidence of speciation. Reich's interpretation is that the early fossils don't indicate an independent species, but two related species that regularly interbred to improve adaptation and fertility -- manpanzees would have combined the adaptive traits of chimps and people, but females might have had to back-breed with chimps instead of their sterile hybrid cousins. The chimp and chumanzee lines might thus have been intertwined for a good long time.

There have been a number of blog responses to this study. Jason at Evolutionblog makes a preemptive strike against IDers who might misuse the findings by claiming that "Darwinists" are ignoring the inconsistencies in their data. Jason calmly explains the difference between phenotype and genotype, and how we can use those concepts to understand the disagreement between fossil records and gene records -- it's too smart for IDers, but it's pleasant for the rest of us to read. Carl Zimmer at The Loom draws a comparison to the ethical quandaries surrounding human-animal chimeras; his definition of "chimera" differs from what I'd understood it to mean, but Carl is one of the best science writers out there, so he probably knows what he's talking about. Razib at Gene Expression explains why it's not absurd to imagine a human-chimp hybrid. And John Hawks, who seems to be the only one among us who's actually gotten to look at the pre-pub paper, takes serious issue with the science. Oh John, you're ruining all the fun we've been having, breaking out the portmanteau words like "humanzee" and "manimal." But these are serious objections, especially the one about insufficient citation of previous studies. John argues that "The result was entirely anticipated by earlier work on smaller datasets. There's nothing new here, other than the addition of more data," and on this one I disagree -- the addition of more data is a significant improvement, and there's not a huge difference between "the result was entirely anticipated by earlier work" and "the result confirms earlier work." Most of the critiques are very informative, though; for instance, John implies that the study is claiming "hybridization after a long prior differentiation," which is not what I'd understood after reading the NYT piece, and is harder to imagine. Go to his blog and pick up your grain of salt before you get too into the idea of our chumancestors.

Finally, there's Lynne's take on the article, from her email:
My favorite part:

' "If the earliest hominids are bipedal, it's hard to think of them interbreeding with the knuckle-walking chimps — it's not what we had in mind," said Daniel E. Lieberman, a biological anthropologist at Harvard.'

To me this sounds like, 'Chimps are totally not sexy enough for humans!' which I think is pretty hilarious.
This, unsurprisingly, is my favorite interpretation of the data. And here's the kicker... not only did chimps manage to overcome their knuckle-walking awkwardness long enough to seduce some hominids, but they actually maintained a pretty stable relationship -- the lines didn't fully split for over a million years, and hell, that's longer than most marriages. Those humans just kept coming back! Plus, this all happened millions of years before any primate learned to wear shoes. Hairy, knuckle-dragging chimps can get laid without shoes? What have we women been thinking all these years?

Wednesday, May 17, 2006

"The moon's just right upstairs"

This morning I woke up to a very cool -- and amusingly sound-edited -- NPR report about looking for life on Earth on the Moon. I realize that's a confusing combination of prepositional phrases, but the idea is this: We have a billion-year gap between the birth of the planet and the first chemical traces of life in fossils. This is not necessarily because there was no life on the planet, though -- it's because rocks that old effectively don't exist anymore. We're talking about rocks from before the Earth fully cooled, which means they've been melted and pulverized and folded and spindled and anything else that can happen to you in four and a half billion years or so. Not the best conditions for seeking the very earliest traces of life.

But Earth has, in our only satellite, a nice little attic storage space. Asteroids hitting the earth in these early days -- and they did that a lot -- could easily have kicked up material from the infant earth, splashing bits of it all over the lunar surface. In fact, there are potentially millions of tons of Earth rock on the Moon, according to Dr. Peter Ward at the University of Washington. (I'm not sure why they interviewed him, rather than the two graduate students conducting the study, but I guess that's academia for you.) Mind you, we're not talking about easily recognizable, conveniently labeled chunks of material. For one thing, the Earth-to-Moon meteorites were probably crushed to powder in their fall, with no atmosphere to slow them down. For another, it's not immediately obvious how we would tell terran material apart from lunar material, though according to this report there are several possible chemical parameters for differentiating the two. And there's no obvious Earth material in the lunar rocks we've collected so far. But there's evidence that if we knew what to look for, we'd be able to find pristine (if powdered) Earth rock dating nearly to the birth of the planet.

What would that get us? Well, we often assume right now that life began to emerge after the planet was cool and heavy bombardment by asteroids had tapered off. But that's partly because we don't have any reliable records from before then. If signs of life appear in these relics from Earth's first billion years, then life may happen more easily than we thought. It would indicate that the conditions necessary to support life are not as tricky and specific as we might have thought. After all, we knew our ancestors were hardy little buggers -- otherwise they wouldn't have lived long enough for a lineage -- but hardy little buggers who can live on an unfinished planet under a hail of space debris? That presents a different picture. It makes us even bigger badasses, and perhaps more importantly, it lends credence to the idea that life could easily have arisen on other planets, under other -- maybe tougher -- conditions. Statistically that's pretty much a no-brainer, but if life is easier to produce than we'd imagined, alien life may be not only likely but frequent.

And all we need to do to find out is check upstairs.

Quantum awesomeness part deux

So I've been preparing for exams this week, but I promised a second half to the quantum mechanics post, and a second half you shall have. When we last saw our intrepid photons, they were allowing us to "see" an obstruction 25% of the time, by coming out the dark port instead of the light port. Fifty percent of the time, however, they blew everything up. Suffice to say, these are not ideal odds. Can we find some way to improve them?

Here's where the quantum Zeno effect comes in. As you might guess, this is named after the Greek philosopher who was constantly making arrows get stuck in flight, keeping Achilles from overtaking the tortoise, and in various other ways ascertaining that it's not possible for anything to get anywhere. As you know if you took any sort of pre-Socratic philosophy course, Zeno is hard to refute logically; it's difficult to argue against the idea that an arrow in flight isn't going anywhere because at each individual instant it's stationary, or that in order to get from point A to point B, you have to cover half the distance, then half the remaining distance, then half the remaining distance, ad infinitum. The best way to refute it, in fact, is simply to point out that this isn't what happens. But with quantum particles, this kind of logic holds much more sway -- remember, if it's logically necessary that a particle be doing something, it does it.

So in order to make a photon bend to your will, all you need is to make it logically necessary that it be doing what you want it to do. How to do that? Well, first it's important to establish that light can be either horizontally polarized or vertically polarized (the light waves oscillate either side-to-side or up-and-down). You can set up your photon detector (like D-light and D-dark in the earlier experiment) behind a horizontal or vertical polarizer, which will transmit similarly polarized light but absorb perpendicular light. Thus, if you shoot a horizontally polarized photon at a detector, you'll detect it, but if you interpose a vertical polarizer, it'll be absorbed (vertical polarizers transmit vertical light and absorb horizontal light) and you won't see it at all.

Now imagine that before the detector, you place something that rotates the polarization of light 90 degrees (see image, once again ganked from this SciAm reprint). Now, when the photon gets to the detector, it's vertically polarized, it doesn't get absorbed, and you can detect it again. Replace the vertical polarizer with a horizontal polarizer, and the photon disappears again -- it starts out horizontally polarized, but by the time it reaches the end it's been rotated to vertical, and the horizontal polarizer absorbs it. So far so good. Now suppose that instead of one rotator that rotates the polarization 90 degrees, you have six rotators that do 15 degrees each. Same end effect -- a horizontally polarized photon will be vertically polarized by the time it reaches the polarizer, and it will get absorbed.

Here's where it gets awesome. In between each of the six rotators, we put another horizontal polarizer. When the photon goes through the first rotator, it's only rotated 15 degrees, so it's still pretty horizontal; this means that it has a low chance of being absorbed by the first polarizer. A 6.7% chance, in fact -- SciAm says it's determined by the square of the sine of the rotating angle, and who am I to argue? The important thing here is that the chance is small, and as the angle gets smaller, the chance of absorption decreases too. Accordingly, it has a 93.3% chance of passing through the polarizer -- and if it passes through, it must be horizontally polarized, because horizontal polarizers only transmit horizontally polarized light. When the photon goes through the next rotator, then, it is again only rotated 15 degrees off of horizontal. This means that if the photon makes it through all six rotators and all six polarizers, it is still horizontally polarized -- even though it's gone through 90 degrees worth of rotation. There's only a two-thirds chance of this occurring, because each polarizer represents some small chance of absorption, but as we increase the number of rotators and polarizers, that chance goes up (because the rotation angle, and thus the chance of absorption, goes down).
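(If you want to check that two-thirds figure yourself, here's a quick back-of-the-envelope sketch -- mine, not SciAm's -- in plain Python. The only thing it assumes is the rule quoted above: each polarizer absorbs the photon with probability equal to the square of the sine of the rotation angle.)

```python
import math

def zeno_survival(n_stages, total_rotation_deg=90.0):
    """Chance that a horizontally polarized photon makes it through
    n_stages rotator-plus-horizontal-polarizer stages, where each
    rotator turns the polarization by total_rotation_deg / n_stages."""
    step = math.radians(total_rotation_deg / n_stages)
    # Absorption chance at each polarizer is sin^2(step),
    # so the chance of passing each stage is cos^2(step).
    return math.cos(step) ** (2 * n_stages)

for n in (6, 12, 60, 600):
    print(n, round(zeno_survival(n), 3))
# 6 stages  -> ~0.66 (the two-thirds chance above)
# 600 stages -> ~0.996 (creeping toward 1 as the angle shrinks)
```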

So what happens when we integrate this experiment with the earlier one, where we were detecting obstructions? Well, imagine that instead of sending photons through a regular beam splitter, you send them through a polarizing beam splitter -- it transmits horizontally polarized light and reflects vertically polarized light. When the photon is released, it first goes through a rotator, then into the beam splitter; the beam splitter sends horizontally polarized light one way and vertically polarized light the other way, then they both bounce off mirrors, recombine at the beam splitter, and go on. It's pretty much like the setup we saw in the last post. If you keep the photon in the system for several cycles, though -- six cycles if it's a 15 degree rotator, for instance, or 12 if it's a 7.5 degree rotator -- then you'll end up with a photon of the opposite polarization. It's exactly like sending it through six (or twelve) rotators in a row. As with the other "seeing in the dark" experiment, you don't have to know or care whether the photon got sent down the vertical path or the horizontal path or a little bit of each. The likelihood is that it probably did both -- doesn't matter, though. What matters is that you put one horizontally polarized photon in and get one vertically polarized photon out.

What happens, though, if there's an obstruction in the vertical-polarization path? Well, it's almost exactly like interleaving the polarizers with the rotators. Horizontally polarized photon enters, gets rotated 15 degrees (say), and hits the polarizing beam splitter. It has a 6.7% chance of the beam splitter considering it to be vertical and reflecting it, just as it previously had a 6.7% chance of the polarizer considering it to be vertical and absorbing it. If the photon is reflected, it'll hit the obstruction and be absorbed (or exploded, depending on how demonstrative you want to get). If not, it will be transmitted, going on the horizontal-polarization path -- but if it does this, it must be horizontal, because only horizontally polarized light would be transmitted instead of reflected. So you get a quantum Zeno effect: the photon, despite being repeatedly rotated, either gets absorbed or remains horizontally polarized. And there's only a 6.7% chance of the former each time, and less if the angles are smaller and the cycles more frequent.

After the requisite number of cycles, you let the photon out and check under its tail. If it's vertically polarized, there's no obstruction, because there was no Zeno effect; the photon got rotated a total of 90 degrees, and its polarization is a total of 90 degrees off from what it was before. If it's still horizontally polarized, though, it was still rotated a total of 90 degrees, but its polarization is exactly what it was before -- Zeno effect. This means there must be an obstruction, since you only get this effect when there's a chance of the photon not making it through. The photon you're detecting never touched the obstruction, or you wouldn't be detecting it (it would be absorbed/exploded). But the obstruction must be there, or the polarization would have changed. So again, you can "see" something that no light has ever touched, but this time you can get your effectiveness damn close to 100% just by decreasing the angle of rotation and increasing the number of times that the photon cycles. With every decrease, there's a higher chance that the photon makes it through even when there's an obstruction -- but if it makes it through without changing polarization, the obstruction must be there.
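(Again, a little toy simulation -- my own sketch, not anything from the SciAm piece -- in case the logic is easier to follow as code. It assumes the obstruction acts like a measurement each cycle, absorbing the photon with probability sine-squared of its accumulated rotation, and that with no obstruction the rotation just piles up undisturbed.)

```python
import math, random

def run_photon(n_cycles, obstruction, total_rotation_deg=90.0):
    """Follow one horizontally polarized photon through n_cycles of
    rotate-by-a-small-step followed by the polarizing beam splitter.
    Returns 'absorbed', 'horizontal' (Zeno effect: obstruction seen),
    or 'vertical' (no obstruction)."""
    step = math.radians(total_rotation_deg / n_cycles)
    angle = 0.0  # polarization angle measured from horizontal
    for _ in range(n_cycles):
        angle += step
        if obstruction:
            # The vertical component gets sent down the blocked path.
            if random.random() < math.sin(angle) ** 2:
                return "absorbed"
            angle = 0.0  # a surviving photon is back to purely horizontal
        # With no obstruction the two paths recombine and the rotation
        # simply accumulates, cycle after cycle.
    return "horizontal" if angle == 0.0 else "vertical"

random.seed(0)
trials = 10_000
for obstruction in (False, True):
    counts = {}
    for _ in range(trials):
        outcome = run_photon(24, obstruction)  # 24 cycles of 3.75 degrees
        counts[outcome] = counts.get(outcome, 0) + 1
    print("obstruction =", obstruction, counts)
# obstruction = False: every photon comes out 'vertical'
# obstruction = True: about 90% 'horizontal' (detection), ~10% 'absorbed'
```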

So what's so awesome about all this? Well, the implications of being able to see something without light touching it are pretty amazing... you could develop, for instance, less invasive internal photography. Practical applications are pretty remote at this point, of course, but think about the philosophical ones -- it's hard to get your head around how quantum particles work, even with a totally surface-level layperson's explanation like the one I've given, and according to a large percentage of Real Quantum Physicists my description is probably 90 degrees rotated from reality. But dude, we can see something without light touching it. That is, not to put too fine a point on it, pretty rad.

Sunday, May 14, 2006

Happy Mother's Day

It's Mother's Day, so give your favorite pro-life or anti-stem-cell-research mother a handful of flower seeds.

"They're fertilized zygotes, Mom, what's your problem? You always said fertilized zygotes were ethically equivalent to fully-grown creatures. So by that token, I got you a bouquet."

Then come back here and let me know whether you got disowned or not. If yes, score one for our side -- we caught your mom in a hypocrisy!

(In all seriousness, hi Mom, happy M.D. Aren't you glad you took me to those pro-choice marches when I was a kid? Did you know you were raising a liberal loony?

Yeah, I thought so.)

Friday, May 12, 2006

Drunken monkey style

Friday is no day for serious science, so instead of following up on the quantum post, I'm going to let you in on a monkey's secret weakness. Don't say I never did anything to stop the coming invasion.

As I've mentioned, my best friend is a little phobic of lab monkeys, on the principle that they're way too smart for their own good. So she must have been relieved, yesterday, to be able to send me this article about Lab Monkey Happy Hour. It appears that there's a certain consistency in primate alcoholism -- just like humans, rhesus macaques' drinking habits are correlated with genetic and environmental factors. When given regular access to alcohol (in what the researchers seriously refer to as a "happy hour"), macaques who were housed alone, or who had low social standing when living in a group, put away a lot more moonshine. In addition to the losers (my phrase, not the researchers'), there were also the congenital lushes, who drank heavily regardless of social factors:
"The singly housed monkeys certainly drank more than the socially housed monkeys -- at least two to three-fold more," Chen told Discovery News. "With the socially housed monkeys, there are a number of factors that can potentially compete with access to alcohol, such as social status or dominance ranking."

Lower-ranked monkeys and males tended to drink more overall, but certain individuals consistently drank more than others, regardless of status or housing conditions.
In other words, this is basically a perfect mirror of human alcoholism patterns. Some people are genetically predisposed, and will drink more under any circumstances than someone with teetotalling genes (though they may never become habitual tipplers). Put a person in reduced social circumstances, though -- a studio apartment, a menial job, graduate school -- and their intake will probably increase (though the quality of the hootch probably won't). Even better, some simians get sloshed in response to stress:
In yet another study, the scientists gave a group of male monkeys 24-hour access to the beverage dispensers. According to the researchers, a spike in consumption immediately followed the facility’s working hours.

"Like humans, monkeys are more likely to drink after stressful periods, such as soon after the daily 8-5 testing hours and after a long week of testing," said Chen.
I love the image of a rhesus macaque dragging itself back to the cage at the end of the workday, kicking off its shoes, and groaning "god, I need a cocktail." I feel for you, little monkey dude. I may not be singly housed, but I usually want a cocktail at the end of the day.

The most important thing here, though, is that we know how to stop the monkeys if they get too uppity, with their transplanted heads and their telepathic arms. I mean, these little guys can be serious boozehounds:
In the study subjects, "blood alcohol levels often exceeded the .08 percent level, which is the legal limit for most states in the U.S.," said Scott Chen, one of the study’s authors and a researcher at the National Institutes of Health Animal Center in Maryland.

The study, recently published in the journal Methods, also found that booze affects monkeys much the same way it affects people.

"It was not unusual to see some of the monkeys stumble and fall, sway, and vomit," Chen added. "In a few of our heavy drinkers, they would drink until they fell asleep."
I only hope that, when the lab monkey revolution does come, we have the presence of mind to inundate their ranks with tasty beverages. Monkey happy hour: more than a good idea. It could save your life.

Thursday, May 11, 2006

Surely your birthday, Mr. Feynman

It seems to be keenly appropriate that I've been talking about quantum physics today; Bora points out that Richard Feynman was born today in 1918. To celebrate, I thought I'd share my favorite excerpt from Surely You're Joking, Mr. Feynman:
When I was a junior or a senior I used to eat at a certain restaurant in Boston. I went there by myself, often on successive evenings. People got to know me, and I had the same waitress all the time.

I noticed that they were always in a hurry, rushing around, so one day, just for fun, I left my tip, which was usually ten cents (normal for those days), in two nickels, under two glasses: I filled each glass to the very top, dropped a nickel in, and with a card over it, turned it over so it was upside down on the table. Then I slipped out the card (no water leaks out because no air can come in -- the rim is too close to the table for that).

I put the tip under two glasses because I knew they were always in a hurry. If the tip was a dime in one glass, the waitress, in her haste to get the table ready for the next customer, would pick up the glass, the water would spill out, and that would be the end of it. But after she does that with the first glass, what the hell is she going to do with the second one? She can't just have the nerve to lift it up now!

On the way out I said to my waitress, "Be careful, Sue. There's something funny about the glasses you gave me -- they're filled in on the top, and there's a hole on the bottom!"

The next day I came back, and I had a new waitress. My regular waitress wouldn't have anything to do with me. "Sue's very angry at you," my new waitress said, "After she picked up the first glass and water went all over the place, she called the boss out. They studied it a little bit, but they couldn't spend all day figuring out what to do, so they finally picked up the other one, and water went out again, all over the floor. It was a terrible mess; Sue slipped later in the water. They're all mad at you."

I laughed.

She said, "It's not funny! How would you like it if someone did that to you -- what would you do?"

"I'd get a soup plate and then slide the glass very carefully over to the edge of the table, and let the water run into the soup plate - it doesn't have to run onto the floor. Then I'd take the nickel out."

"Oh, that's a good idea," she said.

That evening I left my tip under a coffee cup, which I left upside down on the table.

The next night I came and I had the same new waitress.

"What's the idea of leaving the cup upside down last time?"

"Well, I thought that even though you were in a hurry, you'd have to go back into the kitchen and get a soup plate; then you'd have to sloooowly and carefully slide the cup over to the edge of the table..."

"I did that," she complained, "but there was no water in it!"

Your dose of quantum awesomeness

For some reason, a lot of our musician friends are also scientists. Dan is of course a physicist, Might Could is all grad students (two physicists, one chemist), and keyboard wunderkind Sharif studies and teaches chemistry. So it wasn't particularly weird, last open mic night, to see Dan explaining the quantum Zeno effect to Sharif before their set. What was funny, I thought, was that when some friend came over to say goodbye to Sharif, she looked knowingly at me and said "are you smiling and nodding? That's what I do." I tried to find some way to say "um, no, I understand what they're talking about, and you probably would too if you'd been here the whole time"... actually I said something almost exactly like that, since I don't know this girl so who cares about nice. So for the unknown girls of the future, let me go on record: there is nothing all that hard about getting a lay grasp on quantum mechanics. I'll probably never be able to calculate a wave function or a path integral, but having a rough idea of what's going on and why it's so cool? Not a big hurdle.

I know what you're thinking... "but Jess, of course you had to go through some kind of rigorous training to be a physicist's girlfriend, and that's why you can talk about this stuff." Yes, of course, but listen: the secret is that physicists are actually closer to humanities people when they're talking about quantum than they are at any other time. It's mostly a matter of logic and metaphor.

The most important principle is that, at least until you measure them, photons don't or. They only and. Until you look at them they do all possible things. Then, of course, when you measure them, they're only doing one thing, so they must have been doing that. That's where the logic comes in -- if it's logically necessary that a photon was doing thing A to get the results you got, then the photon was doing thing A. The second most important principle is that mechanics at that level are so freaking weird that nobody can really discuss them without metaphors, except for nearly-crazy genius scientists. So if you have to talk about a photon "knowing" that you're looking at it (even though you're relatively sure it isn't conscious), or imagine it as a stream of light or a bouncy ball (even though it isn't), you're in good company.

Take the phenomenon Dan was explaining to Sharif, of quantum "seeing in the dark." The idea here is that you can "see" an object without any light ever touching it. How? Well, imagine that you have two beam splitters and two mirrors. They're set up in a square, such that a beam of light that goes in the beam splitter splits into two beams, at 90 degree angles from one another, e.g. one at zero degrees and one at 90, as in this image (call them the lower and upper paths respectively). Each of these beams hits a mirror, then bounces back at a 90 degree angle into another beam splitter, which combines them and sends out a single beam at the same angle at which it entered (zero degrees). The picture should help; I cribbed it, and the one below, from this reprint of a Scientific American article, because writing about quantum physics and refamiliarizing myself with Illustrator seemed like too much before lunch.

If the beam is not a beam but a single photon, it's basically the same. A photon entering the beam splitter has equal chances of being directed to the upper path or the lower path -- 50% either way. But of course a photon doesn't or; it ands. Since the chances are equal and you don't know which it will do, it actually goes on both the upper and lower paths, then combines with itself in the second beam splitter. There's a detector at the other end, so once it comes out, you know what it did. All fine and dandy -- photon in, photon out.

Now say there's an obstruction in the upper path. Physicists like to imagine it's explosive, just for kicks, so: there's an obstruction in the upper path, and if a photon hits it, it will explode. Photon goes into the beam splitter, same 50% chance of being directed each way... but if it goes on the upper path, BOOM. If, however, it goes on the lower path, it now hits the second beam splitter alone, instead of combining with itself there. So instead of combining, it splits, with again a 50% chance of going either way. As before, it actually does both until you find out which it did, but that's going to happen in just a second, because when it comes out of the beam splitter it goes into a detector and thus you know what it must have been doing. If it comes out at zero degrees (the "light port," or what the image calls "D-light," because it's the usual way for light to come out), you don't know anything about the exploding obstruction, since of course it would have come out there anyway. BUT if it comes out at 90 degrees (the dark port), it must not have combined with itself at the second beam splitter, which means there was an obstruction. It never hit the obstruction -- if it had, there would have been an explosion. So it's not like it went in both directions but one was stopped or absorbed by the object. It must have gone on the lower path only, because that's the only way it could come out at the dark port. Note all the deductive reasoning here -- you didn't observe it, because you can't observe photons and have them do what they normally do (if you observe them, they only or). But you can reason it out, and whatever you deduce has to be what happened. In a certain sense, the photon went on both paths, because that is what photons do. But it can't have gone on both paths, it can't have gone on the upper path at all, because nothing exploded, and it must have gone on the lower path only because that's the only way it could come out the dark port. Reasoning equals history; reasoning equals observation.
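(Just to tally those odds explicitly -- this is my own scratch arithmetic, not anything from the SciAm article:)

```python
# With no obstruction, the two halves of the photon recombine and it
# always exits the light port -- which tells you nothing either way.
# With an obstruction on the upper path:
p_boom = 0.5                # photon takes the upper path and hits the obstruction
p_lower = 0.5               # photon takes the lower path instead
p_light = p_lower * 0.5     # ...then exits the light port: still tells you nothing
p_dark = p_lower * 0.5      # ...then exits the dark port: obstruction detected!

print(f"boom {p_boom:.0%}, light port {p_light:.0%}, dark port {p_dark:.0%}")
# boom 50%, light port 25%, dark port 25%
```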

The practical upshot here is that if a photon comes out the dark port, you can "see" that there is an obstruction, even though no light hit it. Unfortunately, that only works 25% of the time, and 50% of the time the whole thing blows up. How to improve the odds? It's too much for one post, so I'll address it in the next one.

Wednesday, May 10, 2006

A quick post about sexy smells

The blog's been a bit fallow lately, so I thought I'd check in even though I'm about to run to class. Lynne brought my attention to this article about pheromone processing in lesbians, après a PNAS-reported study that I can't find. (The same team seems to have done a similar experiment on gay and straight men almost exactly a year ago.) The short version: Lesbians process female pheromones differently from straight women, but not exactly the same as straight men. We can file this one under "stuff that was intuitively obvious but that it's nice to have scientific backup for."

The experiment looked at PET scans of gay women (but not straight women, according to this article -- they seem to have been operating off of previously-collected data, which could be a design flaw) while they were exposed to male and female pheromones. Pheromones have even wackier names than most chemicals, names like "4,16-androstadien-3-one" and "estra-1,3,5(10),16-tetraen-3-ol," so we'll use the kindly-provided nicknames AND and EST. Briefly, AND is a pheromone found in all human sweat, but in much higher concentrations in men, and EST is, um, "an estrogen-like substance, found in the urine of pregnant women." Yum.

Here's the purportedly interesting part:
When Savic's team looked at the brain activity of the 12 lesbian women, it found the lesbians responded to both compounds in a similar way. And they processed them in a way more like heterosexual men than heterosexual women. But the relationship to the opposite sex was not as strong as the researchers found it to be in a previous study between homosexual men and heterosexual women.
So lesbians have similar preferences to straight men, and the difference between gay and straight women is not the same as the difference between gay and straight men. In other words, lesbians like girls, and homosexuality in women is not exactly the same as homosexuality in men. Duh and duh, but again, it's nice to have confirmation.

But this is the part I find really interesting:
Savic's group also found that in contrast to the heterosexual women they studied, the lesbian women processed the AND pheromone by the olfactory network, not the anterior hypothalamus; when they smelled the EST pheromone, they partly shared activation of the anterior hypothalamus with the heterosexual men studied.
Now, maybe I'm getting my anterior hypothalamus wrong -- this article only says that it "regulates metabolic process and links the nervous system to the endocrine system by secreting brain hormones." But I always thought that the hypothalamus was related to emotion. If that's not wrong, and a cursory Google search (I told you, I'm on my way to class) implies that it's not, then this is very cool. The article makes it sound like the pheromones of the sex you're attracted to are processed like pheromones, and the others are processed like smells. This is pretty interesting already. But the hypothalamic involvement implies -- and this is very glib, I know -- that the attractive pheromones are processed with an emotional component, an emotional rather than just a physical arousal. That's still saying the same basic thing as "the pheromones of the sex you're attracted to are processed like pheromones, and the others are processed like smells," but with more of a clue as to what "processed like pheromones" means. And a nice implication that emotional arousal and physical arousal are intricately linked. (After all, any emotion traces back to physical structures like the hypothalamus, anyway.)

Sunday, May 07, 2006

Yet more people who know more than me

Here's a press release that puts a more empirical face on my earlier claims about the brain. I don't think I was clear enough in that post that I wasn't intending any kind of, well, I guess you'd call it environmental determinism. I'm well aware of the vast and complex effects of genetics, even on such an adjustable organ as the brain. I just think it's unjustified, given the brain's extreme (and rather poorly understood) capacity for change and adaptation -- given the fact that cognitive development relies on this capacity -- to make the leap from "proven genetic difference" to "assumed cognitive difference."

This new study deals primarily with the visual system, since its structure has been well investigated. The question at hand: How does visual input affect neural functioning on the molecular level? The findings: "[V]isual stimulus turns up the expression of some genes and turns down the expression of others, somewhat like a conductor cueing the members of an orchestra." Above and beyond the whole "neurons that fire together wire together" premise, stimulation also affects the expression of whole families of genes. It's quite complex really -- visual input affects gene activity, which affects the way the brain responds to visual input. Probably not a big surprise to some of you, since it's well-established that environmental influences affect gene expression (Anna just blogged about how this presents itself in the Harry Potter universe), but as someone who is merely a cog-sci hobbyist, I really hadn't thought about it in the context of the brain before. The practical upshot? Sure, genetics have an influence on brain activity, but the reverse is also true. In other words, genetic variation is not sufficient cause to assume cognitive differences.

So that's a better, more evidence-based, and more authoritative version of what I was intending to say. Thanks are due to JP for making it clear that I hadn't been clear.

Friday, May 05, 2006

From scholar to word

I love it. As I'm leaving, maybe for good, my life as a literature scholar, I get invited to be part of a work I've studied, one of the few I felt really invested in. This is amazing news, and I know it has nothing whatsoever to do with science, and I know I just ended a sentence with a preposition, but shut up, I'm excited.

Okay, let me back up. For a long time I've been doing work on the Skin project, a "mortal work of art" by pioneering hypertext author Shelley Jackson. The work is a sort of radical expansion of the freedom of hypertext, one that gives the words themselves life. "Skin" is a short story, but you haven't read it, and in the usual sense you probably won't: it will only be published word by word as tattoos (though the tattoo-ees will get to see a full version). Once a participant is assigned a word -- you can choose to decline it, but not to change it -- he or she gets that word tattooed, in a classic book font. The most important thing, from a theoretical perspective, is this, from the call for participants:
From this time on, participants will be known as "words". They are not understood as carriers or agents of the texts they bear, but as its embodiments. As a result, injuries to the printed texts, such as dermabrasion, laser surgery, tattoo cover work or the loss of body parts, will not be considered to alter the work. Only the death of words effaces them from the text. As words die the story will change; when the last word dies the story will also have died. The author will make every effort to attend the funerals of her words.
Words are, from what I've seen, very passionate and interesting people; I know two of them, and I've corresponded with a few more. And I had resigned myself, being a scholar and therefore necessarily rather dry, to examining this work from the outside. I gave a paper on it at the Society for Literature and Science conference last year, and met up with a word while I was in Chicago, thus probably being the only academic in history to have coffee with part of the text she'd just presented on (except, of course, in the lonely and figurative sense). But I figured that would be the closest I got to the project.

Shelley came to Maryland last month and gave a really inspiring talk, not to mention holding her own against dull and obtuse academic questions in a small discussion. I'm not only praising her because she unwittingly supported some of my "Skin"-related conclusions in front of a professor who'd doubted and challenged me (although come to think of it... haha! Take THAT!). This is a woman who is probably a mile ahead, conceptually, of anyone else working in fiction today. She's an excellent writer, which is a rarity in people who experiment with form, but she's also constantly thinking about the limitations and expectations of her medium. In short, she's cool. So we got to talk a little (I was of course a little star-struck), and I emailed her my paper and my Powerpoint from the conference. And I hardly said anything in the email to point out that I really really really wanted to be a word. Maybe just a little, but I think I was very polite about it.

But I did, and now I can, because she wrote back with the release form, and holy shit. In a certain sense this timing is perfect... I'm leaving the academy partly because it enforces such distance from the text, so being accepted into a text could not come at a better time. And as I'm leaning more and more towards writing careers, which I've always resisted, it seems very appropriate to be able to define the trajectory of 1/2095th of a story. And of course I'll get to read the story in its entirety, once the words are all mailed out... and even though my theoretical commitment is to the idea that the original story doesn't matter, I'm pretty damn curious.

I get to be a word. People who know me will know how thrilled I am about this. If you don't, just know that I read Shelley's email half an hour ago and I am still weak in the knees. Now just wish me luck in not getting "poo" or "cabbage" or "flabby" or something. Luckily Shelley chooses her words carefully, and almost all the ones I've seen have been lovely, except maybe "swelling" but I think even that could be great once you think about it. Oh I wish we had a working printer, I would print out the form right now.

Do you chew you?

Since it's Friday and I've expended today's liberal vitriol by writing angry letters to the Post, I thought I'd bring up an issue that's been kicking around Bee Policy HQ: Would you eat human meat, if it had never been a person?

Bora at Science and Politics recently posted a quote of significance:
Try to go through life a little bit edible
You never know when you'll meet someone hungry.
--------
Try to go through life a little bit hungry
You never know when you'll meet someone edible.
And of course there's been a lot of attention paid to Terry Bisson's short story "They're Made Out Of Meat", now that it's been adapted into a short film -- no better catalyst for contemplating our true meat heritage. But the question only started seeming relevant when I read this item in "News of the Weird":
In work by various labs in the United States, the Netherlands and Australia (reported by Toronto's Globe and Mail in March), meat was grown in test tubes, and such dishes may yet be a staple in progressive kitchens. "Before bed, throw starter cells and a package of growth medium into the (coffee maker-sized) meat maker and wake up to harvest-fresh sausage for breakfast," wrote the Globe and Mail. Engineered meat would taste like beef or pork, but could be created to be as healthful as salmon. One private group told researchers it was interested in growing human meat, but funding for any of the work will be difficult, said a Medical University of South Carolina scientist.
Oddly, that's not the exact wording I read, though it's off the News of the Weird site -- what I read said specifically that the funding from the human-meat group had been turned down. So we're not actually going to get vat-grown long pig anytime soon. But it's still an interesting question: Would you eat it? Would you eat it more than once?

Dan said that he might take a taste out of curiosity, but only once. Unless it's the best meat on the damn planet, he argued, you'd only eat more than that if you wanted to make a statement (i.e. "I'm a cannibal," or more likely "I'm a rebel"), and those statements aren't statements he wants to make. But John, joining us via satellite, pointed out that human meat might very well be amazingly delicious. This might not apply to vat-grown human, but real people -- or at least Americans -- must be as tender and marbled as Kobe beef. I don't want to eat real people, and I'd only consider eating human meat if it had never been real people, but if a cow tastes better when it's been idle and beer-fed, then an American must taste fantastic too.

Furthermore, as John and I determined, we don't have any good data to challenge this idea. Most written records of cannibalism have probably (correct me if I'm wrong) come from groups like the Andean rugby team, who only went Lecter out of desperation. There was a fantastic article in a recent New Yorker, unfortunately not available online, that described forensic investigations of the Donner Party site; records show that if the Donners actually did eat their dead, they only did it after first consuming their horses, roofs, and dogs. And as John pointed out, "as they got increasingly desperate, they grew decreasingly delicious." Sure, starvation salts the dish, but a stringy and ailing companion does not make for a gourmet meal, no matter how you (ha ha) slice it.

But even if the nouveau Soylent Green wasn't exceedingly tasty, I might still have to eat it -- yes, to make a statement, but not the statement that I'm an eccentric or a sociopath. I have green hair; I don't need to make that statement. No, I feel like I might have to eat my fellow man for science, liberalism, and all that is good. Though it's true that I don't believe in zombies, I do still think it's important that not everything that's human-like has personhood. Like I said, I wouldn't eat people. Too much work, for one thing. But "human" meat grown in a test tube is human only in cell structure -- it's identical to other chunks of Homo sapiens on a micro level, but it isn't a person, it was never a person, it could never feel pain, it doesn't have nerves, for crissakes. I might have to scarf it down just to make the point that consciousness has significance, that not everything people-esque is people, and that you can't kill something that was never alive. I think you catch my drift.

Now that said, I don't eat red meat very often. But would I munch a humanburger for science and social justice? You bet I would.

Image copyright Ryan North; click it for the only webcomic you'll ever need

Wednesday, May 03, 2006

Brothers under the skull

As I mentioned in the last post, I recently got into an argument with someone who justified his claim that "women are prey" by pointing to the one-chromosome difference between ladies and fellas. (We'll call him Jack, short for... well, it's obvious what it's short for.) "There are demonstrated physical differences, therefore there are cognitive differences" seemed to be the basic gist. Being apparently devoid of a sense of irony, Jack also invoked "marketing" as proof that women have different thoughts and desires.

Then my mom sent me a link to a Richard Cohen column in which he starts off talking about the very interesting case of Donor 401, but then waxes rhapsodic about Nicholas Wade's book Before the Dawn, presumably to show that he reads. (I fondly remember a long-ago article, maybe by Gene Weingarten, calculating the density of the words "I" and "me" in Cohen's column.) Cohen is very pleased with his newfound determinism (look, ma, I'm a scientist!), but his description of the book shows that in his enthusiasm he's plunged in o'ershoes. Wade, according to Cohen, apparently "chides PC-addled scientists who insist there is no such thing as race when, just for starters, certain medicines work differently on whites than blacks. As with the noble savage, the raceless world is a myth." I'll get to Donor 401 later, Mom, but let's deal first with why this statement worries me.

I would have to be blind, deaf, and in all other ways insensate to claim that there were no physical differences between men and women, or between different races. Secondary sex characteristics, bone structure, muscle mass, and vocal cord size are all genetically influenced; so are skin melanin, facial structure, hair texture, lactose tolerance, alcohol-digesting enzyme production, ear wax composition, and so forth. So yes, what Wade is saying is right, and what Jack is saying has truth to it, too. These statements aren't false. But without some qualification, they can be dangerous.

See, from the neck up and the scalp inward, the rules change. The brain is surprisingly plastic and adaptable, and while there are certain glass ceilings installed by genetics, it's amazing how much room we really have for learning, training, and new connections. For instance, take a look at this article on photographic memory, in which Josh Foer quite rightly points out that it doesn't actually exist. There has never been replicable evidence of a truly photographic, i.e. accurate in every detail, visual memory. Furthermore, such a phenomenon wouldn't fit with what we know about change blindness; if, in the course of normal cognitive processing, we ever really took in every single detail of a scene, change blindness wouldn't occur. And yet we have people like Stephen Wiltshire, the "Human Camera." Stephen doesn't have some kind of spooky, ill-explained magic talent like "photographic memory" is supposed to be. What he has is a really, really, really good visual memory, combined with great skill at reproduction. And plenty of people, not just savants, have really really really good visual memory. (Foer mentions the Shass Pollaks, who demonstrably memorized over 5,000 pages of the Talmud.) It's not an innate superpower; it's a matter of training and attention.

Then there are people who lose a whole brain hemisphere but compensate with the other, or have no arms but develop extreme motor control in their feet. The brain is an amazingly adaptable tool, and those adaptations don't only occur on an evolutionary time scale. Influences like poverty, cash- and teacher-starved public education, and peer pressure to look cool by not looking studious can lead to blacks getting lower average IQ scores, just as social pressure to both flaunt and hate one's body can lead to women who act like those "Sex and the City" chicks. At the same time, there are demonstrable, undeniable physical differences among races and between sexes. But this doesn't mean that the gene for book-smarts is on the same chromosome as the one governing skin pigmentation, or that having a uterus means you've got a natural disadvantage in thinking logically. Neural architecture is just too mutable for these ideas to hold water. A basic understanding of the brain debunks the idea that someone of normal development can be naturally unable to (say) think logically -- such things are too easy to train -- while an even more basic understanding of sociology makes it clear that any disparity in mental abilities has more than enough explanation available. So accept the idea of genetic difference, because you can't deny it. But accept it with a grain of salt, because it stops at the mind.

Monday, May 01, 2006

Caffeine, or righteous indignance

Copyright Shannon Wheeler, please don't sue me, I love you

As Schachter and Singer first showed in 1962, it's very hard to differentiate between actually experiencing an emotion and experiencing only its physical component. If you're experiencing physical arousal, you're likely to attribute an emotional response to whatever stimulus is in your vicinity, even if the arousal's causes are wholly physical. So I had a hard time, this morning, distinguishing whether my heart was racing from my morning coffee, or from rage. But I've got a guess.

I'm a few days behind on this, admittedly, but I just read all the comments on Nick Matzke's post about Tony Snow. Here we see a Bush-supporting but apparently otherwise intelligent conservative economist (how a conservative economist could support this money-hemorrhaging administration I'll never know) arguing that because not all Republicans are fundies, the fundies are not in charge of the government, QED. Of course this is comforting but terrifically naive, not least because Bush has apparently decided that Congress is no longer significant to his decisions. He is The Decider, after all -- who needs to listen to liberals or science-lovin' conservatives?

I am glad -- thrilled -- that there are non-fundamentalist conservatives, because hopefully they'll eventually notice and get disgusted with the fact that "their" government is run by a cadre of zealots engaging in a holy war on all fronts. (The numbers would suggest that this realization has already happened, but see the link.) But for non-fundamentalist conservatives to support this administration and to vote for Republicans now really does negate any rationality or love of science they profess. It's becoming increasingly clear that you must make a choice: Bush or science. If you accept one, you throw the other away.

And I can't quite understand the rationale, either. A prospective Nader voter once told me that he believed Bush was evil, and believed that Bush would probably win if progressives voted anything but Democratic, but he was going to vote for Nader anyway. I'm equally flummoxed by the logic of these putative science-supporting Republicans. So you know that Bush is a fundamentalist zealot who doesn't support science either financially or conceptually, but you still support this administration because... what? Liberals don't have big enough muscles or good enough hair? You don't like Michael Moore? A liberal stole your lunch money? You hate the poor?

From Matzke's post, I went to Snow's column on why rational debate about evolution is impossible. This runs to some 762 words, when of course a handful would have sufficed -- "because support of ID is not a rational position." But pundits bloviating about how intelligent design is about evidence, rather than faith and ignorance, is nothing new. The really choice part was this:
A century ago, physicists boasted of having solved all the major problems involved in studying the universe. The following year, their smugness collapsed when a patent clerk named Einstein published his paper on general relativity.
In other words, the theory of general relativity is not "just a theory," despite being almost definitionally unobservable by direct means -- the IDers' playground-level "were you there? Did you see it?" comeback works particularly well on Einstein. And, of course, general relativity has certainly never been proven; rather, it's been rigorously tested and, significantly, never disproven, because that is how science works. But because you have to understand relativity quite well before you find that it conflicts with fundamentalist Christianity, Snow feels comfortable throwing it around as an example of scientific ego deflated. What a tool.

Luckily, the intelligent and eloquent guys are still on our side. Pharyngula's defense of secularism should be posted in every school, church, and courtroom; we should papier-mâché the Capitol with it. Bora at Science and Politics just posted a pithy and perfect explanation of why he'll be teaching evolution -- because regardless of political and religious affiliations, "teaching biology without evolution is like teaching English without verbs." And of course there's Stephen Colbert, bless him (in a secular way, of course). Plus, I still haven't removed the coffee variable, so maybe I'm madder than I need to be (there's also the confounding factor of having gotten involved in an argument with someone who says that women should be treated like "prey" because they have a whole different chromosome and therefore are illogical and bad at math). But you know, I suspect that the state of the country, and all it implies for education and science and democracy, has something to do with it. Intelligent, eloquent guys, talk louder!

Too Much Coffee Man is copyright Shannon Wheeler, please don't sue me, I buy tons of your books