Saturday, April 29, 2006

More on unbounded education

Cory Doctorow of Boing Boing has linked to a video about the Fairhaven School, the DC-area Sudbury school that my classmates so roundly ridiculed. His comments:
I went to publicly funded schools like this from grade four to graduation, and they were the most important factor in the way I conduct my own adult life. Attending schools like this teaches many kids to run their own lives, blazing their own trail, inventing their own careers, and trying anything. Useful skills in a world where any job that can be described is likely to be outsourced.
I have long admired Doctorow for being so self-made -- he's a successful author, blogger, and activist, despite undoubtedly not having majored in Authoring, Blogging, or Activating -- so it doesn't surprise me that he would find the Sudbury model appealing. (I keep track of people who seem to invent their own jobs, Jonathan Coulton and John Hodgman being probably my favorites.) But I am somewhat surprised, and much pleased, that nonregimented schooling is being presented as a viable prospect (and not only by the schools themselves).

As I said in my previous post on the subject, I do in the end think that the Sudbury model is too hands-off. Such radical educational libertarianism seems like it would tend to attract teachers more devoted to the model than to their subjects, and this would encourage a system that means well but teaches poorly. With reliably excellent teachers, the situation would be different, but so would public school (at least when people don't have to teach to the tests, which is rare and getting rarer). But I feel like there's a problem, a very deep-seated problem in our understanding of what education means, when a school like Fairhaven is sneered at as the utmost in child-spoiling indulgence (which is an attitude I've seen more than once since the Post article came out). What really ruins children is not encouraging them to be self-motivated, but shielding them from the saving forces of logic and inquiry. I'm not scared of Fairhaven; I'm scared of a school that allows a project "proving" intelligent design (through liberal application of question-begging) to go to the state-level science fair. (Incidentally, the comments on Pharyngula's post about that science fair project should be required reading for everyone.) I'm scared of the parents who raised this child to believe so unquestioningly in their dogma -- to regard it, in fact, as unquestionable -- and I'm scared of the teachers who didn't take pity on the kid and show her how to challenge such doctrinaire assertions of truth.

So yes, I mistrust any school that won't take a stand on miscarriages of science, and I mistrust any school that doesn't insist on students knowing where knowledge comes from, where it is right now, and how it might be changed in the future. If Sudbury schools are truly so relativist that they don't believe in enforcing a difference between science and dogma (which is by no means the impression I got from the video), then by all means, let's discourage them. But if not, well, I'd still prefer a hands-off system -- even a too hands-off system -- to a system that allows kids to be left high and dry without the basic survival skills of curiosity and logic. With luck, we need not actually choose between radical freedom and radical oppression, but I won't accept the argument that both ends of the spectrum are equally condemnable. With radical freedom, if the kids get screwed over, at least they can think their way out.

Friday, April 28, 2006

Is that a TI-82 in your pocket, or are you just happy to see me?

Back in high school, where I was in a math/science/computer magnet program, we used to make up "magnet pickup lines" (like the title of this post). This was partly an exercise in cleverness and puerility, but partly a rueful acknowledgment of our place at the bottom of the social food chain. We were not oblivious, earnest dorks on the model of Screech and Milhouse; we didn't even try to get asked to the prom, and we would have had too much homework anyway. It's not that we were terrifically bitter about this, at least not those of us who got laid occasionally, but we were under no delusions: we were nerds, and there's nothing hot about nerds.

But it seems that the attraction value of nerds may be on the upswing. In the last few days I've been coming across a lot of material on the general sexiness of scientists. The first was this article, linked from FrinkTank, in which a neuroscientist with a results-skewing British accent successfully makes science into a pickup strategy:
"I’m a scientist too, actually," I confess.

As her sneer begins to deepen, something snaps. Damn it! I’ve spent four years, six months and two days in graduate school, I have twenty-four years of solid education under my belt, I have a Dean’s Reference for excellence for my undergraduate thesis! I have successfully written (and got funded) two grants! I am published, by George! I’m not ashamed of this!

"Yes, a scientist," I continue, with a little more pep in my vocal-stride now. "My specialty is neuroscience."

Oh yes. That’s got ’em. That’s not geeky! That’s cool! Seriously, think about it for a minute. The devil’s in the details, see. "Scientists" are sad geeks with bad hair and no social skills. Ecologists however, are cool as fuck. They’re saving the world, y’know. Pharmacologists can be awesome too. They’re working on saving our lives and helping us live forever. Neuroscientists? Now they freaking rock! How cool is it to do something like that for a living! I study your brain!
This is a discovery almost as important as any advancement in neuroscience: being a scientist doesn't have to be a social death sentence! In fact, there's an entire blog's worth of hot guys and girls just oozing smarts (and degrees!). Here you can find high cheekbones and boyish smiles, along with phrases like "The development of metal-mediated reactions has greatly expanded the synthesis of small molecules and polymeric materials" and "The systems under development in his group are anticipated to show a high affinity for (i) atom and group transfer chemistry and (ii) reactions at robust X-H bonds, where the X-H bond refers generally to a C-H, Si-H, B-H, or H-H bond." If that don't turn you on, what will?

Well, maybe AIR's Luxuriant Flowing Hair Club for Scientists, where the brains with manes go to seek their own kind. (Props to hot scientist Tara at Aetiology for both of the above, by the way.) Steven Pinker, whose locks do not mess around, was unanimously declared to be the first member; this tickles me because a former friend of mine hated Pinker, and we always theorized that this had to do with hair envy. Some of the hair is straggly and not so luxurious, and some of the faces underneath are startling, but it's a dangerous place if you're swayed by long hair or degrees (I'm guilty of both). Luckily I've got a physicist of my own, with legitimately luxurious hair, and if I can trick him into letting me take a picture of him without a ponytail (and maybe with the sphere!), I'm totally nominating him for the LFHCfS.

So the moral is, only nerds may be recognizing it now, but nerd is the new black, and soon there will be nothing hotter than pumped-up calculators, perky pocket protectors, taut coverslips, or a really built circuit. Men will make passes at girls who wear glasses, and the thicker the better. Nerds will, in every way, be back on top.

And yet, and yet. We don't have safe search on at home, so usually when you put pretty much anything into a Google Images search, your first results are naked or half-naked girls. So when looking for an image for this post, I figured if I just put in "scientist," I'd likely as not get some photos of a cleavagey pinup with glasses and a lab coat. No dice. Looking for "sexy scientist" got porn or research, but not both, and no other set of keywords (scientist pinup? sexy lab coat costume?) seemed to do it either. I eventually had to resort to private stock. So be warned: the hot nerd revolution is not yet universal. Take care when whipping out your CV, your pipettes, or your electrophoresis results at the singles bar.

Thursday, April 27, 2006

I'll take "Crucial Science Knowledge" for 200, Alex

Several people have blogged this article on "science questions every high school graduate should be able to answer." The Fundamental Ten, as proposed by ten different scientists, are as follows:
  • What percentage of the earth is covered by water?
  • What sorts of signals does the brain use to communicate sensations, thoughts and actions?
  • Did dinosaurs and humans ever exist at the same time?
  • What is Darwin's theory of the origin of species?
  • Why does a year consist of 365 days, and a day of 24 hours?
  • Why is the sky blue?
  • What causes a rainbow?
  • What is it that makes diseases caused by viruses and bacteria hard to treat?
  • How old are the oldest fossils on earth?
  • Why do we put salt on sidewalks when it snows?
  • Extra credit: What makes the seasons change?
I first saw this on the charming Adventures in Ethics and Science, where Dr. Free-Ride suggested a few crucial questions of her own:
  • Why is leaving the refrigerator door open a really bad strategy for cooling off the kitchen on a hot day?
  • Why does taking your tea with lemon and milk result in chunky tea?
  • Licking the spoon while you're making the pudding makes it so the pudding doesn't set properly. What's up with that?
  • When you're prescribed antibiotics, why should you take the whole course of them even if you feel better sooner (and why won't antibiotics do jack for the common cold)?
  • Why are the days longer in the summer than in the winter?
  • Why does ice float (and why is this probably a good thing for people living near big lakes)?
  • Why shouldn't you use a blow-dryer/curling iron/paper shredder/electric carving knife while soaking in the bath tub?
Meanwhile, Evolgen added a few more:
  • What is a hypothesis?
  • What is a theory?
  • What is probability?
  • What is an atom? What is a neutron? What is a proton? What is an electron?
  • What is a molecule?
  • What is a cell?
  • What is a gene? What is a genome? What is the central dogma of molecular biology?
What I'm finding really interesting is how everyone who suggests a set of questions has a very different idea of what it means to be informed about science. The original set of questions, as several people have pointed out, leans heavily towards trivia. These are "things every high school graduate should know" in the sense of "things where it's really embarrassing to us as a country if our high school graduates don't know them." You really should know them, you're ripe for some faux pas with the educated if you don't know them, and depending on your answers to a few of them (e.g. the one about dinosaurs) people would be justified in declining your company, but they don't reflect a high level of understanding.

Dr. Free-Ride's questions show the practical applications of science, and probably offer an excellent way of getting kids and humanities majors interested. They highlight everyday mysteries that can be rendered explicable with a smattering of science knowledge. These are questions whose answers will do you some good, not only because you'll be able to understand the phenomenon in question, but because in doing so you'll learn useful, applicable tenets. It's Discovery Channel knowledge -- not just "I never knew that," but "I always wondered."

Then there are Evolgen's suggestions, which are much more basic; you can't consider yourself science-literate without being able to answer them, but what they allow is not immediate explanation of the real world (like Dr. Free-Ride's set), but the ability to participate in scientific discourse. If you can answer these questions, you're set to study, discuss, and maybe learn to contribute to science. If you can't, you can't.

And then there's Dave S.'s comment on the Adventures in Ethics and Science post. Dave doesn't state his questions in the form of a question, but lists several truths about science that he would like people to know, such as:
... that science is a method invented by humans to understand the natural world using natural methodology. It is not meant to reveal all knowledge about every aspect of existence.
... that due to the above, science is limited. But in this limitation lies its strength. The proof of this is the great success science has had.
... that science is not about truth, it's about evidence.
... that evidence is empirical, which means we need to be able to observe it with our senses.
... that just because we don't know everything about a process doesn't mean we don't know anything.
... that science is tentative doesn't mean we can't ever be confident in it. Science is stable, but it never stands still.
These emphasize the philosophical side of science -- the significance of evidence, empiricism, reproducibility -- and they're probably my favorite, in terms of what I wish every student would graduate knowing. You can hook them with the ain't-it-cool factor of mundane mysteries ("why doesn't pudding set when you lick the bowl?"), and it's important to show them how science is relevant to their daily lives. You can teach them terminology, and in fact you shouldn't let them leave school without a working scientific vocabulary. But at the end of the day, I wouldn't mind if nobody in the country could explain why the sky was blue, diagnose the cause of chunky tea, or accurately define an atom, as long as we could have everyone understand what a theory is and how it comes about.

He weeps, for he has but one small tongue with which to taste a world

Okay, I really will stop it with the Tick quotes soon. But I did a little digging on Tongue-Tongue Technology, and turned up a much better-written article from Discover. (The June 2003 issue. I'm so cutting-edge it hurts.) This one doesn't have any bluky-hand-held devices, but it does have a little more consideration of phenomenology, since the author actually gets to try out the contraption:
The images have a sour, battery taste and feel like the pelting of a hot summer cloudburst. They certainly convey some sense of where things are around me, but is that the same as sight?

In practical terms, the answer may be irrelevant. When Kamm places a small white cube somewhere on the table, I can reach out and grab it nine times out of 10, even though I'm blindfolded. I can even recognize large letters, as long as I can bob my head around to get a better sense of their outlines. Given a few more hours with the device, I might eventually learn to forget the tingling in my mouth and just see. Is that sight?
Whatever, says tongue-sight pioneer Paul Bach-y-Rita: "There's nothing special about the optic nerve. The brain doesn't care where the information comes from. Do you need visual input to see? Hell, no. If you respond to light and you perceive, then it's sight." Or perhaps, even more accurately, if you respond and perceive, who cares about the semantics? It certainly seems to be more like seeing than feeling; as someone who occasionally needs to communicate under the table in Morse code taps, I know how hard it is to translate mere touch into meaning, but the tongue electrodes bypass this laborious process. From the author's description, the bluky-tongue-held device allows you to respond to tongue transmissions without consciously translating them into information: "What the camera sees is zapped onto my tongue's wet, conductive surface. As Kamm rolls the ball, my blindfolded eyes see nothing, but a tingling passes over my tongue. When she sends the ball my way, my hand leaps out to the left. I've caught it."

The idea of the brain's plasticity -- that brain functions aren't strictly compartmentalized, that the brain adapts to adversity and technology, that the swapping of jobs and abilities is not only possible but potentially commonplace -- is not a new one. But this makes me think about it from a slightly different angle. I wonder, for instance, whether this would be the last nail in the coffin of the "how could you evolve something so complex as an eye" argument. Not that scientific data is much use for refuting that one; if it were, the ample evidence that eyes have evolved more or less routinely would have settled the matter already. Still, it kind of helps to show that the eye structure itself isn't even necessary to our visual perception. Could we survive even if the eye hadn't turned out exactly like it is? Well, yeah. What good is half an eye? Potentially lots.

Do not be afraid of Tongue-Tongue, he is only tasting you

Does anyone remember the classic "Tick" episode called "The Tick vs. Science"? The Tick goes to a Mad Scientist Fair (Professor Chromedome: "What good is science if no one gets hurt?") and sees the important advancements coming out of the mad science world. There is, for instance, the Can-O-Man, an aerosol-spray golem who can bench-press 240 pounds, lifts heavy things on command, gives a great shoulder massage, and disappears after one hour in a fragrant cloud of potpourri. (I still believe these should exist, even though my boyfriend practically is one.) There's also Dr. Mung Mung's creation Tongue-Tongue, a creature made entirely of tongue; he has limbs (and, weirdly, a tongue) but no eyes or ears. One sense is enough for Tongue-Tongue.

Well, that was just about the first thing I thought of when I read this article, provided by Lynne. It appears that military researchers are hard at work on Tongue-Tongue technology for Navy SEALs. Specifically, they're working on a device that will route sonar signals to electrodes on the tongue and thence to the brain, allowing night vision, panoramic vision, and other "superhuman senses."
The device, known as "Brain Port," was pioneered more than 30 years ago by Dr. Paul Bach-y-Rita, a University of Wisconsin neuroscientist. Bach-y-Rita began routing images from a camera through electrodes taped to people's backs and later discovered the tongue was a superior transmitter.

A narrow strip of red plastic connects the Brain Port to the tongue where 144 microelectrodes transmit information through nerve fibers to the brain. Instead of holding and looking at compasses and bluky-hand-held [sic] sonar devices, the divers can processes [sic] the information through their tongues, said Dr. Anil Raj, the project's lead scientist.

In testing, blind people found doorways, noticed people walking in front of them and caught balls. A version of the device, expected to be commercially marketed soon, has restored balance to those whose vestibular systems in the inner ear were destroyed by antibiotics.
I just don't know what to say about this (besides "release the nice moth man, Tongue-Tongue; here is an individually wrapped slice of processed cheese"). The article isn't very well-written -- see "bluky-hand-held sonar devices," above -- so it's tough to figure out exactly how this technology will work. But if you think about the classic sensory homunculus, the tongue and lips certainly occupy a big swath of neural real estate. If the intention is to get signals directly to the brain, the tongue is a pretty direct conduit.
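If it helps to picture the pipeline, here's my purely speculative sketch -- the article says only "144 microelectrodes," so the 12-by-12 grid and everything else below is my guess, not Bach-y-Rita's actual signal processing. The core job would be downsampling whatever the camera (or sonar) sees into 144 stimulation levels, one per electrode:

    # Speculative camera-to-tongue sketch: block-average a grayscale
    # frame down to a 12x12 grid, one intensity value per electrode.
    # (The 12x12 arrangement is an assumption; the article just says
    # "144 microelectrodes.")
    GRID = 12

    def to_electrodes(image):
        """Downsample a 2D list of 0-255 pixels to GRID x GRID levels."""
        bh, bw = len(image) // GRID, len(image[0]) // GRID
        return [[sum(image[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)) // (bh * bw)
                 for gx in range(GRID)]
                for gy in range(GRID)]

    # A 120x120 frame with a bright square in the middle:
    frame = [[255 if 40 <= x < 80 and 40 <= y < 80 else 0
              for x in range(120)] for y in range(120)]
    for row in to_electrodes(frame):
        print("".join("#" if level > 127 else "." for level in row))

Run it and a crude square materializes in the electrode grid -- 144 pixels isn't much, but apparently it's enough for catching balls and finding doorways.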

But what is it like? The article doesn't say much about the phenomenology of tongue-munication. One Navy diver says it's like "feeling the outline of [an] image," which mostly implies to me that it's a very difficult experience to describe. I wonder whether it's similar to synesthesia, in that you're perceiving something you're not actually seeing (and perceiving it, furthermore, via a near-direct stimulation to the sensorimotor cortex). Is it experienced as an image? As some kind of tongue-image? As "just knowing" that something's there? Is the SEAL "feeling the outline" of the image on his tongue, or is it more like the experience of actually touching the object? Man, now I want to try it. Not enough to join the Navy, mind you, but I'd love to know what it means to see through your tongue. Or at the very least, what it's like to hold and look at a bluky-hand-held sonar device.

(I want a DVD of "The Tick" now, too, but there isn't one. What gives with that?)

Wednesday, April 26, 2006

Speaking of...

...relativism: Bruno Latour has noticed the grievous misuse of his arguments about the social construction of scientific fact, and is chastened:
Do you see why I am worried? I myself have spent some time in the past trying to show the "lack of scientific certainty" inherent in the construction of facts. I too made it a "primary issue." But I did not exactly aim at fooling the public by obscuring the certainty of a closed argument–or did I? After all, I have been accused of just that sin. Still, I'd like to believe that, on the contrary, I intended to emancipate the public from a prematurely naturalized objectified fact. Was I foolishly mistaken? Have things changed so fast?

In which case the danger would no longer be coming from an excessive confidence in ideological arguments posturing as matters of fact–as we have learned to combat so efficiently in the past–but from an excessive distrust of good matters of fact disguised as bad ideological biases! While we spent years trying to detect the real prejudices hidden behind the appearance of objective statements, do we have now to reveal the real objective and incontrovertible facts hidden behind the illusion of prejudices? And yet entire Ph.D programs are still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always the prisoner of language, that we always speak from one standpoint, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives. Was I wrong to participate in the invention of this field known as science studies? Is it enough to say that we did not really mean what we meant? Why does it burn my tongue to say that global warming is a fact whether you like it or not? Why can't I simply say that the argument is closed for good?
As I said on Alejandro's blog, where I first heard about this, I like Latour and think that his arguments are of great import when kept out of the wrong hands. The Intersection writes that Latour should stop flagellating himself and get on with countering misused relativism, and I agree: We forgive you, Bruno, now help make it right.

(The article also has resonance with a discussion Laura and I just had about the academy's inability to cope with postmodernism, but that's a whole other can of worms.)

...education: There is a wealth of education links over at The Education Wonks, including one to the aforementioned Sudbury model school. I haven't even started making my way through them, so I'll probably also blog specific things that strike me, but meanwhile, there's enough reading material there to keep you from needing to look at my blog for weeks. Happy birthday.

Meanwhile, Dispatches from the Culture Wars has a post about the New York Academy of Sciences' recent conference on teaching evolution. The post includes a suggestion from a panelist that evolution be taught not only as a biological truth but as "an example of science done right," and (in comments) a concurrent suggestion that the 17th-century furor over heliocentrism should be used to give perspective on the evolution/ID "controversy," as well as to show (Kuhn-style) how science works when it really works. The historian of science in me loves the great taste... but the adult in me agrees that this would be an invaluable asset to science education! We need a little philosophy with our high school sciences -- not just plate tectonics and ATP, but why we study the world the way we do; what it means to talk about a scientific theory; what it means to "prove" something and whether it's possible; and generally where science comes from.

Ideas about education: An exercise in oxymoronicism

So, just for a little background, I'm taking an undergraduate course this semester. This was a bit of a mistake and I knew it would be, since I'm better off in a seminar-style course with a lot of excited discussion, and have been ever since I was an undergrad. Being totally burned out on school has not exactly helped. But I wanted to take a course on postmodern literature, and there's a dearth at the graduate level.

In Monday's class (I would have blogged this earlier, but see the previous post on publishing issues), the professor took a minute to discuss the potential harmfulness of across-the-board relativism. This, I am totally in favor of. I noticed a trend towards ending an argument with relativism in my senior year of college, and I hated it. My anthropology class, for instance, had an online discussion forum (quite ahead of its time), which allowed us to talk about things we hadn't gotten to in class or have additional discussion about class issues. Someone mentioned Scientology in class, and this was followed up on the forum by a lot of posts along the lines of "well who are we to say they're a cult" and "but if it's true for them..." Of course my response involved links to some definitions of "cult" and a few choice sentences about how WORDS MEAN THINGS. I don't believe that every thought is equally valid, and I can't stand it when people try to use that "logic" to end or avoid an argument.

In my mind, the best example of relativism's dangers is intelligent design. The idea that we should "teach the controversy" requires the assumption that any crackpot cosmogony must be put on an equal level with well-supported scientific theory -- it is this assumption that the Church of the Flying Spaghetti Monster so beautifully exploits. Instead of ID, though, my postmodernism professor chose to discuss relativism by way of an article in the Post about a nearby Sudbury model school. (If you can read that page without choking up, you had a better education experience than I did.) The assertion was that such a school is relativist because it derails the idea of expert knowledge and puts childhood desires on the same level as adult obligations.

The students, as students are wont to do, picked up on the fact that they were meant to find this ridiculous, and got accordingly derisive. They'll be forced into creative jobs... support them for the rest of their lives... never learn how to function in the real world... play video games all day... my friend teaches at a Montessori school and the kids are monsters.... Non-standard jobs were maligned, lack of enforced education was equated with discipline problems, and overall the students showed themselves to be true believers in the idea of education as a way of racking up employability points. At one point, a kid who I'd assumed to be quite bright said "What am I talking about, we're English majors, we'll never get jobs." I told him that was 100% false. Call me a relativist.

But of course, in a certain sense the kids were right, because if you didn't tell them what to study they really would play video games (or drink) all day. Learning is an obligation that they seek to be released from; the same students who snicker that Sudbury kids can't function in the real world complain about having to read, having to write, having to think. My comp students resent learning how to structure an argument, because for god's sake they're going to be engineers and how does this relate to what they care about. One of my lit students doesn't like literature because the characters aren't enough like him, so how can he relate? Others complain that they used to enjoy reading, but college "ruined it forever." Who's the real discipline problem -- a kid who goes to chemistry class but feels comfortable asking questions and doesn't call his teacher "Mr." or "Ms.," or one who no longer knows how to want to learn? I know which one I'd rather raise.

And which one I'd rather have been. Maybe the problem is that these kids had idyllic educations, or maybe they're just average smart. But I'm not ashamed to say that I really needed gifted education, and not just one pull-you-out-of-class session a day. I spent my elementary years being pushed aside and neglected -- "go find your own vocabulary words" -- and without my two shining years at the Program for the Highly Underserved by Regular Education and two summers at the Center for Badly Disaffected Youth, I would have dropped out well before I hit college. True, we still got grades in Ms. Lakomy's amazing 6th grade class, but we also spent our days inventing things; or doing lateral thinking puzzles; or making, burying, and digging up artifacts. At CTY things were more regimented, but we were only there in the first place because we wanted to learn, so we never resented doing our homework (Laura, you can correct me on this if I'm being too rose-colored, but I remember sitting in the Star Trek Room doing sorites, silently, and kinda loving it, even though I would have relished an extra hour of Mandatory Fun). All I needed, all we needed, was for someone to assume that we didn't have to be bullied into learning. And to believe that education wasn't all about priming us for acceptable, lucrative jobs.

Which is why, when I first found out about the Sudbury Valley school in 8th grade, I wanted to go there so badly my teeth hurt. Imagine being given freedom over what you learned, being treated like someone who's qualified to make such choices! Imagine getting to study subjects that weren't on any standardized test, just because you wanted to! Because I wanted to. And I don't think it's wrong to imagine that I wasn't the only one. In the end, the Sudbury model is a little too hands-off, a little too hippyish, mostly because I doubt it would attract qualified science teachers, which would lead to a lopsided education. And it certainly wouldn't work for everyone. But in my mind, it's a few hundred miles better than the other end of the spectrum, the end where most public schools live.

Kids who are given freedom over their school day will only play video games 24/7 if they have come to think of learning as a chore. And I'm not saying that the concept of expertise should be fully eroded, or that a teacher shouldn't have a certain amount of guaranteed respect just because they know more, but I don't see that treating a child more like an adult is the same as treating adults more like children. I don't know any teachers I've respected more than Ms. Lakomy; Brian and Aeon and Jonathan at CTY; Mr. Donaldson who had us study history of science by designing water clocks; Bill Oram who led my mindblowing five-person Faerie Queene seminar. The teachers who acted as though, given a choice, we might still be there, wanting to know more.

Smart kids deserve more credit, and I think that if all kids got more credit, more kids would be smart. Maybe I'm making the mistake my family so often accuses me of making -- maybe I'm assuming that everyone is like me. Maybe students shouldn't be given more freedom across the board; maybe some of them need to see high school as something you endure so you can go to college which is something you endure to get a career which is something you endure to get money to retire on. But that seems joyless to me, and I know from observing my students (who are smart and dedicated and hate college and just want to get through it so they can make some money) that it will lead to a generation that doesn't value knowledge, that sees creative jobs as a burden, that believes your worth is determined by your salary. I don't want to teach those students. I'm glad I narrowly escaped being one.

Tuesday, April 25, 2006

Blogger didn't work but now it works, huzzah

As you can probably tell from the fact that my phantom double-post FINALLY went away, Blogger has solved its publishing problems (which were actually kind of convenient, since they reminded me to bring up the significance of technical difficulties when I discussed blogs with my students today). This means that the cryptology post now actually exists and can be commented on. It also means, though, that I've gotten behind on general bloggage, especially since Blogger just emailed me a passel of comments from the last couple of days. I'm probably not going to fix that tonight since it's open mic, but this is just a note to say that the software is back on track and soon I will be too.

Sunday, April 23, 2006

O to be a cryptologist now that spring is here

I just found out from the NYT science section that there's an encrypted sculpture on the grounds of the CIA building, about 45 minutes from here. Of course I got really excited and wanted to go see it immediately, but apparently it's not open to the public. Curses! How can you make something so cool and then restrict it to people who work for the government?

Anyway, the Times reported that the sculpture's designer has piped up to correct some of the codebreakers on an error that was throwing them off the scent. Here's how weird the cipher is: the correction turned the string "IDBYROWS" into "LAYERTWO." If you want to see how the ciphers work, you should check out the exhaustive website Realm of Twelve, which manages to catalogue all the existing solutions (three of the four sections have been decrypted) without compromising the sculpture's occultness.
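For flavor: the solved sections are reported to be Vigenère ciphers run over a keyed alphabet -- one that starts from a keyword instead of from ABC. Here's a toy Python sketch of that general species of cipher, using the keyword and key reported for the first section; it's a simplification for illustration, emphatically not the sculpture's actual tableau:

    # Toy keyed-alphabet Vigenere, in the general spirit of the solved
    # sections. A simplification -- not the sculpture's actual scheme.

    def keyed_alphabet(keyword):
        """An alphabet that leads with the keyword's letters, then the rest."""
        out = []
        for ch in keyword.upper() + "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
            if ch not in out:
                out.append(ch)
        return "".join(out)

    def vigenere(text, key, alphabet, decrypt=False):
        """Shift each letter by the alphabet-position of the next key letter."""
        result = []
        for i, ch in enumerate(text):
            shift = alphabet.index(key[i % len(key)])
            if decrypt:
                shift = -shift
            result.append(alphabet[(alphabet.index(ch) + shift) % 26])
        return "".join(result)

    ALPHA = keyed_alphabet("KRYPTOS")        # KRYPTOSABCDEFGHIJLMNQUVWXZ
    secret = vigenere("BETWEENSUBTLESHADING", "PALIMPSEST", ALPHA)
    print(secret)                                               # gibberish out...
    print(vigenere(secret, "PALIMPSEST", ALPHA, decrypt=True))  # ...and back

The round trip works because decryption just walks the same shifts backwards; the fun of the sculpture, as far as I can tell, is everything stacked on top of machinery like this.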

I'm not much of a cryptologist; I'm lousy at math and I don't exactly break codes in my head. But I love puzzles -- you should have seen me when the Notpron riddle was big. (Seen me, but not tried to talk to me.) So while I don't have any illusions that I could help translate this, I would love to go see it. Just to bask in the mystery and be generally creeped out.

Plus, part of the sculpture is in Morse, and that I can read. I can even quibble with the cryptologists: the guy at Realm of Twelve writes that "the obvious perception might yield readable results, but this does not exclude the possibility that the Morse could or should be viewed upside down or from a reflected surface," but I started reading it backwards, which is the same as upside down, and it comes up short on the first C: C (-·-·) reversed is ·-·-, which isn't a valid character. This doesn't mean that it couldn't have another meaning if you broke up the characters differently, like turning the "vir" from "virtual" (···- ·· ·-·) into "steer" (··· - · · ·-·). Parts of the sequence are palindromic -- the "erpre" in "interpretati," the "visib" in "invisible" -- for whatever that's worth.
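If you share the affliction, the re-carving trick is easy to automate. Here's a quick Python sketch that finds every way to slice an unspaced dot-dash stream into valid letters; the Morse table is pared down to the letters involved here, so you'd want to fill in the rest of the alphabet before trusting it with anything real:

    # Re-segmenting an unspaced Morse stream. The table is deliberately
    # abbreviated to the letters needed for the example below.
    MORSE = {
        ".-": "a", "-...": "b", "-.-.": "c", ".": "e", "..": "i",
        ".-.": "r", "...": "s", "-": "t", "...-": "v",
    }

    def readings(stream, sofar=""):
        """Yield every way to read `stream` as a run of valid letters."""
        if not stream:
            yield sofar
            return
        for length in range(1, 5):          # letters are 1-4 symbols long
            head, tail = stream[:length], stream[length:]
            if head in MORSE:
                yield from readings(tail, sofar + MORSE[head])

    # "vir" and "steer" are the same nine symbols, carved differently:
    for word in sorted(readings("...-...-.")):
        print(word)

Run it and you get "vir" and "steer" along with a heap of gibberish like "eeeteeete" -- which is exactly why unspaced Morse is so slippery.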

You can see how much I like this stuff -- not actual high-level codebreaking, which I'm usually not smart enough for, but contemplating objects with a hidden meaning. I mean, I once travelled to New York just to see one of the original copies of Agrippa, and it's not like I was able to touch it or open it and look inside. I just like it. I like being around things like this. I think it's because I'm far too informed and pragmatic to have any kind of sense of spirituality; really good puzzles are the closest I get to mystical objects.

But actually, only being able to access the sculpture online has a thrill to it as well. It makes the story seem to straddle the boundaries between fiction and fact -- this could almost as easily be an Eco-esque story as a real construction of copper and petrified wood. (This is only helped along by the fact that the Realm of Twelve webmaster seems to have mystical leanings.) One of my favorite hypertext authors, Shelley Jackson, recently said in a talk that she considers ambiguous nonfiction to be the new territory of online literature -- we're so used to looking for fact online that factlike fiction has the potential to be much more striking and a better use of the medium. (For an example, see the really chilling story "Revelation of the Lamb in Four Parts" on Everything2.) Of course this is fictionlike fact, but it's the same liminal space and I like it.

All right, I know this has been kind of starry-eyed and Englishy, but it's a beautiful Sunday and I've had a bike ride and a shower and there's imminent steak and zucchini and I don't really feel like railing on IDers. Back to your usual cynical science tomorrow.

Saturday, April 22, 2006

Zombie hunting

I was totally blown away by this post on immediate blogroll addition Neurotopia, drawing connections between the now-widely-reported phenomenon of Ambien zombies and the less well-known theoretical zombies discussed by philosophers. "Zombies" are beings that look like normal humans but have no consciousness; whether or not you believe that they're possible (not that they exist, mind you, but that they're possible -- people who believe in this kind of zombie are probably wrong, but they're not crazy) says a lot about your views on the "hard problem of consciousness." (The zombie's thought-experiment cousin is Davidson's "swamp man" -- a molecule-for-molecule duplicate of a person, assembled by a freak lightning strike, with no causal history at all. Philosophers, huh?)

From Neurotopia:
Can there be a human who goes to work, has a wife, coaches little league, but has no conscious experience? Perhaps here we have a way to investigate the possibility. Does Ambien shut off excitatory (or activate inhibitory) neural circuits that directly disengage a "consciousness" module or diffuse neural net? Can a somnambulant Ambien user "learn" new things, be they cognitive or motor skills, and can these new skills be performed/recalled at a high skill level when fully conscious, despite a lack of mnemonic recall for the event itself? Or can he/she be made to become fully conscious while in this state e.g. through the use of pain or loud noise, or direct stimulation of other neural circuits? To what extent does Ambien actually interfere with conscious perception? Do Ambien users display their normal personality when they are having a somnambulatory event? Can we stick them in an MRI or EEG and observe how activation patterns differ when the person is conscious and when they are on Ambien?
This is a brilliant connection. People who take Ambien are being found to walk, eat, even drive without apparent awareness or memory, and this certainly looks a lot like the philosophical zombie. Zombified Ambien users are clearly reacting to the outside world -- for instance they avoid obstacles, at least to a certain extent, when they drive -- but they don't seem to be aware of it. In other words, when they see something they don't know that they see it, and they don't form memories of seeing it (or of knowing that they saw it). Looking at the differences between someone's neural reaction to a stimulus on Ambien and their unimpaired reaction could give a lot of insight into what it means to be conscious of something (and by extension, what it means to be conscious, period).

Neurotopia speculates that "David Chalmers must be wetting his pants," but I don't know. As I understand it, Chalmers wants the possibility of zombies to mean that consciousness can't be tracked to an observable physical process. Presumably, if it's possible for a zombie to exist, that's because it's possible for there to be a creature that is physically (including neurally) identical to a human but that lacks conscious experience anyway. This essentially makes any study of Ambien users inadmissible for Chalmers' purposes, though of course he'd be able to explain the findings away -- see my earlier comments on refuting pseudoscience with facts. If there turns out to be a measurable distinction between a "zombie" brain and a fully conscious one, that shows that there's some neural basis for consciousness, but the beauty of Chalmers' take on the "hard problem" is that this isn't enough to prove consciousness to be purely physically based. Nothing is.

So these guys are good news for neuroscience, and we should definitely chain them up in the shed behind the MEG lab. But there's no refuting a philosopher so spooky-minded that he wants to believe in zombies. Seriously, zombies. I ask you.

What we talk about at the bar

(And if you think this is geeky, you haven't seen Dan and me complaining to each other in Morse code while someone's telling a boring story.)
  • New Scientist just reported new evidence of a split between form and function: apparently websites proportioned according to the Golden Ratio are the least user-friendly. Specifically, when the ratio between the navigation bar and the content frame was (1+√5)/2, users had a significantly harder time finding information than they did when the ratio was smaller. Of course, this leads us to the radical conclusion that design is not the same as art, and what people find pleasing is not the same as what they find useful. Although I'm actually a bit uncertain on the "pleasing" part; it's true that phi seems to show up an awful lot in natural forms and in art, but it's also true that this conclusion probably involves some fudging. As the Skeptical Inquirer points out, "Measurements of parts of a building, or work of art, have such fuzzy boundaries that it is easy to find phi when ratios close to phi fit just as well." It seems there are as many experiments showing that people like a 1.83 ratio (Markowsky) as that they like 1.618 (Fechner). (There's a quick back-of-the-envelope on what these ratios actually mean for a layout at the end of this list.)

  • Also from New Scientist: Nobel laureates warn Bush that nuclear attacks are, you know, not so good for children and other living things. I actually brought this up at dinner, but I should have waited until we got to the bar, because I can't think of any response to this other than heavy drinking. As Dan pointed out, if he's seriously considering the nuclear option, it's not physicists we need. It's Special Ops assassins. How coked-up do you have to have been during the Cold War to require eminent physicists telling you that nukes are a bad idea?

  • Shaggy insisted, probably because of his own imposing mandible, that chin size is determined by testosterone and, accordingly, directly correlated to testosterone levels. I can't find any support for the second claim that doesn't look like phrenology, but one of the late pubertal effects of testosterone does seem to be jaw growth. I'm still on the fence about whether we can take this to mean that Amy Grant is a hormonal hermaphrodite. I'm also trying to reconcile it with the fact that mandibular prognathism is a symptom of Klinefelter Syndrome (in other words, men with an extra X chromosome have big jaws).

    I was willing to grant that men find small chins attractive on women and women find strong chins attractive on men, but apparently it's not that simple. Well, the part about men liking tiny ineffectual baby jaws is that simple, but according to Discover, the ladies are a little more choosy about chins:
    There's no question that a dose of this classic "maleness" does contribute to what is now called handsome. Actor Brad Pitt, widely regarded as a modern paradigm of male attractiveness, is a wide-jaw guy. Biologically speaking, he subconsciously persuades a female that he could chew more nutrients out of a leafy stalk than the average potential father of her children -- a handy trait, in hunter-gatherer days anyway, to pass on to progeny.

    But a woman's agenda in seeking a mate is considerably more complex than simply whelping strong-jawed kids. While both men and women desire healthy, fertile mates, a man can -- and, to some extent, is biologically driven to -- procreate with as many women as possible. Conversely, a woman "thinks about the long haul," notes Etcoff. "Much of mate choice is about finding a helpmate to bring up the baby." In several studies, women presented with the hypermale face (the "Neanderthal type" as Etcoff puts it) judged its owner to be uncaring, aggressive, and unlikely to be a good father.
    Also, get this: "Female preferences in male faces oscillate in tandem with the menstrual cycle, suggests a study conducted by Perrett and Japanese researchers and published [June 1999] in Nature." Can't wait to bust that one out on Shaggy next time we go drinking.
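As promised above, the back-of-the-envelope on those layout ratios. I'm assuming the study's ratio means content width to navigation width -- my reading, not necessarily theirs:

    # What the two ratios mean for how much screen the nav bar claims.
    # (That the ratio is content-width to nav-width is my assumption.)
    phi = (1 + 5 ** 0.5) / 2                 # 1.618..., the golden ratio
    for ratio in (phi, 1.83):
        nav_share = 1 / (1 + ratio)          # nav fraction of total width
        print(f"ratio {ratio:.3f}: nav bar takes {nav_share:.1%} of the screen")

So the difference between the "golden" layout and the one people supposedly prefer is a nav bar at 38.2% of the screen versus 35.3% -- not exactly a chasm, which maybe says something about how much fudging phi-spotting can absorb.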

Friday, April 21, 2006

Genes. Code. For proteins.

I know you know this, Discover. I know you know that genes don't code for eye color, or mental health, or body fat percentage, but for proteins that affect these things. I know your mental picture of the double helix doesn't include tiny little flags that say "intelligence!" and "double-jointedness!" and "sensitivity!"

I know you know this because at the end of the article "The Violence Gene," there's a nice even-handed quote:
There are many possible factors at work, he says, and violence is an extremely complex behavior. "Whether or not any given person in any given situation will become violent is known to be almost impossible to predict."

But the findings are still significant, Meyer-Lindenberg says, because, "...it gives us a handle for the first time into a genetic risk for violence."
But did you notice that this article is called "The Violence Gene"? Come on, Discover, you're not helping. A world where you can read Discover, which is a pretty decent lay science publication, and still be under the impression that there's such a thing as a "gene for violence" is not a world I want to live (or teach college) in.

I was very much struck by Blake Stacey's beautiful defense of "reductive" knowledge on Pharyngula:
I can certainly appreciate the beauty in a flower or a sunset, say, without going deep into the biology or the physics of it. However, since I do know a little about how the sunset happens, I can appreciate the combination of phenomena which go into the spectacle. I can connect the reddening of the sunset on a dusty, hazy evening to the reddening of distant stars seen behind dust clouds in astronomical photographs. My sunset relates to many other things: scientific knowledge allows all sights to become metaphors.

...

"Reductionism" is a decent enough concept, I suppose, yet it doesn't capture what I have experienced as a fairly common occurence among people who have a scientific worldview. Knowledge keeps adding to beauty, I find. It enhances the multiplicity of meaning. Some days I think that all the words we use to describe the "scientific method", words like reductionism and materialism, were invented by people who had never even tried the "method" out. Now we're stuck with them, and at the very least they make the job of communication harder. More's the pity.
This is so perfectly put, and very close to the explanation I've offered when religious friends ask how I can find any beauty in life yet not believe in the soul. And I think this relates to genetics too. It's so easy to take the truly reductive viewpoint: nature versus nurture. Either everything is environmentally influenced, or everything comes directly from your DNA. Either you're violent because you have a bad dad, or you're violent because you have the Violence Gene. But the truth is so much more complex and subtle, and therefore so much more enthralling (and so much more satisfying). One can still call it reductive, since it presupposes that the tiniest parts of life (genes, proteins, nearly indistinguishable environmental factors) play a role in the development of the human gestalt, but in no way does it actually reduce the profound complexity of life. In fact, it enhances it. And I know that "The Violence Gene" makes a striking headline, but we're not doing people, or science, any favors by taking the glib and easy way out.

So Discover, read your Lewontin and get back to me. (And with that lovely mixed image of a magazine reading a book, I'm off to Target.)

Thursday, April 20, 2006

Losing yourself -- sorry, superfrontal gyrus activity -- in a book

I was just discussing with one of my brighter students the difficulty of quantifying the distinction between "literature" and "just regular old writing." People who believe in "literature" tend to be vague and muddy about where exactly the boundaries lie, which is basically fine because the idea of a canon does more harm than good. But as I said to this student, it would be really nice to have quantitative data about brain functioning when reading literature and how it differs from brain functioning when reading... well, when just reading. We have data on what happens in the brain when reading a word, reading a nonword, reading a word in limited context, but not (to the best of my knowledge, though I'd be thrilled to find out I was wrong) on reading fiction versus doing other things.

I think it would be useful. But at the same time, I realize that the results would hardly be conclusive, since so much goes on in the brain when reading; fMRI results would be messy, and it's not like there's a single measurable moment of electrical impulse that defines literary reading. It's probably more effective to break it up. So I find it pretty interesting that according to this article, data imply that there is a substantive change in brain functioning associated with the sense of "losing yourself" in a task (or in a book?). From New Scientist, which you may have noticed is one of my favorite sources:
Goldberg found that when the sensory stimulus was shown slowly, and when a personal emotional response was required, the volunteers showed activity in the superfrontal gyrus – the brain region associated with self-awareness-related function.

But when the card flipping and musical sequences were rapid, there was no activity in the superfrontal gyrus, despite activity in the sensory cortex and related structures.

"The regions of the brain involved in introspection and sensory perception are completely segregated, although well connected,” says Goldberg, “and when the brain needs to divert all its resources to carry out a difficult task, the self-related cortex is inhibited."
It's far from a conclusive experiment, since they seem to have collected data from only nine participants, and fMRI data really are a lot muddier than they're often presented. But it's an interesting implication, and perhaps a first step towards a quantifiable criterion for "great" (or at least engrossing) writing.

What would you send into space?

...is what this post would be about if I had any readers yet. However, this blog is still in its solipsistic phase, where I can safely assume that nobody's watching (or anyway can't safely assume that they are). Therefore, this post will be about what I would send into space. If you want to send something, that's fine, but get your own can.

See, a California company is apparently gearing up to send things into space for $99. The catch? Well, you have to have $99, so you probably can't be a graduate student. Also, it has to fit in a container about the size of a soda can. So you can't send a hideous melange of pieced-together cadaver bits in the hopes of catching some life-giving solar rays (not that I would). Anything smaller than a can and non-explosive, though, and you're good to go.

Of course my first thought is a toy rocket. Recursion is always funny. But it's only going up and then coming back down, so for maximum effect you probably want something that will react well (or badly) to a short trip into the vacuum. I think Marshmallow Peeps would be ideal. I have a vivid memory of what happened to a marshmallow when my high school physics teacher put it in a vacuum jar, and it would be awesome to see that fate befall a Peep. Circus peanuts would also be satisfying, though not so blandly anthropomorphic.
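(For the record, the physics of the doomed Peep is just Boyle's law: at constant temperature, P1 × V1 = P2 × V2. A bubble of air trapped in the marshmallow at 1 atmosphere that suddenly finds itself at 0.01 atmosphere wants to swell to 100 times its volume; multiply by the thousands of bubbles whipped into a Peep and you get the glorious ballooning, followed by the sad shrivel once pressure returns. The 0.01 figure is my invention -- I have no idea whether the can is vented to space or how hard the vacuum would be -- but the principle stands.)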

I suppose anything develops a bit of a frisson once it's been in space, so if I had unlimited money I would probably just turn random items into space items. Space goggles. Space earrings. Space dentures. Space orthopedic socks. It's going to be a long time before I have unlimited money.

File under monkey/rodent terror

So my best friend has a horror of lab monkeys. Partly because they have interchangeable heads, but mostly because we're constantly implanting/training/otherwise imbuing them with the power to take over the world. This fear reached a peak sometime last year, when remote-control rat technology overlapped with monkeys' ability to control robot arms with their brains. She was pretty much sure that we were in for an army of robot rats telepathically controlled by primates.

Now, Pharyngula points out a new video game technology that allows you to install your favorite rodent as the Boss. The game tracks your pet's movements as it chases a piece of bait (undoubtedly a tiny, cheese-flavored human effigy), and reproduces Mr. Squeak in the game as a giant monster chasing your helpless avatar. Of course, the developers have all sorts of heartfelt justifications:
"We want to enable pets to play games in a way very similar to the way human players' play," said RASTER's Vladimir Todoroviæ, a collaborator working on the Metazoa Ludens project. "To play a computer game with your hamster would definitely make us think about where we have come with digital tradition now."

It may sound like a really complicated version of Ms. Pac-Man, but the goal of the game makers is ambitious: to merge human spaces with pet spaces through pervasive computing interfaces. By creating high-tech, pets-versus-owners computer games, researchers hope to gain new insights into animal behavior, and perhaps develop new technologies that could close the gap between the species.
Still, I think it's painfully obvious: We're preparing for the coming monkey-rat invasion.

Wednesday, April 19, 2006

Have you hugged your ancestors today?

We now have a matched set of transitional fossils: the fishapod, which is old-ass news by now, and a snake with legs. To the left you can see a very touching painting of Darwin expressing his gratitude to the fishapod for beginning to heal the much-maligned fossil record. If we had an equally cuddly image of the snakeapod, I'm sure he'd be just as demonstrative.

When I was in undergrad, a teacher sought to punish me for expressing a hatred of group work by making me play Huxley in a debate about Darwinism. I was up against four anti-evolutionists, and I wiped the floor with 'em just like the Bulldog would have. I believe I made some breezy and undoubtedly urbane comment about the inevitable incompleteness of the fossil record, then pointed out to the ersatz Richard Owen that he had actually recently acquired an archaeopteryx fossil and how did he explain that? "Owen," not having done her research, was utterly flummoxed. It is totally awesome and not nerdy to destroy people with factoids.

I would love to think that this would do the same to today's anti-evolutionists, but check out the astute comments on the Pharyngula post. The basic gist, and it's a sobering but true one, is that you can't fight pseudoscience with evidence. That's basically definitional of pseudoscience. This post makes the application of this principle to creationism painfully obvious: "God created a wide variety of animals, many of whom share various characteristics, and even DNA, with each other. That doesn't prove that these animals evolved from each other." In other words, God created all creatures plus all evidence for evolution (though perhaps not all evolutionists). It's self-healing, utterly impregnable to outside logic or evidence.

Perhaps the worst thing about it is that the creationists have convinced themselves, and probably their children, that this is what science is. Not the testing of falsifiable theories, not a collaborative collecting and interpretation of empirical evidence, not even problem-hypothesis-materials-method-conclusion. Just a set of subjects and terminology, without any of the dedication to genuine inquiry that has always made science great. On this view, science is distinguished from faith only in being an unassailable, non-falsifiable belief about the natural world rather than an unassailable, non-falsifiable belief about divinity. And this is a recipe for ignorance and stagnation.

Which is of course what the religion meme needs to perpetuate itself, so why am I surprised? I wish someone would make a nice plush version of the fishapod. I think I need a hug.

Heart cartridge

I've been meaning to blog about organ printing since I first saw it mentioned on Boing Boing, but I didn't yet have a blog to blog it on. As a result, I'm not exactly on the cutting edge here, but this hardly diminishes the awesomeness of the technology.

From the New Scientist article:
Gabor Forgacs, a biophysicist at the University of Missouri in Columbia, described his "bioprinting" technique last week at the Experimental Biology 2006 meeting in San Francisco. It relies on droplets of "bioink", clumps of cells a few hundred micrometres in diameter, which Forgacs has found behave just like a liquid.

This means that droplets placed next to one another will flow together and fuse, forming layers, rings or other shapes, depending on how they were deposited. To print 3D structures, Forgacs and his colleagues alternate layers of supporting gel, dubbed "biopaper", with the bioink droplets. To build tubes that could serve as blood vessels, for instance, they lay down successive rings containing muscle and endothelial cells, which line our arteries and veins. "We can print any desired structure, in principle," Forgacs told the meeting.
Maybe the most elegant thing about this technology is how naturally it collects and animates existing data. There's no mystery about what organs look like in slices; we've been looking at the body that way since the invention of computed tomography (early 1970s according to Wikipedia, and who am I to argue?). As Forgacs would apparently say, we learned this in kindergarten. (My boyfriend, who took classical mechanics with Forgacs, reports that anything the class had already covered was considered to be "learned in kindergarten.") The printer doesn't require fresh data or complicated mental gymnastics. It uses a new technology to activate what we already know.
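If you want to see how little geometry the "successive rings" trick actually requires, here's a toy sketch. Every number in it (droplet size, tube radius, layer count) is invented for illustration, and the alternation scheme is my cartoon of the description above, not Forgacs's actual protocol:

    import math

    DROPLET = 0.3   # droplet diameter in mm ("a few hundred micrometres")
    RADIUS = 1.5    # tube radius in mm (invented)
    LAYERS = 10     # layers to stack (invented)

    def ring(z):
        """Droplet centres for one bioink ring at height z."""
        n = int(2 * math.pi * RADIUS / DROPLET)   # droplets per ring
        return [(RADIUS * math.cos(2 * math.pi * k / n),
                 RADIUS * math.sin(2 * math.pi * k / n), z)
                for k in range(n)]

    plan = []
    for layer in range(LAYERS):
        z = layer * DROPLET
        if layer % 2:
            plan.append(("biopaper gel layer", z))     # support to print on
        else:
            plan.append(("bioink ring", ring(z)))      # cells that will fuse

    print(len(plan), "printing steps;", len(plan[0][1]), "droplets per ring")

The printer's cleverness is all downstream of this: the droplets in each ring flow together on their own, so the machine only has to put liquid clumps of cells in roughly the right places and let biology do the fusing.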

Of course I'm also fond of the apparent lack of moral ambiguity -- in fact, I can't totally suppress a smug sense of triumph on behalf of Forgacs and his team. "Object to this, science-phobic fundies," I've been saying, sometimes out loud. "This is a technology that can eventually save lives, even nonwhite non-Christian lives, without encouraging sex or endangering zygotes. Scary, no? I dare you to find a platform on which to condemn it." Perhaps I'm underestimating the noise machine, but I daresay there's no foothold for pseudo-pious outrage on this one.

Seriously, though, "bioink"? Does anyone else pronounce this in their head as though it were a sound effect? They could really do with a judiciously-placed hyphen in there.