Astrogator's Logs

New Words, New Worlds

Archive for the 'Science' Category

Spacetime ’Branes: The Multiverse

Friday, December 2nd, 2011

by David Darling

Today I have the pleasure of hosting my friend David Darling, an astronomer and well-known science writer, who will update us on the multiverse. Dr. Darling has written many books of popular science, including Life Everywhere: The Maverick Science of Astrobiology (in which he mentions my views on Rare Earth and the Anthropic Principle). He also maintains a much-visited website, The Worlds of David Darling, which contains The Internet Encyclopedia of Science. His latest book, Megacatastrophes!: Nine Strange Ways the World Could End, his second collaboration with Dirk Schulze-Makuch, will appear next spring.

The multiverse, or theory of many universes, is very much in the news right now because some recent work strongly suggests that it might be true. The basic, mind-boggling idea is that “out there” is more than just the bubble of space-time we happen to live in – what we call the Universe. There are trillions and trillions (and trillions and trillions…) of other universes. Don’t even bother trying to imagine them all or your head might explode.

Surprisingly, the word “multiverse” has been around for a long time. It was coined way back in 1895 by the American philosopher William James, although he probably had something quite different in mind from what modern scientists are talking about.

And what are they talking about? Here’s the first problem we run into in tackling the multiverse concept. When scientists talk about the multiverse they can mean different things. To a cosmologist – someone interested in the origins and evolution of the universe as a whole – the multiverse is a consequence of the nature of the vacuum, which isn’t as empty as we usually suppose. The cosmologist’s multiverse stems from something called chaotic inflationary theory, which itself is a variety of the theory of cosmic inflation. In a nutshell, our universe is like a bubble of spacetime that spawned from a great foaming ocean of spacetime that’s always existed and always will exist. In its first few moments, our universe expanded at a fantastic rate before settling down to a more sedate rate of growth. But beyond our universe are other, similar bubbles – other universes – each expanding and each with its own physical constants and laws. One estimate puts the number of such universes at an outrageous 10 to the power 10 to the power 10 million (in other words 1 followed by 10 to the 10 million zeros – aargh!).
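Written out as a tower of exponents, that estimate (just a restatement of the arithmetic in the paragraph above, not a new derivation) looks like this:

```latex
N_{\text{universes}} \sim 10^{10^{10^{7}}}
```

that is, a 1 followed by $10^{10^{7}}$ zeros – the “10 to the 10 million zeros” mentioned above, since $10^{7}$ is 10 million.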

On the other hand, to a quantum physicist – someone who deals with the very smallest things in nature – the concept of the multiverse is a different beast. If you believe in something called Everett’s many-worlds interpretation of quantum mechanics, which a lot of quantum scientists do, every time an observation is made at the quantum (super-tiny) level, the universe splits into all the possible outcomes that could happen. I’m not even going to get into what counts as an “observation”! Some people say it has to involve a conscious or sentient observer (like a human being); others argue that any measuring instrument will do. It’s complicated. But the underlying message of Everett’s theory is that any time an event (such as a collision between particles) is watched, the universe splits in various ways to take account of all the possible outcomes. Needless to say, this gets pretty crazy pretty fast! If every outcome of every minuscule watched event gives rise to an entirely new universe, then the total number of universes in this quantum physical view of the multiverse is beyond mind-boggling.

So there are these two different multiverse scenarios – the one of the cosmologist and the one of the quantum physicist. And they aren’t mutually exclusive. They could both be right. Trying to figure out the consequences if both types of multiverse are real and co-existent very quickly overwhelms my brain’s paltry (and diminishing) collection of neurons. But let’s just focus on a couple of particulars. In the quantum physicist’s multiverse, there are bound to be a lot of universes that are very similar to the one we live in. In fact there are going to be a lot of universes with other you’s – some of them only very slightly different from the one that we’re in right now. The cosmologist’s multiverse also allows for a vast number of universes, but the chances of an almost exact copy of you are more remote. Instead, the cosmologist’s multiverse is populated by an incredible variety of bubbles of space-time in which the laws and basic constants are expected to vary widely. Probably very few are capable of supporting life.

Another distinction between the two types of multiverse is their fundamental nature. The cosmologist’s multiverse is a bit easier to grasp. Put it this way: if there were a parallel you in a bubble-universe that was the product of chaotic inflation then this other you would exist in the familiar three dimensions of space. But an alternative you in Everett’s many-worlds picture is a much more esoteric affair: a creature living in a different quantum branch of something called Hilbert space. Not being a mathematician, I won’t try to explain what Hilbert space is (Google it, if you’re interested). Suffice it to say, it’s an extremely important concept in quantum mechanics – but far from easy to visualize.

Now the exciting thing is, physicists are getting close to being able to test if the multiverse is real. If there are other universes beyond our own, then we may well have bumped into them in the past, resulting in the cosmic equivalent of fender benders. The impacts ought to show up as dents in the cosmic microwave background – the now much-cooled afterglow of the Big Bang. For some time, the European Space Agency’s Planck satellite has been mapping the microwave background to an unprecedented level of precision. The results will be out soon and may confirm the multiverse theory.

On a different front, theorists have made a discovery that goes to the very heart of quantum mechanics. They’ve shown it’s very likely that something called the wavefunction – the most important concept in the physics of the very small – isn’t a mere wave of probability as previously supposed, but a real physical object. The most far-reaching conclusion of this is that Everett’s many-worlds interpretation is correct and the quantum physicist’s multiverse is also a fact.

So get ready for expanding horizons. Just a few centuries ago, people thought there was only one sun. Then it turned out the stars were suns too. Then we discovered that our Milky Way Galaxy, with its hundreds of billions of stars, was just one among many galaxies. Then it turned out that galaxies were arranged in clusters, which in turn formed superclusters. Now it seems our universe is just one of an unbelievable number of other universes. Who’s to say the hierarchy doesn’t extend beyond the multiverse?

Images: David Darling, Life Everywhere; bubble universes, Sally Bensusen/SciencePhotoLibrary.

Kalos Kaghathos

Thursday, October 6th, 2011

(classical Hellenic: beautiful and good)

When I started dealing with computers, I learned FORTRAN for a crystallography project (this was still the era of perforated cards), then VMS, a UNIX contemporary. I got used to bulky cuboids the color of chewed gum, trailing wires like tentacles of beached jellyfish. The language within them matched their appearance – one made by and for computer geeks (though the alphanumeric version of Rogue was terrific). Late in my postdoctoral stint, however, sleek, fast apparitions materialized in the lab: the first Macintoshes, with such exotic capabilities as point-n-click and drag-n-drop.

Ever since then, I and almost all the scientists I know (with exceptions dictated by specific demands) have cleaved to our Apples. The machines were ahead of their time when they first came out, and have been worth every extra penny. They work flawlessly, install and run new applications seamlessly, never crash or munge data – and, yes, they’re beautiful, a feast for the senses. In short, they’re for people who want well-crafted precision instruments and don’t have the time and stamina to endlessly reboot Windows. I’m not starry-eyed about Apple’s business practices but I’m glad they stand against the Microsoft monolith, an alternative to the monoculture that threatens to condition humanity, willy-nilly, to accept cynically shoddy work.

Steve Jobs was my age – I turn 56 today. A reminder that we have finite time to realize our aspirations, though he started early and did spectacularly. Few people are as fused to their work as he was to Apple, to the point where people worry about the company’s future after his death. He deserves the tributes that are pouring in and I’m grateful he persevered in his vision of excellence rather than just cobbling together something that lurches around sort of getting the job done. Although I feel obliged to point out that a woman with his idiosyncrasies, no matter how inspired, driven and charismatic, would have lasted all of half an hour – at Apple or anywhere else.

For my birthday present, I got one of those elegant iMacs that have the CPU incorporated in the back of their slightly curving screen and look like a starship control console. Like Steve Jobs, I too have been checked by cancer – but for as long as I can travel, his Apples will be companions on my journey.

Images: Apple logo modified by Cory Cole; Apple-inspired Eve of Wall-E (from Pixar, another visionary move by Jobs)

If They Come, It Might Get Built

Monday, October 3rd, 2011

Sic itur ad astra (“Thus you shall go to the stars.”)
— Apollo, in Virgil’s Aeneid

Last Friday, several hundred people from a wide cross-section of the sciences and humanities converged on Orlando, Florida, to participate in the DARPA-sponsored 100-Year Starship symposium.  As its name indicates, this was a preliminary gathering to discuss the challenges facing a long-generation starship, from propulsion systems to adapting to extraterrestrial homes.

I was one of the invited speakers.  I won’t have the leeway of long decompression, as I must immediately submerge for a grant.  However, I think it’s important to say a few words about the experience and purpose of that gathering.  Given the current paralysis of NASA, activities like this are sorely needed to keep even a tiny momentum forward on the technologies and mindsets that will make it possible to launch long-term crewed ships.

Open to the public, the event lasted two and a half days, the half being summations.  Content-wise, half was about the usual preoccupations: propulsion systems, starship technologies, habitats.  The other half covered equally important but usually neglected domains: biology, society, ethics, communicating the vision.  The talks were brief – we were each given 20 minutes total – and varied from the very broad to the very specific.  The presentations that I attended were overall high quality (though I personally thought “exotic science” should have been folded into the SF panels); so were the questions and discussions that followed them.  The age distribution was encouraging and there were many women in the audience, of which more anon.

Some aspects of the symposium did dismay me.  Structurally, the six or seven simultaneous tracks (with their inevitable time slippages) not only made it hard to go to specific talks but also pretty much ensured that the engineers would go to the propulsion talks, whereas the historians would attend those about ethics.  The diversity quotient was low, to put it mildly: a sea of pale faces, almost all Anglophones.  Most tracks listed heavily to the XY side.  This was particularly egregious in the two SF author panels, which sported a single woman among nine men – none with a biological background but heavy on physicists and AI gurus.  It was also odd to see long biosketches of the SF authors but none for the presenters in the official brochure.

Most disquieting, I sensed that there is still no firm sense of limits and limitations.  This persistence of triumphalism may doom the effort: if we launch starships, whether of exploration or settlement, they won’t be conquerors; they will be worse off than the Polynesians on their catamarans, the losses will be heavy, and their state at planetfall won’t resemble anything depicted in Hollywood SF.  Joanna Russ showed this well in We Who Are About To…  So did Chelsea Quinn Yarbro in Dead in Irons.  But neither story got the fame it deserves.

On the personal side, I had the pleasure of seeing old friends and finally meeting in the flesh friends I had known only virtually.  I was gratified to have the room overflow during my talk.  My greatest shock of happiness was to have Jill Tarter, the legend of SETI, the inspiration for Ellie Arroway in Contact, not only attend my talk but also ask me a question afterwards.

I hope there is sustained follow-up to this, because the domain needs it sorely.  Like building a great cathedral, it will take generations of steady yet focused effort to build a functional starship.  It will also require a significant shift of our outlook if we want to have any chance of success.  Both the effort and its outcome will change us irrevocably.  I will leave you with three snippets of my talk (the long version will appear in the Journal of the British Interplanetary Society):

“An alternative title to this talk is ‘Distant Campfires’. A Native American myth said that the stars are distant campfires, where our ancestors are waiting for us to join them in storytelling and potlatch feasts.  Reaching and inhabiting other planets is often considered an extension of human exploration and occupation of Earth but the analogy is useful only as a metaphor. To live under strange skies will require courage, ingenuity and stamina – but above all, it will require a hard look at our assumptions, including what it means to be human.”

.

“In effect, by sending out long-term planetary expeditions, we will create aliens more surely than by leaving trash on an uninhabited planet.  Our first alien encounter, beyond Earth just as it was on Earth, may be with ourselves viewed through the distorting mirror of divergent evolution.”

.

“If we seek our future among the stars, we must change for the journey – and for the destination.  Until now, we have participated in our evolution and that of our ecosphere opportunistically, leaving outcomes to chance, whim or short-term expedience.  In our venture outwards, we’ll have to overcome taboos and self-manage this evolution, as we seek to adapt to the new, alien worlds which our descendants will inhabit.

One part of us won’t change, though: if we ever succeed in making our home on earths other than our own, we will still look up and see patterns in the stars of the new night skies.  But we will also know, each time we look up, that we’re looking at distant campfires around which all our relatives are gathered.”

Images: 1st, sunset, September 27, 2011, Sarasota, Florida (photo, Athena Andreadis); 2nd, Spaceborn (artist, Eleni Tsami)

Are Textbooks Science Fiction?

Tuesday, September 27th, 2011

by Joan Slonczewski

Today I have the great pleasure of hosting my friend Joan Slonczewski, who will discuss how textbooks can fire the imagination of future scientists.  Dr. Slonczewski is Professor of Biology at Kenyon College where she teaches, does research in microbiology, and writes a leading undergraduate textbook, Microbiology: An Evolving Science (W. W. Norton). She is also an SF author well-known for incorporating real science in her fiction, as highlighted by her justly famous A Door into Ocean.  Her recent SF novel, The Highest Frontier (Tor/Macmillan), shows a college in a space habitat financed by a tribal casino and protected from alien invasion by Homeworld Security.

When the film Avatar opened, it drew many critiques based on science. The planet Pandora could not exist around a gas giant; the neural-linked ecosystem would have no predators; and the Na’vi should have six limbs, like other Pandoran fauna. The greatest flaw was that the Na’vi have breasts, although their class of creatures is not mammalian. Non-mammals having breasts would be an error unthinkable in real science.

Yet I wonder what might happen if an introductory textbook in biology were to receive scrutiny similar to that of Avatar.  If non-mammals should not be shown with breasts, does it follow that true mammals, named for the mammary gland, should indeed show breasts? The typical textbook section on “mammalian diversity” shows scarcely a mammary gland. One would never guess that we drink milk from cattle, mares, camels, and reindeer. The more modern books do show prominent breasts on a human. In other words, a view of life surprisingly similar to Avatar.

I first saw the fictional aspect of textbooks from the viewpoint of a science fiction author writing a college text, Microbiology: An Evolving Science (W. W. Norton).  As a fiction author – my book A Door into Ocean won the Campbell award – I well know the dilemma of “hard SF,” which aims to invent a future world of gadgets that don’t yet exist based on science that actually does. Even “hard” science fiction often dodges inconvenient points about exceeding the speed of light, breathing the air on any planet where the starship lands, and mating with the seductive native “aliens.”

A textbook, I thought, would be different. My coauthor John Foster would correct what I wrote, and our publisher provided a throng of editors and expert reviewers. The art budget paid for stunning visuals from a first-rate graphic arts firm whose artists actually check details in the primary literature.

Our early illusions about textual perfection fell away in the light of reviewer comments based on errors entrenched in other books, and editorial “corrections” that often made clearer English but muddier science. But the art process was what really made me think of fiction. Early on we chose a “palette” in which color conveys information: DNA was purple, RNA was blue, proteins red, yellow, or green. And cell interiors, with their nucleus, mitochondria, and so on, offered a rainbow of colors from lilac to salmon. Our color-coded figures are more than informative; they are gorgeously attractive, so much so that prospective adopters have been known to caress them on the page.

But DNA is not “really” purple, and RNA is not really blue. Chloroplasts are indeed green, as typically shown, but mitochondria are not red aside from a few of their iron-bearing proteins. And what of individual atoms as ray-traced blue and red balls and sticks? This aspect of science art goes beyond fiction–it is fantasy.

Despite their limitations, the visuals in a textbook truly illustrate: they form a pattern in the reader’s mind, a pattern that deepens understanding of a concept. This aim of illustration is actually shared by the best science fiction. Frank Herbert’s Dune illustrates how water scarcity drives an ecosystem. Octavia Butler’s Lilith’s Brood illustrates how organisms trade genetic identity for survival.

So if textbook art is “fictional,” what needs to be “correct”? The mental patterns formed by the text and art need to be honest; to spark genuine insights that lead to understanding. A cell’s nucleus is not “really” lavender in color, but the colored shape draws attention to the nucleus as a compartment enclosing the precious DNA. By contrast, an image depicting the nuclear contents as spilling out of the cell would not yield insight, but confusion. A troubling new group of textbooks aims to sow such confusion – books with titles like Exploring Creation with Biology and The Lies of Evolution.  Such books aim to inoculate “inquiring preteens” against the founding principles of biology, geology, and cosmology.

If deliberate confusion is the worst sin of any book, the next worst sin is boredom. Teachers can make students read the most boring book, but will they stay awake?

A key decision we made for Microbiology: An Evolving Science was to tell stories. We told how Bangladeshi women taught the world to fight cholera. How life began out of atoms formed by stars that died long before our own sun was born. How a high school boy testified at the Scopes trial that humans evolved from microbes. How Louis Pasteur as a student discovered mirror symmetry in biomolecules–a tool that astrobiologists may use to reveal life on other worlds.

A textbook, like science fiction, should raise questions. Is there microbial life on Mars–and what might it look like? Textbooks should take the reader to new places where we’ve never been–and perhaps could never go, such as the interior of a cell, the electron cloud of an atom, or a planet where people have three sexes.  Like science fiction, a textbook should inspire people to learn more about real science, and even become scientists.  After Jurassic Park came out, some scientists felt embarrassed by the book’s technical flaws and its portrayal of money-mad dinosaur cloners.  But so many students came to Kenyon College wanting to clone dinosaurs that we founded a new program in molecular biology.

My latest work of fiction, The Highest Frontier, has already drawn complaints. The space elevator won’t work; the casino-financed satellite can’t be built; and the aliens could not really evolve like viruses. Let’s hope at least the book inspires students to pursue virology.

Athena’s coda:  Readers of this blog know the reasons why I detest Avatar, which go beyond its sloppy science; so do some attendees of Readercon 2010, because Joan and I had a lively exchange about it in a panel.  Even so, I entirely agree with what Joan says here, as attested by The Double Helix: Why Science Needs Science Fiction.

In my essays and talks, I have repeatedly used A Door into Ocean as an example of outstanding “hard” SF that does not trumpet its hardness and also contains the additional layers and questioning of consequences that make it compelling fiction.

I also had the privilege of reading the penultimate version of The Highest Frontier.  The novel is an unusual combination of space opera and grounded near-future extrapolation — and Harry Potter aficionados would love it if they found out about it (unfortunately unlikely, given the proliferation of unlinked subgenre ponds in speculative fiction).  It’s fascinating to compare and contrast it with Morgan Locke’s Up Against It, also from Tor.  Both are set in beleaguered space habitats where cooperative problem-solving is the only viable option; both literally brim with interesting concepts, vivid characters and exciting thought experiments.

The two novels are proof of three things: women can write stellar hard SF; scenarios for a long-term human presence in space that ignore biology (very broadly defined) are doomed; and I need not despair of finding SF works that engage me… provided that authors as talented as these continue to be published against least-common-denominator tides.

Images: 1st, a Sharer of Shora (from A Door into Ocean) as envisioned by Rowan Williams; 2nd, Slonczewski’s microbiology textbook opens with the NASA Phoenix lander and asks, “Is there life on Mars?”; 3rd, a glimpse of the habitat in The Highest Frontier.

High Frontiers and Cheap Snarks

Friday, September 23rd, 2011

Note: This article has a coda about the CERN neutrino results, which came out while I was writing it.

Two seemingly disparate but actually related items came up in the news recently. One was the discovery of a planet circling a close binary star system given the placeholder name of Kepler 16. The other was the publication of a viral protein’s crystal structure.

What, I hear you ask, do these have in common? Well, for one, both projects used crowdsourcing (now going by the PR-friendly term “citizen science”). The other commonality was the anti-scientist hype: the media trumpeted gleefully that non-scientists are more prescient and clever than scientists. In contrast to plodding experts, prophetic film directors (“OMG, Tatooine!”) and intrepid gamers simply vault over obstacles and gracefully yet squarely hit the target. Kinda like Luke Skywalker homing in on the tiny dot of vulnerability in the planet-sized Death Star with little flying experience and eyes wide shut because, ya know, the Force is with him.

Let’s parse the circumbinary planet first. Close to half of all stars are in binary configurations, and about half of these have accretion disks. Hence, the likelihood of planets in such systems is very high. Astrophysicists’ models have shown that a planet can stably orbit either around one member of a widely separated pair or around a very tight pair. The first discovery of a planet circling a close binary dates from 1993 (or 2003, if one counts the final confirmation of the original observation). What makes the Kepler 16 system a first is that its planet appears to be smaller than Jupiter. As for prescience, beyond the astrophysicists’ theoretical calculations, Isaac Asimov had written Nightfall and Chesley Bonestell had painted Double Star long before Tatooine was even a solitary neuronal firing in George Lucas’ brain.

So now on to the crystal structure that “had stumped scientists for years but was solved by gamers in a few days”. To begin with, this was not the first crowdsourced scientific project as touted. The honors for that must go to SETI@home, launched in 1999. There have been many others since, across disciplines. Beyond that, the people in the protein folding contest used a program developed by scientists (Foldit) and half of the dozen or so participating “gamers” were biologists themselves. Crucially, they were given NMR and X-ray diffraction data to constrain and guide their steps. Finally, the result (a model, which means it’s still hypothetical) primarily aided crystallographers in placing heavy metal elements so as to get well-formed crystals, whose X-ray patterns gave the real, definitive proof of the structure.

Parenthetically, protein folding is a topic of perennial fascination to both creationists and believers in the strong anthropic principle. Many non-biologists, including physicists who blithely delve into biology in popsci books, are fond of intoning ad nauseam that amino acid strings would take billyuns and billyuns of years to fold correctly – hence god/intelligent design/a privileged universe/fine tuning of constants. In fact, with one exception that I can think of, proteins that have been unraveled into amino acid strings never re/fold at all (nor do they fold efficiently or correctly in programs like Rosetta, which presume complete lack of folding). Proteins fold as they get made, while they emerge from the ribosome. So they fold locally to achieve partial energy minima (so-called secondary structure) and these partly folded structures quickly coalesce into the final tertiary structure. On the technical side, making protein crystals is a difficult, delicate art – the biological equivalent of glass blowing. Like coaxing cells into growing, it’s part craft, part experience-based knowledge so deep that it becomes instinct.
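For the curious, the naive arithmetic behind that “billyuns of years” refrain (a Levinthal-style back-of-the-envelope estimate) is easy to sketch. All numbers below are illustrative assumptions, not measured values:

```python
# Naive "search every conformation" estimate for an unfolded chain.
# Assumed values: 100 residues, 3 conformations per residue,
# 1e13 conformations sampled per second.
n_residues = 100
conformations_per_residue = 3
samples_per_second = 1e13

total = conformations_per_residue ** n_residues   # ~5e47 conformations
years = total / samples_per_second / (3600 * 24 * 365)
print(f"Exhaustive search: ~{years:.1e} years")   # ~1.6e27 years
```

The punchline, of course, is that real proteins never perform this search: as described above, they fold locally as they emerge from the ribosome, so the estimate says more about the model than about biology.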

Involving many people in parsing scientific data is a tremendous idea: it gets non-scientists familiar with the concepts, process and vocabulary of science, it can accelerate portions of the analysis, and it helps forge a sense of collective purpose and achievement. The great success of crowdsourcing highlights the unique human ability to notice anomalies instead of undeviatingly following protocols as computers do. This human attribute, not so incidentally, is one of the strongest arguments for sending crewed exploration teams to places like Mars.

At the same time, scientists are not stodgy techs in lab coats (whose wearing is almost entirely confined to movies; in real life, MDs are far likelier to sport such togs). To be a good scientist, let alone a great one, you must possess not only knowledge, rigor and stamina but also imagination and the informed, trained intuition that enables you to recognize patterns as well as deviations from them (aka “the prepared mind”). And distributed data churning won’t replace trained experimentation and thinking any time soon – or later.

Anglo-Saxon cultures have a strong anti-intellectual streak. Some of it is the lingering mystique of the British gentleman dilettante; some is the American obsession with self-determination. Yet the same people who treat scientists like class enemies and jeer at their painstaking mindsets and work habits follow woo gurus – from homeopaths to investment advisors to Teabagger televangelists – with unsurprising outcomes.

If people really think that they can do science better than trained scientists, I invite them to apply this reasoning to other domains and have the next person they meet on the street do their root canals or wire their house for electricity. Those who participate in citizen science are praiseworthy, citizens in the full sense of the word. Nothing but good can come from the practice – except for the demagogic triumphalism of those journalists whose self-satisfied ignorance vitiates every hard-won gain achieved by the scientist/layperson partnerships.

It’s a natural human reaction to ridicule what one fears and/or doesn’t understand, though adults are supposed to mature beyond this juvenile tendency. The question then becomes why science, whose record is far better than that of just about any other human endeavor, has become a bugaboo rather than a vision and an integral part of this culture. It’s a question well worth remembering when all the GOP presidential candidates fall all over themselves to deny evolution – and one of them might lead what is still struggling to remain the most powerful country on this planet.


Coda: The news that CERN’s OPERA project recorded anomalous results of neutrino speeds got its share of “Roll over, Einstein” smartass quotes although, thankfully, the hype didn’t reach the proportions of NASA’s “arsenic” bacteria. Neutrinos are the changelings of the particle clan, but the claim is far from proven and the paper has still not been peer-reviewed.

If it proves true, it won’t give us hyperdrives or invalidate relativity. What it will do is place relativity in an even larger frame, as Einsteinian theory did to its Newtonian counterpart. It may also (finally!) give us a way to experimentally test string theory… and, just maybe, open the path to creating a fast information transmitter like the Hainish ansible, proving that “soft” SF writers like Le Guin may be better predictors of the future than the sciency practitioners of “hard” SF.

Images: 1st, Double Star by Chesley Bonestell (used as a Life cover in 1954); 2nd, a schematic of a protein structure; 3rd, Neutrinos by ever-sharp xkcd.

Safe Exoticism, Part 1: Science

Wednesday, August 31st, 2011

Note: This 2-part article is an expanded version of the talk I gave at Readercon 2011.

I originally planned to discuss how writers of SF need to balance knowledge of the scientific process, as well as some concrete knowledge of science, with writing engaging plots and vivid characters. But the more I thought about it, the more I realized that this discussion runs a parallel course with another; namely, depiction of non-Anglo cultures in Anglophone SF/F.

Though the two topics appear totally disparate, science in SF and non-Anglo cultures in SF/F often share the core characteristic of safe exoticism; that is, something which passes as daring but in fact reinforces common stereotypes and/or is chosen so as to avoid discomfort or deeper examination. A perfect example of both paradigms operating in the same frame and undergoing mutual reinforcement is Frank Herbert’s Dune. This is why we get sciency or outright lousy science in SF and why Russians, Brazilians, Thais, Indians and Turks written by armchair internationalists are digestible for Anglophone readers whereas stories by real “natives” get routinely rejected as too alien. This is also why farang films that attain popularity in the US are instantly remade by Hollywood in tapioca versions of the originals.

Before I go further, let me make a few things clear. I am staunchly against the worn workshop dictum of “Write only what you know.” I think it is inevitable for cultures (and I use that term loosely and broadly) to cross-interact, cross-pollinate, cross-fertilize. I myself have seesawed between two very different cultures all my adult life. I enjoy depictions of cultures and characters that are truly outside the box, emphasis on truly. At the same time, I guarantee you that if I wrote a story embedded in New Orleans of any era and published it under my own culturally very identifiable name, its reception would be problematic. Ditto if I wrote a story using real cutting-edge biology.

These caveats do not apply to secondary worlds, which give writers more leeway. Such work is judged by how original and three-dimensional it is. So if a writer succeeds in making thinly disguised historical material duller than it was in reality, that’s a problem. That’s one reason why Jacqueline Carey’s Renaissance Minoan Crete enthralled me, whereas Guy Gavriel Kay’s Byzantium annoyed me. I will also leave aside stories in which science is essentially cool-gizmos window dressing. However, use of a particular culture is in itself a framing device, and science is rarely there solely for the magical outs it gives the author: it’s often used to promote a world view. And when we have active politics against evolution and in favor of several kinds of essentialism, this is something we must keep not too far back in our minds.

So let me riff on science first. I’ll restrict myself to biology, since I don’t think that knowledge of one scientific domain automatically confers knowledge in all the rest. Here are a few hoary chestnuts that are still in routine use (the list is by no means exhaustive):

— Genes determining high-order behavior, so that you can instill virtue or Mozartian composing ability with simple, neat, trouble-free cut-n-pastes (ETA: this trope includes clones, who are rarely shown to be influenced by their many unique contexts). It runs parallel with optimizing for a function, which usually breaks down to women bred for sex and men bred for slaughter. However, evolution being what it is, all organisms are jury-rigged and all optimizations of this sort result in instant dead-ending. Octavia Butler tackled this well in The Evening and the Morning and the Night.

— The reductionist, incorrect concept of selfish genes. This is often coupled with the “women are from Venus, men are from Mars” evo-psycho nonsense, with concepts like “alpha male rape genes” and “female wired-for-coyness brains”. Not surprisingly, these play well with the libertarian cyberpunk contingent as well as the Vi*agra-powered epic fantasy cohort.

— Lamarckian evolution, aka instant effortless morphing, which includes acquiring stigmata from VR; this of course is endemic in film and TV SF, with X-Men and The Matrix leading the pack – though Star Trek was equally guilty.

— Its cousin, fast speciation (Greg Bear’s cringeworthy Darwin’s Radio springs to mind; two decent portrayals, despite their age, are Poul Anderson’s The Man Who Counts and The Winter of the World).  Next to this is rapid adaptation, though some SF standouts managed to finesse this (Joan Slonczewski’s A Door into Ocean, Donald Kingsbury’s Courtship Rite).

— The related domain of single-note, un-integrated ecosystems (what I call “pulling a Cameron”). As I mentioned before, Dune is a perfect exemplar though it’s one of too many; an interesting if flawed one is Mary Doria Russell’s The Sparrow. Not surprisingly, those that portray enclosed human-generated systems come closest to successful complexity (Morgan Locke’s Up Against It, Alex Jablokov’s River of Dust).

— Quantum consciousness and quantum entanglement past the particle scale. The former, Roger Penrose’s support notwithstanding, is too silly to enlarge upon, though I have to give Elizabeth Bear props for creative chutzpah in Undertow.

— Immortality by uploading, which might as well be called by its real name: soul and/or design-by-god – as Battlestar Galumphica at least had the courage to do. As I discussed elsewhere, this is dualism of the hoariest sort and boring in the bargain.

— Uplifted animals and intelligent robots/AIs that are not only functional but also think/feel/act like humans. This paradigm, perhaps forgivable given our need for companionship, was once again brought to the forefront by the Planet of the Apes reboot, but rogue id stand-ins have run rampant across the SF landscape ever since it came into existence.

These concepts are as wrong as the geocentric universe, but the core problems lie elsewhere. For one, SF is way behind the curve on much of biology, which means that stories could be far more interesting if they were au courant. Nanobots already exist; they’re called enzymes. Our genes are multi-cooperative networks that are “read” at several levels; our neurons, ditto. I have yet to encounter a single SF story that takes advantage of the plasticity (and potential for error) of alternative splicing or epigenetics, of the left/right brain hemisphere asymmetries, or of the different processing of languages acquired in different developmental windows.

For another, many of the concepts I listed are tailor-made for current versions of triumphalism and false hierarchies that are subtler than their Leaden Age predecessors but just as pernicious. For example, they advance the notion that bodies are passive, empty chassis which it is all right to abuse and mutilate and in which it’s possible to custom-drop brains (Richard Morgan’s otherwise interesting Takeshi Kovacs trilogy is a prime example). Perhaps taking their cue from real-life US phenomena (the Teabaggers, the IMF and its minions, Peter Thiel…) many contemporary SF stories take place in neo-feudal, atomized universes run amuck, in which there seems to be no common weal: no neighborhoods, no schools, no people getting together to form a chamber music ensemble, play soccer in an alley, build a telescope. In their more benign manifestations, like Iain Banks’ Culture, they attempt to equalize disparities by positing infinite resources. But they hew to disproved paradigms and routinely conflate biological with social Darwinism, to the detriment of SF.

Mindsets informed by these holdovers won’t help us understand aliens of any kind or launch self-enclosed sustainable starships, let alone manage to stay humane and high-tech at the same time. Because, let’s face it: only the long generation ships will get us past LEO. FTLs, wormholes, warp drives… none of these will carry us across the sea of stars. It will be the slow boats to Tau Ceti, like the Polynesian catamarans across the Pacific.

You may have noticed that many of the examples that I used as good science have additional out-of-the-box characteristics. Which brings us to safe exoticism on the humanist side.

Part 2: Culture

Images: 1st, Bunsen and his hapless assistant, Beaker (The Muppet Show); 2nd, the distilled quintessence of safe exoticism: Yul Brynner in The King and I.

Related entries:

SF Goes McDonald’s: Less Taste, More Gristle

To the Hard Members of the Truthy SF Club

Miranda Wrongs: Reading too Much into the Genome

Ghost in the Shell: Why Our Brains Will Never Live in the Matrix

“Are We Not (as Good as) Men?”

Tuesday, August 23rd, 2011

— paraphrasing The Sayer of the Law from H. G. Wells’ The Island of Dr. Moreau

When franchises get stale, Hollywood does reboots — invariably a prequel that tells an origin story retrofitted to segue into already-made sequels either straight up (Batman, X-Men) or in multi-universe alternatives (Star Trek). Given the iconic status of the Planet of the Apes original, a similar effort was a matter of time and CGI.

In Rise of the Planet of the Apes, we get the origin story with nods to the original: throwaway references to the loss of crewed starship Icarus on its way to Mars; a glimpse of Charlton Heston; the future ape liberator playing with a Lego Statue of Liberty. As Hollywood “science” goes, it’s almost thoughtful, even borderline believable. The idea that the virus that uplifts apes is lethal to humans is of course way too pat, but it lends plausibility to the eventual ape dominion without resorting to the idiotic Ewok-slings-overcome-Stormtrooper-missiles mode. On the other hand, the instant rise to human-level feats of sophistication is ridiculous (more of which anon), to say nothing of being able to sail through thick glass panes unscathed.

The director pulled out all the stops to make us root for the cousins we oppress: the humans are so bland they blend with the background, the bad guys mistreat the apes with callous glee… and the hero, the cognitively enhanced chimpanzee Caesar (brought to disquieting verisimilitude of life by Andy Serkis), not only fights solely in defense of his chosen family… but to underline his messianic purity he has neither sex drive nor genitals. This kink underlines the high tolerance of US culture for violence compared to its instant vapors over any kind of sex; however, since Project Nim partly foundered on this particular shoal, perhaps it was a wise decision.

As it transpires, Caesar is exposed to little temptation to distract him from his pilgrimage: there are no female hominids in the film, except for the maternal vessel who undergoes the obligatory death as soon as she produces the hero and a cardboard cutout helpmate there to mouth the variants of “There are some things we weren’t meant to do” — and as assurance that the human protagonist is not gay, despite his nurturing proclivities. Mind you, the lack of a mother and her female alliances would make Caesar (augmented cortex notwithstanding) a permanent outcast among his fellows, who determine status matrilineally given the lack of defined paternity.

Loyal to human tropes, Caesar goes from Charly to Che through the stations-of-the-cross character development arc so beloved of Campbellites. Nevertheless, we care what happens to him because Serkis made him compelling and literally soulful. Plus, of course, Caesar’s cause is patently just. The film is half Spartacus turning his unruly gladiators into a disciplined army, half Moses taking his people home — decorated with the usual swirls of hubris, unintended consequences, justice, equality, compassion, identity and empathy for the Other.

Needless to say, this reboot revived the topic of animal uplift, a perennial favorite of SF (and transhumanist “science” which is really a branch of SF, if not fantasy). Human interactions with animals have been integral to all cultures. Myths are strewn with talking animal allies, from Puss in Boots to A Boy and His Dog. Beyond their obvious practical and symbolic uses, mammals in particular are the nexus of both our notions of exceptionalism and our ardent wish for companionship. Our fraught relationship with animals also mirrors preoccupations of respective eras. In Wells’ Victorian England, The Island of Dr. Moreau struggled with vivisection whereas Linebarger’s Instrumentality Underpeople and the original Planet of the Apes focused on racism (plus, in the latter, the specter of nuclear annihilation). Today’s discussions of animal uplift are really a discussion over whether our terrible stewardship can turn benign — or at least neutral — before our inexorable spread damages the planet’s biosphere past recovery.

When SF posits sentient mammal-like aliens, it usually opts for predators high on human totem poles (Anderson’s eagle-like Ythrians, Cherryh’s leonine Hani). On the other hand, SF’s foremost uplift candidates are elephants, cetaceans – and, of course, bonobos and chimpanzees. All four species share attributes that make them theoretically plausible future companions: social living, so they need to use complex communication; relative longevity, so they can transmit knowledge down the generations; tool use; and unmistakable signs of self-awareness.

Uplift essentially means giving animals human capabilities – primary among them high executive functions and language. One common misconception seems to be that if we give language to near-cousins, they will end up becoming hairy humans. Along those lines, in Rise chimpanzees, gorillas and orangutans are instantly compatible linguistically, emotionally, mentally and socially. In fact, chimpanzees are far closer to us than they are to the other two ape species (with orangutans being the most distant). So although this pan-panism serves the plot and prefigures the species-specific occupations shown in the Ape pre/sequels, real-life chances of such coordination, even with augmentation, are frankly nil.

There is, however, a larger obstacle. Even if a “smart bomb” could give instant language processing capability, it would still not confer the ability to enunciate clearly, which is determined by the configuration of the various mouth/jaw/throat parts. Ditto for bipedal locomotion. Uplift caused by intervention at whatever level (gene therapy, brain wiring, grafts) cannot bring about coordinated changes across the organism unless we enter the fantasy domain of shapeshifting. This means that a Lamarckian shift in brain wiring will almost certainly result in a seriously suboptimal configuration unlikely to thrive individually or collectively. This could be addressed by singlet custom generation, as is shown for reynards in Crowley’s Beasts, but it would make such specimens hothouse flowers unlikely to propagate unaided, much less become dominant.

In this connection, choosing to give Caesar speech was an erosion of his uniqueness. Of course, if bereft of our kind of speech he would not be able to give gruff Hestonian commands to his army: they would be reliant on line of sight and semaphoring equivalents. However, sticking to complex signed language (which bonobos at least appear capable of, if they acquire it within the same developmental time window as human infants) would keep Caesar and his people uncanny and alien, underlining the irreducible fact of their non-human sentience.

Which brings us to the second fundamental issue of uplift. Even if we succeed in giving animals speech and higher executive functions, they will not be like us. They won’t think, feel, react as we do. They will be true aliens. There is nothing wrong with that, and such congress might give us a preview of aliens beyond earth, should SETI ever receive a signal. However, given how humans treat even other humans (and possibly how Cro-Magnons treated Neanderthals), it is unlikely we’ll let uplifted animals go very far past pet, slave or trophy status. In this, at least, Caesar’s orangutan councillor is right: “Human no like smart ape,” no matter how piously we discuss the ethics of animal treatment and our responsibilities as technology wielders.

Images: top, Caesar; bottom, Neanderthal reconstruction (Kennis & Kennis, National Geographic). What gazes out of those eyes is — was — human.

The Death Rattle of the Space Shuttle

Monday, July 25th, 2011

I get out of my car,
step into the night,
and look up at the sky.
And there’s something
bright, traveling fast.
Look at it go!
Just look at it go!

Kate Bush, Hello Earth

[The haunting a cappella chorus comes from a Georgian folk song, Tsin Tskaro (By the Spring)]

I read the various eulogies, qualified and otherwise, on the occasion of the space shuttle’s retirement.  Personally, I do not mourn the shuttle’s extinction, because it never came alive: not as engineering, not as science, not as a vision.

Originally conceived as a reusable vehicle that would lift and land on its own, the shuttle was crippled from the get-go.  Instead of being an asset for space exploration, it became a liability – an expensive and meaningless one, at that.  Its humiliating raison d’être was to bob in low earth orbit, becoming a toy for millionaire tourists by giving them a few seconds of weightlessness.  The space stations it serviced were harnessed into doing time-filling experiments that did not advance science one iota (with the notable exception of servicing the Hubble), while most of their occupants’ time was spent scraping fungus off walls.  It managed to kill more astronauts than the entire Apollo program.  The expense of the shuttle launches crippled other worthwhile or promising NASA programs, and its timid, pious politics overshadowed any serious advances in crewed space missions.

In the past, I had lively discussions with Robert Zubrin about missions to Mars (and Hellenic mythology… during which I discovered that he, like me, loves the Minoans).  We may have disagreed on approach and details, but on this he and I are in total agreement: NASA has long floated adrift, directionless and purposeless.  Individual NASA subprograms (primarily all the robotic missions), carried on in the agency’s periphery, have been wildly successful.  But the days when launches fired the imagination of future scientists are long gone.

It’s true that the Apollo missions were an expression of dominance, adjuncts to the cold war.  It’s also true that sending a crewed mission to Mars is an incredibly hard undertaking.  However, such an attempt — even if it fails — will address a multitude of issues: it will ask the tough question of how we can engineer sustainable self-enclosed systems (including the biological component, which NASA has swept under the rug as scientifically and politically thorny); it will allow us to definitively decide if Mars ever harbored life; it will once again give NASA – and the increasingly polarized US polity – a focus and a worthwhile purpose.

I’m familiar with all the counterarguments about space exploration in general and crewed missions in particular: these funds could be better used alleviating human misery on earth; private industry will eventually take up the slack; robotic missions are much more efficient; humans will never go into space in their current form, better if we wait for the inevitable uploading come the Singularity.

In reality, funds for space explorations are less than drops in the ocean of national spending and persistent social problems won’t be solved by such measly sums; private industry will never go past low orbit casinos (if that); as I explained elsewhere, we in our present form will never, ever get our brains/minds into silicon containers; and we will run out of resources long before such a technology is even on our event horizon, so waiting for gods… er, AI overlords won’t avail us.

Barring an unambiguous ETI signal, the deepest, best reason for crewed missions is not science. I recognize the dangers of using the term frontier, with all its colonialist, triumphalist baggage. Bravado aside, we will never conquer space. At best, we will traverse it like the Polynesians in their catamarans under the sea of stars. But space exploration — more specifically, a long-term crewed expedition to Mars with the express purpose to unequivocally answer the question of Martian life — will give a legitimate and worthy outlet to our ingenuity, our urge to explore and our desire for knowledge, which is not that high up in the hierarchy of needs nor the monopoly of elites. People know this in their very marrow – and have shown it by thronging around the transmissions of space missions that mattered.

It’s up to NASA to once again try rallying people around a vision that counts.  Freed of the burden of the shuttle, perhaps it can do so, thereby undergoing a literal renaissance.

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.”

John Fitzgerald Kennedy, September 1962

Images: Pat Rawlings, Beyond; Randy Halverson, Plains Milky Way; European Space Agency, High Aurora.

Ghost in the Shell: Why Our Brains Will Never Live in the Matrix

Thursday, June 23rd, 2011

Introductory note: Through Paul Graham Raven of Futurismic, I found out that Charles Stross recently expressed doubts about the Singularity, god-like AIs and mind uploading.  Being the incorrigible curious cat (this will kill me yet), I checked out the post.  All seemed more or less copacetic, until I hit this statement: “Uploading … is not obviously impossible unless you are a crude mind/body dualist. // Uploading implicitly refutes the doctrine of the existence of an immortal soul.”

Clearly the time has come for me to reprint my mind uploading article, which first appeared at H+ magazine in October 2009. Consider it a recapitulation of basic facts.

When surveying the goals of transhumanists, I found it striking how heavily they favor conventional engineering. This seems inefficient and inelegant, since such engineering reproduces, slowly, clumsily and imperfectly, what biological systems have fine-tuned for eons — from nanobots (enzymes and miRNAs) to virtual reality (lucid dreaming). An exemplar of this mindset was an article about memory chips. In it, the primary researcher made two statements that fall in the “not even wrong” category: “Brain cells are nothing but leaky bags of salt solution,” and “I don’t need a grand theory of the mind to fix what is essentially a signal-processing problem.”

And it came to me in a flash that most transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons, each exemplified by these sentences. The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.

The analogy du siècle equates the human brain with a computer — a vast, complex one performing dizzying feats of parallel processing, but still a computer. However, that is incorrect for several crucial reasons, which bear directly upon mind portability. A human is not born as a tabula rasa, but with a brain that’s already wired and functioning as a mind. Furthermore, the brain forms as the embryo develops. It cannot be inserted after the fact, like an engine in a car chassis or software programs in an empty computer box.

Theoretically speaking, how could we manage to live forever while remaining recognizably ourselves to us? One way is to ensure that the brain remains fully functional indefinitely. Another is to move the brain into a new and/or indestructible “container”, whether carbon, silicon, metal or a combination thereof. Not surprisingly, these notions have received extensive play in science fiction, from the messianic angst of The Matrix to Richard Morgan’s Takeshi Kovacs trilogy.

To give you the punch line up front, the first alternative may eventually become feasible but the second one is intrinsically impossible. Recall that a particular mind is an emergent property (an artifact, if you prefer the term) of its specific brain – nothing more, but also nothing less. Unless the transfer of a mind retains the brain, there will be no continuity of consciousness. Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die – and be aware of the death process. Furthermore, the newly minted person/ality will start diverging from the original the moment it gains consciousness. This is an excellent way to leave a clone-like descendant, but not to become immortal.

What I just mentioned essentially takes care of all versions of mind uploading, if by uploading we mean recreation of an individual brain by physical transfer rather than a simulation that passes Searle’s Chinese room test. However, even if we ever attain the infinite technical and financial resources required to scan a brain/mind 1) non-destructively and 2) at a resolution that will indeed recreate the original, additional obstacles still loom.

To place a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, as well as able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.

To recreate a brain/mind in silico, whether in a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain, well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.

Additionally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story People of Sand and Slag. Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory … except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.

Let’s now discuss the possible: in situ replacement. Many people argue that replacing brain cells is not a threat to identity because we change cells rapidly and routinely during our lives — and that in fact this is imperative if we’re to remain capable of learning throughout our lifespan.

It’s true that our somatic cells recycle, each type on a slightly different timetable, but there are two prominent exceptions. The germ cells are one, which is why both genders – not just women – are progressively likelier to have children with congenital problems as they age. Our neurons are another. We’re born with as many of these as we’re ever going to have and we lose them steadily during our life. There is a tiny bit of novel neurogenesis in the olfactory system and possibly in the hippocampus, but the rest of our 100 billion microprocessors neither multiply nor divide. What changes are the neuronal processes (axons and dendrites) and their contacts with each other and with other cells (synapses).

These tiny processes make and unmake us as individuals. We are capable of learning as long as we live, though with decreasing ease and speed, because our axons and synapses are plastic as long as the neurons that generate them last. But although many functions of the brain are diffuse, they are organized in localized clusters (which can differ from person to person, sometimes radically). Removal of a large portion of a brain structure results in irreversible deficits unless it happens in very early infancy. We know this from watching people go through transient or permanent personality and ability changes after head trauma, stroke, extensive brain surgery or during the agonizing process of various neurodegenerative diseases, dementia in particular.

However, intrepid immortaleers need not give up. There’s real hope on the horizon for renewing a brain and other body parts: embryonic stem cells (ESCs, which I discussed recently). Depending on the stage of isolation, ESCs are truly totipotent – something, incidentally, not true of adult stem cells that can only differentiate into a small set of related cell types. If neuronal precursors can be introduced to the right spot and coaxed to survive, differentiate and form synapses, we will gain the ability to extend the lifespan of a brain and its mind.

It will take an enormous amount of fine-tuning to induce ESCs to do the right thing. Each step that I casually listed in the previous sentence (localized introduction, persistence, differentiation, synaptogenesis) is still barely achievable in the lab with isolated cell cultures, let alone the brain of a living human. Primary neurons live about three weeks in the dish, even though they are fed better than most children in developing countries – and if cultured as precursors, they never attain full differentiation. The ordeals of Christopher Reeve and Stephen Hawking illustrate how hard it is to solve even “simple” problems of either grey or white brain matter.

The technical hurdles will eventually be solved. A larger obstacle is that each round of ESC replacement will have to be very slow and small-scale, to fulfill the requirement of continuous consciousness and guarantee the recreation of pre-existing neuronal and synaptic networks. As a result, renewal of large brain swaths will require such a lengthy lifespan that the replacements may never catch up. Not surprisingly, the efforts in this direction have begun with such neurodegenerative diseases as Parkinson’s, whose causes are not only well defined but also highly localized: the dopaminergic neurons in the substantia nigra.

Renewing the hippocampus or cortex of an Alzheimer’s sufferer is several orders of magnitude more complicated – and in stark contrast to the “black box” assumption of the memory chip researcher, we will need to know exactly what and where to repair. To go through the literally mind-altering feats shown in Whedon’s Dollhouse would be the brain equivalent of insect metamorphosis: it would take a very long time – and the person undergoing the procedure would resemble Terri Schiavo at best, if not the interior of a pupating larva.

Dollhouse got one fact right: if such rewiring is too extensive or too fast, the person will have no memory of their prior life, desirable or otherwise. But as is typical in Hollywood science (an oxymoron, but we’ll let it stand), it got a more crucial fact wrong: such a person is unlikely to function like a fully aware human or even a physically well-coordinated one for a significant length of time – because her brain pathways will need to be validated by physical and mental feedback before they stabilize. Many people never recover full physical or mental capacity after prolonged periods of anesthesia. Having brain replacement would rank way higher in the trauma scale.

The most common ecological, social and ethical argument against individual quasi-eternal life is that the resulting overcrowding will mean certain and unpleasant death by other means unless we are able to access extra-terrestrial resources. Also, those who visualize infinite lifespan invariably think of it in connection with themselves and those whom they like – choosing to ignore that others will also be around forever, from genocidal maniacs to cult followers, to say nothing of annoying in-laws or predatory bosses. At the same time, long lifespan will almost certainly be a requirement for long-term crewed space expeditions, although such longevity will have to be augmented by sophisticated molecular repair of somatic and germ mutations caused by cosmic radiation. So if we want eternal life, we had better first have the Elysian fields and chariots of the gods that go with it.

Images: Echo (Eliza Dushku) gets a new personality inserted in Dollhouse; any port in a storm — Spock (Leonard Nimoy) transfers his essential self to McCoy (DeForest Kelley) for safekeeping in The Wrath of Khan; the resurrected Zoe Graystone (Alessandra Torresani) gets an instant memory upgrade in Caprica; Jake Sully (Sam Worthington) checks out his conveniently empty Na’vi receptacle in Avatar.

The Hard Underbelly of the Future: Sue Lange’s Uncategorized

Tuesday, June 14th, 2011

Sue Lange’s collection Uncategorized (Book View Café, 2009, $1.99 digital edition) contains fifteen stories published in various venues (among them Apex, Astounding Tales, Sentinel, Mbrane and Aoife’s Kiss). If I wanted to categorize them, I’d call them quasi-mundane near-future SF – but some of the unifying threads that run through them are unusual.

One is Lange’s love and professional knowledge of music, which pops up in unexpected spots in the stories and is the focus of one of them (“The Failure”).  Another is the matter-of-fact attitude toward technology: the people and societies in Uncategorized have come to terms with genetic engineering and its cousins, although they are aware of their problems.  Finally, the points of view are resolutely working class (many springing directly from Lange’s varied work experience). This doesn’t merely mean that the protagonists/narrators are blue collar. Instead, almost all the stories in Uncategorized center around work issues for people whose jobs are not a way of “expressing themselves” but a way to keep food on the table. These are people who cannot afford ennui or angst, who must punch time cards and undergo intrusive HR evaluations.

There’s an additional feature that makes Lange’s blue-collar protagonists stand out: most are women who do “traditionally masculine” work: meat plant workers, plumbers, soldiers, radiation cleanup crews. Furthermore, these women focus on their jobs and many of their co-workers and friends are women as well. In other words, Lange’s stories handily pass the Bechdel test without falling even remotely into the arbitrarily devalued subgenre of chicklit. If you took Rosie the Riveter and transposed her to a near-future alternate US (minus such outworn cyberpunk accessories as pneumatic-boob avatars and Matrix-style gyrations), you’d have the setting for most of the stories in Uncategorized.

Contributing to this gestalt are Lange’s deadpan humor and rapid-fire dialogue, which require some acclimatization but can become as catchy as strong beats and riffs. Her language is unvarnished Bauhaus – there’s scarcely a descriptive adjective or adverb to be found. Ditto for the settings, which are urban grit to the max even in the stories set off-Earth. One recurrent weakness is hurried endings, often accompanied by twists that were predictable (to me at least). Almost all the stories in Uncategorized would have increased their impact if they were longer and/or less sparse, because they grapple with important issues in original ways without fanfare.

Although Uncategorized hews to the premise of its title, some of its stories are thematically paired – one version comic, the other tragic. “The Club” / “How to Dispose of Sneakers” deal with the intractable problem of humanity’s ecological footprint; “BehaviorNorm” / “Buyer’s Club” tackle another intractable problem, the callousness of administrative management (think Dilbert with a touch of Big Brother transhumanism). For me, the standouts in the collection were: “Peroxide Head”, a poignant vignette on what balancing issues might really be like for a liaison to “Others” in Banks’ Culture universe; “The Meateaters”, a no-holds-barred Outland retelling of Euripides’ Bacchae; “Buyer’s Club”; “Pictures”, a valentine to second chances; and “Zara Gets Laid”, in which sexual intercourse boosts the immunity of bio-augmented radiation cleanup workers (based on solid extrapolation, no less!).

Despite its deceptively plain trappings, Uncategorized subtends a wide arc and is textbook-classic SF: its stories follow “what if” questions to their logical conclusions, pulling no punches. It’s a prickly, bracing read that walks a fine line between bleakness and pragmatism, and it deserves the wider readership it might well have got if its author had been of the other gender.

Miranda Wrongs: Reading Too Much into the Genome

Friday, June 10th, 2011

Introductory note: My micro-bio in several venues reads “Scientist by day, writer by night.” It occurred to me that for lo these many years I have discussed science, scientific thinking and process, space exploration, social issues, history and culture, books, films and games… but I have never told my readers exactly what I do during the day.

What I do during the day (and a good part of the night) is investigate a process called alternative splicing and its repercussions on brain function. I will unpack this in future posts (interspersed among the usual musings on other topics), and I hope that this excursion may give a glimpse of how complex biology is across scales.

To start us off, I reprint an article commissioned by R. U. Sirius that first appeared in H+ Magazine in April 2010. An academic variant of this article appeared in Politics and the Life Sciences in response to Mark Walker’s “Genetic Virtue” proposal.

“We meant it for the best.” – Dr. Caron speaking of the Miranda settlers, in Whedon’s Serenity

When the sequence of the human genome was declared essentially complete in 2003, all biologists (except perhaps Craig Venter) heaved a sigh of gladness that the data were all on one website, publicly available, well-annotated and carefully cross-linked. Some may have hoisted a glass of champagne. Then they went back to their benches. They knew, if nobody else did, that the work was just beginning. Having the sequence was the equivalent of sounding out the text of an alphabet whose meaning was still undeciphered. For the linguistically inclined, think of Etruscan.

The media, with a few laudable exceptions, touted this as “we now know how genes work” and many science fiction authors duly incorporated it into their opuses. So did people with plans for improving humanity. Namely, there are initiatives that seriously propose that such attributes as virtue, intelligence, specific physical and mental abilities or, for that matter, a “happy personality” can (and should) be tweaked by selection in utero or engineering of the genes that determine these traits. The usual parties put forth the predictable pro and con arguments, and many articles get published in journals, magazines and blogs.

This is excellent for the career prospects and bank accounts of philosophers, political scientists, biotech entrepreneurs, politicians and would-be prophets. However, biologists know that all this is a parlor game equivalent to determining the number of angels dancing on the top of a pin. The reason for this is simple: there are no genes for virtue, intelligence, happiness or any complex behavioral trait. This becomes obvious from the number of human genes: the final count hovers around 20-25,000, less than twice as many as the number in worms and flies. It’s also obvious from the fact that cloned animals don’t look and act like their prototypes, Cc being the most famous example.

Genes encode catalytic, structural and regulatory proteins and RNAs. They do not encode the nervous system; even less do they encode complex behavior. At the level of the organism, they code for susceptibilities and tendencies — that is, with a few important exceptions, they are probabilistic rather than deterministic. And although many diseases develop from malfunctions of single genes, this does not indicate that single genes are responsible for any complex attribute. Instead they’re the equivalent of screws or belts, whose loss can stop a car but does not make it run.

No reputable biologist suggests that genes are not decisively involved in outcomes. But the constant harping on trait heritability “in spite of environment” is a straw man. Its main prop, the twin studies, is far less robust than commonly presented — especially when we take into account that identical twins often know each other before separation and, even when adopted, are likely to grow up in very similar environments (to say nothing of the data cherry-picking for publication). The nature/nurture debate has been largely resolved by the gene/environment (GxE) interplay model, a non-reductive approximation closer to reality. Genes never work in isolation but as complex, intricately regulated cooperative networks and they are in constant, dynamic dialogue with the environment — from diet to natal language. That is why second-generation immigrants invariably display the body morphology and disease susceptibilities of their adopted culture, although they have inherited the genes of their natal one.

Furthermore, there’s significant redundancy in the genome. Knockouts of even important single genes in model organisms often have practically no phenotype (or a very subtle one) because related genes take up the slack. The “selfish gene” concept as presented by reductionists of all stripes is arrant nonsense. To stay with the car analogy, it’s the equivalent of a single screw rotating in vacuum by itself. It doth not even a cart make, let alone the universe-spanning starship that is our brain/mind.

About half of our genes contribute directly to brain function; the rest do so indirectly, since brain function depends crucially on signal processing and body feedback. This makes the brain/mind a bona fide complex (though knowable) system. This attribute underlines the intrinsic infeasibility of instilling virtue, intelligence or good taste in clothes by changing single genes. If genetic programs were as fixed, simple and one-to-one mapped as reductionists would like, we would have answered most questions about brain function within months after reading the human genome. As a pertinent example, studies indicate that the six extended genomic regions that were defined by SNP (single nucleotide polymorphism) analysis to contribute the most to IQ — itself a population-sorting tool rather than a real indicator of intelligence — influence IQ by a paltry 1%.

The attempts to map complex behaviors for the convenience and justification of social policies began as soon as societies stratified. To list a few recent examples, in the last decades we’ve had the false XYY “aggression” connection, the issue of gay men’s hypothalamus size, and the sloppy and dangerous (but incredibly lucrative) generalizations about brain serotonin and “nurturing” genes. Traditional breeding experiments (cattle, horses, cats, dogs, royal families) have an in-built functional test: the progeny selected in this fashion must be robust enough to be born, survive and reproduce. In the cases where these criteria were flouted, we got such results as vision and hearing impairments (Persian and Siamese cats), mental instability (several dog breeds), physical fragility and Alexei Romanov.

I will leave aside the enormous and still largely unmet technical challenge of such implementation, which is light years distant from casual notes that airily prescribe “just add tetracycline to the inducible vector that carries your gene” or “inject artificial chromosomes or siRNAs.” I play with all these beasties in the lab, and can barely get them to behave in homogeneous cell lines. Because most cognitive problems arise not from huge genomic errors but from small shifts in ratios of “wild-type” (non-mutated) proteins which affect brain architecture before or after birth, approximate engineering solutions will be death sentences. Moreover, the proposals usually advocate that such changes be done in somatic cells, not the germline (which would make them permanent). This means intervention during fetal development or even later — a far more difficult undertaking than germline alteration. The individual fine-tuning required for this in turn brings up differential resource access (and no, I don’t believe that nanotech will give us unlimited resources).

Let’s now discuss the improvement touted in “enhancement” of any complex trait. All organisms are jury-rigged across scales: that is, the decisive criterion for an adaptive change (from a hemoglobin variant to a hip-bone angle) is function, rather than elegance. Many details are accidental outcomes of an initial chance configuration — the literally inverted organization of the vertebrate eye is a prime example. Optimality is entirely context-dependent. If an organism or function is perfected for one set of circumstances, it immediately becomes suboptimal for all others. That is the reason why gene alleles for cystic fibrosis and sickle cell anemia persisted: they conferred heterozygotic resistance to cholera and malaria, respectively. Even if it were possible to instill virtue or musicality (or even the inclination for them), fixing them would decrease individual and collective fitness. Furthermore, the desired state for all complex behaviors is fluid and relative.

The concept that pressing the button of a single gene can change any complex behavior is entirely unsupported by biological evidence at any scale: molecular, cellular, organismic. Because interactions between gene products are complex, dynamic and give rise to pleiotropic effects, such intervention can cause significant harm even if implemented with full knowledge of genomic interactions (which at this point is not even partially available). It is far more feasible to correct an error than to “enhance” an already functioning brain. Furthermore, unlike a car or a computer, brain hardware and software are inextricably intertwined and cannot be decoupled or deactivated during modification.

If such a scenario is optional, it will introduce extreme de facto or de jure inequalities. If it is mandatory, beyond the obvious fact that it will require massive coercion, it will also result in the equivalent of monocultures, which is the surest way to extinction regardless of how resourceful or dominant a species is. And no matter how benevolent the motives of the proponents of such schemes are, all utopian implementations, without exception, degenerate into slaughterhouses and concentration camps.

The proposals to augment “virtue” or “intelligence” fall solidly into the linear progress model advanced by monotheistic religions, which takes for granted that humans are in a fallen state and need to achieve an idealized perfection. For the religiously orthodox, this exemplar is a god; for the transhumanists, it’s often a post-singularity AI. In reality, humans are a work in continuous evolution both biologically and culturally and will almost certainly become extinct if they enter any type of stasis, no matter how “perfect.”

But higher level arguments aside, the foundation stone of all such discussions remains unaltered and unalterable: any proposal to modulate complex traits by changing single genes is like preparing a Mars expedition based on the Ptolemaic view of the universe.

Images: The “in-valid” (non-enhanced) protagonist of GATTACA; M. C. Escher, Drawing Hands; Musical Genes cartoon by Polyp; Rainbow nuzzles her clone Cc (“carbon copy”) at Texas A&M University.

Area 51: Teen Commies from Outer Space!

Thursday, May 19th, 2011

(with due props to Douglas Kenney, National Lampoon co-founder)

Driving to work earlier this week, I heard Terry Gross of Fresh Air (NPR) interview Annie Jacobsen about her new book, Area 51: An Uncensored History of America’s Top Secret Military Base. Jacobsen is a national security reporter, which means she is used to facing stonewalling and secrecy and therefore well aware that she must triple-check her information. It all sounded like sober investigative reporting, until it got to the coda. Bear with me, grasshoppahs, because the truth is definitely way out there.

As you know, Bobs, Area 51 is a military installation in Nevada next to the Yucca Flats where most of the US nuclear tests have been conducted. Area 51 was the home of the U-2 and Oxcart military aircraft testing programs and its resident experts appear to have reverse-engineered Soviet MiGs. Some conspiracy lovers opine that the lunar landing was “faked” there. Not surprisingly, its specifics are heavily classified, including an annually-renewed presidential EPA exception to disclosing (ab)use of toxic agents. People in the ostensibly free world can’t even get decent aerial pictures of it — which of course did not deter satellites of other nations, but who cares for rationality where national security is concerned?

To UFO believers, Area 51 is also the facility that analyzed whatever crashed near Roswell in 1947. Which is where Jacobsen’s theory comes in, backed by a single anonymous source. She proposes that the Roswell object was neither a weather balloon nor an alien spacecraft but a remotely flown Soviet craft based on prototypes by the Horten brothers, aircraft designers and Nazi party members. This part is old news, since this possibility was already considered in cold war US investigations.

Jacobsen’s addition (asserted with a completely straight face and demanding to be taken seriously) is that this craft contained “genetically/surgically altered” teenagers engineered by Josef Mengele at the command of that other monstrous Joseph, Stalin. The modifications had produced uniform results of “abnormally large heads and eyes” etc. The goal was to scare the US and weaken its defenses by a repetition of the panic created by Orson Welles’ War of the Worlds 1938 broadcast.

Got that? I can safely bet that Hollywood agents are bidding frantically for the rights to the screenplay even as we speak. And so they should — it’s a guaranteed blockbuster. It has everything: UFOs, Nazis, Frankenstein monsters, government conspiracies… It ties so many loose ends together so neatly that it’s irresistible.

I will leave to other experts the issues of quoting a single anonymous source and the previous debunkings of similar Area 51 “insiders” like Robert Lazar et al. The part that made me laugh out loud was the “genetically/surgically altered” cherry on top of that fabulous cake. To her credit, Gross pressed Jacobsen on this, only to get a rewinding of the tape without any real explanation beyond “I trusted my Cigarette-Smoking Man because I spent two years talking to him.”

For those who don’t live in a parallel universe, the fact that DNA is the carrier of heredity for most terrestrial lifeforms was established in the fifties, which (*counting on my fingers*) came after 1947. So Mengele or anyone else could not have engaged in any form of targeted genetic engineering; that only became possible, in its crudest form, in the eighties. If “genetic” is intended to mean plain ol’ interbreeding, humans take a bit more than two years (the interval from the time the Russians walked into Berlin till the Roswell crash) to 1) produce children, 2) have the children grow into teenagers and, just as crucially, 3) reliably reproduce traits.

Starvation or breaking of bones during childhood can lead to stunting (as Toulouse-Lautrec’s case demonstrates) but I know of no surgery that can increase head size — hydrocephalus kills its sufferers in rather short order. Grafting was so primitive back then that it’s unlikely its recipients would have survived long enough for a transatlantic trip. The only scenario I can envision that would result in anything remotely tangential to Jacobsen’s contention is if the Soviets used youngsters suffering from growth factor or growth hormone deficiencies — genetic conditions that arise without the intervention of experimentation.

Don’t misunderstand me, I know the idiocies that military and “intelligence” agencies are capable of — from marching soldiers to ground zero well after the consequences of radioactive fallout had become obvious, to the frightening abuses of MK-ULTRA, to the Stargate “Jedi warriors” who stared at spoons and goats. But all these are extensively documented, as well as compatible with the technology available at the time they occurred. Jacobsen’s theory is as grounded as the alien craft alternative it purports to debunk. Pity that Gross didn’t invite a biology undergrad to the program.

My theory (and I’ll be happy to talk to Hollywood agents about it) is that the engineered youngsters decided to defect, commandeered the craft and crashed it while drunk on freedom and contraband beer. I even have my own impeccable source: the small store owner at the outskirts of Roswell who sold them the beer. Smoking Parodies and drinking his regular shot of Colt 45 from an oil can, he confided wistfully: “They just wanted to see the Vegas shows, like any kid their age.”

Images: top, Independence Day — the alien craft secreted and reverse-engineered in Area 51; bottom, another possible explanation for the Roswell crash: abduction lesson troubles (from Pixar’s Lifted).

What’s Sex Got to Do with It?

Tuesday, May 17th, 2011

(sung to Tina Turner’s à propos catchy tune)

Two events unfolded simultaneously in the last few days: Arnold Schwarzenegger’s admission that he left a household servant with a “love child” and Dominique Strauss-Kahn’s arrest for attempting to rape a hotel maid. Before that, we had the almost weekly litany of celebrity/tycoon/politician/figurehead caught with barely-of-age girl(s)/boy(s). In a sadly familiar refrain, an ostensibly liberal commentator said:

“…we know that powerful men do stupid, self-destructive things for sexual reasons every single day. If we’re looking for a science-based explanation, it probably has more to do with evolutionarily induced alpha-male reproductive mandates than any rational weighing of pros and cons.”

Now I hate to break it to self-labeled liberal men but neither love nor sex have anything to do with sexual coercion and Kanazawa-style Tarzanism trying to pass for “evolutionary science” won’t cut it. Everyone with a functioning frontal cortex knows by now that rape is totally decoupled from reproduction. The term “love child”, repeated ad nauseam by the media, is obscene in this context.

Leaving love aside, such encounters are not about sex either. For one, coerced sex is always lousy; for another, no reproductive mandate is involved, as the gang rapes of invading armies show. What such encounters are about, of course, is entitlement, power and control: the prerogative of men in privileged positions to use others (women in particular) as toilet paper with no consequences to themselves short of the indulgent “He’s such a ladies’ man…” and its extension: “This was a trap. Such men don’t need to rape. Women fling themselves in droves at alpha males!”

As I keep having to point out, there are no biological alpha males in humans no matter what Evo-Psycho prophet-wannabees preach under the false mantra of “Real science is not PC, let the chips fall where they may”. Gorillas have them. Baboons have them, with variances between subgroups. Our closest relatives, bonobos and chimpanzees, don’t. What they have are shifting power alliances for both genders (differing in detail in each species). They also have maternally-based status because paternity is not defined and females choose their partners. Humans have so-called “alpha males” only culturally, and only since hoarding of surplus goods made pyramidal societies possible.

The repercussions of such behavior highlight another point. Men of this type basically tell the world “I dare you to stop my incredibly important work to listen to the grievances of a thrall. What is the life and reputation of a minimum-wage African immigrant woman compared to the mighty deeds I (think I can) perform?” Those who argue that the personal should be separate from the political choose to ignore the fact that the mindset that deems a maid part of the furniture thinks the same of most of humanity — Larry Summers is a perfect example of this. In fact, you can predict how a man will behave in just about any situation once you see how he treats his female partner. This makes the treatment of nations by the IMF and its ilk much less mysterious, if no less destructive.

Contrary to the wet dreams of dorks aspiring to “alpha malehood”, women generally will only interact with such specimens under duress. They’re far more popular with men who (like to) think that being a real man means “to crush your enemies, see them driven before you, and to hear the lamentation of their women.” Civilization came into existence and has precariously survived in spite of such men, not because of them. If we hope to ever truly thrive, we will have to eradicate cultural alpha-malehood as thoroughly as we did smallpox — and figure out how we can inculcate snachismo as the default behavioral model instead.

Images: Top, Malcolm McDowell as Caligula in the 1979 eponymous film; bottom, Biotest’s “Alpha Male” pills.

Of Federal Research Grants and Dancing Bears

Sunday, May 1st, 2011

Warmth and comfort are yokes for us.
We chose thorns, shoals and starlight.
— from Mid-Journey

I came to the US in 1973, all fired up to do research. I have been doing research as my major occupation since 1980 and have run my own (tiny: average two-member) lab since 1989. So I fulfilled part of my dream and wrote about how it feels to do so in The Double Helix. Yet I may have to abandon it prematurely. Objectively, that’s not a tragedy. People die, people retire – hell, people have midlife epiphanies that make them join odd religions or take jobs in industry with salaries several-fold higher than their academic ones. But right now, I’m one of many who are disappearing. And our disappearance will have an impact far beyond what outsiders perceive (incorrectly) as cosseted academic careers.

Biomedical researchers in the US are on the selling side of a monopsony. If our research is very basic and/or we’re starting out, we can get small grants from the National Science Foundation. If it’s very directed or applied, we can get tiny grants from private foundations or the rare decent-sized grant from the Departments of Energy or Defense. But all these amount to peanuts. The engine behind US biomedical research is a single organization: the National Institutes of Health (NIH). When the NIH says “Jump!” we ask “How high?” on the way up.

Grant submissions to the NIH have always been as arcane and painful as a complex religious ritual that includes flaying. Success depends not only on the quality of our science and the number and impact of our papers but also on sending the grant to the right study section for peer review, on using fashionable (“cutting edge”) gadgets and techniques, on doing science that is perceived to fit the interests of the NIH institute that hosts our grant, and on being lucky enough to get reviewers and program officers who agree on the importance of our proposed work (and in the case of reviewers, not direct competitors who are essentially handed unpublished data on a platter). All this, subsumed under the rubric “grantsmanship” or “being savvy”, is not taught at any point during our long, arduous training. In my youth, we learned by literally walking into brick walls.

However, when I started my PhD biomedical researchers could afford to have their egos and labs bashed by savage critiques and terrible grant scores because the NIH payline was 30-40%. This meant that one in three grants got funded – or that each of our grant applications (if of reasonable quality) got funded in one of the three tries the NIH allowed. Also, at that time the traditional university salary covered nine months. For most academic researchers, a grant meant they got the last three months of salary plus, of course, the wherewithal to do research.

My situation was different: I went to a poor institution (my startup package was seventeen thousand – the common minimum is a million) and except for my first two years and a six-month bridge later on I was entirely on soft money till mid-2008. So for me the equation was not “no grant, no lab”; it was “no grant, no job”. This was made a bit tougher by the fact that my research fit the “starts as heresy and ends as superstition” paradigm. For one, the part of my system that is relevant to dementia undergoes human-specific regulation. So unlike most of my colleagues I did not detour into mouse models, long deemed to be the sine qua non of equivalence (an equation that has hurt both biology and medicine, in my opinion, but that’s a different conversation).

Don’t misunderstand me, I’m not saying I’m a neglected Nobel-caliber genius. But much of what I pursued and discovered was against very strong headwinds: I investigated a molecule that had been relegated to the back of the neurodegeneration bus for decades. To give you a sense of how some of my work was received, I once got the following comment for an unexpected observation that flew against accepted wisdom (verbatim): “If this were true, someone would have discovered it by now.” The observation has since been confirmed and other labs eventually made plenty of hay with some of my discoveries, but I’ve never managed to get funding to pursue most of them. I was either too early and got slammed for lacking proof of feasibility or direct relevance to health, or too late and got slammed for lacking cutting-edginess.

It is hard to keep producing data and papers if your lab keeps vanishing like Brigadoon and has to be rebuilt from scratch with all the loss of knowledge and momentum each dislocation brings. To prevent this as much as I could, whenever I had a grant hiatus I cut my salary in half to pay my lab people as long as possible (if you go below 50% you lose all benefits, health insurance prominently among them). I could do this because I had no children to raise and educate, though it has seriously affected my retirement and disability benefits.

While I was marching to my malnourished but stubborn inner drummer, the NIH payline was steadily falling. It now stands at around 7% which means one in fifteen grants gets funded – or that we must send in fifteen grants to get one. Academic institutions, grown accustomed to the NIH largesse, have put more and more of their faculty on largely or entirely soft salaries. At the same time, grants are essentially the sole determinant for promotion and tenure – for those universities that still have tenure. Additionally, universities have decided to run themselves like corporations and use the indirect funds (money the NIH pays to institutions for administrative and infrastructure support) to build new buildings named after their board members, hire ever more assistant vice presidents and launch pet projects while labs get charged for everything from telephone bills to postdoc visas. Essentially, at this point most US biomedical researchers are employed by the NIH and their universities rent them lab space at markup prices comparable to Pentagon toilet seat tariffs.
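(A back-of-the-envelope aside of my own, not part of the original argument: if you treat each submission as an independent draw at the payline probability, the number of submissions until a success follows a geometric distribution with mean 1/p — which is where “one in three” at a 30-40% payline and “one in fifteen” at roughly 7% come from.)

```python
# Minimal sketch of the payline arithmetic, assuming independent submissions
# with success probability p (the payline). This is an illustration only;
# real review outcomes are of course correlated across resubmissions.

def expected_submissions(payline: float) -> float:
    """Mean of a geometric distribution: expected tries until first success."""
    return 1.0 / payline

# Old payline (~33%): about 3 tries. Current payline (~7%): about 14-15 tries.
print(round(expected_submissions(0.33)))  # → 3
print(round(expected_submissions(0.07)))  # → 14
```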

Meanwhile, the NIH has been changing grant formats for the sake of streamlining (an odd objective, given its mission). We used to have three pages to respond to reviewers. Now we have one. We were able to send last-minute discoveries or paper acceptances that would make the difference between success and failure. Now we cannot. We used to get detailed critiques. Now we get bullet points. We were allowed three tries. Now we’re allowed two. If our two-strikes-and-we’re-out submission fails, we must change our work “substantially” (nebulously defined, and up to the interpretation of individual NIH officers) and to ensure compliance, the NIH has invested in software that combs past grants to uncover overlap. And of course there is essentially no appeal except for resubmission: the NIH appeal process, such as it is, has been copied from Kafka’s Trial.

All these changes are essentially guaranteeing several outcomes: young people will have fewer and fewer chances (or reasons) to become researchers or independent investigators; new and small labs will disappear; and despite lip service to innovation, research will seek refuge in increasingly safer topics and/or become “big ticket” science, doing large-scale politically dictated projects in huge labs that employ lots of robots and low-level technicians — a danger foreseen by Eisenhower in his famous “military-industrial complex” address, which today would get him labeled fringe hard left. A recent analysis by MIT’s Tech Review showed that biomedical work still timidly clusters around the few areas that have already been trampled into flatness, ignoring the far vaster unknown territories opened up by the human genome sequencing project and its successors.

Of course, we biomedical researchers have played a significant role in our own slaughter. Because of the unavoidable necessity of peer review, we have acted as judges, juries and executioners for each other. Like all academic faculty, we’re proud that we’re as hard to herd as cats, and like all white-collar workers we have considered it beneath our dignity to be members of a union. Our only (weak and vanishing) protection has been tenure. Many of us are proud to demonstrate how much hazing we can take, how little we need to still produce publishable, fundable science while still carrying the burdens of mentoring, committee work, teaching… The early rounds of cuts were called “pruning dead wood” or “trimming fat”. Except that for the last two decades we haven’t been cutting into fat but into muscle – and now, into bone. Too, keeping ahead of the never-abating storm swells presupposes not only a lot of luck and/or the ability to tack with the winds of scientific fashion, but also personal relationships with granite foundations and cast-iron health.

In my case, after my second hiatus I succeeded in landing two small two-year grants. The day I found out one of them would be funded was the day I was interviewing at the NIH for a branch administrator position. I turned down that offer, with its excellent terms. My heart is irrevocably tied to research. I had not abandoned my home, my country, my culture to become an administrator — even though the position I was offered can make or break other people’s research. One month after the second of these small grants was activated, I got my cancer diagnosis. Being on soft money meant I could not suspend the grant during my surgery and recovery: my health bills would have driven me to penury. So productivity suffered accordingly. But such factors are not considered during grant review.

Biology is an intrinsically artisanal discipline: unlike physics, it cannot be easily reduced to a few large truths reached by use of increasingly larger instruments. Instead, it looks like a crazy quilt of intricately interwoven threads (take a look at the diagram of any biological pathway and you get the picture, to say nothing of how things translate across scales). Some argue that larger labs are likelier to be innovative, because they have the money to pursue risky work and the gadgets to do so. However, it has been my personal experience that large labs are often large because they pursue fashionable topics with whatever techniques are hot du jour, regardless of the noise and artifacts they generate (plus their heads have time for politicking at all kinds of venues). They also have enormous burnout rates and tend to train young scientists by rote and distant proxy.

Granted, we need big-science approaches; but we need the other kind, too – the kind that is now going extinct. And it’s the latter kind that has given us most of the unexpected insights that have translated into real knowledge advances, often from neglected, poorly lit corners of the discipline.

Now I’m not all I thought I’d be,
I always stayed around:
I’ve been as far as Mercy and Grand,
Frozen to the ground.
I can’t stay here and I’m scared to leave;
Just kiss me once and then
I’ll go to hell —
I might as well
Be whistlin’ down the wind.
— Tom Waits, Whistle down the Wind

Images: 1st, Sand Lily Shadow (Tudio, Falassarna, Crete); 2nd, I Stand Alone (Michellerena, Burnie, Tasmania); 3rd, Sea Gate (Peter Cassidy, Heron Island, Australia)

The Quantum Choice: You Can Have either Sex or Immortality

Tuesday, March 29th, 2011

Note: A long-term study from Northwestern University (not yet in PubMed) has linked participation of young adults in religious activities to obesity in later life. Overhanging waistlines in First World societies undoubtedly contribute to the degenerative illnesses of lengthened lifespan. But it’s important to keep in mind that fat fulfills critical functions. This article, which looks at the other side of the coin, was commissioned by R. U. Sirius of Mondo 2000 fame and first appeared in H+ Magazine in September 2009.

Because of the four-plus centuries of Ottoman occupation, the folklore of all Balkan nations shares a Trickster figure named Hodja (based on the 13th-century Konyan Sufi wandering philosopher Nasreddin). In one of the countless stories involving him, Hodja has a donkey that’s very useful for carting firewood, water, etc. The problem is that the donkey eats expensive hay. So Hodja starts decreasing the amount of hay he feeds it. The donkey stolidly continues doing the chores, and Hodja, encouraged by the results, further decreases the feed until it’s down to nothing. The donkey continues for a few days, then keels over. Hodja grumbles, “Damnable beast! Just when I had him trained!”

Whenever I hear about longevity by caloric restriction, I immediately think of this story.

But to turn to real science, what is the basis for caloric restriction as a method of prolonging life? The answer is: not humans. The basis is that feeding several organisms, including mice and rhesus monkeys, near-starvation diets appears (emphasis on the appears) to roughly double their lifespan. Ergo, reasons your average hopeful transhumanist, the same could happen to me if only I had the discipline and time to do the same – plus the money, of course, for all the supplements and vitamins that such a regime absolutely requires, to say nothing of the expense of such boutique items as digital balances.

I will say a few words first about such beasties as flies (Drosophila melanogaster) and worms (Caenorhabditis elegans) before I climb the evolutionary ladder. Many organisms in other branches of the evolutionary tree have two “quantum” modes: survival or reproduction. For example, many invertebrates are programmed to die immediately after reproduction, occasionally becoming food for their progeny. In some cases, their digestive tracts literally disintegrate after they release their fertilized eggs. Conversely, feeding royal jelly to a female larva that would otherwise become an infertile worker bee turns her into a fully functioning queen. The general principle behind caloric restriction is that it essentially flips the organism’s switch from reproductive to survival mode.

Most vertebrates from reptiles onward face a less stark choice. Because either or both parents are required to lavish care on offspring, vertebrate reproduction is not an automatic death sentence. So let’s segue to humans. Due to their unique birth details, human children literally require the vaunted village to raise them — parents, grandparents, first-degree relatives, the lot. At the same time, it doesn’t take scientific research to notice that when calories and/or body fat fall below a certain minimum, girls and women stop ovulating. Nor does it take more than living in a context of famine, whether chosen or enforced, to notice the effects of starvation on people, from lethargy and fatigue to wasted muscles, brittle bones and immune system suppression, crowned with irritability, depression, cognitive impairment and overall diminished social affect.

Ah, says the sophisticated caloric restriction advocate, but much of this comes from imbalances in the diet – missing vitamins, minerals, etc. Well, yes and no. Let me give a few examples.

All vitamins except B and C are lipid-soluble. If we don’t have enough fat, our body can’t absorb them. So the excess ends up in odd places where it may in fact be toxic – hence the orange carotenoid-induced tint that is a common telltale sign of many caloric restriction devotees. Furthermore, if we have inadequate body fat, not only are we infertile, infection-prone and slow to heal due to lack of necessary hormones and cholesterol; our homeostatic mechanisms (such as temperature regulation) also flag. And because caloric restriction forces the body to use up muscle protein and leaches bones of minerals, practitioners can end up with weakened hearts and bone fractures.

Speaking of fat, the brain has no energy reserves. It runs exclusively on glucose. When starved of glucose, it starts doing odd things, including the release of stress chemicals. This, in turn, can induce anything from false euphoria to hallucinations. This phenomenon is well known from anorexics and diabetics entering hypoglycemia, but also from shamans, desert prophets and members of cultures that undertook vision quests, which invariably included prolonged fasting.  So caloric restriction may make its practitioners feel euphoric. But just as people feel they have comprehended the universe while under the influence of psychoactive drugs, so does this practice impair judgment and related executive functions.

So what about those glowing reports which purport to have demonstrated that caloric restriction doubles the lifespans of mice and rhesus monkeys, as well as giving them glossy pelts? Surely we can put up with a bit of mental confusion, even failing erections, in exchange for a longer life, as long as it’s of high quality – otherwise we’ll end up like poor Tithonus, who was granted immortality but not youth and dwindled into a shriveled husk before the gods in their whimsical mercy turned him into a cicada. And it does seem that caloric restriction decreases such banes of extended human lifespan as diabetes and atherosclerosis. Well, there’s something interesting going on, all right, but not what people (like to) think.

In biology, details are crucial and mice are not humans. In Eldorado Desperadoes: Of Mice and Men, I explained at length why non-human studies are proof of principle at best, irrelevant at worst. Laboratory mice and monkeys are bred to reproduce early and rapidly. They’re fed rich diets and lead inactive lives – the equivalent of couch potatoes. The caloric restriction studies have essentially returned the animals to the normal levels of nutrition that they would attain in the wild. Indeed, caloric restriction of wild mice does not extend their lives and when caloric levels fall below about 50%, both lab and wild mice promptly keel over, like Hodja’s donkey. In the rhesus studies, lifespans appeared extended only when the investigators counted a subset of the deaths in the animal group they tested.

On the molecular level, much attention has been paid to sirtuin activators, resveratrol chief among them. Sirtuins are a class of proteins that regulate several cell processes, including aspects of DNA repair, cell cycle and metabolism. This means they’re de facto pleiotropic, which should give would-be life extenders pause. As for resveratrol, it doesn’t even extend life in mice – so the longer lives of the red-wine loving French result from other causes, almost certainly including their less sedentary habits and their universal and sane health coverage. That won’t stop ambitious entrepreneurs from setting up startups that test sirtuin activators and their ilk, but I predict they will be as effective as leptin and its relatives were for non-genetic obesity.

This brings to mind the important and often overlooked fact that genes and phenotypes never act in isolation. An allele or behavior that is beneficial in one context becomes deleterious in another. When longer-lived mutants and wild-type equivalents are placed in different environments, all longevity mutations result in adaptive disadvantages (some obvious, some subtle) that make the mutant strain disappear within a few generations regardless of the environment specifics.

Similarly, caloric restriction in an upper-middle-class context in the US may be possible, if unpleasant. But it’s a death sentence for a subsistence farmer in Bangladesh who may need to build up and retain her weight in anticipation of a famine. For women in particular, who are prone to both anorexia and osteoporosis, caloric restriction is dangerous – hovering as it does near keeling-over territory. As for isolated, inbred groups that have more than their share of centenarians, their genes are far more responsible for their lifespan than their diet – as is the fact that they invariably lead lives of moderate but sustained physical activity surrounded by extended families, as long as they are relatively dominant within their family and community.

Human lifespan has already nearly tripled, courtesy of vaccines, antibiotics, clean water and use of soap during childbirth. It is unlikely that we will be able to extend it much further. Extrapolations indicate that caloric restriction will not lengthen our lives by more than 3% (a pitiful return for such herculean efforts) and that we can get the same result from reasonable eating habits combined with exercise. Recent, careful studies have established that moderately overweight people are the longest-lived, whereas extra-lean people live as long as do obese ones.
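To put that 3% extrapolation in perspective, here is the arithmetic under an assumed 80-year baseline lifespan (the 80 is my illustrative figure, not from the studies cited above):

```python
# Illustrative only: what a 3% lifespan extension would actually buy.
lifespan_years = 80        # assumed baseline for illustration
extension_fraction = 0.03  # the extrapolated upper bound for caloric restriction
extra_years = lifespan_years * extension_fraction
print(extra_years)  # about 2.4 extra years
```

A couple of years gained at the cost of decades of near-starvation makes the cost/benefit ratio plain.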

So what can you really do to extend your life? Well, as is the case with many other quality-of-life attributes, you should choose your parents carefully. Good alleles for susceptibilities to degenerative age-related diseases (diabetes, heart disease, hypertension, dementia) are a great help — as is high income in a developed country with first-rate medical services, which will ensure excellent lifelong nutrition and enough leisure time and/or devoted underlings to make it possible to attend to suchlike things.

Baby, You Were Great!

Saturday, March 26th, 2011

— title of a story by Kate Wilhelm

Everyone who meets me inevitably finds out that science fiction and fantasy (SF/F) occupy a large portion of my head and heart: I write it, read it, review it and would like to see it discard the largely self-imposed blinkers that impoverish it. For a while, Strange Horizons (SH) magazine thrilled and captivated me. So it’s doubly heartbreaking for me to see it regressing into “normality” and losing sight of what made it stand out in the first place.

My relationship with SH has long been ambivalent. I was happy it was a major SF/F venue brought to vibrant life by female founders: Mary Anne Mohanraj and Susan Marie Groppi after her. I was pleased it published many works by women and Others and contained significant numbers of women in its masthead (as editors, not gofers or dishwashers). I was glad it showcased non-famous writers from the get-go and cast its net wide. My second major SF article appeared there when I was relatively unknown in the domain.

However, there were some worms in the tasty apple. One was that SH seemed to have adopted a stance of “science hurts our brains” – perhaps to distinguish itself from the scienciness of Analog and Asimov’s. This was true not only (increasingly) for its stories but also for the non-fiction articles which steered determinedly clear of science, concentrating instead on literary and social criticism. There’s nothing wrong with that, of course, especially when SF/F is still struggling for legitimacy as literature. But other speculative magazines – Lightspeed, for one – manage to include interesting science articles without shedding cooties on their fiction.

So I read SH fiction less and less but continued to browse its columns and reviews. Then in the last few years I noticed those shifting – gradually but steadily. They were increasingly by and about Anglo-Saxon white men and showed the tunnel vision this context denotes and promotes. The coalescing core reviewers were youngish British men (with token “exotics”) convinced of their righteous enlightenment and “edginess”, along the lines of “We discovered/invented X.”

I caught a whiff of the embedded assumptions that surface when these self-proclaimed progressives relax, safe from prying eyes. One of them recently reviewed a story on his site and characterized its protagonist with the term “cunt”. He used the word repeatedly, as a synonym for “empathy-lacking sociopath”. Having accidentally read the entry, I remarked that, feminism bona fides aside, the term doesn’t ring friendly to female ears, and that even its canonical definition (“extremely unpleasant person, object or experience”) is not equivalent to sociopath. Perhaps not so incidentally, I was the only woman on the discussion thread.

The reviewer’s first response was that only Amurrican barbarians “misunderstand” the term. I replied (in part) that I’m not American, and presumably he wishes to be read by people beyond Britain and its ex-colonies. At that point he essentially told me to fuck off. His friends, several of them SH reviewers or editors, fell all over themselves to show they aren’t PC killjoys. They informed me that US cultural hegemony is finally over (if only), that “cunt” is often used as an endearment (in which case his review was a paean?) and that women themselves have reclaimed the term (that makes it copacetic then!).

So this is the core group that has been writing the majority of reviews at SH for the last few years and is now firmly ensconced not only in SH but also across British SF/F venues. This may explain the abysmal gender percentages of the latter, which haven’t really budged even after the discussions around the not-so Mammoth Book of Mindblowing SF or the handwringings over the Gollancz aptly named Masterwork Series. The recent epic fantasy debate showcased the prevailing attitudes by discussing exclusively works of (repeat after me) white Anglosaxon men. Not surprisingly, the editor of SH just revealed that roughly two-thirds of recent SH reviews were by male reviewers and two-thirds discussed works of male authors, adhering to the in/famous “one-third rule” that applies to groups helmed by men.

People will argue that SH still has “a preponderance” of women in its masthead and pages. That’s mostly true — for now. However, it is significant that the percentages of works by women in SH consistently reflect the ratios and clout of women within each of its departments. Too, it’s human nature to flood the decks with one’s friends when someone takes over a ship. The problem is that, given the makeup of the current editor’s inner circle, an echo chamber is all but assured. To give one example, a new SH column does blurbs of online discussions relevant to SF/F. Although the editor in charge of it asked for input and admitted he got an avalanche of responses, its entries so far have come almost exclusively from members of the in-group.

So SH is inching towards a coterie of white Anglo-Saxon men as arbiters of value, a configuration Virginia Woolf would have found depressingly familiar. People are fond of repeating that publication ratios reflect the fact that women submit less often than men. What I increasingly see at SH are stances that need not be in-your-face hostile to exert a chilling effect. If someone smirks at you constantly, the passive-aggressive condescension will eventually stop you from going to his parties as effectively as if he had explicitly barred you entry (check out The Valve to observe this dynamic at work).

It grieves me to see SH slowly but inexorably becoming, quite literally, a neo-Victorian club. It grieves me that one of the few SF/F venues once genuinely receptive to women’s work is resorting to smug lip service. Perhaps the magazine is a victim of its own success: once women had nurtured it to prominence, men could take over and reap the benefits – a standard practice.

I see developing patterns early, so much so that I often joke I should be called Cassandra, not Athena. Yet this once, for the sake of the genre and the women who painstakingly watered the now-vigorous SH tree, I fervently hope I’m proved wrong. Otherwise, given the attention span of the Internet, a handful of us will wistfully recall (to hoots of incredulous derision, no doubt) that once there was a verdant oasis in SF/F that women created, shaped and inhabited.

Remedios Varo, Nacer de Nuevo (To Be Reborn)

Note to readers: I am aware this will lead to polarizing and polarized views. I will not engage in lengthy back-and-forths, although I made an exception for the expected (and predictable) response by Abigail Nussbaum. People are welcome to hold forth at whatever length and pitch they like elsewhere.

Blastocysts Feel No Pain

Monday, March 14th, 2011

In 2010, the recipient of the Medicine Nobel was Robert Edwards, who perfected in vitro fertilization (IVF) techniques for human eggs in partnership with Patrick Steptoe. Their efforts culminated in the conception of Louise Brown in 1978, followed by several million such births since. The choice was somewhat peculiar, because this was an important technical advance but not an increase in basic understanding (which also highlights the oddity of not having a Nobel in Biology). Also, the gap between the achievement and its recognition was unusually long. This has been true of others who defied some kind of orthodoxy – Barbara McClintock is a poster case.

In Edwards’ case, the orthodoxy barrier was conventional. Namely, IVF separates sex from procreation as decisively as contraception does. Whereas contraception allows sex without procreation (as do masturbation and most lovemaking permutations), IVF allows conception minus orgasms and also decouples ejaculation from fatherhood. Sure enough, a Vatican representative voiced his institution’s categorical disapproval of this particular bestowal. However, IVF has detractors even among the non-rabidly religious. The major reason is its residue: unused blastocysts, which are routinely discarded unless they’re used as a source for embryonic stem cells.

Around the same time that Edwards received the Nobel, US opponents of embryonic stem cell research filed a lawsuit contending that this “so far fruitless” research siphoned off funds from “productive” adult stem cell research. The judge in the case handed down a decision that amounted to a ban of all embryonic stem cell work and the case has been a legal and political football ever since. The brouhaha has highlighted two questions: what good are stem cells? And what is the standing of blastocysts?

Let me get the latter out of the way first. Since IVF blastocysts are eventually discarded if not used, most dilemmas associated with them reek of hypocrisy and the transparent desire to curtail women’s autonomy. A 5-day blastocyst consists of about 200 cells arising from a zygote that has not yet implanted. If it implants, about 50 of these eventually become the embryo; the rest turn into the placenta. A blastocyst is a potential human as much as an acorn is a potential oak – perhaps even less, given how much it needs to attain viability. Equally importantly, blastocysts don’t feel pain. For that you need a nervous system that can process sensory input. In humans, this develops roughly near the end of the second trimester – which is one reason why extremely premature babies often have severe neurological deficits.

This won’t change the mind of anyone who believes that a zygote is “ensouled” at conception, but if we continue along this axis (very similar to much punitive fundamentalist reasoning) we will end up declaring miscarriage a crime. This is precisely what several US state legislatures are currently attempting to do, with the “Protect Life Act” riding pillion, bringing us squarely into Handmaid’s Tale territory. It is well known by now that something like forty percent of all conceptions end in early miscarriage, many unnoticed or noticed only as heavier-than-usual monthly bleeding. A miscarriage almost invariably means there is something seriously wrong with the embryo or the embryo/placenta interaction. Forcing such pregnancies to continue would result in a significant increase in deaths and permanent disabilities among both women and children.

The “instant ensoulment” stance is equivalent to the theories that postulated a fully formed homunculus inside each sperm and deemed women passive yet culpable vessels. It is also noteworthy that the concern of compulsory-pregnancy advocates stops at the moment of birth. Across eras, girls have been routinely killed at all ages by exposure, starvation, poisoning, beatings; boys suffered this fate only if they were badly deformed in cultures or castes that demanded physical perfection.

Let’s now focus on the scientific side. By definition, stem cells must have the capacity to propagate indefinitely in an undifferentiated state and the potential to become most cell types (pluripotent). Only embryonic stem cells (ESCs) have these attributes. Somatic adult stem cells (ASCs), usually derived from skin or bone marrow, are few, cannot divide indefinitely and can only differentiate into subtypes of their original cellular family (multipotent). In particular, it’s virtually impossible to turn them into neurons, a crucial requirement if we are to face the steadily growing specter of neurodegenerative diseases and brain or spinal cord damage from accidents and strokes.

Biologists have discovered yet another way to create quasi-ESCs: reprogrammed adult cells, aka induced pluripotent stem cells (iPSCs). However, it comes as no surprise that iPSCs have recently been found to harbor far larger numbers of mutations than ESCs. To generate iPSCs, you need to jangle differentiated cells into de-differentiating and resuming division. The chemical path is brute force – think chemotherapy for cells and you get an inkling. The alternative is to introduce an activated oncogene, usually via a viral vector. By definition, oncogenes promote cell division, which raises the very real prospect of tumors. Too, viral vectors introduce a host of uncontrolled variables that have so far precluded fine control.

ESCs are not tampered with in this fashion, although long-term propagation can cause epi/genetic changes on its own. Additionally, recent advances have allowed researchers to dispense with mouse feeder cells for culturing ESCs. These carried the danger of transmitting undesirable entities, from inappropriate transcription factors to viruses. On the other hand, ASC grafts from one’s own tissues are less likely to be rejected (though xeno-ASCs are even likelier than ESCs to be tagged as foreign and destroyed by the recipient’s immune system).

Studies of all three kinds of stem cells have helped us decipher mechanisms of both development and disease. This research allowed us to discover how to enable cells to remain undifferentiated and how to coax them toward a desired differentiation path. Stem cells can also be used to test drugs (human lines are better indicators of outcomes than mice) and eventually generate tissue for cell-based therapies of birth defects, Alzheimer’s, Parkinson’s, Huntington’s, ALS, spinal cord injury, stroke, diabetes, heart disease, cancer, burns, arthritis… the list is long. Cell-based therapies have advantages over “naked” gene delivery, because genes delivered in cells retain the regulatory signals and larger epi/genetic contexts crucial for long-term survival, integration and function.

People argue that ASCs (particularly the hematopoietic precursors used in bone marrow transplants) have been far more useful than ESCs, whose use is still potential. However, they usually fail to note that ASCs have been in clinical use since the late fifties, whereas human ESCs were first isolated in 1998 by James Thomson’s group in Wisconsin. Add to that the various politically or religiously motivated embargoes, and it’s a wonder that our understanding of ESCs has advanced as much as it has.

Despite fulminations to the contrary, women never make reproductive decisions lightly since their repercussions are irreversible, life-long and often determine their fate. Becoming a human is a process that is incomplete even at birth, since most brain wiring happens postnatally. Demagoguery may be useful to lawyers, politicians and control-obsessed fanatics. But in the end, two things are true: actual humans are (should be) much more important than potential ones – and this includes women, not just the children they bear and rear; and embryonic stem cells, because of their unique properties, may be the only path to alleviating enormous amounts of suffering for actual humans.

Best FAQ source: NIH stem cell page

Alien Life in Chondritic Meteorites (Not)

Sunday, March 6th, 2011

I received word of yet another NASA-funded claim of “alien lifeforms”: one more case of shadowy squiggles in a meteorite, this one published in the Journal of Cosmology (JoC). Rosie Redfield dissects it in detail, but essentially we have a recap of the “arsenic bacterium” debacle minus (thankfully) the NASA-directed media blitz. Briefly:

1. The author, Richard B. Hoover, has been presenting the same evidence without change since 1997.
2. The only CV I can find for Richard Hoover does not list a PhD in anything (it does say “he authored four species of bacteria” which gives new meaning to the term “conjuring”). [Update: NASA confirms that Hoover has a BSc, not in biology.]
3. The evidence itself is so weak, stale, shoehorned and artifact-prone as to be non-existent. The presentation is also misleading: it juxtaposes suggestive pictures at different scales. It doesn’t meet the criteria for publication in a reputable journal, let alone the justifiably high bar for such claims — which may explain why the author approached Fox News instead.
4. The editors of JoC say that the paper will be peer-reviewed post-publication (file this under “unclear on the concept”).
5. The executive editor of JoC for Astrobiology is Chandra Wickramasinghe of the Hoyle and Wickramasinghe “viruses from space” panspermia theories – enough said.

Memo to NASA: hire bona-fide biologists who can conduct solid research or shut down the Astrobiology division.

Update: NASA has stated that the Hoover paper was published without the required internal NASA critique and approval; it also failed external peer review three years ago.

The Multi-Chambered Nautilus

Monday, February 14th, 2011

How well like a man fought the Rani of Jhansi,
How valiantly and well!

— Indian ballad

My opinion of steampunk is low. However, last week’s lovely Google doodle by Jennifer Hom reminded me that I like at least one steampunk work. After I wrote my Star Trek book, I was asked why I did so. My reply was The Double Helix: Why Science Needs Science Fiction. Here is its opening paragraph:

The first book that I clearly remember reading is the unexpurgated version of Jules Verne’s 20,000 Leagues under the Sea. Had I been superstitious, I would have taken it for an omen, since the book contains just about everything that has shaped my life and personality since then. For me, the major wonder of the book was that Captain Nemo was both a scientist and an adventurer, a swashbuckler in a lab coat, a profile I imagined myself fulfilling one day.

I was five when I first read the novel. Unlike Anglophone readers, I was lucky enough to have the complete version rather than the bowdlerized thin gruel that resulted in Verne being consigned to the category of “children’s author”. Of course, 20,000 Leagues set me up for the inevitable fall. It prompted me to read most of Verne’s other works, in which he’s as guilty of infodumps, cardboard characters and tone-deaf dialogue as most “authors of ideas”. Too, his books are boys’ treehouses: I can recall two women in those I read, both as lively as wooden idols. Even so, Captain Nemo stands apart among Verne’s characters, both in his depth and in the messages he carries.

Verne has Aronnax describe Nemo at length when he first sees him. The description takes up more than a page — but even now I remember my frustration when I reached its end and realized that Verne says exactly nothing concrete about Nemo’s build, hue, or eye and hair color. All he has told us, in excruciating detail, is that Nemo looks extraordinarily intelligent and has a formidable presence.

However, my book copy contained several sepia-tinted plates from Disney’s film version of the book (in lieu of Édouard Riou’s engravings that accompanied the original editions). I had no idea who the actors were – I discovered that James Mason was British in my early twenties. On the other hand, several hints in the book, including the “liquid vowel-filled” language spoken by his multinational crew, coded the captain of the Nautilus as different. So in my mind Nemo was olive-skinned, black-haired. He looked like my father the engineer, like my father’s seacaptain father and brothers, like the andártes of the Greek resistance. He looked like me.

He acted like the andártes, as well. He sided with the downtrodden, from helping a Ceylonese pearl diver to giving guns to the Cretans risen against the Turks. And when he lost companions, he wept. Yet he was not merely a warrior; he was also a polymath. Besides being a crack engineer, a marine biology expert and an intrepid explorer, he spoke half a dozen languages, kept a huge library, and was a discerning art collector and a talented musician. The Nautilus is the precursor of Star Trek’s Enterprise: a ship of science and culture that can also wage war. Too, Nemo’s conversations bespoke someone from an old civilization tempered by melancholic wisdom – not an insouciant triumphalist.

Then there was the Lucifer strain that appealed to me just as much, coming as I did from a clan of resistance fighters. Nemo embodies the motto by which I have come to live my life: Never complain, never explain. He’s an evolved incarnation of the Byronic hero. His name is not only the Latin version of Outis (No One) that Odysseus gave to Polyphemus; it is also a cognate of Nemesis (Vengeance). Today’s security agencies would call Nemo a terrorist, even though he fights in self-defense and retribution after invaders massacre his family and occupy his homeland.

Since victors write history, the losers’ freedom fighters become the winners’ murderers. Beyond that, there’s a fundamental difference between Nemo and fanatics like bin Laden: Nemo is not fighting to establish an Ummah, an Empire, a Utopia, not for power, riches, or glory. He’s not a fundamentalist secure in celestial approval of his actions. He is deeply conflicted and feels grief and guilt whenever he exacts revenge.

In this, Nemo shares his creator’s determined Enlightenment outlook. Verne was never apologetic about his heroes’ secularism or love of political freedom. However, Pierre-Julien Hetzel, Verne’s excessively hands-on editor, was acutely mindful of social and political conventions. As a result, Verne has Nemo go through a deathbed act of contrition in the vastly inferior Mysterious Island – something totally at odds with his character in 20,000 Leagues. Left to himself, Verne might have given a far darker ending to the first novel, as Disney did in his film version and as Verne later did with Robur, a coarsened power-obsessed Nemo clone.

Verne had originally conceived Nemo as a Polish scientist fighting against Russian oppressors. Hetzel did not want to alienate the lucrative Russian market. Also, neither Poland nor Russia is known for its naval prowess: a Russian-hating Nemo would put a serious crimp in the sea-battle drama of 20,000 Leagues. So when Verne reveals Nemo’s provenance in The Mysterious Island, he makes him an Indian prince, son of the Rajah of Bundelkhand. Lakshmi Bai, the Rani of Jhansi (a region of Bundelkhand), was one of the leaders of the Sepoy Uprising, the same uprising that cost Nemo his family and home. It makes me glad to think Captain Nemo, Prince Dakkar, may have been Lakshmi Bai’s cousin – that they grew up together, friends and like-minded companions. I’m equally glad Nemo is free of the poisonous concepts of caste purity.

Who could animate Captain Nemo’s complexities and dilemmas onscreen? Mason may have been ethnically incorrect, but he truly captured Nemo – both his torment and his charisma. The incarnations since Mason have been anemic and/or off-key. In The League of Extraordinary Gentlemen, Naseeruddin Shah did his best with the paper-thin material he was given, but the film was so unremittingly awful that I’ve wiped it from long-term memory. Besides him, I have a few other possibles in mind and I’m open to additional suggestions:

Jean Reno, real name Juan Moreno, the stoic ronin whose Andalusian parents had to leave Cadiz during Franco’s regime; Ghassan Massoud, who wiped the floor with the other actors (except Edward Norton as the uncredited Baldwin) as Saladin in Ridley Scott’s Kingdom of Heaven; Ken Watanabe, who left Tom Cruise in the dust in The Last Samurai; Oded Fehr, who made the screen shimmer as the paladin Ardeth Bey in The Mummy; in a decade or so, Ioan Gruffudd, whom Guinevere should have taken as a co-husband in Antoine Fuqua’s Arthur; also in about a decade (provided he keeps lean), Naveen Andrews, the soulful Kip in The English Patient.

It goes without saying that I have an equally long list of candidates who could embody Captain Nemo as a woman – but I’ll keep those names for that never-never time when this becomes possible without the venomous ad feminam criticisms (some from prominent women) that greeted Helen Mirren as Prospero. Because gender essentialism aside, Captain Nemo was not someone I wanted to fall in love with, but someone I wanted to become: a warrior wizard, a creator, a firebringer.

Addendum 1: I received excellent additions to the Nemo candidate list. Calvin Johnson suggested Ben Kingsley, real name Krishna Pandit Bhanji, who needs no further introduction (Calvin and I also agreed that Laurence Fishburne in Morpheus mode would be great for the part). Anil Menon proposed the equally formidable Gabriel Byrne. Eloise Lanouette brought up Alexander (endless full name) Siddig who keeps getting better, like fine wine.

I also received a palpitation-inducing… er, tantalizing thought-experiment from Kay Holt; namely, a film in which each of my candidate Nemos inhabits a parallel reality. Ok, I’ll stop grinning widely now.

Addendum 2: I got e-mails expressing curiosity about my female Nemo candidates. So here’s the list.  Again, I welcome suggestions:

Julia Ormond, who radiates intelligence and made a tough-as-nails underdog hero in Smilla’s Sense of Snow; Karina Lombard, who brought tormented Bertha Mason to vivid life in The Wide Sargasso Sea; Salma Hayek, the firebrand of Frida; Michelle Yeoh, who bested everyone (including Chow Yun Fat) in Crouching Tiger, Hidden Dragon; Angela Bassett, who wore kickass Lornette “Mace” Mason like a second skin in Strange Days; last but decidedly not least, Anjelica Huston — enough said!

Great additional suggestions have come for this half as well: Lena Headey who made a terrific Sarah Connor, Indira Varma of Kama Sutra — both in about ten years’ time.  Sotiría Leonárdhou, who set the world on fire in Rembetiko. And, of course, Sigourney Weaver, the one and only Ellen Ripley.

Images: 1st, the Nautilus as envisioned by Tom Scherman; 2nd, Captain Nemo (original illustration by Édouard Riou; detail); 3rd, James Mason as Nemo; 4th and 5th, my Nemo candidates, left to right; 4th, the men — top, Jean Reno (France/Spain), Ghassan Massoud (Syria), Ken Watanabe (Japan); bottom, Oded Fehr (Israel), Ioan Gruffudd (Wales), Naveen Andrews (India/UK); 5th, the women — top, Julia Ormond (UK), Karina Lombard (Lakota/US), Salma Hayek (Mexico); bottom, Michelle Yeoh (Hong Kong), Angela Bassett (US), Anjelica Huston (US).

Distant Celestial Fires

Saturday, January 22nd, 2011

In line with end-of-the-world prophecies linked to Maya calendars, there’s sudden noise on the Internet that Betelgeuse (the bright red star marking the hunter’s right shoulder in Orion, on our left as we view him) will become a supernova in 2012. The claim is that this will first give us Tatooine-like sunsets, then singe Earth and all upon it.

Betelgeuse is a gas-shrouded red supergiant of about 20 solar masses that has exhausted its core hydrogen; placed where the Sun is, its surface would extend past the orbit of Jupiter. This does mean that its days are numbered and its end will be spectacular: when it explodes, it will be visible in broad daylight and will cast shadows as strong as those of the full moon. However, it’s easy to find out that Betelgeuse is about 600 light years away. So it’s not close enough to harm us (the radius for harm is roughly 25 light years or less). Furthermore, since its light takes about 600 years to reach us, if the explosion becomes visible to us in 2012 the event actually happened sometime around 1400 CE. A more in-depth search also reveals that the star’s rotation axis does not point in the direction of Earth, precluding a potentially lethal directed gamma ray burst.
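The arithmetic behind these reassurances is easy to check for yourself. A minimal sketch, using the approximate figures quoted above (~600 light years to Betelgeuse, ~25 light years as the rough harm radius):

```python
# Light from a star D light-years away shows us events that happened D years ago.
DISTANCE_LY = 600      # approximate distance to Betelgeuse
HARM_RADIUS_LY = 25    # rough distance within which a supernova could harm Earth

def event_year(observed_year, distance_ly):
    """Return the year an observed event actually occurred."""
    return observed_year - distance_ly

print(event_year(2012, DISTANCE_LY))   # 1412, i.e. around 1400 CE
print(DISTANCE_LY > HARM_RADIUS_LY)    # True: far too distant to harm us
```

In other words, any doomsday light show we saw in 2012 would be six centuries old news, arriving from a star more than twenty times too far away to hurt us.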

Betelgeuse is a runaway: it started life as a hot blue star in the prolific stellar nursery around Orion’s belt. This region, which includes the famous nebula that forms the middle “star” of Orion’s sword, is still giving birth to new stars. So after Betelgeuse has dwindled to a neutron cinder, it may have a successor. But its death will change the shape of perhaps the best-known constellation – a reminder that in our universe everything is born and will die.

Adrienne Rich wrote her elegiac poem Orion before many details about Betelgeuse became known. Yet she knew more and said it far better than the apocalypse pornographers of the Internets:

Far back when I went zig-zagging
through tamarack pastures
you were my genius, you
my cast-iron Viking, my helmed
lion-heart king in prison.
Years later now you’re young

my fierce half-brother, staring
down from that simplified west
your breast open, your belt dragged down
by an oldfashioned thing, a sword
the last bravado you won’t give over
though it weighs you down as you stride

and the stars in it are dim
and maybe have stopped burning.
But you burn, and I know it;
as I throw back my head to take you in
an old transfusion happens again:
divine astronomy is nothing to it.

Pity is not your forte.
Calmly you ache up there
pinned aloft in your crow’s nest,
my speechless pirate!
You take it all for granted
and when I look you back

it’s with a starlike eye
shooting its cold and egotistical spear
where it can do least damage.
Breathe deep! No hurt, no pardon
out here in the cold with you
you with your back to the wall.

Images: Top, data-congruent rendering of Betelgeuse (ESO, L. Calçada); Bottom, Orion (Hubble ESA, Akira Fujii)