Astrogator's Logs

New Words, New Worlds

Archive for the 'Biology & Culture' Category

The Death Rattle of the Space Shuttle

Monday, July 25th, 2011

I get out of my car,
step into the night,
and look up at the sky.
And there’s something
bright, traveling fast.
Look at it go!
Just look at it go!

Kate Bush, Hello Earth

[The haunting a cappella chorus comes from a Georgian folk song, Tsin Tskaro (By the Spring)]

I read the various eulogies, qualified and otherwise, on the occasion of the space shuttle’s retirement.  Personally, I do not mourn the shuttle’s extinction, because it never came alive: not as engineering, not as science, not as a vision.

Originally conceived as a reusable vehicle that would lift and land on its own, the shuttle was crippled from the get-go.  Instead of being an asset for space exploration, it became a liability – an expensive and meaningless one, at that.  Its humiliating raison d’être was to bob in low earth orbit, becoming a toy for millionaire tourists by giving them a few seconds of weightlessness.  The space stations it serviced were harnessed into doing time-filling experiments that did not advance science one iota (with the notable exception of the Hubble), while most of their occupants’ time was spent scraping fungus off walls.  It managed to kill more astronauts than the entire Apollo program.  The expense of the shuttle launches crippled other worthwhile or promising NASA programs, and its timid, pious politics overshadowed any serious advances in crewed space missions.

In the past, I had lively discussions with Robert Zubrin about missions to Mars (and Hellenic mythology… during which I discovered that he, like me, loves the Minoans).  We may have disagreed on approach and details, but on this he and I are in total agreement: NASA has long floated adrift, directionless and purposeless.  Individual NASA subprograms (primarily the robotic missions), carried on in the agency’s periphery, have been wildly successful.  But the days when launches fired the imagination of future scientists are long gone.

It’s true that the Apollo missions were an expression of dominance, adjuncts to the cold war.  It’s also true that sending a crewed mission to Mars is an incredibly hard undertaking.  However, such an attempt — even if it fails — will address a multitude of issues: it will ask the tough question of how we can engineer sustainable self-enclosed systems (including the biological component, which NASA has swept under the rug as scientifically and politically thorny); it will allow us to definitively decide if Mars ever harbored life; it will once again give NASA – and the increasingly polarized US polity – a focus and a worthwhile purpose.

I’m familiar with all the counterarguments about space exploration in general and crewed missions in particular: these funds could be better used alleviating human misery on earth; private industry will eventually take up the slack; robotic missions are much more efficient; humans will never go into space in their current form, better if we wait for the inevitable uploading come the Singularity.

In reality, funds for space exploration are less than drops in the ocean of national spending, and persistent social problems won’t be solved by such measly sums; private industry will never go past low orbit casinos (if that); as I explained elsewhere, we in our present form will never, ever get our brains/minds into silicon containers; and we will run out of resources long before such a technology is even on our event horizon, so waiting for gods… er, AI overlords won’t avail us.

Barring an unambiguous ETI signal, the deepest, best reason for crewed missions is not science. I recognize the dangers of using the term frontier, with all its colonialist, triumphalist baggage. Bravado aside, we will never conquer space. At best, we will traverse it like the Polynesians in their catamarans under the sea of stars. But space exploration — more specifically, a long-term crewed expedition to Mars with the express purpose of unequivocally answering the question of Martian life — will give a legitimate and worthy outlet to our ingenuity, our urge to explore and our desire for knowledge, which is neither that high up in the hierarchy of needs nor the monopoly of elites. People know this in their very marrow – and have shown it by thronging around the transmissions of space missions that mattered.

It’s up to NASA to once again try rallying people around a vision that counts.  Freed of the burden of the shuttle, perhaps it can do so, thereby undergoing a literal renaissance.

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.”

John Fitzgerald Kennedy, September 1962

Images: Pat Rawlings, Beyond; Randy Halverson, Plains Milky Way; European Space Agency, High Aurora.

Ghost in the Shell: Why Our Brains Will Never Live in the Matrix

Thursday, June 23rd, 2011

Introductory note: Through Paul Graham Raven of Futurismic, I found out that Charles Stross recently expressed doubts about the Singularity, god-like AIs and mind uploading.  Being the incorrigible curious cat (this will kill me yet), I checked out the post.  All seemed more or less copacetic, until I hit this statement: “Uploading … is not obviously impossible unless you are a crude mind/body dualist. // Uploading implicitly refutes the doctrine of the existence of an immortal soul.”

Clearly the time has come for me to reprint my mind uploading article, which first appeared at H+ magazine in October 2009. Consider it a recapitulation of basic facts.

When surveying the goals of transhumanists, I found it striking how heavily they favor conventional engineering. This seems inefficient and inelegant, since such engineering reproduces slowly, clumsily and imperfectly what biological systems have fine-tuned for eons — from nanobots (enzymes and miRNAs) to virtual reality (lucid dreaming). An exemplar of this mindset was an article about memory chips. In it, the primary researcher made two statements that fall in the “not even wrong” category: “Brain cells are nothing but leaky bags of salt solution,” and “I don’t need a grand theory of the mind to fix what is essentially a signal-processing problem.”

And it came to me in a flash that most transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons, each exemplified by these sentences. The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.

The analogy du siècle equates the human brain with a computer — a vast, complex one performing dizzying feats of parallel processing, but still a computer. However, that is incorrect for several crucial reasons, which bear directly upon mind portability. A human is not born as a tabula rasa, but with a brain that’s already wired and functioning as a mind. Furthermore, the brain forms as the embryo develops. It cannot be inserted after the fact, like an engine in a car chassis or software programs in an empty computer box.

Theoretically speaking, how could we manage to live forever while remaining recognizably ourselves to us? One way is to ensure that the brain remains fully functional indefinitely. Another is to move the brain into a new and/or indestructible “container”, whether carbon, silicon, metal or a combination thereof. Not surprisingly, these notions have received extensive play in science fiction, from the messianic angst of The Matrix to Richard Morgan’s Takeshi Kovacs trilogy.

To give you the punch line up front, the first alternative may eventually become feasible but the second one is intrinsically impossible. Recall that a particular mind is an emergent property (an artifact, if you prefer the term) of its specific brain – nothing more, but also nothing less. Unless the transfer of a mind retains the brain, there will be no continuity of consciousness. Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die – and be aware of the death process. Furthermore, the newly minted person/ality will start diverging from the original the moment it gains consciousness. This is an excellent way to leave a clone-like descendant, but not to become immortal.

What I just mentioned essentially takes care of all versions of mind uploading, if by uploading we mean recreation of an individual brain by physical transfer rather than a simulation of the kind envisioned in Searle’s Chinese Room. However, even if we ever attain the infinite technical and financial resources required to scan a brain/mind 1) non-destructively and 2) at a resolution that will indeed recreate the original, additional obstacles still loom.

To place a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, as well as able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.

To recreate a brain/mind in silico, whether a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain, well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.

Additionally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story The People of Sand and Slag. Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory … except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.

Let’s now discuss the possible: in situ replacement. Many people argue that replacing brain cells is not a threat to identity because we change cells rapidly and routinely during our lives — and that in fact this is imperative if we’re to remain capable of learning throughout our lifespan.

It’s true that our somatic cells recycle, each type on a slightly different timetable, but there are two prominent exceptions. The germ cells are one, which is why both genders – not just women – are progressively likelier to have children with congenital problems as they age. Our neurons are another. We’re born with as many of these as we’re ever going to have and we lose them steadily during our life. There is a tiny bit of novel neurogenesis in the olfactory system and possibly in the hippocampus, but the rest of our 100 billion microprocessors neither multiply nor divide. What changes are the neuronal processes (axons and dendrites) and their contacts with each other and with other cells (synapses).

These tiny processes make and unmake us as individuals. We are capable of learning as long as we live, though with decreasing ease and speed, because our axons and synapses are plastic as long as the neurons that generate them last. But although many functions of the brain are diffuse, they are organized in localized clusters (which can differ from person to person, sometimes radically). Removal of a large portion of a brain structure results in irreversible deficits unless it happens in very early infancy. We know this from watching people go through transient or permanent personality and ability changes after head trauma, stroke, extensive brain surgery or during the agonizing process of various neurodegenerative diseases, dementia in particular.

However, intrepid immortaleers need not give up. There’s real hope on the horizon for renewing a brain and other body parts: embryonic stem cells (ESCs, which I discussed recently). Depending on the stage of isolation, ESCs are truly totipotent – something, incidentally, not true of adult stem cells, which can only differentiate into a small set of related cell types. If neuronal precursors can be introduced to the right spot and coaxed to survive, differentiate and form synapses, we will gain the ability to extend the lifespan of a brain and its mind.

It will take an enormous amount of fine-tuning to induce ESCs to do the right thing. Each step that I casually listed in the previous sentence (localized introduction, persistence, differentiation, synaptogenesis) is still barely achievable in the lab with isolated cell cultures, let alone the brain of a living human. Primary neurons live about three weeks in the dish, even though they are fed better than most children in developing countries – and if cultured as precursors, they never attain full differentiation. The ordeals of Christopher Reeve and Stephen Hawking illustrate how hard it is to solve even “simple” problems of either grey or white brain matter.

The technical hurdles will eventually be solved. A larger obstacle is that each round of ESC replacement will have to be very slow and small-scale, to fulfill the requirement of continuous consciousness and guarantee the recreation of pre-existing neuronal and synaptic networks. As a result, renewal of large brain swaths will require such a lengthy lifespan that the replacements may never catch up. Not surprisingly, the efforts in this direction have begun with such neurodegenerative diseases as Parkinson’s, whose causes are not only well defined but also highly localized: the dopaminergic neurons in the substantia nigra.

Renewing the hippocampus or cortex of an Alzheimer’s sufferer is several orders of magnitude more complicated – and in stark contrast to the “black box” assumption of the memory chip researcher, we will need to know exactly what and where to repair. To go through the literally mind-altering feats shown in Whedon’s Dollhouse would be the brain equivalent of insect metamorphosis: it would take a very long time – and the person undergoing the procedure would resemble Terri Schiavo at best, if not the interior of a pupating larva.

Dollhouse got one fact right: if such rewiring is too extensive or too fast, the person will have no memory of their prior life, desirable or otherwise. But as is typical in Hollywood science (an oxymoron, but we’ll let it stand), it got a more crucial fact wrong: such a person is unlikely to function like a fully aware human or even a physically well-coordinated one for a significant length of time – because her brain pathways will need to be validated by physical and mental feedback before they stabilize. Many people never recover full physical or mental capacity after prolonged periods of anesthesia. Brain replacement would rank far higher on the trauma scale.

The most common ecological, social and ethical argument against individual quasi-eternal life is that the resulting overcrowding will mean certain and unpleasant death by other means unless we are able to access extra-terrestrial resources. Also, those who visualize infinite lifespan invariably think of it in connection with themselves and those whom they like – choosing to ignore that others will also be around forever, from genocidal maniacs to cult followers, to say nothing of annoying in-laws or predatory bosses. At the same time, long lifespan will almost certainly be a requirement for long-term crewed space expeditions, although such longevity will have to be augmented by sophisticated molecular repair of somatic and germ mutations caused by cosmic radiation. So if we want eternal life, we had better first have the Elysian fields and chariots of the gods that go with it.

Images: Echo (Eliza Dushku) gets a new personality inserted in Dollhouse; any port in a storm — Spock (Leonard Nimoy) transfers his essential self to McCoy (DeForest Kelley) for safekeeping in The Wrath of Khan; the resurrected Zoe Graystone (Alessandra Torresani) gets an instant memory upgrade in Caprica; Jake Sully (Sam Worthington) checks out his conveniently empty Na’vi receptacle in Avatar.

Why I Won’t Be Taking the Joanna Russ Pledge

Thursday, June 16th, 2011

A Geology Lesson

Here, the sea strains to climb up on the land
and the wind blows dust in a single direction.
The trees bend themselves all one way
and volcanoes explode often.
Why is this? Many years back
a woman of strong purpose
passed through this section
and everything else tried to follow.

— Judy Grahn, from She Who

Between the physical death of Joanna Russ and the latest endless lists and discussions about women’s visibility and recognition in SF/F, well-meaning people have come up with the Russ Pledge. Namely, a pledge to acknowledge and promote women’s work.

As recent history has shown, Twitter notices don’t start revolutions, let alone sustain them. Even if they did, I won’t be taking the Russ pledge for the simplest of reasons. I have been implementing it for the last forty-plus years. It’s not a cute button on my lapel. It’s not a talking point in my public persona. I cannot take it off when I take off my clothes. It’s not an option. It’s an integral component of my bone marrow that has shaped my personal and professional life.

Long before her death, Russ had been marginalized for being too prickly, a prominent target of the “tone” argument. Even many women found her uncomfortable — she might annoy the Powers that Be and compromise paltry gains. As if good behavior brought acceptance to boys’ treehouses. As if she didn’t threaten the status quo by her mere existence, let alone her uncompromising stories, essays and reviews. Most people know of The Female Man and How to Suppress Women’s Writing, if only by rumor, but the rest of her opus is just as radical. If you want to have your preconceptions soothed by feel-good feminism, Russ is not your woman.

It’s not surprising that eventually she burned out (“chronic fatigue syndrome”), like most people in equivalent circumstances. She kept showcasing true aliens — women as autonomous beings with agency! — and asking questions outside the box. She kept pointing out that even if you have been “promoted” from field hand to house servant you can still be sold down the river. An uncomfortable reminder for those who keep clinging to the hope of “change from within”, the illusion that being abjectly nice to the ensconced gatekeepers and kicking the more disenfranchised below will ensure decent treatment, or even survival.

Joanna Russ paved the way for all who walk the path of real change not merely with words, but with her body. Like the women in folk ballads who got buried alive so that bridges would stand, she deserves more than pious twitterings now that she’s safely dead. I recognize the good intentions of those who promote this pledge in her name. But enough already with “mistress lists” and their ilk. If people want to really do something, I suggest (and mind you, this is a suggestion, not the forcible penectomy some obviously consider it to be) that they read women’s books. Publish them for real money, as in pro-rate presses – pathetic as pro rates are, these days. Review them in major outlets. Nominate them for prestigious awards. Teach them in courses. Hire women as editors, columnists and reviewers (not slush readers or gofers) in major venues, in more than token numbers.

Unconscious bias is a well-documented phenomenon and is alive and thriving even (especially) in self-labeled “progressive” communities. Women have shown up the arguments for intrinsic inferiority by winning open chairs in orchestras when performing behind curtains and winning major literary awards when hiding behind pseudonyms. But this does not change the dominant culture. And it does not make up for the oceans of creative talent that got lost — suppressed or squandered in anger or hopelessness.

I will let Russ herself have the last word. It’s striking how ageless she remains:

“Leaning her silly, beautiful, drunken head on my shoulder, she said, “Oh, Esther, I don’t want to be a feminist. I don’t enjoy it. It’s no fun.”

“I know,” I said. “I don’t either.” People think you decide to be a “radical,” for God’s sake, like deciding to be a librarian or a ship’s chandler. You “make up your mind,” you “commit yourself” (sounds like a mental hospital, doesn’t it?).

I said Don’t worry, we could be buried together and have engraved on our tombstone the awful truth, which some day somebody will understand:

WE WUZ PUSHED.”

from On Strike Against God

Miranda Wrongs: Reading Too Much into the Genome

Friday, June 10th, 2011

Introductory note: My micro-bio in several venues reads “Scientist by day, writer by night.” It occurred to me that for lo these many years I have discussed science, scientific thinking and process, space exploration, social issues, history and culture, books, films and games… but I have never told my readers exactly what I do during the day.

What I do during the day (and a good part of the night) is investigate a process called alternative splicing and its repercussions on brain function. I will unpack this in future posts (interspersed among the usual musings on other topics), and I hope that this excursion may give a glimpse of how complex biology is across scales.

To start us off, I reprint an article commissioned by R. U. Sirius that first appeared in H+ Magazine in April 2010. An academic variant of this article appeared in Politics and the Life Sciences in response to Mark Walker’s “Genetic Virtue” proposal.

“We meant it for the best.” – Dr. Caron speaking of the Miranda settlers, in Whedon’s Serenity

When the sequence of the human genome was declared essentially complete in 2003, all biologists (except perhaps Craig Venter) heaved a sigh of gladness that the data were all on one website, publicly available, well-annotated and carefully cross-linked. Some may have hoisted a glass of champagne. Then they went back to their benches. They knew, if nobody else did, that the work was just beginning. Having the sequence was the equivalent of sounding out the text of an alphabet whose meaning was still undeciphered. For the linguistically inclined, think of Etruscan.

The media, with a few laudable exceptions, touted this as “we now know how genes work” and many science fiction authors duly incorporated it into their opuses. So did people with plans for improving humanity. Namely, there are initiatives that seriously propose that such attributes as virtue, intelligence, specific physical and mental abilities or, for that matter, a “happy personality” can (and should) be tweaked by selection in utero or engineering of the genes that determine these traits. The usual parties put forth the predictable pro and con arguments, and many articles get published in journals, magazines and blogs.

This is excellent for the career prospects and bank accounts of philosophers, political scientists, biotech entrepreneurs, politicians and would-be prophets. However, biologists know that all this is a parlor game equivalent to determining the number of angels dancing on the head of a pin. The reason for this is simple: there are no genes for virtue, intelligence, happiness or any complex behavioral trait. This becomes obvious from the number of human genes: the final count hovers around 20-25,000, less than twice as many as the number in worms and flies. It’s also obvious from the fact that cloned animals don’t look and act like their prototypes, Cc being the most famous example.

Genes encode catalytic, structural and regulatory proteins and RNAs. They do not encode the nervous system; even less do they encode complex behavior. At the level of the organism, they code for susceptibilities and tendencies — that is, with a few important exceptions, they are probabilistic rather than deterministic. And although many diseases develop from malfunctions of single genes, this does not indicate that single genes are responsible for any complex attribute. Instead they’re the equivalent of screws or belts, whose loss can stop a car but whose presence cannot make it run.

No reputable biologist suggests that genes are not decisively involved in outcomes. But the constant harping on trait heritability “in spite of environment” is a straw man. Its main prop, the twin studies, is far less robust than commonly presented — especially when we take into account that identical twins often know each other before separation and, even when adopted, are likely to grow up in very similar environments (to say nothing of the data cherry-picking for publication). The nature/nurture debate has been largely resolved by the gene/environment (GxE) interplay model, a non-reductive approximation closer to reality. Genes never work in isolation but as complex, intricately regulated cooperative networks and they are in constant, dynamic dialogue with the environment — from diet to natal language. That is why second-generation immigrants invariably display the body morphology and disease susceptibilities of their adopted culture, although they have inherited the genes of their natal one.

Furthermore, there’s significant redundancy in the genome. Knockouts of even important single genes in model organisms often have practically no phenotype (or a very subtle one) because related genes take up the slack. The “selfish gene” concept as presented by reductionists of all stripes is arrant nonsense. To stay with the car analogy, it’s the equivalent of a single screw rotating in vacuum by itself. It doth not even a cart make, let alone the universe-spanning starship that is our brain/mind.

About half of our genes contribute directly to brain function; the rest do so indirectly, since brain function depends crucially on signal processing and body feedback. This makes the brain/mind a bona fide complex (though knowable) system. This attribute underlines the intrinsic infeasibility of instilling virtue, intelligence or good taste in clothes by changing single genes. If genetic programs were as fixed, simple and one-to-one mapped as reductionists would like, we would have answered most questions about brain function within months after reading the human genome. As a pertinent example, studies indicate that the six extended genomic regions that were defined by SNP (single nucleotide polymorphism) analysis to contribute the most to IQ — itself a population-sorting tool rather than a real indicator of intelligence — influence IQ by a paltry 1%.

The attempts to map complex behaviors for the convenience and justification of social policies began as soon as societies stratified. To list a few recent examples, in the last decades we’ve had the false XYY “aggression” connection, the issue of gay men’s hypothalamus size, and the sloppy and dangerous (but incredibly lucrative) generalizations about brain serotonin and “nurturing” genes. Traditional breeding experiments (cattle, horses, cats, dogs, royal families) have an in-built functional test: the progeny selected in this fashion must be robust enough to be born, survive and reproduce. In the cases where these criteria were flouted, we got such results as vision and hearing impairments (Persian and Siamese cats), mental instability (several dog breeds), physical fragility and Alexei Romanov.

I will leave aside the enormous and still largely unmet technical challenge of such implementation, which is light years distant from casual notes that airily prescribe “just add tetracycline to the inducible vector that carries your gene” or “inject artificial chromosomes or siRNAs.” I play with all these beasties in the lab, and can barely get them to behave in homogeneous cell lines. Because most cognitive problems arise not from huge genomic errors but from small shifts in ratios of “wild-type” (non-mutated) proteins which affect brain architecture before or after birth, approximate engineering solutions will be death sentences. Moreover, the proposals usually advocate that such changes be done in somatic cells, not the germline (which would make them permanent). This means intervention during fetal development or even later — a far more difficult undertaking than germline alteration. The individual fine-tuning required for this in turn brings up differential resource access (and no, I don’t believe that nanotech will give us unlimited resources).

Let’s now discuss the improvement touted by “enhancement” of any complex trait. All organisms are jury-rigged across scales: that is, the decisive criterion for an adaptive change (from a hemoglobin variant to a hip-bone angle) is function, rather than elegance. Many details are accidental outcomes of an initial chance configuration — the literally inverted organization of the vertebrate eye is a prime example. Optimality is entirely context-dependent. If an organism or function is perfected for one set of circumstances, it immediately becomes suboptimal for all others. That is the reason why gene alleles for cystic fibrosis and sickle cell anemia persisted: they conferred heterozygotic resistance to cholera and malaria, respectively. Even if it were possible to instill virtue or musicality (or even the inclination for them), fixing them would decrease individual and collective fitness. Furthermore, the desired state for all complex behaviors is fluid and relative.
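
To put rough numbers on the sickle cell case, here is the textbook balanced-polymorphism calculation (a sketch with illustrative fitness values, not data from any study cited here). Suppose normal homozygotes lose a fraction s of their fitness to malaria and sickle homozygotes lose a fraction t to anemia, while heterozygotes escape both. Selection then holds the sickle allele at the equilibrium frequency

\[ \hat{q} = \frac{s}{s + t} \]

With, say, s = 0.1 and t = 0.8, q̂ ≈ 0.11, in the neighborhood of sickle allele frequencies actually observed where malaria is endemic. Remove the malaria and the very same allele becomes a pure liability: context-dependence in action.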

The concept that pressing the button of a single gene can change any complex behavior is entirely unsupported by biological evidence at any scale: molecular, cellular, organismic. Because interactions between gene products are complex, dynamic and give rise to pleiotropic effects, such intervention can cause significant harm even if implemented with full knowledge of genomic interactions (which at this point is not even partially available). It is far more feasible to correct an error than to “enhance” an already functioning brain. Furthermore, unlike a car or a computer, brain hardware and software are inextricably intertwined and cannot be decoupled or deactivated during modification.

If such a scenario is optional, it will introduce extreme de facto or de jure inequalities. If it is mandatory, beyond the obvious fact that it will require massive coercion, it will also result in the equivalent of monocultures, which is the surest way to extinction regardless of how resourceful or dominant a species is. And no matter how benevolent the motives of the proponents of such schemes are, all utopian implementations, without exception, degenerate into slaughterhouses and concentration camps.

The proposals to augment “virtue” or “intelligence” fall solidly into the linear progress model advanced by monotheistic religions, which takes for granted that humans are in a fallen state and need to achieve an idealized perfection. For the religiously orthodox, this exemplar is a god; for the transhumanists, it’s often a post-singularity AI. In reality, humans are a work in continuous evolution both biologically and culturally and will almost certainly become extinct if they enter any type of stasis, no matter how “perfect.”

But higher level arguments aside, the foundation stone of all such discussions remains unaltered and unalterable: any proposal to modulate complex traits by changing single genes is like preparing a Mars expedition based on the Ptolemaic view of the universe.

Images: The “in-valid” (non-enhanced) protagonist of GATTACA; M. C. Escher, Drawing Hands; Musical Genes cartoon by Polyp; Rainbow nuzzles her clone Cc (“carbon copy”) at Texas A&M University.

What’s Sex Got to Do with It?

Tuesday, May 17th, 2011

(sung to Tina Turner’s à propos catchy tune)

Two events unfolded simultaneously in the last few days: Arnold Schwarzenegger’s admission that he fathered a “love child” with a household servant and Dominique Strauss-Kahn’s arrest for attempting to rape a hotel maid. Before that, we had the almost weekly litany of celebrity/tycoon/politician/figurehead caught with barely-of-age girl(s)/boy(s). In a sadly familiar refrain, an ostensibly liberal commentator said:

“…we know that powerful men do stupid, self-destructive things for sexual reasons every single day. If we’re looking for a science-based explanation, it probably has more to do with evolutionarily induced alpha-male reproductive mandates than any rational weighing of pros and cons.”

Now I hate to break it to self-labeled liberal men, but neither love nor sex has anything to do with sexual coercion, and Kanazawa-style Tarzanism trying to pass for “evolutionary science” won’t cut it. Everyone with a functioning frontal cortex knows by now that rape is totally decoupled from reproduction. The term “love child”, repeated ad nauseam by the media, is obscene in this context.

Leaving love aside, such encounters are not about sex either. For one, coerced sex is always lousy; for another, no reproductive mandate is involved, as the gang rapes of invading armies show. What such encounters are about, of course, is entitlement, power and control: the prerogative of men in privileged positions to use others (women in particular) as toilet paper with no consequences to themselves short of the indulgent “He’s such a ladies’ man…” and its extension: “This was a trap. Such men don’t need to rape. Women fling themselves in droves at alpha males!”

As I keep having to point out, there are no biological alpha males in humans, no matter what Evo-Psycho prophet-wannabes preach under the false mantra of “Real science is not PC, let the chips fall where they may”. Gorillas have them. Baboons have them, with variations between subgroups. Our closest relatives, bonobos and chimpanzees, don’t. What they have are shifting power alliances for both genders (differing in detail in each species). They also have maternally-based status, because paternity is not defined and females choose their partners. Humans have so-called “alpha males” only culturally, and only since hoarding of surplus goods made pyramidal societies possible.

The repercussions of such behavior highlight another point. Men of this type basically tell the world “I dare you to stop my incredibly important work to listen to the grievances of a thrall. What is the life and reputation of a minimum-wage African immigrant woman compared to the mighty deeds I (think I can) perform?” Those who argue that the personal should be separate from the political choose to ignore the fact that the mindset that deems a maid part of the furniture thinks the same of most of humanity — Larry Summers is a perfect example of this. In fact, you can predict how a man will behave in just about any situation once you see how he treats his female partner. This makes the treatment of nations by the IMF and its ilk much less mysterious, if no less destructive.

Contrary to the wet dreams of dorks aspiring to “alpha malehood”, women generally will only interact with such specimens under duress. They’re far more popular with men who (like to) think that being a real man means “to crush your enemies, see them driven before you, and to hear the lamentation of their women.” Civilization came into existence and has precariously survived in spite of such men, not because of them. If we hope to ever truly thrive, we will have to eradicate cultural alpha-malehood as thoroughly as we did smallpox — and figure out how we can inculcate snachismo as the default behavioral model instead.

Images: Top, Malcolm McDowell as Caligula in the 1979 eponymous film; bottom, Biotest’s “Alpha Male” pills.

The Quantum Choice: You Can Have Either Sex or Immortality

Tuesday, March 29th, 2011

Note: A long-term study from Northwestern University (not yet in PubMed) has linked participation of young adults in religious activities to obesity in later life. Overhanging waistlines in First World societies undoubtedly contribute to the degenerative illnesses of lengthened lifespan. But it’s important to keep in mind that fat fulfills critical functions. This article, which looks at the other side of the coin, was commissioned by R. U. Sirius of Mondo 2000 fame and first appeared in H+ Magazine in September 2009.

Because of the four-plus centuries of Ottoman occupation, the folklore of all Balkan nations shares a Trickster figure named Hodja (based on the 13th century Konyan Sufi wandering philosopher Nasreddin). In one of the countless stories involving him, Hodja has a donkey that’s very useful in carting firewood, water, etc.  The problem is that the donkey eats expensive hay. So Hodja starts decreasing the amount of hay he feeds the donkey. The donkey stolidly continues doing the chores and Hodja, encouraged by the results, further decreases the feed until it’s down to nothing. The donkey continues for a few days, then keels over. Hodja grumbles, “Damnable beast! Just when I had him trained!”

Whenever I hear about longevity by caloric restriction, I immediately think of this story.

But to turn to real science, what is the basis for caloric restriction as a method of prolonging life? The answer is: not humans. The basis is that it appears (emphasis on the appears) that feeding several organisms, including mice and rhesus monkeys, near-starvation diets roughly doubles their lifespan. Ergo, reasons your average hopeful transhumanist, the same could happen to me if only I had the discipline and time to do the same – plus the money, of course, for all the supplements and vitamins that such a regime absolutely requires, to say nothing of the expense of such boutique items as digital balances.

I will say a few words first about such beasties as flies (Drosophila melanogaster) and worms (Caenorhabditis elegans) before I climb the evolutionary ladder. Many organisms in other branches of the evolutionary tree have two “quantum” modes: survival or reproduction. For example, many invertebrates are programmed to die immediately after reproduction, occasionally becoming food for their progeny. In some cases, their digestive tracts literally disintegrate after they release their fertilized eggs. Conversely, feeding an infertile worker bee royal jelly turns her into a fully functioning queen. The general principle behind caloric restriction is that it essentially turns the organism’s switch from reproductive to survival mode.

Most vertebrates from reptiles onward face a less stark choice. Because either or both parents are required to lavish care on offspring, vertebrate reproduction is not an automatic death sentence. So let’s segue to humans. Due to their unique birth details, human children literally require the vaunted village to raise them — parents, grandparents, first degree relatives, the lot. At the same time, it doesn’t take scientific research to notice that when calories and/or body fat fall below a certain minimum, girls and women stop ovulating. It also takes just living in a context of famine, whether chosen or enforced, to notice the effects of starvation on people, from lethargy and fatigue to wasted muscles, brittle bones and immune system suppression, crowned with irritability, depression, cognitive impairment and overall diminished social affect.

Ah, says the sophisticated caloric restriction advocate, but much of this comes from imbalances in the diet – missing vitamins, minerals, etc. Well, yes and no. Let me give a few examples.

All vitamins except B and C are lipid-soluble. If we don’t have enough fat, our body can’t absorb them. So the excess ends up in odd places where it may in fact be toxic – hence the orange carotenoid-induced tint that is a common telltale sign of many caloric restriction devotees. Furthermore, if we have inadequate body fat, not only are we infertile, infection-prone and slow to heal due to lack of necessary hormones and cholesterol; our homeostatic mechanisms (such as temperature regulation) also flag. And because caloric restriction forces the body to use up muscle protein and leaches bones of minerals, practitioners can end up with weakened hearts and bone fractures.

Speaking of fat, the brain has no energy reserves. It runs exclusively on glucose. When starved of glucose, it starts doing odd things, including the release of stress chemicals. This, in turn, can induce anything from false euphoria to hallucinations. This phenomenon is well known from anorexics and diabetics entering hypoglycemia, but also from shamans, desert prophets and members of cultures that undertook vision quests, which invariably included prolonged fasting.  So caloric restriction may make its practitioners feel euphoric. But just as people feel they have comprehended the universe while under the influence of psychoactive drugs, so does this practice impair judgment and related executive functions.

So what about those glowing reports which purport to have demonstrated that caloric restriction doubles the lifespans of mice and rhesus monkeys, as well as giving them glossy pelts? Surely we can put up with a bit of mental confusion, even failing erections, in exchange for a longer life, as long as it’s of high quality – otherwise we’ll end up like poor Tithonus, who was granted immortality but not youth and dwindled into a shriveled husk before the gods in their whimsical mercy turned him into a cicada. And it does seem that caloric restriction decreases such banes of extended human lifespan as diabetes and atherosclerosis. Well, there’s something interesting going on, all right, but not what people (like to) think.

In biology, details are crucial and mice are not humans. In Eldorado Desperadoes: Of Mice and Men, I explained at length why non-human studies are proof of principle at best, irrelevant at worst. Laboratory mice and monkeys are bred to reproduce early and rapidly. They’re fed rich diets and lead inactive lives – the equivalent of couch potatoes. The caloric restriction studies have essentially returned the animals to the normal levels of nutrition that they would attain in the wild. Indeed, caloric restriction of wild mice does not extend their lives and when caloric levels fall below about 50%, both lab and wild mice promptly keel over, like Hodja’s donkey. In the rhesus studies, lifespans appeared extended only when the investigators counted a subset of the deaths in the animal group they tested.

On the molecular level, much attention has been paid to sirtuin activators, resveratrol chief among them. Sirtuins are a class of proteins that regulate several cell processes, including aspects of DNA repair, cell cycle and metabolism. This means they’re de facto pleiotropic, which should give would-be life extenders pause. As for resveratrol, it doesn’t even extend life in mice – so the longer lives of the red-wine loving French result from other causes, almost certainly including their less sedentary habits and their universal and sane health coverage. That won’t stop ambitious entrepreneurs from setting up startups that test sirtuin activators and their ilk, but I predict they will be as effective as leptin and its relatives were for non-genetic obesity.

This brings to mind the important and often overlooked fact that genes and phenotypes never act in isolation. An allele or behavior that is beneficial in one context becomes deleterious in another. When longer-lived mutants and wild-type equivalents are placed in different environments, all longevity mutations result in adaptive disadvantages (some obvious, some subtle) that make the mutant strain disappear within a few generations regardless of the environment specifics.

Similarly, caloric restriction in an upper-middle class context in the US may be possible, if unpleasant. But it’s a death sentence for a subsistence farmer in Bangladesh who may need to build up and retain her weight in anticipation of a famine. For women in particular, who are prone to both anorexia and osteoporosis, caloric restriction is dangerous – hovering as it does near keeling-over territory. As for isolated, inbred groups that have more than their share of centenarians, their genes are far more responsible for their lifespan than their diet is. So is the fact that they invariably lead lives of moderate but sustained physical activity surrounded by extended families, as long as they are relatively dominant within their family and community.

Human lifespan has already nearly tripled, courtesy of vaccines, antibiotics, clean water and use of soap during childbirth. It is unlikely that we will be able to extend it much further. Extrapolations indicate that caloric restriction will not lengthen our lives by more than 3% (about two and a half years of an eighty-year lifespan, a pitiful return for such herculean efforts) and that we can get the same result from reasonable eating habits combined with exercise. Recent, careful studies have established that moderately overweight people are the longest-lived, whereas extra-lean people live as long as do obese ones.

So what can you really do to extend your life? Well, as is the case with many other quality-of-life attributes, you should choose your parents carefully. Favorable alleles of the genes that influence susceptibility to degenerative age-related diseases (diabetes, heart disease, hypertension, dementia) are a great help — as is high income in a developed country with first-rate medical services, which will ensure excellent lifelong nutrition and enough leisure time and/or devoted underlings to make it possible to attend to suchlike things.

Blastocysts Feel No Pain

Monday, March 14th, 2011

In 2010, the recipient of the Medicine Nobel was Robert Edwards, who perfected in vitro fertilization (IVF) techniques for human eggs in partnership with Patrick Steptoe. Their efforts culminated in the birth of Louise Brown in 1978, followed by several million such births since. The choice was somewhat peculiar, because this was an important technical advance but not an increase in basic understanding (which also highlights the oddity of not having a Nobel in Biology). That said, the gap between the achievement and its recognition was unusually long. This has been true of others who defied some kind of orthodoxy – Barbara McClintock is a poster case.

In Edwards’ case, the orthodoxy barrier was conventional. Namely, IVF separates sex from procreation as decisively as contraception does. Whereas contraception allows sex without procreation (as do masturbation and most lovemaking permutations), IVF allows conception minus orgasms and also decouples ejaculation from fatherhood. Sure enough, a Vatican representative voiced his institution’s categorical disapproval of this particular bestowal. However, IVF has detractors even among the non-rabidly religious. The major reason is its residue: unused blastocysts, which are routinely discarded unless they’re used as a source for embryonic stem cells.

Around the same time that Edwards received the Nobel, US opponents of embryonic stem cell research filed a lawsuit contending that this “so far fruitless” research siphoned off funds from “productive” adult stem cell research. The judge in the case handed down a decision that amounted to a ban of all embryonic stem cell work and the case has been a legal and political football ever since. The brouhaha has highlighted two questions: what good are stem cells? And what is the standing of blastocysts?

Let me get the latter out of the way first. Since IVF blastocysts are eventually discarded if not used, most dilemmas associated with them reek with hypocrisy and the transparent desire to curtail women’s autonomy. A 5-day blastocyst consists of about 200 cells arising from a zygote that has not yet implanted. If it implants, 50 of these eventually become the embryo; the rest turn into the placenta. A blastocyst is a potential human as much as an acorn is a potential oak – perhaps even less, given how much it needs to attain viability. Equally importantly, blastocysts don’t feel pain. For that you need a nervous system that can process sensory input. In humans, this happens roughly near the end of the second trimester – which is one reason why extremely premature babies often have severe neurological defects.

This won’t change the mind of anyone who believes that a zygote is “ensouled” at conception, but if we continue along this axis (very similar to much punitive fundamentalist reasoning) we will end up declaring miscarriage a crime. This is precisely what several US state legislatures are currently attempting to do, with the “Protect Life Act” riding pillion, bringing us squarely into Handmaid’s Tale territory. It is well known by now that something like forty percent of all conceptions end in early miscarriages, many of them unnoticed or noticed only as heavier than usual monthly bleeding. A miscarriage almost invariably means there is something seriously wrong with the embryo or the embryo/placenta interaction. Forcing such pregnancies to continue would result in a significant increase in deaths and permanent disabilities of both women and children.

The “instant ensoulment” stance is equivalent to the theories that postulated a fully formed homunculus inside each sperm and deemed women passive yet culpable vessels. It is also noteworthy that the concern of compulsory-pregnancy advocates stops at the moment of birth. Across eras, girls have been routinely killed at all ages by exposure, starvation, poisoning, beatings; boys suffered this fate only if they were badly deformed in cultures or castes that demanded physical perfection.

Let’s now focus on the scientific side. By definition, stem cells must have the capacity to propagate indefinitely in an undifferentiated state and the potential to become most cell types (pluripotent). Only embryonic stem cells (ESCs) have these attributes. Somatic adult stem cells (ASCs), usually derived from skin or bone marrow, are few, cannot divide indefinitely and can only differentiate into subtypes of their original cellular family (multipotent). In particular, it’s virtually impossible to turn them into neurons, a crucial requirement if we are to face the steadily growing specter of neurodegenerative diseases and brain or spinal cord damage from accidents and strokes.

Biologists have discovered yet another way to create quasi-ESCs: reprogrammed adult cells, aka induced pluripotent cells (iPS). However, it comes as no surprise that iPS have recently been found to harbor far larger numbers of mutations than ESCs. To generate iPS, you need to jangle differentiated cells into de-differentiating and resuming division. The chemical path is brute-force – think chemotherapy for cells and you get an inkling. The alternative is to introduce an activated oncogene, usually via a viral vector. By definition, oncogenes promote cell division which raises the very real prospect of tumors. Too, viral vectors introduce a host of uncontrolled variables that have so far precluded fine control.

ESCs are not tampered with in this fashion, although long-term propagation can cause epi/genetic changes on its own. Additionally, recent advances have allowed researchers to dispense with mouse feeder cells for culturing ESCs. These carried the danger of transmitting undesirable entities, from inappropriate transcription factors to viruses. On the other hand, ASC grafts from one’s own tissues are less likely to be rejected (though xeno-ASCs are even likelier than ESCs to be tagged as foreign and destroyed by the recipient’s immune system).

Studies of all three kinds of stem cells have helped us decipher mechanisms of both development and disease. This research allowed us to discover how to enable cells to remain undifferentiated and how to coax them toward a desired differentiation path. Stem cells can also be used to test drugs (human lines are better indicators of outcomes than mice) and eventually generate tissue for cell-based therapies of birth defects, Alzheimer’s, Parkinson’s, Huntington’s, ALS, spinal cord injury, stroke, diabetes, heart disease, cancer, burns, arthritis… the list is long. Cell-based therapies have advantages over “naked” gene delivery, because genes delivered in cells retain the regulatory signals and larger epi/genetic contexts crucial for long-term survival, integration and function.

People argue that ASCs (particularly the hematopoietic precursors used in bone marrow transplants) have been far more useful than ESCs, whose use is still potential. However, they usually fail to note that ASCs have been in clinical use since the late fifties, whereas human ESCs were first isolated in 1998 by James Thomson’s group in Wisconsin. Add to that the various politically or religiously motivated embargoes, and it’s a wonder that our understanding of ESCs has advanced as much as it has.

Despite fulminations to the contrary, women never make reproductive decisions lightly, since the repercussions are irreversible and lifelong, and often determine their fate. Becoming a human is a process that is incomplete even at birth, since most brain wiring happens postnatally. Demagoguery may be useful to lawyers, politicians and control-obsessed fanatics. But in the end, two things are true: actual humans are (should be) much more important than potential ones – and this includes women, not just the children they bear and rear; and embryonic stem cells, because of their unique properties, may be the only path to alleviating enormous amounts of suffering for actual humans.

Best FAQ source: NIH stem cell page

“As Weak as Women’s Magic”

Tuesday, March 1st, 2011

Ursula Le Guin is one of the speculative fiction authors I respect and admire. Her imagination seems endless, her capacity enormous. I’ve read her novels, her stories, her essays — even her poems. I’ve written reviews of her books and she’s the SF/F author I most often reference and hold up as an example in my essays.

However, some of her stances make me uneasy as a woman, a feminist and a scientist. One of them is her insistence that magic must be gender-specific. When she recently reiterated this credo (in the context of ridiculing the idea of a female Prospero or Lear), my views on this aspect of her outlook crystallized.

The result of these reflections just appeared in Crossed Genres: “As Weak as Women’s Magic”.

Image: Cover of The Tombs of Atuan: Yvonne Gilbert/Gail Garraty.

Science Fiction, Science and Society

Monday, February 21st, 2011

by Laura J. Mixon; originally posted on Feral Sapient.

Athena’s note: In September of 2008 I participated in the Viable Paradise workshop. The experience was lukewarm at best. Laura J. Mixon was the major exception: she gave me the sole critique of my submission that I value, and she has since become a friend.

Laura is a working scientist who writes space opera of the highest quality. Her works brim with those beasties presumably lacking in women’s writing: ideas that are imaginative yet grounded. But unlike the “authors of ideas” who cannot write their way out of a wet paper bag, Laura also has writing chops. Her plots are intricate, her characters vivid, her worldbuilding meticulous.

Next month Tor is bringing out Laura’s new space opera, Up Against It. I was lucky enough to see the novel in draft (and opine about it at length, as is my wont when works engage me). It’s a great story and, like all of Laura’s works, it pushes all kinds of boundaries. My one complaint is that Laura’s publishers gave her a gender-obscuring pen-name for her new work. I understand the need to sell books but such practices obscure the large number of women who yes, Virginia, do write hard SF.

My publicist put together some questions for me to answer to promote the upcoming release of Up Against It, and some of my writeup ended up on the cutting room floor. I wanted to share those thoughts here.

The driving force in the book is a resource crisis. Abruptly and unexpectedly, the Phocaeans lose nearly all their energy, water, and clean air. Their own little ecosystem — Kukuyoshi, the arboretum that meanders through their living space — is under threat. They are utterly dependent on technology: the nanomachines that produce their clean air and water, their computer systems and robotics. And now, suddenly, all that is endangered. And they only have this narrow window of opportunity to act.

If you squint at it at the right angle, isn’t that similar to what we are facing here and now on planet Earth? Are we not dependent on our technology to survive? What would we do if suddenly we had no fuel for our cars and our buildings, and to produce our food and clothes and medicines? What if our own air and water were turning to poison? Do we not also have only this narrow opportunity to act?

When I was seven, I first looked through the business end of a telescope. A father of a friend of mine took us out to a park one night for some star- and planet-gazing. He showed us Saturn, Mars, Venus, and the moon, along with a couple of nebulae and galaxies (the latter of which I found rather boring: they were very faint and hard to see. Thank you, Hubble, for transforming that faint trickle of photons from the cosmos into all those amazing images for us to gape at).

I remember looking at Saturn and then Mars, first through the scope and then with my unaugmented gaze, over and over. Saturn had rings! Mars had ice caps! I had this thrill of joy and awe as the reality settled in. Those little specks of light were other worlds, and science was putting them within reach.

I had a similar reaction to the Apollo 11 landing in 1969. I was twelve. That day my family raced down to Alamogordo to watch with our cousins as Neil Armstrong hopped down off the Eagle’s ladder and said his famous words: one small step. It felt as if the whole world was holding its breath in that instant before his boots touched the dust. The scene is so familiar now that it’s hard to describe just how important it was. Here we all were, many millions of people from all over the world, watching transfixed as these two guys hopped around. In spacesuits! On another world! In sixty-six years, we had gone from being land-bound, to the first powered air flight, to leaving our planet’s atmosphere and landing on its moon. Holy apes in space, Batman!

Our ability to think things through, figure things out, and make things go has enabled us to launch ourselves far, far beyond our origins. Ten thousand years ago, there were about ten million of us, living in small clans dotted around the globe. Now the world supports seven billion souls. Technology has saved countless lives. It brings us fulfillment on so many levels, enabling, quite literally, a real-time conversation between people on different continents, in different cultures, who even a hundred years ago would not have been able to communicate.

And our advances in medicine are nothing short of miraculous. At the turn of the 20th century in the U.S., nearly one in 100 women died in childbirth, and one in ten infants died. Stop and think for a moment about that. My own grandmother lost her mother to childbirth. My mother required emergency abdominal surgery in her sixth month of pregnancy with me. Fifty years earlier, she and I both would almost certainly have died.
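
To see what a one-in-100 per-birth risk meant over a lifetime, the compounding arithmetic is simple (a toy Python calculation that assumes, as a simplification, the risk applied independently to each delivery):

    # Lifetime risk of dying in childbirth at ~1% per delivery, assuming
    # (simplistically) independent risk across deliveries.
    per_birth = 0.01
    for births in (1, 3, 5, 8):
        lifetime = 1 - (1 - per_birth) ** births
        print(f"{births} births: ~{lifetime:.1%} cumulative risk")
    # With the large families of that era, a woman's cumulative odds ran
    # to several percent -- not a remote statistic at all.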

In the fourteenth century, one third of the population of Europe died of bubonic plague. Today, bubonic plague is exceedingly rare, and of those who contract it, over nine in ten who receive treatment survive. In 1918, the Spanish flu killed somewhere between 50 and 100 million people. There are still risks of deadly new viral mutations, such as a new avian or swine flu, which could cause widespread serious illness and death. But unlike our ancestors, we have an international network that tracks flu mutations and prepares vaccines and other measures to protect us. Odds of survival are better for most of us than they ever were before.

In short, I feel very blessed to live in this time and place. And yet, we have paid a high price for that success. Consider how time and again our technology has threatened rather than rescued us.

Sixteen million people died in World War I. More than sixty million in World War II. We saw the horrors of Hiroshima and Nagasaki. We can’t help but wonder in the back of our minds how easy it would be for someone somewhere to trigger a nuclear war that would kill many millions more. We have watched Challenger explode, the twin towers go down, New Orleans drown, and Port-au-Prince collapse into rubble. These tech failures, and tech abuses, have caused so much anguish and harm to so many.

And we live with other fears as well, fears large and small: of terrorist attacks and genetically modified foods; of global warming, famine, and super-bugs like MRSA; of mass extinctions; contaminants in our food and water supply; pedophiles stalking our kids on the internet. The list goes on and on. And technology certainly hasn’t solved the problems of uneven distribution of the resources we take from the Earth. Many, many children go to bed hungry every night. It isn’t right.

As I write this, the world’s sixth great extinction event is well underway and human growth and consumption are the cause. Three entire species die off each hour—irrevocably lost. We have loaded the atmosphere with sufficient carbon to continue heating the world for another thirty to forty years, even if we stopped burning fossil fuels tomorrow. We humans are, bluntly, racing toward a cliff edge and seem unable to find a way to put on the brakes. If we don’t do it soon, nature will do it for us.

I became an environmental engineer because I wanted to put my own passion for science and technology to work to help steer us toward that better future, in some small way. For the same reason, I wanted to write novels that cast light on some of these issues in a more personal way. I want readers to feel the same love for Phocaea and its people that I do for this world, and on some deep personal level, to have hope that if we work hard enough and work together, we might find a way through our own twin crises of resource loss and technology out of control.

To find a way through to a solution, one first has to be able to imagine it. One of the things I love most about science fiction is how it permits us to imagine what might be.

The House with Many Doors (or: At the Caucasus, Hang a Right!)

Tuesday, January 18th, 2011

When people think of fiction that depicts human prehistory, Jean Auel’s Cave Bear books invariably poke up their woolly heads.  The SF-learned may also recall William Golding’s The Inheritors and two Poul Anderson stories dealing with Cro-Magnons; the literati may be aware of Björn Kurtén’s Dance of the Tiger.  But few have read what I deem the best entries in this group: Elizabeth Marshall Thomas’ Reindeer Moon and Animal Wife.  The riveting world she created in those novels is far closer to the truth than Auel’s sugar-coated anachronisms.

Recently I had yet another reason to think of Thomas’ works beyond their excellence as both science and fiction.  She set her universe in Siberia during a warmer spell when it was not tundra but a mixture of steppe and taiga alive with wolf packs, mammoths, herds of game animals – and the mighty Amur tigers, who leave an indelible pawprint in Animal Wife.  In the same story, her band of humans meets a band of others – different enough to awaken the fight-or-flight reflex, though not different enough to preclude progeny and with it, the tortured, conflicted love endemic in such circumstances.

Which brings us to the just-confirmed cousins in addition to the Neanderthals who walked the earth with us and mingled their genes with ours: the Denisovans.  Just like the people in Thomas’ stories, the Denisovans made their home in Siberia.  One of their homes, for they were wanderers like the rest of humanity until we were immobilized (in more ways than one) by agriculture.

The intertwined human family tree (from Nature)
[Click on the diagram to see a larger version]

Bones and artifacts can tell us much, but nucleic acids can tell us more.  Mitochondrial and Y DNA analyses have allowed us to map human migrations and group interactions, nuancing simplistic single-lineage theories.  The recent draft of the Neanderthal genome showed they still live in us, sharing 5% of the genome of non-Western Africans.  Their FOXP2 gene, linked to the fine orofacial motor control that speech requires, was identical to ours – and many were red-haired, making Jean-Jacques Annaud’s Quest for Fire eerily prescient on all these points.

Sequencing of DNA from two Denisovan bones, a tooth and a finger joint, showed they belonged to the Neanderthal clan and branched about 500,000 years ago, after an early exodus from Africa that predated that of Sapiens by nearly one million years.  Somewhere in Eurasia (probably around the Caucasus range which forms the first large obstacle), these early wanderers split into two streams.  The Neanderthals went west, the Denisovans headed east.  And like the Amur tigers, they roamed wide and were still around when Sapiens bands in their turn migrated east.  We know this because 5% of Melanesian DNA is derived from Denisovan ancestors.

These findings have caused a sea change in how we see ourselves and our predecessors.  Like all scientific findings, they can be (and have been) used to advance agendas.  Some argue that the lack of Neanderthal admixture makes sub-Saharan Africans the pure human strain, others that the Neanderthal input gave Europeans hybrid vigor.  Both choose to ignore inconvenient facts.  By the ironclad criterion of inter-fertility, Neanderthals and Denisovans were fully human.  On the other side, sub-Saharan Africans exhibit as much genetic diversity as most other human groups combined.  The take-home message is: we’re all mongrels and we do best when we acknowledge and celebrate this, instead of taking refuge in fallacious superiority fantasies.

Buddhist Monks from Central Asia (fresco, Kizil cave, ~900 CE; the one on the left is a Tocharian)

This split (as well as the agendas that attempt to harness it) has a later, equally fascinating echo.  Around 5000 BCE another migration wave broke on the Caucasus, splitting in two – the Indo-Europeans.  There’s consensus on that, even if the details are still hotly debated.  Less known is how far-flung were the travels of some of the Indo-Europeans who turned east.  The outliers were the Tocharians, a Silk Road culture that occupied the Tarim basin, in what is now Xinjiang, from 2000 BCE to 1000 CE before being displaced and subsumed by the Uyghur.

For a long time, the Tocharian civilization was lost from sight as wars and the shifting sands of the Taklamakan desert destroyed them and most of their artifacts.  But they left behind items that are hard to ignore: a treasure trove of scrolls that include both texts and illustrations; several frescoes on cave walls; and the mummies of Ürümqi, preserved perfectly in the dry local climate.  The Tocharians were blue-eyed dolichocephalic redheads who wore garments of plaid wool and spoke a language whose closest relative appears to be Old Gaelic.  In short, the Tocharians were Celts and preliminary genetic analysis has confirmed the link.

Like Kennewick Man (who belonged to the Jomon people, the predecessors of the Ainu), the Ürümqi mummies have been used for politics: the Uyghur have adopted them as symbols in their struggle for independence, the Chinese have tried to suppress them by neglect and red tape in the way of scholars who want to analyze them in more detail.

Map of the Silk Road [Click on it for a larger version]

I don’t believe the presence of Celts in Central Asia threatens the achievements of those who succeeded them.  But I love to think of the strains mingling in that stark part of the world which nevertheless gave so much to human culture and acted as a thoroughfare between West and East.  And my heart is glad to contemplate that Alexander’s Roxanne, born in adjacent Sogdia, perhaps had hazel eyes and glints of auburn in her hair, a strand from a Tocharian grandparent woven into her tapestry.

Further reading:

Luigi Luca Cavalli-Sforza, Genes, Peoples, and Languages
Elizabeth Wayland Barber, The Mummies of Ürümchi
Susan Whitfield, Life along the Silk Road
Kenneth Wimmel, The Alluring Target

Related articles:

Iskander, Khan Tengri
Neanderthal Genes: The Hidden Thread in Our Tapestry
A (Mail)coat of Many Colors: The Songs of the Byzantine Border Guards

Woolen fabric from a Tarim basin mummy (~1000 BCE).

At Long Last, Have You Left No Sense of Decency?

Wednesday, January 12th, 2011

— Joseph Welch to Joseph McCarthy

The Tucson tragedy (with more like it almost certain to follow) was really only a matter of time, because the extreme right in the US has shifted the goalposts of discourse and legislation so far that a Taliban mindset passes as normal.  An amazing part of the aftermath is to hear people say “Both sides must moderate their rhetoric.”  This would be funny if it weren’t dangerous.  As for Teabagger messiahs — ignorant bigots with narcissistic personality disorder who don’t care to understand the real meaning of terms like treason, death panels and blood libel — Professor Anthea Butler said it best.  An excerpt from her essay:

“What matters is that people like Palin, Beck and others can’t take time to figure out that this time is not about them, but about those who have lost loved ones, and their incredible hubris in not owning up to their own sideshow of hate.”

Even as we speak, Republican Party functionaries in Arizona are resigning, having received death threats from Teabaggers and their ilk.  The Westboro Baptist cult intends to picket the funeral of the 9-year-old shot during the Giffords assassination attempt. The Arizona legislature, instead of banning the carrying of concealed weapons, is dithering about how many feet must separate the Westboro cultists from their targets.  And other legislators are calling for involuntary incarceration of the mentally ill, rather than address the fact that anyone can buy a semi-automatic weapon from Walmart.

I have mentioned the Weimar Republic before in such discussions.  The downspiral to fascist theocracy is accelerating.  We’ve let it go too far.  All of us are guilty of accepting increasing extremism and curtailment of civil rights.  We will all pay the price of our acquiescence, while the extremists finally get their Rupture.

Update: I urge everyone to compare Obama’s speech to Palin’s video.

Cartoons by David Horsey (top), Monte Wolverton (bottom)

Never Complain, Never Explain

Monday, December 27th, 2010

I’m the wall with the womanly swagger. – Judy Grahn, She Who

During Thanksgiving dinner at a dear friend’s house, another guest airily stated that climate change is a myth.  An environmental scientist and once a department chair in a well-known university, he now writes regularly for such venues as The Wall Street Journal using arguments along the lines of “There was radical climate change on earth way before humans came along.”  And like many self-satisfied self-promoters, he’s set on Transmit Only.

I was spoiling to kick him in a tender spot of his anatomy and it was obvious the other guests shared my wish.  But then the visit would have revolved around him.  So I looked straight at him and said, “If you really believe this, go discuss it with the people of Tuvalu, Kiribati and the Maldives, who are watching their land go under water as we speak.”  We all essentially ignored him for the rest of the visit. We had a terrific time.

I thought of this incident during the Stone Telling 2 roundtable.  Among other questions, Julia Rios, the discussion moderator, asked me, “In your own life you’ve chosen to engage in battles, and to speak out against things you perceive as unjust. How do you reconcile those choices with the costs attached to them, and is there ever a time when you choose not to engage?”

After I had answered Julia’s question, it lingered in my thoughts.  I often say (only half in jest) that humanity consists of several subspecies which happen to be interfertile.  We’re far more hardwired culturally than we’d like to think.  Progressive notions notwithstanding, facts almost never change the minds of adult humans unless something happens to affect them concretely in their health, status or wallet.  Battles against injustice, vested interests, willful ignorance, complacency, cruelty never end.  Before this relentless flood, people like me are Dutch boys with fingers in the dam.

Halfway through my fifth decade, I still haven’t learned to leave well enough alone.  By deed and by word, I’m still fighting on many fronts.  I’ll go to my grave angry, though the waters of Lethe will close over my small efforts as if I’d never been.  Yet after a lifetime of battles, I don’t have even a partial answer about how to engage effectively — especially since anger is still a heavily punished transgression for women.

Naturally, intelligent fighters adjust their strategy and tactics to fit circumstances.  Sometimes you have to be Odysseus, sometimes Alexander.  But I think there are two powerful weapons tikkun paladins need to use more often.  One is laughter.  Another is celebration.  The two share a crucial attribute: they’re not defensive; they assume legitimacy.

Accusing someone of humorlessness is a standard bludgeon used by knuckle-draggers of all persuasions.  “Can’t you take a joke?” is the constant taunt to outsiders grudgingly allowed into previously exclusionary clubs.  In all fairness, much humor relies on Us-versus-Them distinctions.  What’s amazing, though, is how fast bullies crawl off (bawling “You’re mean to me!”) when you turn humor against them.  There’s a reason why satirists and cartoonists are among the first to be arrested in dictatorships.  Of course, this tactic can get you badly hurt or killed.  Men in particular grow furious when women laugh at them.

I once was at another party where the guests were finance professionals.  The host held forth about the deep wisdom of polygamy: it’s nature’s way, our ape cousins have harems (obviously he knew zilch about either bonobos or chimpanzees) and as a dominant male himself, etc.  The other women guests were fuming, but too polite to contradict him.  Finally, I smiled at him sweetly and said, “I agree with you.”  Into the dead silence that ensued, I dropped, “Personally, I could handle at least three husbands.”  The women burst out laughing.  Several of the men spent the rest of the evening huddled in the kitchen, muttering darkly into their drinks.

The other way to short circuit reactionaries is to celebrate.  When some dim bulb bleats “Diversity brings down quality,” my reflex reaction is to verbally rip them to shreds.  However, a far more satisfying response is to issue invitations to a feast and call the endless rosters of Others who excel in a specific domain or task.  Women don’t write space operas?  Andre Norton, Ursula Le Guin, Alice Sheldon (aka James Tiptree Jr.), Joanna Russ, Joan Vinge, C. J. Cherryh, C. S. Friedman, Joan Slonczewski, Octavia Butler, Melissa Scott, Laura Mixon, Sydney van Scyoc, Kristin Landon, Elizabeth Bear, Gwyneth Jones, Liz Williams… without pausing to think and listing just those whose works I’ve read.

Celebration serves a dual purpose.  Not only does it lift the morale of those who stand arrayed against oblivious idiocy; it also prevents (ab)use of the “tone” argument.  Celebration recognizes past achievements and encourages forging ahead, rather than stooping to pick up every piece of broken furniture flung in our path.  Let god-wannabes stew in their own toxic wastes.  We have worlds to build and sustain.

Here’s the small personal roster of worlds I’m part of and want to celebrate as loudly as possible: my modest contribution to alternative splicing regulation and dementia research; Crossed Genres, Science in My Fiction, Stone Telling; my slowly emerging fictional universe, and the artists who depicted it so beautifully: Heather D. Oliver, Kathryn Bragg-Stella; and this site, now starting its fourth year, whose blog portion consistently hovers at high positions in Technorati rankings.

“Well we made a promise we swore we’d always remember,
No retreat, baby, no surrender.
Like soldiers in the winter’s night with a vow to defend,
No retreat, baby, no surrender.”

Bruce Springsteen, No Surrender

Images: 1st, Pict warrior Guinevere (Keira Knightley) in Antoine Fuqua’s King Arthur; 2nd, andártissa (resistance fighter) in WWII Greece (national historical archives); 3rd, Haldír of Lórien (Craig Parker) in Peter Jackson’s Lord of the Rings.

Yes, Virginia, Hellenes Have Christmas Traditions

Saturday, December 25th, 2010

Two decades ago, Ann Landers did a column about how various cultures celebrate Christmas.  Halfway down her list was this gem: “If you are Greek Orthodox, your sect celebrates Christmas on January 7.”  Several people wrote back that 1) the Orthodox church is not a sect – it is the original church from which the Catholic one split after the Schism of 1054 and 2) only the so-called Old Believers track Christmas by the Julian calendar.

I was reminded of this when I was leaving work two days ago, and a colleague asked, “Should I wish you Merry Christmas?  I heard you Greeks don’t celebrate it like we do.”  As readers of this blog know, I’m an atheist who misses many of my culture’s old customs, particularly those that thrum with pagan echoes. So I’m going to put my tour guide’s hat briefly on, and tell you what we Hellenes do around the time of the winter solstice.

The holiday lasts two weeks, from December 25 to January 6.  At the three punctuation points (Christmas, New Year’s, Epiphany) children make the rounds of the neighborhood houses, singing songs called kálanda.  These remain unchanged from the Byzantine era; they’re different for each of the three days and the kids sing them to the accompaniment of hand-held metal triangles – and more rarely, small bodhrán drums.  During these two weeks, people thought that mischievous spirits (kallikántzaroi) prowled the dark.  These obvious descendants of fauns and satyrs take a solstice break from trying to cut down the world tree that holds up the earth.  During the interruption the tree heals, leading to infinite annual repetitions.

People decorate their homes and start the feast preparations on Christmas Eve – and the original focus of the activities was not a pine or fir tree (a recent import from Northern Europe) but a small ship.  After all, we were seafarers even before Iáson sailed Arghó to the Sea of Azov in search of the Golden Fleece.  The main dishes vary regionally, but ham is not on the list.  Piglet, kid and lamb on the spit are, as is hen stuffed with chestnuts and raisins – turkey is too bland for Hellenic palates. The ubiquitous sweets are finger-sized melomakárona (honey macaroons) and kourabiédhes (butter almond cookies).

On December 31, families gather for the countdown, nibbling finger food – and at midnight, the ship horns can be heard from harbors and seashores, ushering in the new year.  Presents are put under the ship or tree when it is decorated but they get opened on January 1, either right after midnight strikes or in the morning.  The gifts are not brought by Santa Claus (Nicholas) who in the Hellenic hagiology is the patron saint of sailors.  Our giftbearer is Saint Basil, based on a real person: Vasílios the Great Hierarch, bishop of Caesarea in Cappadocia in the 4th century.  From a wealthy and influential family, he took time between arguments about dogma to succor the poor and needy, spending his entire inheritance on charity.

On the night of December 31, a candle is left burning next to a goblet of wine and a small plate that holds a golden coin (flourí).  On New Year’s Day the coin, presumably touched by Saint Vasílios, is baked into a rich bread pudding (vasilópita), which is later cut into named sections.  Whoever gets the coin will have an exceptionally good year.  On the same day, the youngest child of the family is the first to walk through the front door for good luck – often bearing a just-budding wild onion bulb, or cracking open a pomegranate… old, old symbols of wealth and fertility from the time when the virgins giving birth were called Isis, Astarte, Pótnia.

Epiphany, which rounds out the holiday, is also called The Lights.  On that day the priests go to each house, blessing it with a sprig of basil dipped in water.  Afterward, the priests from every coastal city, town or village throw a cross into the sea.  Young men dive to retrieve it, and whoever brings it back is blessed.  Just so did priests and priestesses of other religions also appease the oldest goddess of all – Tiamat, Thálassa – by offering her rings and other treasure instead of crosses.  The custom was retained by the Doges of Venice, the city state that owed its existence to the sea.

A few years ago Mr. Snacho and I found ourselves in Tarpon Springs, Florida, at the turn of the year.  The city was founded by sponge divers from the island of Kálymnos.  They still throw the cross into the sea.  Young men still compete for the honor of retrieving it.  And I, an exile by choice who’s often homesick for the place I left almost forty years ago, wept at the sight.

Images: 1st, Eiríni Vasileíou, cover for her The Christmas Ship; 2nd, kids singing kálanda; 3rd, photo by Mithymnaíos.

Footprints on Two Shores

Monday, December 13th, 2010

A lengthy quote from my article The Agency That Cried “Awesome!” appeared in today’s Guardian on a page that tracks the NASA debacle.  Another chunk appeared on Futurismic (thank you, Mr. Raven!).  The post itself showed up on the front page of The Huffington Post.

Also, L. Timmel Duchamp, author of the Marq’ssan Cycle and founder of Aqueduct Press, invited me to the year’s-end roundup she hosts at her blog, Ambling Along the Aqueduct. My list of books, albums and films, with commentary, appeared today.

Image: Cool Cat, Ali Spagnola

The Agency That Cried “Awesome!”

Sunday, December 12th, 2010

“Those whom the gods wish to destroy they first make mad.” – Anonymous ancient proverb

In the 1961 film The Guns of Navarone, Greek resistance fighters and Allied demolition experts set out to destroy a nest of large cannons so that a rescue convoy can go through the straits the guns overlook.  A young Greek who’s part of the mission goes after a group of Germans gunslinger-style, jeopardizing the venture.  The Germans cut him to ribbons.  When the mission members meet at their rendezvous point, his sister María (Iríni Pappás) says to his partner Andréas (Anthony Quinn, obligatory at that time whenever swarthy ethnics were required): “Tell me what happened.”  Andréas replies: “He forgot why we came.”

Last week, NASA administrators forgot why we came.  They forgot the agency’s mission, they forgot science, they forgot their responsibility to their own people and to the public.  Instead, they apparently decided that all publicity is good, as long as they don’t misspell your name.

Ever since I became fully conscious, I’ve dreamed of humanity exploring the stars.  These dreams were part of the reason I left my culture, my country, my family and came over here, determined to do research.  Every launch made my heart leap.  I wept when I saw the images sent by the Voyagers, Sojourner negotiating Martian rocks.  I kept thinking that perhaps in my lifetime we might find an unambiguous independent life sample.  Then, at long last, astrobiology would lift off and whole new scientific domains would unfurl and soar with it.

Instead of that, last week we got bacterial isolate GFAJ-1.  We got an agency which appears so desperate that it shoved experiments with inadequate controls into a high profile journal and then shouted from the rooftops that its researchers had discovered a new form of life (de facto false, even if the results of the increasingly beleaguered Science paper stand).

This is not the first or only time NASA administrators have been callously cavalier.  Yet even though the latest debacle didn’t claim lives like the Challenger incident did, it was just as damaging in every other way.  And whereas the Challenger disaster was partly instigated by pressure from the White House (Reagan needed an exclamation point for his State of the Union address), this time the hole in NASA’s credibility is entirely self-inflicted.  Something went wrong in the process, and all the gatekeeping functions failed disastrously.

Let’s investigate a major claim in the Science paper: that GFAJ-1 bacteria incorporate arsenic in their DNA, making them novel, unique, a paradigm shift.  Others have discussed the instability of the arsenate intermediates and of any resulting backbone.  Three more points are crucial:

1.  This uniqueness (not yet proved) has come about by non-stop selection pressure in the laboratory, not by intrinsic biochemistry: the parent bacterium in its normal environment uses garden-variety pathways and reverts to them as soon as the pressure is lifted.  This makes the “novel life” claim patently incorrect and the isolate no more exotic than the various metallophores and metallovores that many groups in that domain (Penny Boston, Ken Nealson) have been studying for decades.

2.  The arsenic-for-phosphorus substitution in the DNA is circumstantial at best.  The paper contained no sequencing, no autoradiography, no cesium chloride density gradients.  These are low-tech routine methods that nevertheless would give far more direct support to the authors’ claims.  Density gradients are what Meselson and Stahl used in 1958 to demonstrate that DNA replication was semi-conservative.  Instead, Wolfe-Simon et al. used highly complex techniques that gave inconclusive answers.

The reagents for the methods I just listed would cost less than $1,000 (total, not each). A round of sequencing costs $10 – the price of a Starbucks latte. In a subsequent interview, Oremland (the paper’s senior author) said that they did not have enough money to do more experiments. This is like saying that you hired the Goodyear blimp to take you downtown but didn’t have enough money for a taxi back home.
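
To see why a cesium chloride gradient is such a direct test, a back-of-envelope calculation suffices (a sketch in Python, using standard atomic weights and a rough average nucleotide mass; the actual buoyant-density shift would depend on the real degree of substitution):

    # Back-of-envelope: how much heavier would As-substituted DNA be?
    # Standard atomic weights: P ~30.97 Da, As ~74.92 Da; an average DNA
    # nucleotide weighs roughly 330 Da and contains one phosphorus atom.
    P_MASS, AS_MASS = 30.97, 74.92
    AVG_NUCLEOTIDE = 330.0  # rough average, in daltons

    shift = AS_MASS - P_MASS              # ~44 Da per substituted nucleotide
    fraction = shift / AVG_NUCLEOTIDE     # ~13% heavier at full substitution
    print(f"{shift:.1f} Da per nucleotide, {fraction:.1%} at full substitution")
    # For comparison, Meselson and Stahl's gradients cleanly resolved the
    # roughly 1% mass difference between 15N- and 14N-labeled DNA.

Even partial substitution should band visibly away from normal DNA, which is exactly why the method’s absence from the paper is so glaring.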

3.  Even if some of the bacteria incorporate arsenic in their DNA, it means nothing if they cannot propagate.  Essentially, they can linger as poison-filled zombies that will nonetheless register as “alive” through such tests as culture turbidity and even sluggish metabolism.

NASA spokespeople, as well as Wolfe-Simon and Oremland, have stated that the only legitimate and acceptable critiques are those that will appear in peer-reviewed venues – and that others are welcome to do experiments to confirm or disprove their findings.

The former statement is remarkably arrogant and hypocritical, given the NASA publicity hyperdrive around the paper: embargoes, synchronized watches, melodramatic hints of “new life”, of a discovery with “major impact on astrobiology and the search for extraterrestrial life”.  This is called leading with your chin.  And if you live by PR, you cannot act shocked and dismayed when you die by PR.

As for duplicating the group’s experiments, the burden of proof lies with the original researchers. This burden increases if their claims are extraordinary.  The team that published the paper was being paid to do the work by a grant (or, possibly, by earmarked NASA money, which implies much less competition). For anyone else to confirm or disprove their findings, they will have to carve effort, time and money out of already committed funds — or apply for a grant specifically geared to this, and wait for at least a year (usually more) for the money to be awarded.  It’s essentially having to clean up someone else’s mess on your own time and dime.

Peer review is like democracy: it’s the worst method, except for all others.  It cannot avoid agendas, vendettas, pet theories or hierarchies.  But at least it does attempt judgment by one’s peers.  Given the kernel of this paper, its reviewers should have been gathered from several disciplines.  I count at least four: a microbiologist with expertise in extremophiles, a molecular biologist specializing in nucleic acids, a biochemist studying protein and/or lipid metabolism and a biophysicist versed in crystallography and spectrometry.

Some journals have started to name reviewers; Science does not, and “astrobiology” is a murky domain.  If the scientific community discovers that the reviewers for the GFAJ-1 paper were physicists who write sciency SF and had put on the astrobio hat for amusement and/or convenience, Mono Lake will look mild and hospitable compared to the climate that such news will create.

Because of the way scientific publishing works, a lot of shaky papers appear that never get corrected or retracted.  As a dodge, authors routinely state that “more needs to be done to definitively prove X.”  Even if later findings of other labs completely contradict their conclusions, they can argue that the experiments were correct, if not their interpretation.  Colleagues within each narrow domain know these papers and/or labs – and quietly discount them. But if such results get media attention (which NASA courted for this paper), the damage is irreversible.

People will argue that science is self-correcting.  This is true in the long run – and as long as science is given money to conduct research.  However, the publication of that paper in Science was a very public slap in the face of scientists who take time and effort to test their theories.  NASA’s contempt for the scientific process (and for basic intelligence) during this jaw-dropping spectacle was palpable.  It blatantly endorsed perceived “sexiness” and fast returns at the expense of careful experimentation. This is the equivalent of rewarding the mindset and habits of hedge fund managers who walk away with other people’s lifelong savings.

By disbursing hype, NASA administrators handed ready-made ammunition to the already strong and growing anti-intellectual, anti-scientific groups in US society: to creationists and proponents of (un)intelligent design; to climate change denialists and young-earth biblical fundamentalists; to politicians who have been slashing everything “non-essential” (except, of course, war spending and capital gains income).  It jeopardized the still-struggling discipline of astrobiology.  And it jeopardized the future of a young scientist who is at least enthusiastic about her research even if her critical thinking needs a booster shot – or a more rigorous mentor.

Perhaps NASA’s administrators were under pressure to deliver something, anything to stave off further decrease of already tight funds.  I understand their position – and even more, that of their scientists.  NIH and NSF are in the same tightening vise, and the US has lost several generations of working scientists in the last two decades.  Everyone is looking for brass rings because it’s Winner Take All – and “all” is pennies.  We have become beggars scrambling for coins tossed out of rich people’s carriages, buskers and dancing bears, lobsters in a slowly heating pot.

NASA should not have to resort to circus acts as the price for doing science.  It’s in such circumstances that violence is done to process, to rigor, to integrity.  We are human.  We have mortgages and doctors’ bills and children to send to college, yes.  But we are scientists, first and foremost.  We are – must be – more than court jesters or technicians for the powerful.  If we don’t hold the line, no one else will.

The paper: Wolfe-Simon F, Blum JS, Kulp TR, Gordon GW, Hoeft SE, Pett-Ridge J, Stolz JF, Webb SM, Weber PK, Davies PCW, Anbar AD, Oremland RS (2010) A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus. Science DOI: 10.1126/science.1197258.

My early summation of this paper: Arsenic and Odd Lace

Images: Top, María tries to keep her brother focused on the mission in The Guns of Navarone; middle, the Meselson and Stahl experiment; bottom, Quiros circus, Spain, 2007.

Perhaps I Should Be Called Cassandra

Thursday, December 9th, 2010

A colleague once called me a hopeful romantic. There’s more than a grain of truth to that. So it’s ironic that the two Wikipedia entries which quote me are linked to my critiques of extraordinary claims that did not provide even ordinary evidence.

The first was my review of Ward and Brownlee’s Rare Earth, in which I pointed out errors that cast serious doubts on their hypothesis. When I wrote the review, I didn’t know that one of their major advisors was Guillermo Gonzalez, an unabashed creationist who made his science fit his philosophy. A few things from my review must have registered, since subsequent editions corrected at least some of these errors (for example, their initial statement that both Mars and Venus are tidally locked).

Yesterday I discovered that I appear in a second Wikipedia entry about “arsenic DNA” — the purported genetic material of the fabulous beastie that NASA announced last Friday during its science-by-press-conference. This entry is fluctuating right now, as people are editing it back and forth (plus they incorrectly call me a microbiologist).  Those of you who are keeping track may know I was an early pebble in the avalanche of criticism that has since fallen on that work, led by Dr. Rosemary Redfield of UBC and summarized by Carl Zimmer in Slate.

I have more to say about the Science paper but first I must meet immovable looming deadlines. For now I will only say that I wish the work had been as exciting as its hype promised. Because I’m an old-fashioned scientist — and a hopeless romantic.

Images: 1st, Stick in the Mud, Steve Jurvetson; 2nd, Outlier, Ben Shabad.

Update: More, as promised, in The Agency That Cried “Awesome!”

Arsenic and Odd Lace

Thursday, December 2nd, 2010

When you hear about lots of cherries, bring a small basket. — Greek proverb

About a week ago, I started receiving a steady and progressively swelling stream of e-mails, asking me if I knew anything about the hush-hush “amazing astrobiology discovery” that NASA would announce on December 2. I replied I would opine when I read the associated paper, embargoed by Science until after the press conference. I also added that my bets were on a terrestrial extremophile that pushes the exotic envelope. Many bloggers and news sites disagreed, posting entries with titles and guesses taken straight from the pulp SF era.

Today NASA made its announcement and Science released the paper. To give you the punchline first, the results indeed concern a terrestrial extremophile and show that bacteria are very flexible and will adapt to suboptimal conditions. This is not exactly news, although the findings do push the envelope… slightly.

What the results decidedly do not show is a different biochemistry, an independent genesis or evidence for a shadow biosphere, contrary to co-author Paul Davies’ attempts to shoehorn that into the conclusions of an earlier (2008) related paper. It’s not arsenic-based life, it’s not an arsenic-eating bacterium and the biology textbooks don’t need to be rewritten.

The experiment is actually very clever in that it follows a given to its logical conclusion. The researchers took an inoculum from hypersaline, alkaline Mono Lake and grew it in serial dilutions so that the medium contained progressively increasing amounts of arsenic (As) substituting for phosphorus (P). Mono Lake has arsenic levels several orders of magnitude above the usual, so bacteria living in it have already adapted to tolerate it.

The bacteria that grew in severely P-depleted and As-enriched conditions were identified as members of a halophile (salt-loving) family already known to accumulate intracellular As. When deprived of P, they grew slowly and appeared bloated because they were full of structures that look like vacuoles, cellular organelles that manage waste and grow larger and more numerous when cells are under stress. Additionally, there was still some phosphorus in the growth medium (it’s almost impossible to leach it completely) and there is no direct proof that As was incorporated into the bacterial DNA [see addendum]. So essentially the bacteria were trying to do business as usual under trying circumstances.
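
To get a feel for why that leftover phosphorus matters, here is a toy serial-dilution calculation (all numbers hypothetical and chosen purely for illustration; the paper’s actual media and transfer ratios differ):

    # Toy model of phosphate carryover through serial passages: each passage
    # transfers a 1:100 inoculum into "P-free" medium that still carries a
    # trace of phosphate from reagent impurities. All values are hypothetical.
    CARRYOVER = 1 / 100      # inoculum dilution per passage
    BACKGROUND = 0.3         # trace P in the "P-free" medium, micromolar
    p_level = 1500.0         # P in the starting rich medium, micromolar

    for passage in range(1, 8):
        p_level = p_level * CARRYOVER + BACKGROUND
        print(f"passage {passage}: ~{p_level:.2f} uM phosphate")
    # The inherited P decays geometrically, but the trace background never
    # reaches zero -- "no phosphorus" really means "very little phosphorus".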

Phosphorus means “lightbringer”: the element glows faintly in the dark as it oxidizes in air, and Phosphóros was also the ancient Greek name for Venus as the Morning Star. It is deemed to be among the six elements vital for life (in alphabetical order: carbon, hydrogen, nitrogen, oxygen, phosphorus, sulfur; often acronymed as CHNOPS, which sounds like the name of an Egyptian pharaoh). Indeed P appears in all three classes of biomolecules. It’s obligatory in nucleic acids (DNA, RNA) and phospholipids, the primary components of cell membranes; phosphate groups are crucial covalent additions to proteins, regulating their activity and ligand affinities; it’s also the energy currency of cells, primarily in the form of ATP (adenosine triphosphate). On the scale of organisms, bones contain phosphorus in apatite form and it’s also an essential nutrient for plants, though P excess is as much a problem as its lack.

Arsenic is directly below phosphorus in the periodic table, just as silicon is directly below carbon. Arsenic is highly toxic to lifeforms precisely because it looks similar enough to phosphorus in terms of atomic radius and reactivity that it is occasionally incorporated in metabolic intermediates, short-circuiting later steps in cascades. [This, incidentally, is not true for silicon vis-à-vis carbon, for those who are contemplating welcoming silicon overlords. Silicon is an even poorer mimic of carbon than arsenic is of phosphorus.] Arsenic was used in pesti-, herbi- and insecticides (and in stealth murders), until it became clear that even minute amounts leaching into the water table posed a serious health problem.

The tables in the Science paper are eloquent on how reluctant even hardy extremophiles are to use As instead of P. Under normal growth conditions, the As:P ratio in their biomass was 1:500. When P was rigorously excluded and As had been raised to three times the level in Mono Lake, the As:P ratio remained at a measly 7:1. Furthermore, upon fractionation As segregated almost entirely into the organic phase. Very little was in the aqueous phase that contains the nucleic acids. This means that under extreme pressure the bacteria will harbor intracellular As, but they will do their utmost to exclude it from the vital chains of the genetic material.
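
For scale, two lines of arithmetic on the ratios just quoted (nothing here beyond the numbers in the text):

    # The biomass As:P ratios reported in the paper, as plain numbers.
    normal = 1 / 500   # As:P under normal growth
    forced = 7 / 1     # As:P under P starvation plus heavy As
    print(f"ratio shift under duress: {forced / normal:,.0f}-fold")
    print(f"As share of the As+P pool when forced: {7 / (7 + 1):.0%}")
    # A 3,500-fold shift in relative terms -- yet fractionation put nearly
    # all of that arsenic in the organic phase, away from the nucleic acids.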

As I wrote elsewhere, we biologists are limited in our forecasts by having a single life sample. So we don’t know what is universal and what is parochial and our searches are unavoidably biased in terms of their setup and possible interpretations. The results from this work do not extend the life sample number. Nor do they tell us anything about terrestrial evolution, because they showcase a context-driven re-adaptation, not a de novo alternative biochemistry. However, they hint that at least one of the CHNOPS brigade may be substitutable in truly extreme (by our circumscribed definition) conditions.

On the larger canvas, it was clever of NASA to disclose this right around budget (-cutting) time. But it would have been even cleverer if they had managed to calibrate the hype volume correctly — and kept squarely in their memory the tale of the boy who cried wolf.

Addendum 1: The paper does present evidence, from radioactive tracer incorporation and mass spectrometry comparison studies, that the DNA fraction of the final isolate contains 11% of the total arsenic. However, important controls and/or purification steps seem to be missing. The crucial questions are: exactly where is the arsenic located, how much substitution has occurred in the DNA, if any, and how does it affect the layers of DNA function (un/folding, replication, transcription, translation)? Definitive answers will require at minimum direct sequencing and/or crystallographic data. The lead author, Felisa Wolfe-Simon, said that this is fertile ground for thirty years of future work — and in that, at least, she’s right.

Addendum 2: Detailed devastating critiques and dissections are appearing.

The paper: Wolfe-Simon F, Blum JS, Kulp TR, Gordon GW, Hoeft SE, Pett-Ridge J, Stolz JF, Webb SM, Weber PK, Davies PCW, Anbar AD, Oremland RS (2010) A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus. Science DOI: 10.1126/science.1197258.

My extended analysis: The Agency That Cried “Awesome!”

Note for young(er) readers: the title is a takeoff on Arsenic and Old Lace, Joseph Kesselring’s black comedy about decorous yet murderous old ladies, later made into a film by Frank Capra starring Cary Grant.

Images: 1st, the vacuolated cells of the As-fed bacteria (Wolfe-Simon et al, Figure 2E); 2nd, the elements important to life; 3rd, Mono Lake (LA Times)

Rises and Falls

Saturday, November 27th, 2010

As is common with me, things have once again come in groups.  In addition to the acceptance of The Wind Harp, a book has just come out with a tiny contribution from me.  It is The Rough Guide to the Future by biochemist, science historian and science writer Jon Turney.  The book surveys new technologies and their impact on humanity and the planet, and includes the hopes, fears and predictions of “fifty of the world’s leading futurologists and scientists” (blush!)  Here’s Jon’s introduction to it, and here’s my contribution:

Highest hopes: In decreasing order of likelihood, that we will conquer – or at least tame – dementia, which will make the increase in average life expectancy individually worthwhile and collectively feasible; that we will pick up an unambiguous SETI signal (search for extraterrestrial intelligence); and that we will decipher the ancient script Linear A and find out that the Minoans were indeed enlightened, if not matriarchal.

Worst fears: Most of our activities will devolve into inward navel-gazing (“social” Internet, virtual reality) rather than outward exploration, and our politics (broadly defined) will force all research into applied/profit mode, doomed to produce results and reagents that will make the long-term survival of the planet and all its species increasingly problematic.

Best bet: Barring a natural or human-created catastrophe, we’ll muddle along just as before and run out of resources and lebensraum before we’re able to establish either a sustainable terrestrial footprint or expand beyond Earth.

Additionally, I will be one of the reviewers of Rise Reviews, the brainchild of Bart Leib, the co-founder of Crossed Genres and Science in My Fiction.  As Bart said in his blog:

“I’m pleased to announce that I will soon be launching Rise Reviews, a site dedicated to reviewing quality speculative fiction that did not receive professional pay.

This is something that I’ve been thinking about for a long time. See, most review sites either only review fiction from professional-paying markets, or take most of what they review from those markets. I don’t hold that against them – every review site receives FAR more review requests than it could possibly accomplish, and each one has to decide for itself how to narrow down the pile.

But the result is that, more often than not, smaller presses which don’t pay pro rates are the ones that get passed over. And that’s what Rise Reviews will cover. Many of these publishers produce excellent quality publications, and I hope that Rise will be able to help bring new writers and smaller presses to the attention of readers.”

Rise Reviews will be a partial corrective to those who think, à la Tangent, that only Leaden Era-style speculative fiction (aka boys and their toys) deserves to be read and reviewed.  And it may give an incentive to independents to start paying in more than copies, even if it’s the proverbial $5 — it will make the works eligible for reviewing in Rise.  The site will launch January 1, 2011 with a veritable avalanche of reviews across subgenres.

Images: The covers of The Rough Guide to the Future (by Tom Cabot/ketchup) and the 1st year anthology of Crossed Genres (by Nicc Balce).

To the Hard Members of the Truthy SF Club

Friday, November 19th, 2010

“Nothing is as soft as water, yet who can withstand the raging flood?” — Lao Ma in Xena (The Debt I)

Being a research scientist as well as a writer, reader and reviewer of popular science and speculative fiction, I’ve frequently bumped up against the fraught question of what constitutes “hard” SF. It appears regularly on online discussions, often coupled with lamentations over the “softening” of the genre that conflate the upper and lower heads.

In an earlier round, I discussed why I deem it vital that speculative fiction writers are at least familiar with the questing scientific mindset and with basic scientific principles (you cannot have effortless, instant shapeshifting… you cannot have cracks in black hole event horizons… you cannot transmute elements by drawing pentagrams on your basement floor…), if not with a modicum of knowledge in the domains they explore.

So before I opine further, I’ll give you the punchline first: hard SF is mostly sciency and its relationship to science is that of truthiness to truth. Remember this phrase, grasshoppahs, because it may surface in a textbook at some point.

For those who will undoubtedly hasten to assure me that they never pollute their brain watching Stephen Colbert, truthiness is “a ‘truth’ that a person claims to know intuitively ‘from the gut’ without regard to evidence, logic, intellectual examination, or facts.” Likewise, scienciness aspires to the mantle of “real” science but in fact is often not even grounded extrapolation. As Colbert further elaborated, “Facts matter not at all. Perception is everything.” And therein lies the tale of the two broad categories of what gets called “hard” SF.

Traditionally, “hard” SF is taken to mean that the story tries to violate known scientific facts as little as possible, once the central premise (usually counter to scientific facts) has been chosen. This is how authors get away with FTL travel and werewolves. The definition sounds obvious but it has two corollaries that many SF authors forget to the serious detriment of their work.

The first is that the worldbuilding must be internally consistent within each secondary universe. If you envision a planet circling a double sun system, you must work out its orbit (even a crude check will do; see the sketch below) and how the orbit affects the planet’s geology and hence its ecosystems. If you show a life form with five sexes, you must present a coherent picture of their biological and social interactions. Too, randomness and arbitrary outcomes (often the case with sloppily constructed worlds and lazy plot-resolution devices) are not only boring, but also anxiety-inducing: human brains seek patterns automatically and lack of persuasive explanations makes them go literally into loops.

The second is that verisimilitude needs to be roughly even across the board. I’ve read too many SF stories that trumpet themselves as “hard” because they get the details of planetary orbits right while their geology or biology would make a child laugh – or an adult weep. True, we tend to notice errors in the domains we know: writing workshop instructors routinely intone that authors must mind their p’s and q’s with readers familiar with boats, horses and guns. Thus we get long expositions about stirrups and spinnakers while rudimentary evolution gets mangled faster than bacteria can mutate. Of course, renaissance knowledge is de facto impossible in today’s world. However, it’s equally true that never has surface-deep research been as easy to accomplish (or fake) as now.
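
To make the double-sun example concrete: numerical studies put the inner stability limit for a planet circling both stars at very roughly two to four times the stars’ separation, depending on the binary’s eccentricity and mass ratio. Here is a crude consistency check in that spirit (a toy sketch, not real dynamics; the factor of three is a hypothetical middle value, not a derived constant):

    # Crude plausibility check for a circumbinary ("double sun") planet.
    # Rule of thumb only: stable orbits around both stars start at roughly
    # 2-4x the binary separation; 3x here is a hypothetical middle value.
    def plausibly_stable(a_planet_au, a_binary_au, safety_factor=3.0):
        return a_planet_au > safety_factor * a_binary_au

    print(plausibly_stable(1.0, 0.2))  # stars 0.2 AU apart, planet at 1 AU: True
    print(plausibly_stable(1.0, 0.5))  # stars 0.5 AU apart, planet at 1 AU: False

A world that fails even a check this crude will fail the readers who know better.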

As I said elsewhere, the physicists and computer scientists who write SF need to absorb the fact that their disciplines don’t confer automatic knowledge and authority in the rest of the sciences, to say nothing of innate understanding and/or writing technique. Unless they take this to heart, their stories will read as variants of “Once the rockets go up, who cares on what they come down?” (to paraphrase Tom Lehrer). This mindset leads to cognitive dissonance contortions: Larry Niven’s work is routinely called “hard” SF, even though the science in it – including the vaunted physics – is gobbledygook, whereas Joan Slonczewski’s work is deemed “soft” SF, even though it’s solidly based on recognized tenets of molecular and cellular biology. And in the real world, this mindset has essentially doomed crewed planetary missions (of which a bit more anon).

Which brings us to the second definition of “hard” SF: style. Many “hard” SF wannabe-practitioners, knowing they don’t have the science chops or unwilling to work at it, use jargon and faux-manliness instead. It’s really the technique of a stage magician: by flinging mind-numbing terms and boulder-sized infodumps, they hope to distract their readers from the fact that they didn’t much bother with worldbuilding, characters – sometimes, not even plots.

Associated with this, the uglier the style, the “harder” the story claims to be: paying attention to language is for sissies. So is witty dialogue and characters that are more than cardboard cutouts. If someone points such problems out, a common response is “It’s a novel of ideas!” The originality of these ideas is often moot: for example, AIs and robots agonizing over their potential humanity stopped being novel after, oh, Metropolis. Even if a concept is blinding in its brilliance, it still requires subtlety and dexterity to write such a story without it devolving into a manual or a tract. Among other things, technology tends to be integral in a society even if it’s disruptive, and therefore it’s almost invariably submerged. When was the last time someone explained at length in a fiction piece (a readable one) how a phone works? Most of us have a hazy idea at best how our tools work, even if our lives depend on such knowledge.

To be fair, most writers start with the best of intentions as well as some talent. But as soon as they or their editors contract sequelitis, they often start to rely on shorthand as much as if they were writing fanfiction (which many do in its sanctioned form, as tie-ins or posthumous publication of rough notes as “polished products”). Once they become known names, some authors rest on their laurels, forgetting that this is the wrong part of the anatomy for placing wreaths.

Of course, much of this boils down to personal taste, mood of the moment and small- and large-scale context. However, some of it is the “girl cooties” issue: in parallel with other domains, as more and more women have entered speculative fiction, what remains “truly masculine” — and hence suitable for the drum and chest beatings of Tin… er, Iron Johns — has narrowed. Women write rousing space operas: Cherryh and Friedman are only the most prominent names in a veritable flood. Women write hard nuts-and-bolts SF, starting with Sheldon, aka Tiptree, and continuing with too many names to list. Women write cyberpunk, including noir near-future dystopias (Scott, anyone?). What’s a boy to do?

Some boys decide to grow up and become snacho men or, at least, competent writers whose works are enjoyable if not always challenging. Others retreat to their treehouse, where they play with inflatable toys and tell each other how them uppity girls and their attendant metrosexual zombies bring down standards: they don’t appreciate fart jokes and after about a week they get bored looking at screwdrivers of various sizes. Plus they talk constantly and use such nasty words as celadon and susurrus! And what about the sensawunda?

I could point out that the sense of wonder so extolled in Leaden Era SF contained (un)healthy doses of Manifesty Destiny. But having said that, I’ll add that a true sense of wonder is a real requirement for humans, and not that high up in the hierarchy of needs, either. We don’t do well if we cannot dream the paths before us and, by dreaming, help shape them.

I know this sense of wonder in my marrow.  I felt it when I read off the nucleotides of the human gene I cloned and sequenced by hand. I feel it whenever I see pictures sent by the Voyagers, photos of Sojourner leaving its human-proxy steps on Mars. I feel it whenever they unearth a brittle parchment that might help us decipher Linear A. This burning desire to know, to discover, to explore, drives the astrogators: the scientists, the engineers, the artists, the creators. The real thing is addictive, once you’ve experienced it. And like the extended orgasm it resembles, it cannot be faked unless you do such faking for a living.

This sense of wonder, which I deem as crucial in speculative fiction as basic scientific literacy and good writing, is not tied to nuts and bolts. It’s tied to how we view the world. We can stride out to meet and greet it in all its danger, complexity and glory. Or we can hunker in our bunkers, like Gollum in his dank cave and hiss how those nasty hobbitses stole our preciouss.

SF isn’t imploding because it lost the fake/d sensawunda that stood in for real imaginative dreaming, just as NASA isn’t imploding because its engineers are not competent (well, there was that metric conversion mixup…). NASA, like the boys in the SF treehouse, is imploding because it forgot — or never learned — to tell stories. Its mandarins deemed that mesmerizing stories were not manly. Yet it’s the stories that form and guide principles, ask questions that unite and catalyze, make people willing to spend their lives on knowledge quests. If the stories are compelling, readers will remember them long after the last page. And that long dreaming will lead them to create the nuts and bolts that will launch starships.

Images: 1st, Stormtrooper Walking from Grimm’s Pictures; 2nd, the justly famous Sidney Harris classic from What’s So Funny About Science?; 3rd, Jim Parsons, The Big Bang Theory’s Sheldon Cooper, photo by Robert Trachtenberg for Rolling Stone; 4th, The Gate, by Peter Cassidy.

Note: This is part of a lengthening series on the tangled web of interactions between science, SF and fiction.  Previous rounds:

Why Science Needs SF
Why SF Needs Science
Why SF Needs Empathy
Why SF Needs Literacy
Why SF Needs Others

I guess this one might be called Why SF Needs Fiction!

The Volcano Always Wins

Tuesday, November 2nd, 2010

In Kazuo Ishiguro’s The Remains of the Day, the main character is James Stevens, a butler proud to serve his master, Lord Darlington – a rather dim aristocrat with political ambitions who becomes close to Mosley’s philo-Nazi Blackshirts. Stevens sacrifices all vestiges of self-expression, including the possibility of love, to become the perfect servant. His dignity and sense of office forbid him to question social and political rules, and he remains loyal to the master-servant ideal even when its time is long past.

A week ago, Mas Penewu Surakso Hargo, known as Mbah (Grandfather) Maridjan, died on Mount Merapi in the Yogyakarta region of Java (founded as a sultanate in 1755). Maridjan, like his father before him, had been appointed guardian of Merapi by the sultan of Yogyakarta. He was in charge of ceremonies to appease the spirit of the mountain and he described his job as being “to stop the lava from flowing down”.

In 2006 and again in 2010, Maridjan refused to evacuate when Merapi erupted, calling himself and his fellow villagers the fortress whose function was to protect the sultan’s palace. Both times, others followed his example on the strength of his moral authority. He was found in a praying position, overwhelmed by a pyroclastic flow from the mountain. Also killed were thirteen people who were in his home trying to persuade him to leave. The local populace is clamoring for a new guardian, and the sultan plans to appoint one soon.

Most people consider Stevens a deluded, pathetic figure, despite his massive dignity and loyalty. Ditto for Harry Randall Truman, who elected to stay on Mt. St. Helens in 1980. In contrast, many consider Maridjan admirable, a laudable example of spirituality and adherence to principle, even though his actions led to preventable deaths.

Inevitably, there are more threads to this braid. Truman and Maridjan were in their mid-eighties; both voiced the sentiment that their time had come, and that such a death was preferable to dwindling away in increasing helplessness. The people of Yogyakarta are trying to preserve the pre-Islamic heritage of Indonesia against mounting pressure from the increasingly hardline official policies and the imams who enforce them. Additionally, many Merapi evacuees were left with nothing but the little they could carry, in a nation that has a rich legacy and tremendous resources – but one that also has had more than its share of natural and man-made disasters and whose political, ecological and economic status is wobbly.

Maridjan is admired as the keeper and transmitter of endangered cultural knowledge. I have already discussed this issue from the angles of deracination and art. The time has come to also point out the problems and dangers of tradition.

There’s no doubt that unique cultural customs keep the world multicolored and kaleidoscopic. Even though I’m an atheist and consider all organized religions unmitigated disasters for women, I’m still moved by the Easter ceremonies of the Orthodox church. However, I’m not interested in their Christian-specific narrative. What moves me are the layers embedded in them: the laments of Mariam for her son are nearly identical to those of Aphrodite for Adonis, and they’re echoed in folk and literary poetry in which mothers lament dead sons (the most famous is Epitáfios by Yiánnis Rítsos, set to unforgettable music by Mikis Theodorákis). When I hear them, I hear all the echoes as well, see all the images superimposed like ghostly layers on a palimpsest. For me, that’s what lends them resonance and richness.

But there are times when I must part most decisively with tradition. There are plenty of traditions whose disappearance has made (or will make – many are still extant) the world a better place: from spreading bloody wedding sheets to foot binding to female genital mutilation; from forbidding women to sing lest they distract their husbands to knocking out teeth of new wives to show they will rely on their husbands’ prowess henceforth; from slavery and serfdom to polygyny and concubinage; from having unprotected sex with virgins to “cure” sexually transmitted diseases to “laying hands” on a child sinking into a diabetic coma.

Then there are the power-mongering charlatans who prey on fear and despair, particularly when hard times fall upon people: sickness, natural catastrophe, occupation, war. It’s true that Western medicine follows the heroic model – and as such it’s outstanding at treating acute illnesses but tends to over-specialize, sometimes at the expense of a holistic approach that treats the root cause rather than the symptoms. It’s equally true that modern technology has allowed ecological depredations on an enormous scale that threaten to become irreversible. Finally, it’s painfully true that deracination and colonialism often go hand in hand with modernization. Oppressed people revive or revert to traditions, often the last vestiges of suppressed cultural identity, as an act of resistance.

However, prayers neither shrink tumors nor frighten invaders away, and the sun rises and sets whether beating hearts are offered to it or not. Too, if someone jumps from an airplane or a high ledge without a parachute, no amount of belief in divine favor will waft them away on a magic carpet or give them wings. Nor were traditional states pre-lapsarian paradises, as an objective reading of Tibetan, Aztec and Maori history will attest.

When we didn’t know the reasons behind phenomena, such customs were understandable if not necessarily palatable. Not any more, not with today’s knowledge and its global reach. The mindset that clings to the concept that incantations will stop a volcano is kin to the mindset that refuses to accept evolution as established fact. Standing in the path of a meteor is not the same as standing at Thermopylae, romantic notions of doomed last stands notwithstanding. The 300 Spartans who stood at Thermopylae had a concrete goal as well as a symbolic one: they stopped the Persian army long enough to give the rest of the Greek city-states time to strategize and organize. And the rarely-mentioned 700 Thespians who stood with them did so against their particular customs – for the sake of the new-fangled, larger concept of living in freedom.

In the end, the traditions that deserve to survive are those that are neutral or positive in terms of improving human life across the hierarchy of needs (and that includes taking care of our planet). Mbah Maridjan was the guardian of the mountain, which put him in the position of caretaker of his fellow villagers as well as of the putative Merapi spirit. If he saw his function as loyalty to an abstract principle of servitude rather than protecting his very real people, he was misguided at best – and his stance had far worse repercussions than those of Ishiguro’s Stevens, who only harmed himself and the woman who hoped to love him.

I once read an almost certainly apocryphal tale of a young woman who asked her rabbi, “Rebbe, is it ever acceptable to eat pork?” “Never!” said the rabbi. “Pig meat is always treff. Why do you ask?” “During last winter’s famine, I fed my young brothers sausages,” replied the girl. “It was either that or watch them starve.” “In that case, it was kosher,” decided the rabbi.

That’s the type of humane traditionalism I can live with. Tribalism was adaptive once, but has become a mixed blessing at best. Tradition encourages blind faith, satisfaction with rote answers and deference to authority – and history demonstrates that humans don’t do well when they follow orders unquestioningly. As for the questing mindset ushered in and encouraged by science, I will close with words I used elsewhere:

Science doesn’t strip away the grandeur of the universe; the intricate patterns only become lovelier as more keep appearing and coming into focus. Science leads to connections across scales, from universes to quarks. And we, with our ardent desire and ability to know ever more, are lucky enough to be at the nexus of all this richness.

Images: top, pyroclastic cloud from the Rinjani volcano, part of the Ring of Fire to which Merapi also belongs (photo by Oliver Spalt); middle, a Han Chinese woman’s “golden lotus”; bottom, wayang kulit — the Javanese shadow puppets, part of the Yogyakarta people’s heritage.