Monday, September 21, 2009

Talking the Talk of Science

Common sense, as discussed here some time ago, is a tool we use in everyday life to sort out our surroundings. But it evolved to solve the problems that arose in the everyday experience of cavemen, or hunter-gatherers. Science takes us far from that realm of experience, and so common sense is not to be trusted if one wants to understand the world scientifically.

Scientists have common sense too, but they learn not to rely too much on it.

Speaking of which, I am reminded of another tool for everyday life that cannot be applied unchanged to science --language. Languages, like common sense, developed in the “normal” world of everyday experience. Guided by our limited perception, we invented concepts like “light” and “sound,” and gave them special names. We created words for everything we could see, hear, or otherwise perceive. We named what we could imagine. But our imagination rarely creates something new. It only puts together existing conceptual elements, however artfully. Our mental constructs are not unlike Frankenstein’s monster.

In the relatively recent past, science and technology have revealed that what we call light is only a tiny part of the electromagnetic spectrum, the part that our eyes can see; and what we call sound is only the range of pressure-wave frequencies that our ears can detect. We now use these words for light that cannot be seen and sound that cannot be heard. When confronted with new scientific developments, everyday language is forced either to stretch the meaning of existing words or to create new words by agglutination. The term electromagnetic is a case in point. Like the concept it labels, it is the result of putting together “electric” and “magnetic.” Now, elektron is Greek for amber. When rubbed with a piece of cloth, amber has the strange property of attracting small objects placed nearby. This property we call electricity. Magnetism is the property of the lodestone, a material also known as magnetite after the ancient Greek city of Magnesia. The word electromagnetic is a Frankenstein monster assembled from parts of other Frankenstein monsters.

There is yet another typical reaction of language to new developments, and that is not to react at all. We have known that the earth spins on its axis these four hundred years. Yet we still say that the sun rises and that the sun sets.

So, not surprisingly, scientific language sometimes clashes with the rules of “good” English (or “good” Spanish). A scientist friend of mine was recently asked by a Spanish teacher to write a short text as an example of scientific language for a book she was writing. My friend complied, and soon got back a “corrected” version (corrupted is more like it) of his text from the teacher. She objected to his use of the phrase “almost constant.” She argued that being constant is not a matter of degree --either you are or you’re not. She is right, of course, but as my friend points out, the meaning of the phrase is self-explanatory, and to say the same thing in pristine Castilian Spanish would require a lengthy circumlocution. Scientists often don’t have the time or space for such niceties. (Which is not to say, I hasten to add, that they should not try to write good Spanish or good English whenever possible.)

Tuesday, September 15, 2009

Tempted into Censorship

I remember once doing a reprehensible thing. I remember doing many reprehensible things, but this one is related to my being a scientist and priding myself, perhaps in a self-congratulatory and delusional way, on being a lover of the “truth,” whatever that may mean.

I was browsing around in the science section of a bookstore in Mexico City. The science section of Mexico City bookstores can be quite bewildering, because bookstore owners have a very dim idea of what the term “science” means (witness the book department in any Sanborn’s store, where bona fide science consorts promiscuously with astrology, UFO-abductee accounts, and New Age self-help pap). Next to some physics textbooks I found a little tome. I don’t remember the title or who the author was. I just remember that it was an enraged indictment of physicists and their strange ideas about relativity. The author, obviously a crank, was not comfortable with the notion that someone might know for a fact something he did not understand. A little training in math and physics shows relativity to be a logical necessity despite its weird predictions (which, by the way, are perfectly established by experiment), but this the author did not know and did not bother to find out. Too much trouble. Instead, he just ranted and raved and argued nonsensically against the special theory of relativity --without a single equation.

Now, this is what I did: I pushed the little volume all the way to the back of its shelf, effectively hiding it from view for the rest of time.

Later it dawned on me what I had really done. I had tried to suppress an idea --an act of censorship. Censorship, of course, is what people resort to when they are not sure they are right. Totalitarian states do it, and the Inquisition did it. It is the weapon of the liar and the usurper, the corrupt and the power-hungry. Censorship, as opposed to argument, is contrary to the search for truth. I don’t mean to say that censorship is unheard of in science, because it most certainly is not. Scientists are human and subject to human passions. But science has an advantage over political or religious systems of belief, where censorship is common --its criteria for truth or falsity are clear, and shared by most scientists (the criteria include reproducibility of results and consistency of explanations, among others). The best way to challenge ideas is with ideas. My deed, although inspired by the noblest sentiments (as I’m sure the author of the little book is convinced his own deeds are), was reprehensible and childish.

And then again... The author wasn’t there for me to argue with. With the mass media gone over to the cranks, scientists are reduced to guerrilla tactics. We can’t speak out as they can, and most scientists will not upset their schedules to take part in the war effort. What could I do? I was feeling frustrated. (And please don’t go telling me that maybe so were Hitler, Stalin and Torquemada.)

Tuesday, September 8, 2009

The Minds of Cranks

(I wrote this piece in 1997, but much of its content still holds, so here it is.)

My friend Miguel Alcubierre is a researcher at the Max Planck Institut für Gravitationsphysik, in Germany. Several years back he wrote a brief paper showing that, contrary to widespread belief, it is possible to travel faster than light without infringing the laws of relativity. The paper brought him some notoriety among physicists and sci-fi buffs, but particularly among scientific cranks. One of the chief aims of every self-respecting crank is to debunk the theory of relativity (and, of course, evolution). Miguel gets e-mail from the crackpot fringe on a regular basis. He says he has no time for them, so I asked him to forward their messages to me.

Scientific cranks are not just any kind of crank. They are usually curious and hard-working, sometimes even bright, and without exception completely innocent of the methods of science. Many cranks, I believe, are the possessors of scientific minds gone stale for lack of rigorous training. They believe that all that rings true to them must actually be true, and they cling fiercely to their prejudices. They are absolutely confident that all that glitters must be gold. Cranks think hard and come up with ideas, like scientists; but unlike true scientists they have an unrelenting faith in common sense.

Unfortunately for them, common sense --that invaluable aid in everyday life-- has proven a very poor guide for scientific discovery. As early as the sixth century BC, the Greek philosopher Parmenides attacked common sense, calling it “that heart devoid of the tremor of truth.” The Syrian-born historian Ikram Antaki writes: “Common sense is the locus of our prejudices, where thought is reduced to its inertia (...) it provides ready-made answers; it inhibits and conditions our reflexes; it fabricates and channels our reactions (...) (common sense) is like the minimum wage of intelligence.” The investigation of the natural world has produced countless results that challenge common sense. Who would have thought, before Einstein, that an object’s mass increases as it moves faster? Or that it is possible to slow down time by moving at great speed? Still, these strange ideas are true in the sense that every experiment designed to test them has yielded positive results.
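
To give a feel for the numbers, here is a minimal sketch (mine, not part of the original post) of the Lorentz factor, the quantity that governs both time dilation and, in the older language used above, the increase of mass with speed:

```python
# A minimal sketch of the Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2),
# which sets how much a moving clock slows down and, in the older
# "relativistic mass" language, how much the mass grows.
import math

def lorentz_factor(v_over_c):
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {v:.2f} c  ->  gamma = {lorentz_factor(v):.2f}")

# At everyday speeds (v/c of about 1e-8) gamma is indistinguishable from 1,
# which is why common sense never had to cope with these effects.
```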

Why is common sense so unreliable in science? Common sense was a faithful guide to our forebears for thousands of years. We evolved it as a response to the world as perceived by our senses. But our senses are limited. Our eyes, for example, are sensitive to only a tiny portion of the electromagnetic spectrum. There are many more kinds of light than we perceive. The same applies to our hearing. We detect sound only in a limited range of frequencies. We should expect, then, that common sense is valid only in the realm of everyday experience. Scientific inquiry, however, routinely takes its practitioners far from everyday experience. The physics of atoms seems very unnatural, but that is because our idea of what’s natural was forged among objects trillions of times larger than individual atoms. And the physics of objects moving at close to the speed of light is extremely weird, but then again, we don’t usually encounter such speeds on the Periférico.

Miguel’s cranks write with disarming confidence. In a way, I envy them. Their zeal and even their venom come from the certainty of being right. I wish I could be that certain of being right just once. But I guess I’m too far gone down a path where simple certainties dissolve. Don’t pity me, however. My simple certainties have dissolved into endless wonderment, and I think I know which of the two is better.

Wednesday, May 27, 2009

Are you innumerate?

A person who cannot read or write is said to be illiterate. Similarly, someone who is incapable of dealing with simple numerical ideas is referred to as innumerate. Innumeracy turns its victims into sitting ducks in a world of greedy commercialism, aggressive marketing schemes, and politicians who overwhelm voters with figures and statistics.

         Innumeracy is so common that marketing experts and salespeople seem to take it for granted. The car sales representatives that stalk innocent passersby at shopping malls in Mexico City are a case in point. Allow me to illustrate with a personal experience.

         One day I was at a Mexico City shopping mall, innocently passing by, when I was accosted by a car sales representative who offered me a financing scheme he thought I could not refuse. I decided (foolishly) to indulge him and sat down at his table. With a wide PR smile on his treacherous little face he then explained that his company’s financing plan was equivalent to placing a certain amount of money in the bank and paying the monthly installments on the car out of the interest.

         "So," he concluded triumphantly, "in the end you don´t really pay for the car!"

         There is a well-known law of physics establishing that in any and all physical processes energy is neither created nor destroyed, only transformed. This is the time-honored law of conservation of energy. A similar law --which we might call the law of conservation of money-- applies to commercial transactions. In any such transaction money is neither created nor destroyed; it merely flows from someone’s pocket to somebody else’s bank account, which generally means that no one makes money out of nothing. I’m sure the salesman was not aware that I was a physicist, and thus conversant with the not-really-too-arcane principle of monetary conservation (nothing in my appearance or demeanor gave away the fact), so I almost forgave him for trying to bamboozle me. I just gave him a knowing glance.
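
         A rough sketch, with made-up numbers, of why the salesman’s arithmetic cannot work: for interest alone to cover the payments you would need a deposit several times the price of the car, and that interest is itself money you give up by not investing the deposit elsewhere.

```python
# Hypothetical figures, purely for illustration -- not the salesman's actual plan.
car_price = 200_000        # price of the car, in pesos (assumed)
monthly_payment = 5_000    # monthly installment (assumed)
monthly_rate = 0.005       # 0.5% monthly interest on a bank deposit (assumed)

# Deposit needed so that interest alone covers each installment (a perpetuity):
deposit = monthly_payment / monthly_rate
print(f"Deposit required: {deposit:,.0f} pesos")   # 1,000,000 -- five times the car's price

# The "free" interest is simply the return on that million, which you forgo
# for the duration of the plan. Money is conserved: the car's price still flows,
# one installment at a time, from your pocket to the dealer's account.
```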

         "Oh, yeah?" quoth I. "Then why don´t I just take the car with me right away?"

         The guy looked confused. He blushed and chuckled uncomfortably.

         "I´m afraid that´s impossible, sir," he replied. It was obvious that the possibility of a prospective customer getting wise to the scam had not been contemplated in his training. I´m not even sure he was himself aware that his financing scheme, or at least his claim that you got the car for free, was pure baloney. This anecdote goes to prove that innumeracy is pervasive enough for these people to take it for granted.

         Here is another example of innumeracy. An acquaintance of mine (and a college graduate, mind you) is convinced that, if the average daily number of accidents on the Periférico is, say, ten --and if there have already been ten accidents during a given day--, then he need not drive carefully for the rest of that day because he can’t possibly have an accident. Many years ago I sent this case to A.K. Dewdney, a columnist for Scientific American who wrote about innumeracy in the March 1990 issue of the magazine. In reply, Dewdney told me about a businessman who always carried a bomb in his suitcase when flying because he had read that the odds of there being two bombs on the same plane were practically zero. (If you think this is a good idea, think again.) "The examples of math abuse," wrote Dewdney in November 1990, "are but symptoms of a general ignorance of mathematics --indeed, of science as a whole."
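
         A minimal sketch of the businessman’s error, with a made-up probability: bombs carried by different passengers are independent events, so bringing your own does nothing to the odds that someone else brings one.

```python
# Hypothetical probability, for illustration only.
p_bomb = 1e-6   # chance that some other passenger brings a bomb aboard (assumed)

# Two independent bombs on the same plane: genuinely minuscule.
print(p_bomb * p_bomb)   # 1e-12

# But given that YOU carry a bomb, the chance that another passenger
# also carries one is still p_bomb -- the events are independent.
print(p_bomb)            # 1e-06, unchanged
```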

         People who imagine they “almost” hit the lotto because their ticket was just a few numbers away from the winning number display the same kind of misunderstanding of probability as my innumerate college-graduate friend. Can you tell what’s wrong with these examples of math abuse?

Thursday, March 26, 2009

Are We the Pinnacle of Evolution?

The term “evolution” conjures up the picture of initially unicellular life marching triumphantly toward greater size and increasing complexity --and of humans as the undisputed pinnacle of evolutionary history. This smug view is compounded by the widespread notion that evolution and progress are synonyms. It is common in textbooks and popular accounts to depict evolutionary series as ladders --from Hyracotherium, the “dawn horse,” to the modern horse; or from Australopithecus to exalted Homo sapiens.

       But, as Stephen Jay Gould has shown in his book Full House, ladders are misleading. Hyracotherium is indeed the ancestor of modern horses, and, yes, there is a continuous line from him to present-day Equus. But the line twists and turns in time, branching endlessly so that the “dawn horse” is also the grandfather of countless other species, some living, but most extinct. The same is true of the line of descent going from Australopithecus, of “Lucy” fame, to modern humans. The line is not a line --it’s a bush. Neanderthals, who can also claim Lucy as their grandmother, are not our direct ancestors.

       Evolutionary lineages in general are not linear. Today’s living species, which we might represent as the outer leaves of an evolutionary tree, are attached to twigs, which are attached to larger twigs, which shoot off from branches, which sprout from larger branches, which emerge from an ancient common trunk going way back into the past --some 3.6 billion years-- to the first living organisms, a kind of bacterium.

       The ladder representation conveys the false idea that evolution is going somewhere --that those first bacteria somehow knew they were to become us. But if ladders had any truth in them, the lower rungs ought to be extinct to make way for the young, so to speak. Bacteria, however, thrive today. What’s more, by their diversity, by their presence in every nook and cranny of the earth, by their longevity, and by their sheer numbers, bacteria are and always have been the dominant organisms on this planet, as Gould argues in Full House. If we go back far enough, you and I have a common ancestor who was a reptile; go back even further and we will find we are related to a fish. Yet reptiles and fishes are alive and well today. Not the same species, to be sure, but modern ones that may be our cousins many times removed. Humans are not the end point of evolutionary history --all species living today are.

       Natural selection, the motor of evolution, does not have a plan. The only criterion for survival is adaptation to existing conditions. The dinosaurs didn’t die out because they were evolutionary failures or because they were less perfect than present-day animals. In fact, dinosaurs have been one of the most successful groups in the history of life. They dominated macroscopic life for well over 150 million years. Mammals, in contrast, have only been conspicuous for some 60 million years. The dinosaurs died because their environment changed abruptly when a very large meteorite or comet collided with the earth, some 65 million years ago.

       Consider another example of the progress fallacy. Mammoths, the hairy cousins of modern elephants, were well adapted to life in the last Ice Age. Hairless elephants are well adapted to present-day conditions. But a hairless elephant, as Gould points out, is not a cosmically better elephant. When another ice age comes --and it will--, a hairy elephant will be more likely to survive.

       And this brings me to my final point. Possible hairy elephants of the frigid future will NOT be mammoths. The mammoth is dead and gone. If elephants ever have hairy descendants, those descendants will be new adaptations to cold weather. They may conceivably look somewhat like mammoths --with all the hair and stuff--, but the resemblance will stem from the fact that both species are solutions to a similar problem --like bats and birds. Extinction, like diamonds, is forever.

Tuesday, March 17, 2009

A monk in his garden

Imagine a monastery in Moravia, and in the monastery a garden, and in the garden a monk. The monk is busy handling pea plants in pots, wrapping the flowers in paper bags after carefully dusting them with pollen from a different variety of the pea plant. He is no ordinary monk. He has studied mathematics and science. In a few years’ time he will be elected abbot of the monastery --which will force him to abandon his scientific work.

       He is Gregor Mendel, the father of the science of genetics, and his experiments with pea plants will provide the missing link to Charles Darwin’s theory of evolution by natural selection. But not before both Darwin and Mendel are dead and gone.

       Darwin will die in 1882, still plagued by the mystery of the mechanism of inheritance. The theory of evolution by natural selection requires that heredity work in such a way that mutations --or fortuitous variations in the hereditary makeup of an organism-- are passed on intact to offspring. This would guarantee the conservation of advantageous mutations (a longer neck in giraffes, a change in pigmentation in moths living on soot-covered trees in central England); whereas the alternative mechanism of blending inheritance --whereby offspring simply strike an average between the characteristics of their parents-- would cut a mutation’s contribution in half with each generation, rapidly diluting its effect, advantageous or otherwise.
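
       A minimal sketch (my own, under the simplifying assumption that the mutant’s descendants always mate with non-mutants) of how quickly blending inheritance would wash out a new variation:

```python
# Under blending inheritance, each generation averages the parents' traits,
# so a new variation's contribution is halved every generation.
contribution = 1.0
for generation in range(1, 9):
    contribution /= 2
    print(f"generation {generation}: {contribution:.4f} of the original variation")

# After eight generations less than half a percent remains -- far too fast
# for natural selection to accumulate advantageous mutations.
```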

       Mendel will die in 1884 in total obscurity. The revolutionary nature of his experiments will only be recognized in 1900, when three European botanists will independently rediscover his work.

       Ironically, Darwin’s would-be savior was already working with his pea plants when the great scientist published The Origin of Species, in 1859. Mendel’s method of “hybridization” was straightforward. First he would open a pea flower before it was fully developed, removing the anthers (the male sexual organs) with tweezers to avoid self-pollination. Then he would dust the flower’s stigma with pollen from the selected variety, immediately wrapping the flower in a paper bag to keep away other pollen. Finally, he would wait patiently for the plant to produce seeds and for the seeds to produce the next generation of plants. Mendel then recorded the results.

       The monk chose pea plants because they have traits (such as blossom color and plant height) that are easily distinguishable and that breed true. Thus, he crossed six-foot plants with one-foot plants, and plants with purple blossoms with plants with white blossoms. Would the result be plants of intermediate height with mauve-colored blossoms, as the popular theory of blending inheritance dictated?

       Unlike other botanists who had performed hybridization experiments before him, Mendel had studied mathematics and was an able statistician. He found that when he crossed six-footers with the short variety, the first-generation hybrids were all six-footers. No intermediate-sized plants were produced. However, when these first-generation hybrids were allowed to self-pollinate, the result was astonishing --the second generation included both tall plants and short plants, in an approximately three-to-one ratio. A similar result was obtained for six other contrasting traits.

       Mendel’s conclusion was that, contrary to popular belief, the parents’ traits are not blended in the offspring. Inheritable characteristics are determined by units of inheritance that are segregated rather than blended in the offspring, with certain traits dominating over their “recessive” opposites (tall stems over short stems, for example). Today we call these units “genes.” Mendelian genetics meshed perfectly with natural selection, and in the first decades of the twentieth century genetics and evolution became integrated in what is known as the synthetic theory of evolution.
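
       A minimal sketch of where the three-to-one ratio comes from, in today’s terms (the symbols T and t are mine, not Mendel’s): each first-generation hybrid carries one dominant “tall” factor and one recessive “short” factor, and self-pollination pairs one factor from each parent at random.

```python
# Enumerate the four equally likely factor pairings in a hybrid self-cross.
from itertools import product

hybrid = ("T", "t")   # T = dominant "tall" factor, t = recessive "short" factor
offspring = ["".join(sorted(pair)) for pair in product(hybrid, repeat=2)]
print(offspring)      # ['TT', 'Tt', 'Tt', 'tt']

# Any offspring carrying at least one T grows tall, because T dominates t.
tall = sum("T" in genotype for genotype in offspring)
short = len(offspring) - tall
print(f"tall : short = {tall} : {short}")   # 3 : 1
```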

Monday, March 9, 2009

Literary Theme with Biological Variations

In his short story "Pierre Menard, Author of Don Quixote" the Argentinian writer Jorge Luis Borges tells the story of a symbolist author in turn-of-the-last-century France who endeavors to rewrite Miguel de Cervantes’ celebrated work. Pierre Menard, however, is no mere parasite intending to copy or paraphrase Cervantes. His intent is to write a verbally identical book based on his own experience. Menard, alas, dies after completing only two chapters. But how fascinating those two chapters are! Read as the work of a twentieth-century writer, Menard’s Don Quixote is a completely different book.

This, of course, is only possible in Borges’ brilliant fantasy world. In real life, if you hold two books in your hand --for example, Cervantes’ Don Quijote de la Mancha and Menard’s Don Quijote de la Mancha--, and the books correspond word by word, or almost, you immediately smell a rat. The books must --to say the very least-- have a common ancestor. They can’t really be independent.

Odoriferous rodents of the same kind assail the discerning noses of biologists when they compare organisms from the present and from the past using the tools of old and new biological disciplines such as embryology, anatomy, genetics, and biochemistry. Charles Darwin’s original treatise was a steamroller of evidence for “descent with modification.” Today, evolutionists possess even more detailed and consistent proof of the fact of biological evolution.

Consider the backbone in humans. Humans, as you probably know, walk upright most of the time. Our backbones are placed in the back (duh). But look at the famous roof at the Museo de Antropología, in Mexico City: an enormous concrete canopy balanced on a single central pillar.

[The photograph of the museum’s roof that originally appeared here is missing.]
A hypothetical Cosmic Engineer designing humans from scratch would have endowed us with a sturdier “backbone” of that kind, passing through the center of the torso, not along the back. As things are, we are well adapted to an upright posture, but not perfectly adapted, because we have only recently evolved from ancestors that went about on all fours. Imperfections like these, made manifest by anatomical studies, argue for evolution and against design.

Anatomy, physiology, embryology and other tools that were already available in Darwin’s time can probe only so deep into the similarities of organisms, and go only so far back in time. It is the more recently developed field of molecular biology that provides the most detailed and convincing evidence that we are all, from human to bacterium, ultimately related by descent from common ancestors.

The organic compounds known as amino acids number in the hundreds, yet all bacteria, plants, animals and fungi synthesize all their proteins from just 20 amino acids, the same 20 for all living beings. Further, for all its staggering diversity, all life on Earth depends on the same few chemical pathways (fermentation, photosynthesis, respiration) to produce energy and build cell components. The molecular and chemical uniformity of life can only be accounted for by evolution.

Molecular biology is unique as a tool for comparative analysis of species in that it allows scientists to quantify precisely the degree of similarity of different organisms. The protein cytochrome c of humans is identical to that of chimpanzees. It differs by one amino acid from that of rhesus monkeys, by 12 from that of horses, and by 21 amino acids from that of tuna. Comparing the DNA of two species, molecular biologists can now even determine approximately how far back in time the species’ most recent common ancestor lived, in much the same way that linguists can tell how recently two languages diverged from a parent language by analyzing their similarities.
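
To make the quantification concrete, here is a minimal sketch using the cytochrome c figures quoted above; the protein length of roughly 104 amino acids is my assumption, not a number from the post.

```python
# Percent identity of human cytochrome c with that of other species,
# from the amino-acid differences cited in the text.
LENGTH = 104   # approximate length of cytochrome c (assumed)
differences = {"chimpanzee": 0, "rhesus monkey": 1, "horse": 12, "tuna": 21}

for species, d in differences.items():
    identity = 100 * (LENGTH - d) / LENGTH
    print(f"human vs {species}: {d} differences, ~{identity:.1f}% identical")
```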

This is only a paltry sample of the facts that can only be explained by evolution. Darwin himself provides many more in The Origin of Species. Today virtually all scientists agree with what Theodosius Dobzhansky, a leading evolutionist, once said: “Nothing in biology makes sense except in the light of evolution.”

Thursday, March 5, 2009

Quality Control

Lawyers, politicians and scientists love a good argument. As a scientist, however, I wouldn’t want to argue with either a lawyer or a politician. For, you see, while we all may love debate, a lawyer’s, a politician’s, and a scientist’s aims in debating are entirely different.

        Lawyers argue to win. That’s what they get paid for. Whether they are right or not is immaterial. Even when he knows his client is guilty, a lawyer must defend the client’s innocence. Truth is not the lawyer’s main concern.

        A politician’s job --whatever Plato, Aristotle, and others may have said in the past-- is to attain office and to remain in office. Sad to say, but that’s the way it is, as you know. A politician argues to win or, failing that, to make people believe he has won. Politicians employ every trick in the rhetorician’s repertoire to defend even the wobbliest ideas.

        Scientists don’t argue to win. They enjoy victory as much as the next guy, but winning is not so important. What’s important is the clash of ideas. In scientific debate only the fittest ideas survive. Flimsy notions perish. What you want as a scientist is not to be proven right, but to be put to the proof. Sir Karl Popper, the philosopher of science, wrote: “The wrong view of science betrays itself in the craving to be right, for it is not his possession of knowledge, of irrefutable truth, that makes the man of science, but his persistent and recklessly critical quest for truth.”

        Popper also wrote: “Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the scientific game.”  Wolfgang Pauli, one of the founders of quantum mechanics, once hired an assistant whose job it was to constantly refute his employer’s ideas with the strongest arguments he could muster. Like the warriors of yore, scientists value a worthy opponent.

        An earthquake that leaves one building standing among others in ruins proves the sturdiness of that building. It is in the interest of science to constantly submit its constructs to conceptual earthquakes in order to test their sturdiness. Here is Popper again: “Once put forward, none of our hypotheses are dogmatically upheld. Our method of research is not to defend them in order to prove how right we are. On the contrary, we try to overthrow them.”

        Karl Popper is the creator of the idea of “falsifiability” of scientific hypotheses. He contends that, in order to be considered scientific, a hypothesis must be formulated in such a way that, if false, it can in principle be proven false. This contrasts with the older idea of verifiability of scientific theories, but it makes for more solid foundations for the scientific edifice.

        For example: “Energy is conserved” is a valid scientific statement in Popper’s sense because it is easily refutable --finding one single instance of its not being true would suffice to topple it. The principle of conservation of energy was first formulated more than a hundred and fifty years ago. So far, scientists have not found a single case in which it is violated. You see, then, how Popperian “falsifiability” can yield sturdy scientific principles: energy conservation is easy to disprove in principle, yet it has not been disproven. The more tests it survives, the more confident we are that energy is conserved even in situations in which we have not explicitly shown this to be the case.

        When you buy a car you kick the tires and slam the doors to make sure you are making a sound investment. A scientist invests much more than money in the ideas he accepts as true. What’s on the line is his ability to do useful work in the future, his worldview, and his inner equilibrium. So when it comes to selecting our truths --our cars and buildings-- we scientists are extremely picky. It is painstaking work, but as a reward we, more than lawyers or politicians, can feel truly safe in the cars we choose to drive and the buildings we decide to inhabit.

Wednesday, February 25, 2009

A Path to Greater Wonderment

What is a violin made of? Bits of wood and bits of sheep’s intestine. Does its construction demean and banalize the music? On the contrary, it exalts the music further.

--Julian Barnes

 

The story is told that Hans Bethe, the man who finally unveiled the mystery of nuclear fusion in stars, was out with his girlfriend, sitting by a cliff and gazing at the night sky.

       “How beautiful they are,” said the girl, at a loss for better words to describe the stars.

       “Yes,” Bethe replied, “and right now I am the only person in the world who knows why they shine.”

       You might be tempted to rebuke Bethe for spoiling a romantic moment, but before you do, allow me to plead his cause.

       Does scientific explanation spoil beauty? Consider what Bethe’s discovery led to. We now know that virtually all the atoms in the universe other than hydrogen --the simplest possible atom, with a proton for a nucleus and a single orbiting electron-- and helium were created in the interiors of stars that later exploded as supernovas. The stuff our planet is made of, and the stuff we ourselves are made of, was cooked in a stellar oven billions of years ago. So what Bethe and other astrophysicists have discovered is, in effect, a link between us and the cosmos. Those bright points of light that stud the night sky are even now brewing the substance of new life.

       As for poetry and romance, consider this: where Bethe’s companion saw little pinpoints of faintly colored brightness, her physicist friend saw mighty suns, their incandescent atmospheres roiling with nuclear fury, their colors revealing their temperature, age and composition.

       In other words, it is with nature as it is with good books and good movies --you take more from it the more you bring to it. It is simply not true that the scientist is insensitive to the beauty of nature because he can understand part of that beauty. On the contrary, science is a path to greater wonderment. The play of forces and quantum effects that allows the stars to shine and later enrich the universe with heavy elements is so subtle it makes you dream. The deductive chain linking the Big Bang with the present-day structure of the universe --though still riddled with gaps-- is nothing less than awe-inspiring. But only, I’m afraid, to the trained eye and mind --as it is with good movies.

       Consider the movie Shakespeare in Love. At the screening I attended with my wife, Magali, several years ago, there were people from all walks of life, young, old, and even a few little kids. The plot plays on many levels. On the very surface, if the name Shakespeare doesn’t ring a bell (hard to imagine but not impossible), it is a love story with some vague comedy to it.

       On the next level, you can laugh at the idea of an uninspired Shakespeare intending to write a crowd-pleaser titled Romeo and Ethel, the Pirate’s Daughter. You know he eventually wrote a tragedy, Romeo and Juliet, which is widely considered a masterpiece.

       Going deeper still, you even catch a few snippets of actual Shakespeare dialogue being uttered in the background as an oblivious Will goes by. Later he uses those very phrases in Romeo and Ethel. You may also appreciate the piquancy of the screenwriter’s ploy of having Christopher Marlowe, Shakespeare’s real-life rival, suggest a better plot for Romeo. This is as far as my Shakespearean experience (such as it is) will take me, but there are deeper layers to Shakespeare in Love. How much more delightful the film must be for the lucky ones who can understand it in full.

       On the opposite side of the spectrum, the little kids only laughed when a character said “boobies.”

       So don’t be too harsh on Hans Bethe, the physicist who helped explain the stars. His intent was not to spoil a moment of romance, but to share with his sweetheart the poetry of a great discovery.

Wednesday, February 18, 2009

Einstein Confesses his "Biggest Blunder"


In 1917 Albert Einstein began to explore the cosmological implications of his recently published general theory of relativity. General relativity is a theory of gravitation, the dominant force acting between stars and galaxies, so it stood to reason that it should have something to say about the structure of the universe.

Einstein wrote his equations, gave them a nudge and watched them soar. To his astonishment, they revealed that under general relativity the universe could not be static, but must be either expanding or contracting. There was, at the time, no observational evidence for this, and Einstein was forced to conclude, much to his chagrin, that there must be something wrong with the theory. He did not discard it. Instead, he modified the equations by adding an artificial term containing what he called the “cosmological constant.” The cosmological constant, he thought, would hold the universe in check.
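
In modern textbook notation (not Einstein’s original 1917 formulation), the modified field equations read as follows, the added term being the one containing the cosmological constant Λ:

$$
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
$$

A positive Λ acts as a repulsion on cosmic scales, which is what Einstein hoped would balance gravity and keep the universe static.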

He was wrong. The Russian mathematician Aleksandr Friedmann showed that the expanding and contracting solutions were perfectly sound --Einstein’s objection to them rested on an algebraic slip-- and the universe happily took wing again. Einstein was puzzled. The equations of general relativity were simple and elegant. They had the kind of mathematical beauty in which the insightful physicist discerns physical truth even before the equations are tested experimentally. But the astronomers he consulted told him that the stars wander more or less randomly through space, showing no concerted motion. Nature, it appeared to Einstein, had spoken, and against nature’s last word no physicist in his right mind --least of all Einstein-- ought to rise.

At the time many astronomers still believed that the stars in the Milky Way galaxy were more or less the whole universe. The spiral nebulae had not yet been recognized as galaxies in their own right. Many scientists thought they were solar systems in the process of formation, so when the astronomer Vesto Slipher of Lowell Observatory discovered that several spiral nebulae seemed to be receding from the earth at speeds much greater than the typical velocities of stars, nobody knew what to make of his data. He had in fact found the first observational indication that the universe is expanding.

But Slipher did not know that his spiral nebulae were faraway galaxies. Only after Edwin Hubble discovered Cepheid variable stars in the spiral nebulae were they identified as such. Moreover, the presence of Cepheid variables in the spirals allowed astronomers to determine their distances. In 1929, Hubble plotted the distances of some two dozen galaxies against their velocities of recession from the earth, as measured by the “redshift” in their spectra. If the velocities were random --if the observation that most spirals seemed to be receding from the earth were just a coincidence-- the graph would show a swarm of points scattered every which way. Instead, Hubble found a straight line.
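
In modern notation, a straight line through the origin in that plot says that recession velocity is proportional to distance --what we now call the Hubble law:

$$
v = H_0\, d
$$

Here v is a galaxy’s recession velocity, d its distance, and H_0, the slope of Hubble’s line, is now called the Hubble constant.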

Hubble was no theorist, and he was completely innocent of general relativity. He was wary of this “redshift-distance relation,” as he cautiously called it, and did not draw conclusions from his discovery. But his graph was a message in the handwriting of the powers that be. To all who had eyes it read: “Behold, the universe is expanding.”

Einstein later called the cosmological constant his “biggest blunder.” Yet that was not the end of this strange antigravity force: decades later it resurfaced in cosmology as a leading explanation for the observed acceleration of the universe’s expansion --what we now call dark energy.


Monday, February 16, 2009

Sticks and Shadows to Measure the Earth


The size of the Earth was determined for the first time some 2,200 years ago. At the time it was already known that our world is a sphere, but nobody had as yet come up with a way of accurately measuring its circumference.

That the earth is round was clear from several easily observable facts: when ships put out to sea, their hulls always sink below the horizon before their masts do; during a lunar eclipse, the Earth’s shadow on the moon is always round; and so on.

One day the mathematician Eratosthenes, head of the famed Alexandria library, learned about a curious fact while “leafing through” a papyrus book (presumably part of the library’s huge collection). Every year at noon on June 21, the columns of the temples in the distant city of Syene (present-day Aswan), in Egypt, ceased to cast a shadow. As Eratosthenes later verified, this was not the case in Alexandria, where vertical columns cast definite shadows at noon on the summer solstice.

Eratosthenes knew that on a round Earth columns in Alexandria and columns in Syene do not point in the same direction. He reasoned that at noon on June 21 the sun stood directly overhead in Syene, so that temple columns were parallel to its rays, while at the same time vertical columns in Alexandria (or vertical sticks, or vertical anything) were at an angle to the sun’s rays. Eratosthenes saw how he could use this fact to determine the Earth’s circumference.



He planted a vertical stick in the ground in Alexandria and waited for the summer solstice. He then computed the angle formed by his stick and the sun’s rays by measuring the length of the stick’s shadow and comparing it to the stick’s height, a method involving math taught in high school today. Eratosthenes hired someone to walk all the way from Alexandria to Syene (located almost exactly due south of Alexandria, near the first cataract of the Nile) and measure the distance between the two cities. It is not too hard to see that the angle formed by the sun’s rays and the stick in Alexandria must be the same as the angle that the vertical of Syene and the vertical of Alexandria would form if extended to the center of the earth. So the clever mathematician now had an angle and the length of the arc it traced on the surface of the earth. The angle turned out to be one fiftieth of 360 degrees, so the distance between Alexandria and Syene must equal one fiftieth of the Earth’s circumference. The figure Eratosthenes came up with is equivalent to some 40,000 kilometers --remarkably close to present-day measurements.
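
A minimal sketch of the arithmetic, using the commonly quoted modern equivalent of about 800 kilometers for the Alexandria-Syene distance (that figure is my assumption; the post only gives the final result):

```python
# Eratosthenes' proportion: the Alexandria-Syene arc is to the whole
# circumference as the shadow angle is to a full circle of 360 degrees.
shadow_angle_deg = 360 / 50          # one fiftieth of a circle = 7.2 degrees
alexandria_to_syene_km = 800         # assumed modern equivalent of the measured distance

circumference_km = alexandria_to_syene_km * 360 / shadow_angle_deg
print(f"Earth's circumference: about {circumference_km:,.0f} km")   # ~40,000 km
```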





Many centuries later a Genoese seaman by the name of Cristoforo Colombo was trying to prove that the Earth was small enough for him to reach China by sailing westward from Europe in a reasonably short time. His critics, who were probably aware of Eratosthenes’s figure, claimed that the ocean separating Europe from Asia to the west was too vast --that Columbus’s proposed voyage could not be done. And they were on to something. In his eagerness to prove himself right, the studious future Admiral of the Ocean Sea had rejected all ancient measurements that were incompatible with his claim, including Eratosthenes’s. Had our continent not been in the way, Columbus would have sailed from Palos into oblivion.