From the acclaimed author of The Pencil and To Engineer Is Human, The Essential Engineer is an eye-opening exploration of the ways in which science and engineering must work together to address our world's most pressing issues, from dealing with climate change and preventing natural disasters to developing efficient automobiles and searching for renewable energy sources.
This book addresses an important point: while the scientist may identify problems, it falls to the engineer to solve them. It is the inherent practicality of engineering, which takes into account structural, economic, environmental, and other factors that science often does not consider, that makes engineering vital to answering our most urgent concerns.
Henry Petroski takes us inside the research, development, and debates surrounding the most critical challenges of our time, exploring the feasibility of biofuels, the progress of battery-operated cars, and the question of nuclear power. The book gives us an in-depth investigation of the various options for renewable energy (among them solar, wind, tidal, and ethanol), explaining the benefits and risks of each.
The book also raises important questions: Will windmills soon populate our landscape the way they did in previous centuries? Will synthetic trees, said to be more efficient at absorbing harmful carbon dioxide than real trees, soon dot our prairies? Will we construct a "sunshade" in outer space to protect ourselves from dangerous rays?
In many cases, the technology already exists. What's needed is not so much invention as engineering. Just as the great achievements of centuries past (the steamship, the airplane, the moon landing) once seemed beyond reach, the solutions to the twenty-first century's problems await only a similar coordination of science and engineering. Eloquently reasoned and written, The Essential Engineer identifies and illuminates these problems and, above all, sets out a course for putting ideas into action.
From the Hardcover edition.
Table of Contents
- Ubiquitous Risk
- Engineering Is Rocket Science
- Doctors and Dilberts
- Which Comes First?
- Einstein the Inventor
- Speed Bumps
- Research and Development
- Development and Research
- Alternative Energies
- Complex Systems
- Two Cultures
- Uncertain Science and Engineering
- Great Achievements and Grand Challenges
- Prizing Engineering
- Illustration Credits
Our lives and those of our children and grandchildren are constantly at risk. Hardly a day passes, it seems, when there is not a story on television or in the newspaper about some new threat to our health and safety. If it is not toys decorated with lead-based paint, then it is drugs—not just pharmaceuticals but something as commonplace as toothpaste—containing adulterated ingredients, or even milk contaminated with industrial chemicals that found its way into candy sold around the globe.
Risk and reassurance are two key considerations of the activities of science, engineering, invention, and technology—collectively often referred to simply as “science” or “science and technology.” Whatever they are called, they play a critical role in modern civilization, being essential for the advancement of society and the protection of our quality of life. It is these human disciplines associated with discovery and design that help separate the good from the dangerous on the farm and in the factory, at home and at the office, and on battlefields and frontiers. While science and technology can be misused and become the source of ruin, we would be at even greater risk from tainted products and contagious diseases were it not for the benevolent use of what are among the achievements that make us most distinctly human. If science and technology are two-edged swords, they are also the essential weapons in detecting and managing everyday risk.
The bad milk that caused so much consternation a couple of years ago originated in China, which is among the largest exporters of food and food ingredients in the world. In order to increase quantities and thus realize greater profit, unscrupulous participants in the food supply chain misused chemical engineering to water down and adulterate milk. However, diluted milk, being lower in protein, can easily be detected by standardized testing employing well-established technology. But by adding inexpensive melamine, a chemical rich in nitrogen that is used in producing fertilizer and plastics, the adulterated milk could be made to register a higher protein level. Some of the tainted milk found its way into baby formula, causing tens of thousands of children to become ill, with at least six infants dying. This happened because melamine does not dissolve easily in the body and in higher concentrations can produce kidney stones and lead to kidney failure. The widespread presence of melamine in Chinese food products, including cookies and yogurt, led to worldwide recalls. Melamine had also been used as a cheap filler in pet food, causing many cats and dogs to become seriously ill. The chemical was additionally suspected to have been used in other animal feed, which caused chickens to produce melamine-tainted eggs. China promised to crack down on such practices—going so far as to sentence to death some of those responsible for the criminal activity—but the incident prompted a nagging skepticism that soon there could be some other tainted import that we would have to worry about.
The Chinese milk scandal is a striking example of the use and misuse of science and technology and of the tragic consequences that can result. In themselves, science and technology are neutral tools that help us understand the world and allow us to work with its resources. People, however, are not necessarily neutral participants, and they can use their scientific understanding and technical prowess for good or ill. It may be that those who added melamine to diluted milk thought they were only being clever exploiters of chemistry. The unfortunate consequences of their actions were, of course, beyond mere venality, and ironically, the very same science and technology that served as tools of deception were also used to uncover the plot. Like risk itself, science and technology and their effects are ubiquitous.
It is not just potentially harmful products from abroad that can give us pause. Not long ago E. coli–contaminated spinach from California proved to be the culprit in the deaths of three people and the illnesses of hundreds of Americans who trusted domestically grown and harvested produce. A few years later, salmonella-tainted tomatoes were believed initially to be responsible for causing hundreds of people in dozens of states to become ill. For a while, the root of the problem, which spread through forty-one states and affected more than a thousand people, was believed to be in Florida, or maybe Mexico. When no source was found in either of those agricultural locations, however, the public was told that perhaps tomatoes were not the source after all. Maybe it was fresh jalapeños—or something else. Six weeks after advising people not to eat tomatoes, the U.S. Food and Drug Administration lifted the advisory without reaching any definite conclusion about the origin of the salmonella. It was not that science and technology were inadequate to the task. It was that there were no reliable data trails pointing to the various hands through which the bad food had passed on its way to the supermarket. When the guilty bacterium was finally found in a Texas distribution plant, its ultimate origin could not be traced. Unfortunately, such elastic and inconclusive warnings inure us to risk.
Not long after the tomato/jalapeño incident, peanut products containing salmonella were traced to a processing plant in Georgia. In the years preceding the discovery, the plant had been cited repeatedly by the state department of agriculture for health violations, ranging from unclean food preparation surfaces to dirty and mildewed walls and ceilings. On numerous occasions, when the company’s own testing detected salmonella in its products, they were retested with negative results and the products were shipped. It was only after a salmonella outbreak was traced to peanut butter from the plant that it was shut down by the Food and Drug Administration and two years’ worth of peanut butter products were recalled—after the company was given an opportunity to approve the wording of the recall statement. A selective interpretation of scientific test results and a casual enforcement of technical regulations can imperil millions of people. Such incidents threaten the reputation that science and technology once held for objectivity and are likely to bring increased calls for tightened regulation.
In the wake of the salmonella scares, the Food and Drug Administration approved the use of radiation on fresh vegetables like lettuce and spinach to rid them of bacteria. An editorial in The New York Times praised the move, noting that astronauts have long eaten irradiated meat, and that other treated foods, like poultry and shellfish, had produced no detectable adverse effects on those consumers who had tried them. Of course, there remain a great number of people who cringe at the idea of eating anything that has been exposed to radiation, and it is likely going to be a long time before the practice can be expected to become the norm. Nevertheless, it is such technological advances, which ultimately owe their existence to science and engineering research and development, that can bring an overall reduction in risks of all kinds, including those involved in activities as common and essential as eating.
In modern times, systems of commercial competitiveness and government regulation have provided a good measure of checks and balances against undue risk, but the failings of human nature can interfere with the proper functioning of those protective social structures. Science and engineering can be called upon to develop new means of defining safe limits of contaminants and toxins and can devise new instruments and methods for detecting unsafe products, but the ultimate reduction in risk from everyday things is more a matter of vigilance and enforcement than of technology. It is imperative that positive results for salmonella and other contaminants be taken seriously and treated responsibly by the private food industry. If there continues to be life- threatening disregard for consumer health and safety, it is likely that increased government oversight will be imposed.
Sometimes new technology—even that encouraged by law—brings with it new risks, and we are forced to confront the unthought-of consequences of a seemingly good idea. In recent years, the increased use of crops like corn in the manufacture of biofuels intended to ease our dependence on foreign oil pinched the food supply and caused prices to rise. To avoid this problem, nonfood crops have increasingly been proposed for making second-generation green fuels. But biologists have warned that certain reeds and wild grasses known to botanists as “invasive species” and to gardeners as “weeds” would have a high likelihood of overtaking nearby fields, presenting serious threats to the ecology and economy of a region. Investors in the fast-growing worldwide biofuels industry naturally reject such doomsday scenarios, but the risk is a real one. The European Union has been especially bullish on biofuels, with plans to use them for 10 percent of the fuel needed for transportation by 2020. However, it has become increasingly clear that agricultural efforts undertaken to help meet that goal were leading to deforestation in remote regions, thereby contributing to climate change and affecting food prices worldwide. In fact, taking into account production and transportation costs, biofuels may do more harm to the global environment than fossil fuels. New technologies can certainly harbor even newer surprises.
Another potentially risky new technology is the much-touted nanotechnology, which concerns itself with substances and structures whose size is on the scale of atoms and molecules. Nanotubes, already put to use in something so familiar as a tennis racket, are essentially ultra-tiny rolled-up sheets of carbon that are employed in the production of materials much stronger and lighter than steel. Unfortunately, the tubes are shaped like microscopic needles, a property that has caused scientists to speculate that they might present the same health hazard as asbestos, whose fibers have a similar shape. Since nanotubes date only from the early 1990s, their risk as possible carcinogens is not yet fully known.
Nanomaterials of all kinds are increasingly being used in a wide variety of consumer products. Nanoparticles of silver, which are known to be very effective in killing bacteria, have been incorporated into clothing fabrics as a means of preventing the buildup of bacteria that produce undesirable odors in such articles as socks. This obvious advantage may prove to come at a price, however, for as the clothing is worn, washed, and disposed of, the nanosilver is leached out and released to the environment, where it can accumulate and do uncertain harm. For example, washed-out silver particles might destroy bacteria that are an integral part of the filtering process in municipal wastewater systems. A British royal commission on environmental pollution warned that “the potential benefits of nanomaterials meant that the rise in their use had far outstripped the knowledge of the risks they might pose.”
We are not home free even in hospitals. Epidemiologists have estimated that one in twenty-two patients contracts a hospital infection. And, according to an Institute of Medicine report published in 2000, medical error in American hospitals has been blamed for 44,000 to 98,000 unnecessary deaths per year, making inadvertent deaths due to “preventable hospital error” the number eight cause of death annually—above fatalities due to motor vehicle accidents, breast cancer, and AIDS. We must risk our life in trying to save it.
Yet there is little outrage. It is not necessary that we accept an inordinate level of risk as an inevitable by-product of technology. For example, the odds of being killed on an airliner are as long as one in ten million; it is not uncommon for an entire year to pass without a single fatal commercial airplane crash in the United States. This outstanding record has been accomplished by taking seriously rules and regulations, procedures and processes—sometimes to the inconvenience and anger of impatient passengers. If a plane has a mechanical problem, it does not take off until the problem has been diagnosed and resolved. There was considerable disruption to air traffic when American Airlines was forced to cancel hundreds of flights following a disagreement with the Federal Aviation Administration over the safety inspection of essential cables bundled together in wheel wells. Many resigned fliers sighed, “Better safe than sorry.”
When a commercial airliner does crash and passengers are killed, it is instant news, in part because it is as rare an occurrence as a man-bites-dog story. But how often do we hear on the national news of a hospital patient dying because of an improperly administered drug or an infection contracted in the course of routine medical treatment—not to mention a misdiagnosis or an overdose of improperly prescribed pills? Unless the patient is a celebrity or a well-known politician, such incidents remain the private tragedies of family and close friends. If the medical and its ancillary professions had in place—and assiduously followed—rules, regulations, and procedures as stringent as those of the aircraft industry, it is likely that the rate of deaths due to medical errors would be greatly reduced.
It seems that the more common the occurrence of something, the more we tend to accept it as part of the unavoidable risk of living. Even perceived risk can all but immobilize the perceiver. While there are certainly people who fear going into the hospital lest they never leave it alive, only the unusual individual will not seek supervised medical treatment when it is needed. Anecdotally, at least, there also seem to be many people who avoid flying because of their fear of never leaving the plane alive. But when a travel emergency arises, most will relent and go to the airport. Risk numbers may support a fear of hospitals, but they simply do not support a fear of flying. Irrational fears can be nonetheless compelling.
In fact, the more remote the chance of something happening, the more we also seem to fear it. It is as if the sheer unfamiliarity of the thing—perhaps because its unfamiliarity has been magnified by our very avoidance of it, or perhaps because it is something theretofore not experienced by mankind—notches up the perceived risk to emphasize the need for precaution. A global catastrophe was feared when the first astronauts to land on the Moon would return to Earth. What if they brought back with them some deadly lunar microorganism that could cause the entire population of the planet to fall fatally ill? The risk was considered “extremely remote” but real enough to quarantine the returning astronauts until they were deemed not to be contagious. Another global catastrophe was feared when the first hydrogen bomb was tested, some scientists expressing genuine concern that there was an extremely small but real possibility that the explosion would ignite the atmosphere and destroy all life on the planet. In both cases, the risk could have been eliminated entirely by not going ahead with the new technology, but compelling geopolitical motives prevailed.
More recently, some physicists expressed concern about the Large Hadron Collider, the enormous international particle accelerator built and operated by CERN, the European Organization for Nuclear Research. Located near Geneva, and straddling the border between Switzerland and France, the collider has been described variously as “the biggest machine ever built,” “the most powerful atom-smasher,” and “the largest scientific experiment in history.” The purpose of the mostly underground machine is to send protons, which contain collections of elementary particles known as hadrons, into various targets in the hope of observing never-before-seen subatomic particles or uncovering never-before-conceived aspects of the universe. The fear was that using the collider, which is designed to operate at unprecedented energy levels, could “spawn a black hole that could swallow Earth” or trigger some other cataclysmic event. An unsuccessful lawsuit to block the start-up of the device alleged that there was “a significant risk” involved and that the “operation of the Collider may have unintended consequences which could ultimately result in the destruction of our planet.” The director general of CERN countered with a news release declaring that the machine was safe and any suggestion of risk was “pure fiction.” Scientists involved with the project were determined to go ahead with it, even though some of them received death threats. In the end, the collider forces prevailed, and test operations began at low energy levels in the late summer of 2008.
It is necessary to begin slowly with large systems like the collider, for such an enormously complicated machine brings with it a multitude of opportunities for predictable surprises with unpredictable consequences. Just thirty-six hours after it was started up, during which time many beams of protons were successfully sent through the tubes of the collider, it had to be shut down because it was believed that one of its electrical transformers failed. It was replaced and test operations began again, but something was still not right. It turned out that there was a leak in the liquid helium cooling system; the cause was attributed to a single poorly soldered connection—just one of ten thousand made during construction—between two of the machine’s fifty-seven magnets, which produced a hot spot that led to the breach. The collider operates in a supercooled state—near absolute zero temperature—which meant that it had to be warmed up before the leak could be repaired. The entire process of warming up, repairing the leak, and then cooling down to operating temperature again was at first expected to take at least a couple of months. That proved to be an optimistic estimate, for most of the magnets were damaged by a buildup of pressure associated with the helium leak and had to be replaced. The machine was in fact shut down for about a year. Such are the risks associated with complex technology, but scientists and engineers expect such setbacks and tend to take them in stride. In time, after all of the bugs had been ironed out, the collider was expected to operate as designed—and with no fatal consequences to scientists, engineers, or planet Earth anticipated.
But rare cataclysmic events do occur, and they have been described as “the most extreme examples of risk.” It is believed that about four billion years ago an object the size of Mars struck the Earth and disgorged material that became the Moon. Scientists have also hypothesized that billions of years ago a meteor the size of Pluto—packing energy equivalent to as many as 150 trillion megatons of TNT and producing the solar system’s largest crater—struck Mars and thereby caused that planet’s unbalanced shape. This asymmetry was discovered in the 1970s by observations made from Viking orbiters, which detected that there was a two-mile difference in altitude between the red planet’s upper third and bottom two-thirds. Other scientists favor the hypothesis that internal forces are responsible for the lopsidedness.
In 1994, our solar system was the scene of an unusually spectacular event, one described as “recorded history’s biggest show of cosmic violence.” Parts of Comet Shoemaker-Levy 9 rained down on Jupiter, producing enormous (Earth-sized) fireballs that “outshone the planet” and were easily visible through a small telescope. It has been estimated that the energy released in the collision exceeded that of all the nuclear weapons in the world. The question soon arose, What if a meteor, comet, or asteroid were to be on a collision course with our own planet? And what if we saw it coming? Could anything be done about it?
No matter how low the probability, it was obvious that the consequences of such an event could be devastating. Legitimate concerns led to a congressionally mandated study by the National Aeronautics and Space Administration, which was to assess the dangers of such a collision and how it might be anticipated and avoided. NASA outlined a proposal involving a worldwide network of telescopes through which the skies could be watched to provide an early warning of anything on a collision course with Earth. The focus should clearly be on those so-called near-Earth objects that have the potential for doing the most harm. According to NASA, a worst-case scenario would occur if a large comet or asteroid hit with the energy of thousands of nuclear warheads exploding at the same time in the same location. This would enshroud the planet in dust, blocking out sunlight and disrupting the climate to such an extent that it would be the end of civilization as we know it. Such a collision is believed to have occurred 65 million years ago and led to the extinction of the dinosaurs. However, should such an event be anticipated far enough in advance, it might be possible for scientists and engineers to track the object and devise an interception plan, whereby Earth could be saved.
Still, skeptics abounded. There were those who thought that giving serious attention to a “wildly remote” possibility was “laughably paranoid.” Others doubted that a monitoring plan could gain sufficient political support to get into the federal budget. But with the NASA report fresh in its mind when the Shoemaker-Levy comet encountered Jupiter, the House Science Committee voted to charge the space agency with identifying and cataloguing “the orbital characteristics of all comets and asteroids greater than one kilometer in diameter in orbit around the sun that cross the orbit of the Earth.”
But even a relatively small asteroid could do significant damage if it came close enough to Earth before exploding in the atmosphere. This is what is believed to have happened in Siberia a century ago. The so-called Tunguska Incident is known to have occurred through eyewitness accounts and physical evidence. A person living about forty miles south of the location of the occurrence recalled seeing on June 30, 1908, how “the sky split in two and fire appeared high and wide over the forest.” His body became unbearably hot on the side facing north. Then there was the sound of a strong thump, and he was thrown backward. The Earth shook, wind blew, and windows were shattered. People from as far as hundreds of miles away reported hearing the blast. Seismometers recorded the equivalent of a Richter 5 earthquake. An estimated eighty million trees were knocked down over an area of eight hundred square miles. In the 1920s, the Russian mineralogist Leonid Alekseyevich Kulik led research expeditions to the extensive site and discovered that the felled trees radiated from a central spot, but he found no fragments of a meteorite in the vicinity.
The scientific consensus appears to be that an asteroid measuring maybe 150 feet across—though some scientists think it could have been much smaller—exploded about five miles above the surface of the Earth. The explosion carried the force of about fifteen megatons of TNT, which would make it a thousand times more powerful than the atomic bomb dropped on Hiroshima. Some scientists and politicians used the one-hundredth anniversary of the Tunguska event to call attention to their belief that not enough was being done to defend Earth against asteroids and comets. NASA now maintains its Near Earth Object Program office at the Jet Propulsion Laboratory to identify potential threats, but, according to one scientist, “the greatest danger does not come from the objects we know about but from the ones we haven’t identified.”
One evening in the early fall of 2008, someone watching the sky from an observatory near Tucson, Arizona, noticed an incoming object. By the next morning, three other skywatchers—located in California, Massachusetts, and Italy—had confirmed that an asteroid provisionally designated 2008 TC3 (indicating the year of its discovery and a coded reference to when in that year the discovery took place) was speeding toward our planet. The collective information about its trajectory enabled astronomers to compute and thus predict that the following day the object would collide with Earth’s atmosphere in the sky above a tiny Sudan village. The impact occurred at the predicted place and within minutes of the predicted time, which NASA had publicized about seven hours before the actual collision. According to the program manager, this represented “the first time we were able to discover and predict an impact before the event,” even though such an impact takes place about once every three months—making it far from a rare occurrence. The atmospheric impact energy of 2008 TC3 was estimated to be the equivalent of one to two thousand tons of TNT, from which it could be inferred that the asteroid had a diameter of about ten feet. Fortunately, this rock from space disintegrated when it hit Earth’s atmosphere and any fragments that may have fallen to the ground did no harm.
As of mid-2009, the NASA center had on its list more than 6,000 objects that might one day strike Earth, with about 750 of them being large enough to do considerable damage. The critical diameter appears to be under a mile, but known incoming objects just one-sixth of critical size would prompt a warning from NASA to evacuate the endangered area. Such a warning could be issued several days beforehand. In the case of the Sudan event, however, asteroid 2008 TC3 was simply “too small and dark to be discovered until it was practically upon Earth,” and so it was not on NASA’s watch list. Those on the list, especially the larger objects, will allow for plenty of warning—and possibly even the opportunity for engineers to do something about them. And there are protective measures that can be taken.
In the early stages of the asteroid-tracking effort, which is referred to as Spaceguard, the science-fiction author, inventor, and futurist Arthur C. Clarke supported the concept. In commentary in The New York Times, he described how Earth-threatening bodies might be deflected from their target. Referring to some of his own novels, in which such tasks were undertaken, Clarke outlined three possible approaches. The first he termed the “brute force approach: nuke the beast.” The equivalent of a billion tons of high explosives could split the incoming rock into fragments, some of which might still do damage to places on Earth, but not to the cataclysmic extent that the whole body would have.
Clarke’s second means was to send up astronauts to mount thruster rockets on the asteroid. Even a slight nudge from these thrusters, exerted over a sufficiently long period of time, would change the object’s trajectory just enough so that the cumulative effect would be to miss the Earth entirely. Since it takes the Earth about six minutes to move a distance equal to its own diameter, slowing down or speeding up a threatening asteroid’s arrival time by just six minutes could make the difference between whether it strikes or misses Earth.
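The timing claim above is easy to sanity-check. A minimal back-of-the-envelope sketch, using round mean values for Earth's diameter (about 12,742 km) and orbital speed (about 29.8 km/s), gives a crossing time on the order of six to seven minutes:

```python
# Rough check: how long does Earth take to travel a distance
# equal to its own diameter along its orbit around the Sun?
# (Round mean values; the true speed varies over the elliptical orbit.)
EARTH_DIAMETER_KM = 12_742
ORBITAL_SPEED_KM_S = 29.78

seconds = EARTH_DIAMETER_KM / ORBITAL_SPEED_KM_S
minutes = seconds / 60
print(f"Earth moves one diameter in roughly {minutes:.1f} minutes")

# Shifting an asteroid's arrival time by this much, via a long,
# gentle push, turns a direct hit into a clean miss.
```

With these round figures the result comes out near seven minutes, the same order as the "about six minutes" quoted above; the exact value depends on where Earth is in its slightly elliptical orbit.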
Finally, Clarke described an “even more elegant solution” involving the mounting of a metal foil mirror on the foreign body, thereby employing the tiny but persistent pressure of sunlight to push the body into a deflected orbit. Because of the small forces involved, however, such a scheme would need years or even decades of continuous action to make a difference, but, given enough lead time, it could work. Clarke was engaging not in the observational and predictive thinking of scientists but in the conceptual and constructive thinking of engineers. This is a key point, and it is a topic to which we shall return.
Not long after the incident of the Jupiter fireworks, a Harvard astronomer announced that an asteroid was on a collision course with Earth, and an alarmingly close encounter would occur thirty years hence. He alerted the world that the recently discovered asteroid, designated 1997 XF11, should be expected to come within thirty thousand miles of our planet at about 1:30 p.m. on October 26, 2028. Because thirty thousand miles is less than four times the diameter of Earth, and given the uncertainty of such a long-range prediction, there appeared to be a reasonable chance that there might actually be a collision. However, within a week of the announcement, another astronomer—at the Jet Propulsion Laboratory—who had used additional data relating to the asteroid to recalculate its orbit, found that it would “come no closer than 600,000 miles and had no chance of hitting the planet.”
It is not uncommon for different scientists looking at the same phenomenon to reach different conclusions; this is what makes it difficult for laypersons to sort out the truth and risk relating to everything from medical procedures to global climate change. In the immediate wake of the contradictory predictions of the asteroid’s encounter with Earth, a group of fifteen astronomers formed an expert committee that could estimate what risk to Earth would be posed by a threatening asteroid.
The following year, at a meeting of the International Astronomical Union, astronomers adopted the Torino Impact Hazard Scale, which takes into account the energy involved as well as the probability that a particular asteroid will strike Earth. The Torino scale, ranging from 0 for objects that will miss Earth to 10 for those capable of causing global destruction, thus takes into account both risk magnitude and consequence, thereby enabling a more meaningful comparison of distinct events. The asteroid that killed off the dinosaurs would have rated a 10, but no recorded object would have earned more than a 1. The Tunguska event, known largely through anecdotal and circumstantial evidence observed after the fact, is not considered to have been “recorded” in the astronomical sense.
The New York Times, editorializing about the new scale, observed that asteroid 1997 XF11, whose predicted encounter with Earth caused so much embarrassment to the astronomical community, would have dropped considerably in its rating over the few days it took to recalculate its orbit. Of course, computational errors can be off in both directions, meaning that there is risk even in our reliance on quantifications of risk.
Just as predicting landfall for a hurricane moving over the Gulf of Mexico requires constant updating as more information and data become available, so pinpointing where a body hurtling through space will strike Earth necessarily changes with time. Scientists can give it their best shot to predict risk within a certain margin of error, but the ultimate answer to the question of where something will strike can be known with certainty only at the last moment. It thus involves a judgment call to decide when to stop hoping that the scientific tracking and predicting will tell us we are safe, and when to begin taking steps to alter the course of nature. This is where science hands the problem over to engineering. Science is about knowing; engineering about doing. Or, as I once heard in a lecture on climate change, “scientists warn, engineers fix.” But it is not always easy to distinguish science from engineering or scientists from engineers, for there can be considerable overlap in their aims and methods. This book strives to clarify the often hazy distinction between science and engineering, between scientists and engineers, thereby making clearer what they can and cannot do about ameliorating global risks that have been termed “planetary emergencies”—such as global climate change—that appear to threaten us and our world. Understanding the distinctions better enables more informed judgments and decisions relating to public policy issues such as those concerning management of risk and the allocation of resources for research and development.