
Q is for Quantum: An Encyclopedia of Particle Physics

by John Gribbin
  • ISBN13:

    9780684863153

  • ISBN10:

    0684863154

  • Format: Paperback
  • Copyright: 2000-02-22
  • Publisher: Touchstone


Summary

THE QUANTUM WORLD FROM A TO Z

Here in one volume, the award-winning science writer and physicist John Gribbin has provided everything you need to know about the quantum world -- the place where most of the greatest scientific advances of the twentieth century have been made. This exceptional A to Z reference begins with a thorough introduction setting out the current state of knowledge in particle physics. Throughout, Gribbin includes articles on the structure of particles and their interactions, accounts of the theoretical breakthroughs in quantum mechanics and their practical applications, and entertaining biographies of the scientists who have blazed the trail of discovery. In a special section, "Timelines," key dates in our quest to understand the quantum world are mapped out alongside landmarks in world history and the history of science. An encyclopedia of the fundamental science of the future, Q is for Quantum is an essential companion for anyone interested in particle physics.

Author Biography

John Gribbin is the author of many bestselling books, including In Search of Schrödinger's Cat, Schrödinger's Kittens, and Companion to the Cosmos. He has a Ph.D. in astrophysics from Cambridge and is currently Visiting Fellow in Astronomy at the University of Sussex in England. He lives in Sussex.

Table of Contents


Introduction: The quest for the quantum

A-Z Dictionary

Bibliography

Timelines

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.

Excerpts

Introduction: The quest for the quantum

This quick overview of a hundred years of scientific investigation of the microworld is intended to put the detail of the main section of this book in an historical perspective. All the technical terms are fully explained in the alphabetical section.

The quantum world is the world of the very small -- the microworld. Although, as we shall see, quantum effects can be important for objects as large as molecules, the real quantum domain is in the subatomic world of particle physics. The first subatomic particle, the electron, was only identified, by J. J. Thomson, in 1897, exactly 100 years before this book, summing up our present understanding of the microworld, was completed. But it isn't just the neatness of this anniversary that makes this a good time to take stock of the quantum world; particle physicists have now developed an understanding of what things are made of, and how those things interact with one another, that is more complete and satisfying than at any time since Thomson's discovery changed the way people thought about the microworld. The standard model of particle physics, based upon the rules of quantum mechanics, tells us how the world is built up from the fundamental building blocks of quarks and leptons, held together by the exchange of particles called gluons and vector bosons.

But don't imagine that even the physicists believe that the standard model is the last word. After all, it doesn't include gravity. The structure of theoretical physics in the twentieth century was built on two great theories, the general theory of relativity (which describes gravity and the Universe at large) and quantum mechanics (which describes the microworld). Unifying those two great theories into one package, a theory of everything, is the Holy Grail that physicists seek as we enter the 21st century. Experiments that probe the accuracy of the standard model to greater and greater precision are being carried out using particle accelerators like those at CERN, in Geneva, and Fermilab, in Chicago. From time to time, hints that the standard theory is not the whole story emerge. This gives the opportunity for newspapers to run sensational headlines proclaiming that physics is in turmoil; in fact, these hints of something beyond the standard model are welcomed by the physicists, who are only too aware that their theory, beautiful though it is, is not the last word. Unfortunately, as yet none of those hints of what may lie beyond the standard model has stood up to further investigation. As of the spring of 1997, the standard model is still the best game in town.

But whatever lies beyond the standard model, it will still be based upon the rules of quantum physics. Just as the general theory of relativity includes the Newtonian version of gravity within itself as a special case, so that Newton's theory is still a useful and accurate description of how things work in many applications (such as calculating the trajectory of a space probe being sent to Jupiter), so any improved theory of the microworld must include the quantum theory within itself. Apples didn't start falling upwards when Albert Einstein came up with an improved theory of gravity; and no improved theory of physics will ever take away the weirdness of the quantum world.

By the standards of everyday common sense, the quantum world is very weird indeed. One of the key examples is the phenomenon of wave-particle duality. J. J. Thomson opened up the microworld to investigation when he found that the electron is a particle; three decades later, his son George proved that electrons are waves. Both of them were right (and they each won a Nobel Prize for their work). An electron is a particle, and it is a wave. Or rather, it is neither a particle nor a wave, but a quantum entity that will respond to one sort of experiment by behaving like a particle, and to another set of experiments by behaving like a wave. The same is true of light -- it can behave either like a stream of particles (photons) or like a wave, depending on the circumstances. Indeed, it is, in principle, true of everything, although the duality does not show up with noticeable strength in the everyday world (which, of course, is why we do not regard the consequences of wave-particle duality as common sense).

All of this is related to the phenomenon of quantum uncertainty. A quantum entity, such as an electron or a photon, does not have a well-determined set of properties, in the way that a billiard ball rolling across the table has a precisely determined velocity and a precisely determined position at any instant. The photon and the electron (and other denizens of the microworld) do not know, and cannot know, both precisely where they are and precisely where they are going. It may seem an esoteric and bizarre idea, of no great practical consequence in the everyday world. But it is this quantum uncertainty that allows hydrogen nuclei to fuse together and generate heat inside the Sun, so without it we would not be here to wonder at such things (quantum uncertainty is also important in the process of radioactive decay, for substances such as uranium-235).
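
For readers who want the quantitative statement behind this idea (it is not given in the excerpt, but it is standard textbook material), Heisenberg's uncertainty relation bounds the product of the uncertainties in position and momentum:

\[ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}, \qquad \hbar = \frac{h}{2\pi} \approx 1.05 \times 10^{-34}\ \mathrm{J\,s} \]

Because \(\hbar\) is so tiny, the trade-off is invisible for billiard balls but dominant for electrons and protons; it is this unavoidable spread in momentum and position that lets protons tunnel through their mutual electrical repulsion and fuse inside the Sun.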

This highlights an important point about quantum physics. It is not just some exotic theory that academics in their ivory towers study as a kind of intellectual exercise, of no relevance to everyday life. You need quantum physics in order to calculate how to make an atom bomb, or a nuclear power station, that works properly -- which is certainly relevant to the modern world. And you also need quantum physics in order to design much more domestic items of equipment, such as lasers. Not everybody immediately thinks of a laser as a piece of domestic equipment; but remember that a laser is at the heart of any CD player, reading the information stored on the disc itself; and the laser's close cousin, the maser, is used in amplifying faint signals, including those from communications satellites that feed TV into your home.

Where does the quantum physics come in? Because lasers operate on a principle called stimulated emission, a purely quantum process, whose statistical principles were first spelled out by Albert Einstein as long ago as 1916. If an atom has absorbed energy in some way, so that it is in what is called an excited state, it can be triggered into releasing a pulse of electromagnetic energy (a photon) at a precisely determined wavelength (a wavelength that is determined by the quantum rules) by giving it a suitable nudge. A suitable nudge happens when a photon with exactly the right wavelength (the same wavelength as the photon that the excited atom is primed to emit) passes by. So, in a process rather like the chain reaction of atomic fission that goes on in a nuclear bomb, if a whole array of atoms has been excited in the right way, a single photon passing through the array (perhaps in a ruby crystal) can trigger all of them to emit electromagnetic radiation (light) in a pulse in which all of the waves are marching precisely in step with one another. Because all of the waves go up together and go down together, this produces a powerful beam of very pure electromagnetic radiation (that is, a very pure colour).

Quantum physics is also important in the design and operation of anything which contains a semiconductor, including computer chips -- not just the computer chips in your home computer, but the ones in your TV, hi-fi, washing machine and car. Semiconductors are materials with conducting properties that are intermediate between those of insulators (in which the electrons are tightly bound to their respective atomic nuclei) and conductors (in which some electrons are able to roam more or less freely through the material). In a semiconductor, some electrons are only just attached to their atoms, and can be made to hop from one atom to the next under the right circumstances. The way the hopping takes place, and the behaviour of electrons in general, depends on a certain set of quantum rules known as Fermi-Dirac statistics (the behaviour of photons, in lasers and elsewhere, depends on another set of quantum rules, Bose-Einstein statistics).

After semiconductors, it is logical to mention superconductors -- materials in which electricity flows without any resistance at all. Superconductors are beginning to have practical applications (including in computing), and once again the reason why they conduct electricity the way they do is explained in terms of quantum physics -- in this case, because under the right circumstances in some materials electrons stop obeying Fermi-Dirac statistics, and start obeying Bose-Einstein statistics, behaving like photons.
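
As a rough guide to the two sets of quantum rules mentioned above (these formulae are standard statistical mechanics, not part of the excerpt), the average number of particles found in a state of energy \(E\) at temperature \(T\) is

\[ \bar{n}_{\mathrm{FD}}(E) = \frac{1}{e^{(E-\mu)/k_B T} + 1}, \qquad \bar{n}_{\mathrm{BE}}(E) = \frac{1}{e^{(E-\mu)/k_B T} - 1} \]

The "+1" keeps the Fermi-Dirac occupation below one (no two electrons may share a state, which is what gives semiconductors their energy bands), while the "-1" lets the Bose-Einstein occupation grow without limit, which is why paired electrons in a superconductor, like photons in a laser, can pile into a single quantum state.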

Electrons, of course, are found in the outer parts of atoms, and form the interface between different atoms in molecules. The behaviour of electrons in atoms and molecules is entirely described by quantum physics; and since the interactions between atoms and molecules are the raw material of chemistry, this means that chemistry is described by quantum physics. And not just the kind of schoolboy chemistry used to make impressive smells and explosive interactions. Life itself is based upon complex chemical interactions, most notably involving the archetypal molecule of life, DNA. At the very heart of the process of life lies the ability of a DNA molecule, the famous double-stranded helix, to 'unzip' itself and make two copies of the original double helix by building up a new partner for each strand of the original molecules, using each unzipped single molecule as a template. The links that are used in this process to hold the strands together most of the time, but allow them to unzip in this way when it is appropriate, are a kind of chemical bond, known as the hydrogen bond. In a hydrogen bond, a single proton (the nucleus of a hydrogen atom) is shared between two atoms (or between two molecules), forming a link between them. The way fundamental life processes operate can only be explained if allowance is made for quantum processes at work in hydrogen-bonded systems.

As well as the importance of quantum physics in providing an understanding of the chemistry of life, an understanding of quantum chemistry is an integral part of the recent successes that have been achieved in the field of genetic engineering. In order to make progress in taking genes apart, adding bits of new genetic material and putting them back together again, you have to understand how and why atoms join together in certain sequences but not in others, why certain chemical bonds have a certain strength, and why those bonds hold atoms and molecules a certain distance apart from one another. You might make some progress by trial and error, without understanding the quantum physics involved; but it would take an awful long time before you got anywhere (evolution, of course, does operate by a process of trial and error, and has got somewhere because it has been going on for an awful long time).

In fact, although there are other forces which operate deep within the atom (and which form the subject of much of this book), if you understand the behaviour of electrons and the behaviour of photons (light) then you understand everything that matters in the everyday world, except gravity and nuclear power stations. Apart from gravity, everything that is important in the home (including the electricity generated in nuclear power stations) can be described in terms of the way electrons interact with one another, which determines the way that atoms interact with one another, and the way they interact with electromagnetic radiation, including light.

We don't just mean that all of this can be described in general terms, in a qualitative, hand-waving fashion. It can be described quantitatively, to a staggering accuracy. The greatest triumph of theoretical quantum physics (indeed, of all physics) is the theory that describes light and matter in this way. It is called quantum electrodynamics (QED), and it was developed in its finished form in the 1940s, most notably by Richard Feynman. QED tells you about every possible interaction between light and matter (to a physicist, 'light' is used as shorthand for all electromagnetic radiation), and it does so to an accuracy of four parts in a hundred billion. It is the most accurate scientific theory ever developed, judged by the criterion of how closely the predictions of the theory agree with the results of experiments carried out in laboratories here on Earth.

Following the triumph of QED, it was used as the template for the construction of a similar theory of what goes on inside the protons and neutrons that make up the nuclei of atoms -- a theory known as quantum chromodynamics, or QCD. Both QED and QCD are components of the standard model. J. J. Thomson could never have imagined what his discovery of the electron would lead to. But the first steps towards a complete theory of quantum physics, and the first hint of the existence of the entities known as quanta, appeared within three years of Thomson's discovery, in 1900. That first step towards quantum physics came, though, not from the investigation of electrons, but from the investigation of the other key component of QED, photons.

At the end of the 19th century, nobody thought of light in terms of photons. Many observations -- including the famous double-slit experiment carried out by Thomas Young -- had shown that light is a form of wave. The equations of electromagnetism, discovered by James Clerk Maxwell, also described light as a wave. But Max Planck discovered that certain features of the way in which light is emitted and absorbed could be explained only if the radiation was being parcelled out in lumps of certain sizes, called quanta. Planck's discovery was announced at a meeting of the Berlin Physical Society, in October 1900. But at that time nobody thought that what he had described implied that light only existed (or ever existed!) in the form of quanta; the assumption was that there was some property of atoms which meant that light could be emitted or absorbed only in lumps of a certain size, but that 'really' the light was a wave.
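
The size of Planck's "lumps" is fixed by the frequency of the radiation; the relation (quoted here for reference rather than taken from the excerpt) is

\[ E = h\nu, \qquad h \approx 6.63 \times 10^{-34}\ \mathrm{J\,s} \]

so higher-frequency (bluer) light comes in proportionally more energetic quanta.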

The first (and for a long time the only) person to take the idea of light quanta seriously was Einstein. But he was a junior patent office clerk at the time, with no formal academic connections, and hadn't yet even finished his PhD. In 1905 he published a paper in which he used the idea of quanta to explain another puzzling feature of the way light is absorbed, the photoelectric effect. In order to explain this phenomenon (the way electrons are knocked out of a metal surface by light), Einstein used the idea that light actually travels as a stream of little particles, what we would now call photons. The idea was anathema to most physicists, and even Einstein was cautious about promoting the idea -- it was not until 1909 that he made the first reference in print to light as being made up of 'point-like quanta'. In spite of his caution, one physicist, Robert Millikan, was so annoyed by the suggestion that he spent the best part of ten years carrying out a series of superb experiments aimed at proving that Einstein's idea was wrong. He succeeded only in proving -- as he graciously acknowledged -- that Einstein had been right. It was after Millikan's experiments had established beyond doubt the reality of photons (which were not actually given that name until later) that Einstein received his Nobel Prize for this work (the 1921 prize, but actually awarded in 1922). Millikan received the Nobel Prize, partly for this work, in 1923.
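
The quantitative content of Einstein's argument (standard physics, stated here for convenience) is a simple energy balance for each absorbed quantum:

\[ E_k^{\max} = h\nu - \phi \]

where \(\phi\), the "work function", is the energy needed to free an electron from the metal surface. The maximum electron energy depends on the light's frequency, not on its brightness, and below a threshold frequency no electrons emerge at all -- which is exactly what Millikan's measurements confirmed.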

While all this was going on, other physicists, led by Niels Bohr, had been making great strides by applying quantum ideas to an understanding of the structure of the atom. It was Bohr who came up with the image of an atom that is still basically the one we learn about when we first encounter the idea of atoms in school -- a tiny central nucleus, around which electrons circle in a manner reminiscent of the way planets orbit around the Sun. Bohr's model, in the form in which it was developed by 1913, had one spectacular success: it could explain the way in which atoms produce bright and dark lines at precisely defined wavelengths in the rainbow spectrum of light. The difference in energy between any two electron orbits was precisely defined by the model, and an electron jumping from one orbit to the other would emit or absorb light at a very precise wavelength, corresponding to that energy difference. But Bohr's model introduced the bizarre idea that the electron did indeed 'jump', instantaneously, from one orbit to the other, without crossing the intervening space (this has become known as a 'quantum leap'). First it was in one orbit, then it was in the other, without ever crossing the gap.
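
In symbols (again standard textbook material rather than part of the excerpt), a jump from an orbit of energy \(E_2\) to one of lower energy \(E_1\) emits a photon whose frequency is set by the energy difference, and for hydrogen the allowed energies form a simple ladder:

\[ h\nu = E_2 - E_1, \qquad E_n = -\frac{13.6\ \mathrm{eV}}{n^2} \quad (n = 1, 2, 3, \ldots) \]

Running through pairs of whole numbers \(n\) reproduces the observed wavelengths of hydrogen's spectral lines, which is the spectacular success referred to above.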

Bohr's model of the atom also still used the idea of electrons as particles, like little billiard balls, and light as a wave. But by the time Einstein and Millikan received their Nobel Prizes, it was clear that there was more to light than this simple picture accounted for. As Einstein put it in 1924, 'there are therefore now two theories of light, both indispensable...without any logical connection'. The next big step, which led to the first full quantum theory, came when Louis de Broglie pointed out that there was also more to electrons than the simple picture encapsulated in the Bohr model accounted for.

De Broglie made the leap of imagination (obvious with hindsight, but a breakthrough at the time) of suggesting that if something that had traditionally been regarded as a wave (light) could also be treated as a particle, then maybe something that had traditionally been regarded as a particle (the electron) could also be treated as a wave. Of course, he did more than just speculate along these lines. He took the same kind of quantum calculations that had been pioneered by Planck and Einstein in their description of light and turned the equations around, plugging in the numbers appropriate for electrons. And he suggested that what actually 'travelled round' an electron orbit in an atom was not a little particle, but a standing wave, like the wave corresponding to a pure note on a plucked violin string.
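
De Broglie's relation, and the standing-wave condition it leads to, can be written compactly (a standard summary, not quoted from the book):

\[ \lambda = \frac{h}{p}, \qquad n\lambda = 2\pi r \quad (n = 1, 2, 3, \ldots) \]

Requiring a whole number of wavelengths to fit around an orbit of radius \(r\) picks out exactly the discrete orbits of Bohr's model, which is why the idea looked so attractive.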

De Broglie's idea was published in 1925. Although the idea of electrons behaving as waves was puzzling, this business of standing waves looked very attractive because it seemed to get rid of the strange quantum jumping. Now, it looked as if the transition of an electron from one energy level to another could be explained in terms of the vibration of the wave, changing from one harmonic (one note) to another. It was the way in which this idea seemed to restore a sense of normality to the quantum world that attracted Erwin Schrödinger, who worked out a complete mathematical description of the behaviour of electrons in atoms, based on the wave idea, by the end of 1926. He thought that his wave equation for the electron had done away with the need for what he called 'damned quantum jumping'. But he was wrong.

Also by 1926, using a completely different approach based entirely on the idea of electrons as particles, Werner Heisenberg and his colleagues had found another way to describe the behaviour of electrons in atoms, and elsewhere -- another complete mathematical quantum theory. And as if that weren't enough, Paul Dirac had found yet another mathematical description of the quantum world. It soon turned out that all of these mathematical approaches were formally equivalent to one another, different views of the same quantum world (a bit like the choice between giving a date in Roman numerals or Arabic notation). It really didn't matter which set of equations you used, since they all described the same thing and gave the same answers. To Schrödinger's disgust, the 'damned quantum jumping' had not been eliminated after all; but, ironically, because most physicists are very familiar with how to manipulate wave equations, it was Schrödinger's variation on the theme, based on his equation for the wave function of an electron, that soon became the conventional way to do calculations in quantum mechanics.

This tradition was reinforced by the mounting evidence (including the experiments carried out by George Thomson in 1927) that electrons did indeed behave like waves (the ultimate proof of this came when electrons were persuaded to participate in a version of the double-slit experiment, and produced the classic diffraction effects seen with light under the equivalent circumstances). But none of this stopped electrons behaving like particles in all the experiments where they had always behaved like particles.

By the end of the 1920s, physicists had a choice of different mathematical descriptions of the microworld, all of which worked perfectly and gave the right answers (in terms of predicting the outcome of experiments), but all of which included bizarre features such as quantum jumping, wave-particle duality and uncertainty. Niels Bohr developed a way of picturing what was going on that was taught as the standard version of quantum physics for half a century (and is still taught in far too many places), but which if anything made the situation even more confusing. This 'Copenhagen interpretation' says that entities such as electrons do not exist when they are not being observed or measured in some way, but spread out as a cloud of probability, with a definite probability of being found in one place, another probability of being detected somewhere else, and so on. When you decide to measure the position of the electron, there is a 'collapse of the wave function', and it chooses (at random, in accordance with the rules of probability, the same rules that operate in a casino) one position to be in. But as soon as you stop looking at it, it dissolves into a new cloud of probability, described by a wave function spreading out from the site where you last saw it.

It was their disgust with this image of the world that led Einstein and Schrödinger, in particular, to fight a rearguard battle against the Copenhagen interpretation over the next twenty years, each of them independently (but with moral support from each other) attempting to prove its logical absurdity with the aid of thought experiments, notably the famous example of Schrödinger's hypothetical cat, a creature which, according to the strict rules of the Copenhagen interpretation, can be both dead and alive at the same time.

Although this debate (between Einstein and Schrödinger on one side, and Bohr on the other) was going on, most physicists ignored the weird philosophical implications of the Copenhagen interpretation, and just used the Schrödinger equation as a tool to do a job, working out how things like electrons behaved in the quantum world. Just as a car driver doesn't need to understand what goes on beneath the bonnet of the car in order to get from A to B, as long as quantum mechanics worked, you didn't have to understand it, even (as Linus Pauling showed) to get to grips with quantum chemistry.

The last thing most quantum physicists wanted was yet another mathematical description of the quantum world, and when Richard Feynman provided just that, in his PhD thesis in 1942, hardly anybody even noticed (most physicists at the time were, in any case, distracted by the Second World War). This has proved a great shame for subsequent generations of students, since Feynman's approach, using path integrals, is actually simpler conceptually than any of the other approaches, and certainly no more difficult to handle mathematically. It also has the great merit of dealing with classical physics (the old ideas of Newton) and quantum physics in one package; it is literally true that if physics were taught Feynman's way from the beginning, students would only ever have to learn the one approach to handle everything. As it is, although over the years the experts have come to accept that Feynman's approach is the best one to use in tackling real problems at the research level, the way almost all students get to path integrals is by learning classical physics first (in school), then quantum physics the hard way (usually in the form of Schrödinger's wave function, at undergraduate level) then, after completing at least one degree, being introduced to the simple way to do the job.
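
For completeness (this is the standard prescription, not spelled out in the excerpt), the path-integral approach assigns to every possible route \(x(t)\) between two points a phase set by the classical action \(S\), and adds up all the contributions:

\[ \langle b \,|\, a \rangle \;\propto\; \sum_{\text{all paths } x(t)} e^{\,i S[x(t)]/\hbar}, \qquad S[x(t)] = \int L(x, \dot{x})\, dt \]

Paths far from the classical one largely cancel out through their wildly varying phases, which is how Newtonian trajectories emerge as a special case and why the method handles classical and quantum physics in one package.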

Don't just take our word for this being the simplest way to tackle physics -- John Wheeler, Feynman's thesis supervisor, has said that the thesis marks the moment in the history of physics 'when quantum theory became simpler than classical theory'. Feynman's approach is not the standard way to teach quantum physics at undergraduate level (or classical physics in schools) for the same reason that the Betamax system is not the standard format for home video -- because an inferior system got established in the market place first, and maintains its position as much through inertia as anything else.

Indeed, there is a deep flaw in the whole way in which science is taught, by recapitulating the work of the great scientists from Galileo to the present day, and it is no wonder that this approach bores the pants off kids in school. The right way to teach science is to start out with the exciting new ideas, things like quantum physics and black holes, building on the physical principles and not worrying too much too soon about the mathematical subtleties. Those children who don't want a career in science will at least go away with some idea of what the excitement is all about, and those who do want a career in science will be strongly motivated to learn the maths when it becomes necessary. We speak from experience -- one of us (JG) got turned on to science in just this way, by reading books that were allegedly too advanced for him and went way beyond the school curriculum, but which gave a feel for the mystery and excitement of quantum physics and cosmology even where the equations were at that time unintelligible to him.

In Feynman's case, the path integral approach led him to quantum electrodynamics, and to the Feynman diagrams which have become an essential tool of all research in theoretical particle physics. But while these applications of quantum theory were providing the key to unlock an understanding of the microworld, even after the Second World War there were still a few theorists who worried about the fundamental philosophy of quantum mechanics, and what it was telling us about the nature of the Universe we live in.

For those who took the trouble to worry in this way, there was no getting away from the weirdness of the quantum world. Building from another thought experiment intended to prove the non-logical nature of quantum theory (the EPR experiment, dreamed up by Einstein and two of his colleagues), the work of David Bohm in the 1950s and John Bell in the 1960s led to the realization that it would actually be possible to carry out an experiment which would test the non-commonsensical aspects of quantum theory in a definitive manner.

What Einstein had correctly appreciated was that every version of quantum theory has built into it a breakdown of what is called 'local reality'. 'Local', in this sense, means that no communication of any kind travels faster than light. 'Reality' means that the world exists when you are not looking at it, and that electrons, for example, do not dissolve into clouds of probability, wave functions waiting to collapse, when you stop looking at them. Quantum physics (any and every formulation of quantum physics) says that you can't have both. It doesn't say which one you have to do without, but one of them you must do without. What became known as the Bell test provided a way to see whether local reality applies in the (for want of a better word) real world -- specifically, in the microworld.
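
The usual quantitative form of the Bell test (the CHSH version, given here as background rather than from the excerpt) compares a particular combination \(S\) of correlations measured on pairs of particles at four detector settings:

\[ |S| \le 2 \ \ \text{(any local realistic theory)}, \qquad |S| \le 2\sqrt{2} \approx 2.83 \ \ \text{(quantum mechanics)} \]

A measured value of \(S\) significantly above 2, as found in the experiments described below, rules out every theory that is both local and real.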

The appropriate experiments were carried out by several teams in the 1980s, most definitively by Alain Aspect and his colleagues in Paris, using photons. They found that the predictions of quantum theory are indeed borne out by experiment -- the quantum world is not both local and real.

So today you have no choice of options, if you want to think of the world as being made up of real entities which exist all the time, even when you are not looking at them; there is no escape from the conclusion that the world is non-local, meaning that there are communications between quantum entities that operate not just faster than light, but actually instantaneously. Einstein called this 'spooky action at a distance'. The other option is to abandon both locality and reality, but most physicists prefer to cling on to one of the familiar features of the commonsense world, as long as that is allowed by the quantum rules.

Our own preference is for reality, even at the expense of locality; but that is just a personal preference, and you are quite free to choose the other option, the traditional Copenhagen interpretation involving both collapsing wave functions and spooky action at a distance, if that makes you happier. What you are not free to do, no matter how unhappy you are as a result, is to think that the microworld is both local and real.

The bottom line is that the microworld does not conform to the rules of common sense determined by our everyday experience. Why should it? We do not live in the microworld, and our everyday experience is severely limited to a middle range of scales (of both space and time) intermediate between the microworld and the cosmos. The important thing is not to worry about this. The greatest of all the quantum mechanics, Richard Feynman, gave a series of lectures at Cornell University on the theme The Character of Physical Law (published in book form by BBC Publications in 1965). In one of those lectures, he discussed the quantum mechanical view of nature, and in the introduction to that lecture he gave his audience a warning about the weirdness they were about to encounter. What he said then, more than 30 years ago, applies with equal force today:

I think I can safely say that nobody understands quantum mechanics. So do not take the lecture too seriously, feeling that you really have to understand in terms of some model what I am going to describe, but just relax and enjoy it. I am going to tell you what nature behaves like. If you will simply admit that maybe she does behave like this, you will find her a delightful, entrancing thing. Do not keep saying to yourself, if you can possibly avoid it, 'But how can it be like that?' because you will go 'down the drain' into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.

That is the spirit in which we offer you our guide to the quantum world; take the advice of the master -- relax and enjoy it. Nobody knows how it can be like that.

Copyright © 1998 by John & Mary Gribbin


Excerpted from Q Is for Quantum: An Encyclopedia of Particle Physics by John Gribbin
All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.
