The discoveries of the New World are a classic example of ideas being “in the air”. For example, the US patent office received two applications for a working telephone (from Bell and Gray) only a couple of hours apart, and it took years of litigation for Bell’s (possibly fraudulent) claim to prevail. Fifty years earlier, at least five ideas for an electrical telegraph sprouted almost simultaneously from Faraday’s work on electricity and magnetism. (Before that, a telegraph was a semaphore system using towers spaced in line of sight to each other. Both the French and English used telegraphs during the Napoleonic Wars. The best-known was the 200-mile line connecting the Admiralty in London to the fleet bases in Plymouth and Portsmouth, in use until supplanted by a Morse-style electrical telegraph in the 1840’s. I suppose those line-of-sight hills have now sprouted microwave transmission towers.)
Separated by half the planet, Darwin and Wallace simultaneously published their works on evolution through survival of the fittest without detailed knowledge of the other’s work. (The idea had been “in the air” for almost a hundred years before Darwin; he didn’t get credit for inventing it, but for proving it beyond reasonable doubt. The Origin of Species is by far the most boring-but-influential book ever written, as Darwin used extreme overkill to prove his thesis in excruciating detail.)
As soon as the lightweight internal combustion engine was invented by Otto, automobiles and airplanes began breaking out all over, even if Daimler and the Wright Brothers tend to get the credit. (The Wright brothers weren’t the first to try powered flight, but they were the first to figure out how to steer the contraptions.) Hertz showed that oscillating electric currents produced electromagnetic waves, and both Tesla and Marconi took that idea and ran with it to produce practical radios.
In 1823, János Bolyai in Hungary and Nikolai Ivanovich Lobachevsky in Russia simultaneously discovered non-Euclidean geometry. Well, the great Carl Gauss beat them to it, but they didn’t know it. Gauss had a distressing tendency, when somebody produced a revolutionary paper, to say, “Hmm…, that looks familiar. Oh yes, here it is in one of my notebooks from when I was a schoolchild. Never seemed worth publishing.” An eminent historian of Science said that if Gauss had published all his findings in a timely manner, he would have advanced Mathematics by fifty years. Gauss may have been the last true polymath — one who knows everything — in human history. He even wound up featured on the German 10DM banknote.
Bolyai and Lobachevsky weren’t trying to invent non-Euclidean geometry. Euclid’s Fifth Postulate or Parallel Postulate — given a straight line and a point not on the line, one and only one line can be drawn through the point which will never intersect the original line — had, even to Euclid himself, seemed to be a shaky foundation compared to the other four postulates, and for two thousand years mathematicians had been trying to prove it using only the other four or else trying to figure out how to get along without it. Bolyai and Lobachevsky tried to prove it by assuming it wasn’t true and deriving a contradiction. Unfortunately for their purpose, they found that assuming no parallels (so-called elliptic geometry) or multiple parallels through that point (hyperbolic geometry) both led to geometries just as internally consistent as Euclid’s.
By the way, probably the easiest-to-follow proof by contradiction is Euclid’s own demonstration that there can be no largest prime number. His reasoning (in more modern form) went: suppose the primes do run out, so that some finite list contains all of them. Multiply every prime on the list together and add one. The resulting number leaves a remainder of one when divided by any prime on the list, so it is either a new prime itself or divisible by some prime not on the list. Either way, the supposedly complete list was missing a prime, a contradiction; hence the primes never run out.
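Euclid’s construction can be played out with actual numbers; here is a minimal sketch in Python (the short list of “all” primes is, of course, just a stand-in for the assumption):

```python
# Euclid's construction: take any finite list of primes, multiply them
# together and add 1; the result is divisible by none of them.
from math import prod

primes = [2, 3, 5, 7, 11, 13]   # pretend these are ALL the primes
n = prod(primes) + 1            # 30030 + 1 = 30031

# Each "known" prime divides 30030 exactly, so each leaves remainder 1 here:
assert all(n % p == 1 for p in primes)

# Therefore n's prime factors (30031 = 59 * 509) were missing from the list,
# contradicting the assumption that the list was complete.
```

The same argument works no matter how long the starting list is, which is the whole point of the proof.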
Sticking to mathematics, in 1846, Adams in England and Le Verrier in France almost simultaneously calculated the position of a planet which would explain discrepancies in the orbit of Uranus. Adams made his prediction slightly earlier, but he was ignored until Le Verrier’s publication, at which time both a British and a Prussian astronomer started looking. Unfortunately for the British, the Prussian, working with Le Verrier’s numbers, had access to a much better star chart of the region in question, so he could eliminate known objects much faster and duly identified Neptune first.
Still in the realm of math, Newton and Leibniz simultaneously invented the Calculus, leading to centuries of rancor between English and Continental mathematicians. Newton got the idea of the Calculus quite a few years before Leibniz, but never published until after the German’s book came out. (If it had been up to Newton, he probably never would have published anything. Even his masterpiece, the Principia Mathematica, where he expounded the laws of calculus, motion, and gravity, presented ideas he had worked out some twenty years earlier; it took years of hounding by his friends (particularly Edmond Halley, of comet fame and later Astronomer Royal) before he finally surrendered and wrote them down, and Halley paid for the publication out of his own pocket. His other major book, Opticks, he withheld for decades, publishing it only in 1704, once his rival Hooke was safely dead. As I said elsewhere about Tesla, Newton may not have been a mad genius but he certainly teetered on the edge.)
Are you getting a theme here, that in the old days absent-minded professors might get screwed out of their ideas because, to them, the thrill of discovery far outweighed the pleasure of seeing one’s name in print? (Cf. Tom Lehrer’s song about Lobachevsky — “Plagiarize! Let no vun else’s vurk evade your eyes! ... But please, alvays call it research!” Note that Lehrer was a Harvard mathematician.) Today this is not a problem, because of universities’ “Publish or Perish” rules and the ability, in the USA at least, to patent everything from the alphabet on down. (In case you didn’t know it, until the US Supreme Court put a stop to it in 2013, thousands of your genes were patented and you couldn’t use them without permission. Maybe I’d have been sued because my son looks like me.)
I might be slightly maligning the US patent office, but Australia really did issue a patent for the wheel. (Australian patent #2001100012 was issued in 2001 for a “circular transportation facilitation device.” After world-wide ridicule, they revoked the patent the following year.)
Probably the discovery with the greatest chance of changing the world is the CRISPR-Cas9 gene editing process, which came out of the labs of Jennifer Doudna and Emmanuelle Charpentier. In a classic example of the power of attorneys, the Broad Institute (of MIT and Harvard) paid the US Patent Office for expedited examination and got its own CRISPR patent issued before the earlier-filed Charpentier and Doudna application was even considered, even though the Broad clearly hadn’t published first or done the work first.
Two 19th-century breakthroughs about the nature of light — Maxwell’s equations of electromagnetic waves plus the Michelson-Morley experiment demonstrating that the speed of light was invariant no matter what the motion of the source — made the Special Theory of Relativity inevitable. (Einstein himself called it “Invariance Theory”.) Had he not died four years before Einstein’s 1905 paper, an Irish mathematical physicist named FitzGerald might have come up with Special Relativity first. Einstein relied heavily on FitzGerald’s work. If Einstein had decided to go on vacation in 1905, then it might have been Hendrik Lorentz instead — the heart of Relativity is still known as “Lorentz Symmetry”, which he identified in 1904, and the apparent distortion of space and time at speeds close to that of light is the “Lorentz-Fitzgerald Contraction”, a pre-relativity attempt to explain the Michelson-Morley results. Another candidate would have been Hermann Minkowski, Einstein’s teacher and “inventor” of the concept of unified space-time. (The year before Special Relativity was published, both Lorentz and Minkowski had demonstrated that Maxwell’s Equations required that time be treated as a fourth dimension.) If all of them had been struck by lightning outside a scientific symposium, any of half a dozen other physicists would have published the theory within a few years.
As mentioned elsewhere, three separate research groups wrote papers about what came to be called the Higgs Boson within a few months of each other in 1964.
On the subject of scientific symposia, here is the most famous scientific group picture of all time, familiar to all Physics majors. Taken at the 1927 Solvay physics conference in Brussels, the 29 people in this photograph received nineteen Nobel Prizes (Mme Curie got two), and their children and spouses received at least five more. Given the solemnity of the photograph, I suppose the attendees weren’t using symposium in its literal sense — a drinking party (Greek sym-, with, and potos, drinker, as in potion). The modern meaning is from Plato’s dialog of that name, which indeed had a convivial party of philosophers as its setting. “Vertical symposium” at one time was a euphemism (particularly in military-industrial circles) for an after-hours gathering at the local bar.
Special Relativity modified Newton’s Three Laws of Motion for velocities close to the speed of light. Einstein’s General Relativity — using geometry to explain gravity by using Minkowski’s idea of treating time as a fourth dimension — was a much greater intellectual achievement than Special Relativity, because at the time there were no serious experimental or observational discrepancies which made Newton’s Law of Gravity inadequate. (GR is one scientific theory which was not “in the air” at the time but was instead a de novo revelation.) Einstein basically decided his explanation was more esthetically pleasing because it did not require Newton’s “force acting at a distance”, not to mention the fact that SR didn’t work right when there were any forces involved. (GR only requires Inertia; Newton hadn’t liked gravitation acting at a distance any better than Einstein, but couldn’t find a way to avoid it. Faraday’s concept of fields and lines of force was another attempt to sidestep the same problem.) Starting in about 1920, astronomers and physicists came up with experiments where Einstein’s and Newton’s gravitational equations would give different results, and Einstein’s “pretty” General Relativity was fully vindicated.
Unfortunately, the other towering scientific achievement of the 20th Century is Quantum Theory, and the two concepts flatly contradict each other. Scientists and engineers have been able to live with this for almost a century because General Relativity usually comes into play only at very, very large “astronomical” distances and sizes where gravity is all-important, whereas quantum effects are usually noticeable only at sub-atomic scales where gravity is negligible. So astronomers and cosmologists use Einstein, designers of transistors and lasers use Born, Heisenberg, and Schrödinger, everyone dealing with the “real world” (even NASA) uses Newton, and those trying to explain Black Holes, Dark Energy, and the Big Bang use Aspirin. Nobody has yet managed to formulate a “Theory of Everything” to reconcile the contradiction, and the toe remains the Holy Grail of theoretical physicists.
Einstein and others devised many fiendish tests to “prove” General Relativity was correct and Quantum Theory was wrong. In each experiment so far where QT and GR make different predictions, Einstein and his friends have lost, so it’s assumed that a toe will involve some sort of “quantum gravity” modifying GR, with a hypothetical “graviton” particle to carry the force just as the photon carries the electromagnetic force. Ironically, Einstein’s Nobel Prize was not given for Relativity, but for another 1905 paper which is one of the two foundations of Quantum Theory — he not only “invented” the photon, he called it an “energy quantum” and the name stuck. The other foundation is Planck’s Law of black-body radiation, published in 1901, for which he received the 1918 Nobel Prize “for the discovery of energy quanta”, but Planck never accepted Quantum Theory either — the two men effectively spent the rest of their lives disclaiming the logical consequences of their own work. In another irony, the only quantum manifestation which is big enough to be visible to the naked eye is a collection of atoms called a Bose-Einstein Condensate. Satyendra Nath Bose did the math to join Planck and Einstein’s conjectures, and he did not get a Nobel Prize. As consolation, physicists classify photons, the elusive Higgs particle, and such as bosons, while the components of ordinary matter (electrons, protons, and neutrons) are called fermions, for Enrico Fermi. (They were originally called Fermi-Dirac particles, but Dirac lost out, too.)
Einstein and Planck were realists, of course. They grudgingly admitted that Quantum Theory worked, but they felt that there must be some underlying mechanism that was compatible with classical physics and General Relativity. “But…but…but…it can’t really work that way.” They made frequent reference to the theory of epicycles, which went back to the ancient Greeks and which predicted the motions of the planets with much better accuracy than that of Copernicus. (Epicycles would now be called a geometric Fourier series, which can, with the right parameters, match any periodic motion.) It took a hundred years for Kepler and then Newton to come up with something better — or rather, simpler. Einstein and Planck felt that Quantum Theory must be another “wheels within wheels within wheels” kludge that would suddenly become blindingly simple when a better explanation (or better math) came about. Unfortunately, it has been proven that no classical theory can explain reality, if by “classical theory” one means that (a) the laws of physics are consistent from one place to another, (b) a particle can’t be two places at once, (c) the outcome of an experiment does not depend on when (or if) you look at it (the famous Schrödinger’s Cat paradox), (d) effects cannot occur before their causes, and (e) an action cannot instantaneously (rather than at the speed of light) affect the entire Universe. The proof is called Bell’s Theorem after John Stewart Bell. Part of his result was a demonstration that quantum theory would give a different result than any possible contortion of classical physics in certain experiments (the so-called Bell’s Inequalities), and when people went out and performed the experiments in the 1970’s, lo, Bell was right and Einstein was wrong.
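The arithmetic behind Bell’s Inequalities is surprisingly small. In the CHSH form of the inequality, any classical (“local hidden variable”) theory must satisfy |S| ≤ 2, while quantum mechanics predicts a correlation of E(a, b) = −cos(a − b) for an entangled pair measured at detector angles a and b, which pushes |S| up to 2√2. A sketch with the standard angle choices:

```python
# CHSH form of Bell's inequality: classical physics bounds |S| <= 2,
# quantum mechanics predicts |S| = 2*sqrt(2) ~ 2.83 for an entangled pair.
from math import cos, pi, sqrt

def E(a, b):
    """Quantum correlation for spin measurements at angles a and b (singlet state)."""
    return -cos(a - b)

# The standard choice of detector angles that maximizes the quantum violation:
a1, a2 = 0, pi / 2
b1, b2 = pi / 4, 3 * pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# abs(S) comes out to 2*sqrt(2), comfortably above the classical limit of 2.
```

The experiments of the 1970’s (and every refinement since) measured values above 2, matching the quantum prediction.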
General Relativity is a classical theory, meaning Einstein plays fair with space, time, and cause and effect. Quantum Theory doesn’t. Faster-than-light quantum correlation, which has been demonstrated in many laboratories by now (see above), is equivalent to time running backwards in some reference frames, with the effect happening before the cause. Recently, it was suggested that the two theories could be reconciled if Lorentz Symmetry was discarded — essentially saying that the laws of physics are not the same in all directions — but that introduces a host of problems, too. (For one thing, Quantum Theory does require Special Relativity — an often-cited example is that QT cannot explain the yellow color of gold or the liquidity of mercury at room temperature without invoking SR in the behavior of their electrons. Breaking Lorentz Symmetry breaks SR and GR and QT, all three, which seems somewhat counter-productive to understanding The World As We Know It.)
One familiar gadget which does have to take General Relativity into account is the Global Positioning System. According to Einstein, time runs at different rates in higher or lower gravity, and GPS receivers are so precise they have to compensate for the fact that clocks in the transmitting satellites are further from the center of the Earth and thus in a weaker gravity field. (When I say precise, I mean it. If a GPS satellite’s clock is “off” by one millionth of a second, it would cause roughly a 300-meter error in location on the Earth’s surface.) One of the first demonstrations of the gravitational effect on time involved an athletic physicist who backpacked an atomic clock to the top of Mt. Rainier and demonstrated that it ran faster than it did down in Seattle. I also read an account of an experiment showing this with one atom. The scientists split it in half via quantum trickery. The two “virtual atoms” traveled the exact same distance and then were recombined into the original atom. If the two paths were side-by-side, this worked fine. On the other hand, if the apparatus was rotated so that one path was above the other, the recombination wasn’t quite perfect, because time ran at slightly different rates along the two paths and the halves fell out of step! The current best atomic clock is so accurate that if you moved it from one stair step to the next, the relativistic effect would be measurable.
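The size of the GPS correction can be estimated on the back of an envelope. This is only a sketch using round textbook constants (Earth’s gravitational parameter, a ~26,600 km orbital radius); the two relativistic effects pull in opposite directions, and the gravitational one wins:

```python
# Two relativistic corrections for a GPS satellite clock, in microseconds/day.
GM  = 3.986e14   # Earth's gravitational parameter, m^3/s^2
c   = 2.998e8    # speed of light, m/s
R_e = 6.371e6    # Earth's surface radius, m
r_s = 2.66e7     # GPS orbital radius (assumed round value), m
day = 86400      # seconds per day

# General relativity: a clock higher in the gravity well runs FASTER.
grav = GM * (1/R_e - 1/r_s) / c**2 * day * 1e6   # ~ +46 us/day gained

# Special relativity: orbital speed makes the clock run SLOWER.
v = (GM / r_s) ** 0.5                            # ~3.9 km/s circular orbit
vel = -(v**2 / (2 * c**2)) * day * 1e6           # ~ -7 us/day lost

net = grav + vel   # ~ +38 microseconds/day, which GPS must engineer away
```

Left uncorrected, 38 microseconds per day would walk the position fix off by kilometers within hours.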
In Einstein’s four-dimensional universe, everything is always moving at c, the speed of light. That is, the sum of the velocities through all four dimensions — three of space and one of time — must be exactly equal to c. Therefore, Relativity says that a body at rest in space is moving at c through time, and any spatial motion must subtract off of that and thus slightly slow it through time. This motion-derived time reduction is quite easily measured. Physicists know, for example, that a certain particle always decays into something else in a millionth of a second. However, if it is spun up to 99.99995% of the speed of light in a particle accelerator, observations show it now lasts a thousandth of a second instead — time is flowing a thousand times slower for the fast-moving particle. If you ship a very accurate clock by commercial airliner from New York to California, it will be measurably slow upon arrival due to the speed of the plane. (This is a different effect than the gravity anomaly mentioned earlier.) Another example is the fact that cosmic rays can be detected on the ground. Primary cosmic ray particles actually “break up” high in the atmosphere, releasing a shower of short-lived particles that can be measured at ground level. However, these secondary particles wouldn’t make it all the way to the ground before themselves decaying except for the time dilation from traveling so close to the speed of light. Those primary cosmic ray particles (each a single atomic nucleus) are moving very close to the speed of light. The most powerful yet seen was a proton nicknamed the Oh-My-God particle which hit the atmosphere at an estimated 0.9999999999999999999999951c. (That velocity means that in a race with a photon, the particle would fall behind by one inch every half million years.) That single proton had the kinetic energy of a baseball pitcher’s fastball, or a bowling ball dropped on your toe from head height.
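The cosmic-ray argument can be checked with a few lines of arithmetic. This is a sketch with assumed round numbers (real cosmic-ray muons have a wide range of Lorentz factors); the muon’s rest lifetime of 2.2 microseconds is the one firm input:

```python
# Why cosmic-ray muons reach the ground: a quick time-dilation estimate.
c        = 3.0e8     # speed of light, m/s
lifetime = 2.2e-6    # muon mean lifetime at rest, seconds
gamma    = 30        # assumed Lorentz factor for a typical cosmic-ray muon

naive_range   = c * lifetime           # ~660 m: without relativity, muons
                                       # born ~15 km up would never arrive
dilated_range = gamma * c * lifetime   # ~20 km: with time dilation, they do

beta = (1 - 1/gamma**2) ** 0.5         # speed as a fraction of c, ~0.9994
```

Turning it around, the fact that ground detectors see these muons at all is itself a daily confirmation of time dilation.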
I find it mind-boggling that, in theory, someone in a space suit outside the atmosphere’s protection could be killed by being hit with one atom!
Astronauts routinely report seeing cosmic rays. If a particle is moving faster than the speed of light in a transparent medium, it emits blue and ultraviolet light in the same manner as a supersonic airplane or bullet produces a “sonic boom”. This is called Čerenkov radiation; the most familiar example is the blue glow of an intensely radioactive source or a nuclear reactor core immersed in water. (The speed of light in water is 0.75c, and high-energy electrons easily surpass that.) Anyway, astronauts report seeing a blue-white flash of light in one eye every few minutes, and one likely explanation is Čerenkov radiation when a low-energy cosmic ray, still traveling close to c, impacts the vitreous humor of the astronaut’s eyeball.
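The 0.75c figure sets a concrete energy threshold, and it is easy to compute how much kinetic energy an electron needs before it outruns light in water. A sketch with textbook constants:

```python
# Cherenkov threshold for an electron in water (refractive index n ~ 1.33).
n       = 1.33    # refractive index of water
m_e_MeV = 0.511   # electron rest energy, MeV

beta_threshold  = 1 / n                                # ~0.752 c
gamma_threshold = 1 / (1 - beta_threshold**2) ** 0.5   # ~1.52
ke_threshold    = m_e_MeV * (gamma_threshold - 1)      # ~0.26 MeV
```

Electrons from radioactive decay often carry an MeV or more, well above that quarter-MeV threshold, which is why water around a reactor glows.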
There are “cosmic ray telescopes” that consist of fisheye cameras covering the entire sky, tuned to detect the characteristic Čerenkov flash (only a few nanoseconds in duration) when a primary cosmic ray plows into the atmosphere. Some of these events appear to correlate with known black holes in other galaxies.
Our own galaxy’s central black hole is very low-key, perhaps a weak example of the so-called Anthropic Principle, which basically states that we observe the Universe the way it is because if it was any other way, life could not exist and we wouldn’t be here to observe it. The super-active black hole of a quasar possibly makes its entire galaxy uninhabitable. Several apparently independent and arbitrary fundamental constants — gravity, the charge on the electron, etc. — are exquisitely “tuned” such that changing them by just a tiny amount would, e.g., make stars impossible, or make elements heavier than helium impossible, or make the average lifetime of a star ten years, or whatever. See the section on CP violation for another fine-tuning exercise, and see the discussion of the Flatness problem for a much stronger appeal to the Anthropic Principle.
Einstein’s instinctive rejection of quantum physics and its adherents was almost certainly aesthetic again — Quantum Theory was and is profoundly ugly, and even many of its users don’t really want to believe the Universe actually works that way. Unfortunately, Physics doesn’t seem to be a democracy or beauty contest. The current candidates for a toe invoke ten or eleven spatial dimensions, most of which are curled up too small to see, so theoretical beauty is not on the current horizon. Worse, so far none of these so-called superstring conjectures meet the minimum standard for a scientific theory, namely the ability to make a prediction that can be verified by experiment or observation. The quantum “Standard Model” that explains everything but gravity, on the other hand, makes by far the most accurate predictions (to about 0.0000000001%) in all of science, but nobody knows WHY. (The long search for the Higgs Boson was driven by the fact that its detection would allow the Standard Model to predict the masses of all particles and therefore of the Universe as a whole. This obviously would be a significant step toward unifying quantum theory and gravity.)
Just to prove my point about the accuracy of the Standard Model, in 2010 a group of scientists invented a new way to measure the radius of the proton, and they came up with a figure of 0.000000000000000841 meters. Since the accepted value, agreed on by three other measurement methods, is 0.000000000000000877 meters, the world’s physicists are in an uproar. A four per cent discrepancy compared to 0.0000000001%? Either (a) the new measurement is wrong, or (b) there are new particles and forces that nobody knew about and all the physics books will have to get rewritten. Nobody is suggesting that the Standard Model itself can be that inaccurate. Update: As of 2015, the experiments have been repeated with even more precision, and the discrepancy is still there. Stay tuned.
Richard Feynman, who won a Nobel prize for his part in formulating the Standard Model, was fond of illustrating its accuracy by saying, “If you asked me how far away is the moon and I replied, ‘From my head or my feet?’ That accurate.”
One major problem with theories of gravity is the difficulty of performing experiments. The force of gravity is 100,000,000,000,000,000,000,000,000,000,000,000 times weaker than the electro-magnetic force, meaning it’s that much harder to accurately measure it, let alone detect the alleged graviton and do “gravital” experiments compared to electrical stuff with photons and electrons. (For example, the inexpensive CCD detectors in consumer digital cameras and smartphones can detect and count individual photons — 1…2…3…. So can the human eye’s retina, but the brain filters out any signal fewer than about seven photons to avoid confusion.) Fortunately, electricity comes in positive and negative varieties, so the tremendously powerful electrical forces cancel out in ordinary matter. There is no “negative gravity”, however, so gravity is purely additive and must dominate if the mass gets large enough. Note that it took the entire mass of the Earth (6,000,000,000,000,000,000,000 tons) to generate enough pull to overcome the electrical forces in the stem of Newton’s apple holding it to the tree, and a tiny refrigerator magnet can overcome the gravitational force of the Earth to levitate a paper clip. It takes the gravity of a mass the size of the Sun (i.e., a million times the mass of the Earth — add six more zeros to the above tonnage) to overcome the electrical forces that repel atoms and crush a star into a white dwarf. Add yet one more zero to get enough gravitational pressure to collapse the atoms altogether and form a neutron star, which can be considered as a single atom with the mass of a star. At that point it only takes another factor of two or three to crush the neutron star completely out of existence, forming a black hole as a dimensionless point. (As has been frequently stated, a black hole is where God divided by zero.)
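The size of that weakness ratio is easy to reproduce from Coulomb’s and Newton’s force laws; since both fall off as the square of the distance, the distance cancels and the ratio is a pure number. For a pair of protons it comes out near 10^36 (the exact figure depends on which particles you compare). A sketch with standard constants:

```python
# Electric vs. gravitational force between two protons. The 1/r^2 in both
# laws cancels, so the ratio is just (k * e^2) / (G * m_p^2).
k   = 8.988e9     # Coulomb constant, N*m^2/C^2
e   = 1.602e-19   # proton charge, C
G   = 6.674e-11   # Newton's gravitational constant, N*m^2/kg^2
m_p = 1.673e-27   # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)   # ~1.2e36: electricity wins, overwhelmingly
```

That single number is why it takes a planet’s worth of mass to tug an apple off a tree against the electrical bonds in its stem.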
09Feb17 Amplifying my comment on the difficulty in measuring gravity, G (“big G”, both Newton’s and Einstein’s universal gravitational constant) is still only known to three significant digits (one part in a thousand) after 350 years, whereas many other physical constants are known to maybe 7 significant digits (one part in ten million), and the electrical constants to at least 9 (one part in a billion). Even more frustrating, the accepted value of G keeps getting less certain as more and more measurement techniques are tried, giving results which don’t agree with one another.
Other than gravity, there is one more tiny problem with the Standard Model — it predicts that the Universe as we know it cannot exist! More specifically, there is no explanation why the Big Bang did not create equal amounts of matter and anti-matter, which would immediately have self-annihilated and left a universe consisting only of radiation. The physicists’ jargon for this is “CP violation”, where the C stands for Charge symmetry and the P for Parity symmetry. In other words, according to the Standard Model’s equations, the Universe would work almost exactly the same if all particles reversed their charge and swapped left and right. Experiments over the past fifty years have measured minute CP violation in a few kinds of particle decay, so the symmetry isn’t quite exact, but unfortunately the calculations show the known violation would only account for an excess of matter over anti-matter sufficient for one galaxy in the entire universe. Even involving time (T symmetry; i.e., all particles also reverse their directions) doesn’t help. Obviously, Something Is Wrong Somewhere, and the Nobels are out there waiting for the explanation.
There is a different “Why are we here?” question; the so-called Flatness Problem, dealing with the geometry of the Universe. The fundamental measure of this geometry, called Omega, in theory could have any value whatsoever without violating any laws of Physics. Unfortunately, at the time of the “Big Bang” Ω had to be 1.00000000000000000000000000000000000000000000000000000000000000; if it was 1.00000000000000000000000000000000000000000000000000000000000001 or 0.99999999999999999999999999999999999999999999999999999999999999 then stars, galaxies, and physicists could not have formed. There’s really no answer to this one (so far) other than postulating gazillions of universes (the Multiverse) and then invoking the Anthropic Principle — what we see agrees with the above value because in a universe with any other value of Ω, we wouldn’t be around to notice. That 62 decimal places of necessary precision makes all other “Cosmic Coincidences” look completely trivial. (The reason cosmologists are sure of that number is that astronomers have measured today’s Ω as 1.00 ± .01, and geometry says that any deviation from exactly 1 would have been growing exponentially for 13 billion years.)
Oh, yes— there is another slight problem with merging General Relativity and Quantum Theory. Most approaches lead to equations stating that the energy of a particle must be infinite, which doesn’t agree too well with observation. The only approach so far which avoids the infinities (the so-called Wheeler-DeWitt equation) unfortunately also says that the universe is static — i.e., Time does not exist. This also is somewhat contrary to observation, although many theoretical physicists would find life much simpler if time and all its paradoxes did not exist. The latest tweak of this proposes that for the universe as a whole time does not exist (that is, an outside observer would see an unchanging universe), but that time can exist for things inside the universe. We shall see.
Now that the Higgs Boson has been found, the physicists have another slight problem. Its mass is 1/10000000000000000 of the predicted value. This is related to the strength of the “vacuum energy constant” — the thing that is causing the Universe to expand — which seems to be merely a trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times tinier than theory predicts. (That’s one followed by 120 zeroes.) This provides even more arguments for the Multiverse, and suggests that we live in a ridiculously unlikely Universe. Unfortunately, this is another “the scientists don’t get to vote” proposition. Many physicists utterly despise the idea of a Multiverse, mainly because anything we currently don’t understand can be explained away by hand-waving and saying “Oh, that’s just the random Universe we happen to be in. Next question, please.” At its heart, the Multiverse hypothesis says that all search for scientific law is useless.
Note that the Multiverse hypothesis (where the laws of Physics are different in each universe) is not the same as the Many-Worlds interpretation of Quantum Mechanics, where quantum uncertainty (that damn cat in the box, for example) is avoided by assuming that every single quantum event anywhere causes the universe to split in two, identical in every way except for the result of that one event. (I.e., in one universe the cat has always been alive and in the other it has always been dead.) The math works perfectly and gives the same results as any other quantum formulation, but “conventional” quantum physicists were so dismissive that the author (a student of Wheeler named Everett) abandoned physics entirely for classified defense work. (The Many-Worlds interpretation says, for example, that if I move my little finger to type a character then I have shifted the position of maybe a billion atoms and have created ten billion new universes. If an alien in some other galaxy wiggles a tentacle, ditto. The circuitry of my computer is moving electrons around, its screen is emitting photons, and every electron movement and every photon is likewise creating another universe.)