The new, shining age of information delivery was briefly at hand on June 8th, 1959. A Regulus cruise missile – designed for delivering a nuclear warhead to a Soviet city or port – landed neatly at the naval base at Mayport, Florida, twenty-two minutes after being launched from a U.S. Navy submarine. Instead of a city-busting “bucket of Sun”, its payload bay held 3,000 letters; the first “missile mail” had arrived. According to the Postmaster General of the United States, Arthur Summerfield, who witnessed the missile’s landing, “before man reaches the moon, mail will be delivered within hours from New York to California, to Britain, to India or Australia by guided missiles. We stand on the threshold of rocket mail.”
While the Regulus experiment was no more than a publicity stunt advertising the missile’s accuracy to the Soviets and the world, rocket mail was a staple of futurist predictions about soon-to-be-realized innovations up until the late 1960s. According to Wikipedia, rockets or artillery shells were proposed as mail-delivery systems as early as 1810. After a widely publicized lecture by rocket pioneer Hermann Oberth in 1927, at a time when rockets were little more than experimental toys, the United States ambassador to Germany even discussed the practical legalities of transatlantic rocket mail in anticipation of future service. Even though a transatlantic rocket mail service never materialized, individual inventors and hobbyists kept testing small mail rockets until the advent of intercontinental missiles and the space age seemed to place rocket mail just around the corner.
In hindsight, the logic behind such predictions is easy to understand. From the early 1800s onwards, the speed of transport had increased relentlessly. As muddy roads and horse-drawn wagons gave way to railroads, and as sailing ships, always subject to the vagaries of weather, were supplanted by steamships, rates of travel increased significantly. With the coming of the aeroplane, speeds climbed higher still, and faster. In the 1950s, airspeed records were broken monthly, and almost all forecasters expected the rapid increase in airplane speeds to continue until most flying – both commercial and military – happened well above the speed of sound. Rockets and missiles would be only the ultimate expression of this trend, capable of traversing the world at Mach 11 or more and hauling time-critical payloads (nuclear warheads, mail, rescue specialists riding manned missiles) to their destinations.
Furthermore, mail delivery had historically provided an important impetus for increasing rates of travel. Regular steamship service across the Atlantic owed much to the desire to speed up mail delivery, and the earliest true commercial use of airplanes was likewise in mail flying. It was not unreasonable to believe that rockets, too, would find civilian use in the same role. As they were only linear extrapolations from history, 1950s forecasts of rocket mail were not (as far as I’m aware) controversial in the slightest.

Atomic mail rockets, soon irradiating a post office near you. Thanks to @NuclearAnthro for the picture; original source unknown.
Of course, in hindsight we also know that this didn’t happen. There are no regular postal missile runs, and many see the very idea as just another ridiculous example of the 1950s-era belief in the inevitable march of science and technology.
To me, however, rocket mail provides another cautionary tale about predictions of technology. Those confident that rocket mail would soon be a reality were not stupid: they made intelligent predictions about future technologies based on observed trends in history. Faster delivery of mail was better than slower delivery, and previously every faster method of transportation had been used to deliver mail, sometimes – as in the case of steamships and air mail – despite considerable initial costs and difficulties. The advances in aerospace technology during the 1950s made missiles ubiquitous, and the Space Age, the ever-present threat of missile-borne nuclear annihilation, and the attendant media coverage ensured that rockets and missiles were never far from anyone’s mind. In fact, it seems reasonable to believe that the expert forecasters were particularly influenced by the ready availability of information about developments in rocketry and missiles: after all, these were the state-of-the-art technologies of the day, whose development the forecasters generally followed with great interest. In an environment where experts could witness technological advances on a daily basis, an expert extrapolation based almost solely on trends in the rate of travel and the general desirability of faster mail service was a no-brainer.
A generous interpretation of subsequent events would be that the experts were not wrong in the fundamentals, only in the particulars. Faster communication methods were indeed much desired, but they took the form of electronic communications. Instant was even better than fast, and e-mail eventually replaced mail in all time-critical communications. Packages still require physical handling, but bulky items are not well suited for rocket delivery, nor can rockets cross the last mile from the sorting center to the customer (or the post office). As with passenger transport – another area where faster, faster, faster was once believed to be the single trend worth following – “fast enough” turned out to be enough for people at large. (And before you ask – delivery drones have very little in common with rocket mail, which was always about crossing great distances from one sorting center to another, not from sorting center to customer.)
Why the fuss, then?
But we have a problem if we let forecasters off the hook so easily while still permitting their visions to guide politics and other important decisions. Despite the long history of failed predictions based on a single variable or world-explanation, from Marxism to neoclassical economic theory, many people still desire to predict things from a single variable or cause. While the fall of communism discredited Marxist world-explanations (perhaps to too great an extent; Marx’s ideas retain considerable value as an analytical tool, but like any other tool, Marxism falters badly if it is the sole tool in the toolkit), monocausal explanations using crude economic theory are still all the rage. In the energy debate, which I follow closely, fundamentally monocausal explanations are so common that they only rarely even raise eyebrows. Take any sufficiently advanced comment section in a debate piece about energy, and the probability of finding predictions about the energy markets or even the future of the world based on e.g. the Hubbert curve, EROEI, energy density, the price trend of photovoltaics, or the general “inevitability” of sustainable energy approaches unity.
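To illustrate just how little machinery a single-variable forecast needs, here is a minimal sketch of a Hubbert-style projection in Python. The parameters are entirely made up for illustration – they come from no actual dataset or source – but once one logistic curve is assumed to govern cumulative production, the whole “future” can be read off that single curve:

```python
# A minimal sketch of a single-variable (Hubbert curve) forecast.
# The parameters below are purely hypothetical illustrations, not
# estimates from any real production data.
import numpy as np

def hubbert_production(t, q_total, k, t_peak):
    """Yearly production under the logistic (Hubbert) model.

    Cumulative production is Q(t) = q_total / (1 + exp(-k*(t - t_peak))),
    so yearly production dQ/dt is a bell curve peaking at t_peak."""
    x = np.exp(-k * (t - t_peak))
    return q_total * k * x / (1.0 + x) ** 2

years = np.arange(1900, 2101)
# Hypothetical "fit": 250 units ultimately recoverable, growth
# constant 0.07 per year, peak assumed in 2005.
production = hubbert_production(years, q_total=250.0, k=0.07, t_peak=2005.0)

print("Predicted peak year:", years[np.argmax(production)])       # 2005
print("Predicted peak rate: %.2f units/year" % production.max())  # ~4.38
```

Once the three parameters are fixed, every question about the future has an answer – which is precisely what makes such models so seductive, and, when the single variable fails to capture reality, so badly wrong.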
Sometimes predictions may prove to be accurate, but, I believe, more by accident than by skill: given enough projections about the future (and enough interpretative ambiguity or weasel words), some projections are bound to be proven correct, at least if one squints just right. The great authority on expert predictions, Philip Tetlock, has concluded that most “experts” have on average no better than a 50/50 success rate, and even the very best predictors he has been able to find are wrong about 25% of the time. Most notably, Tetlock has pretty much proven that “experts” who are infatuated with a single or very few “key trends” or world-explanations – monocausal predictors, or in Tetlock’s words, “hedgehogs” who have One Big Idea – are invariably the least accurate, often with significantly worse success rates than flipping a coin would produce.
When I consider the possible biases arising from groupthink, obsession with novelty and wishful thinking, all of which are extremely prevalent in the intersection of the tech industry and the technocratic green movement from where the more optimistic energy predictions emanate, my confidence in most energy predictions drops dramatically. After all, predictions are hard, and energy predictions notoriously so: detailing all the past failures would be tiresome, but suffice it to say that in the 1950s, nuclear fusion (note: fusion, not fission) was supposed to deliver “power too cheap to meter” in the very near future. Likewise, atomic energy was often supposed to make even oil obsolete by 2000 – but as the eminent energy historian Vaclav Smil once noted, the most reliable energy forecasts have been those that expected little change from the present.
Our assessments of expert predictions are further impaired because failed predictions are generally airbrushed from history books, unless played up for laughs. To take just one example, quite a few people in the 1820s and 1830s – just as the steam engine was making its great leap forward – sincerely believed that the prime mover of the future would be water, enhanced and made more reliable by ingenious hydraulic engineering works (including “pumped hydro” reservoirs powered by windmills) spanning entire British counties. These visionaries argued their case with passion and had plenty of evidence to support their predictions: after all, steam engines were easily the more expensive type of prime mover for the burgeoning factories, the quality (“smoothness”) of power delivered by water wheels was unsurpassed, water power avoided the polluting smoke that was already choking some localities, and the great possibilities of water power were only beginning to be tapped (Malm, 2016). Gordon’s meticulous reconstruction of the water power history of Great Britain (1983) estimated that even under very conservative assumptions, most English river basins in 1838 – squarely on the cusp of the steam engine’s great takeoff – were still practically unused, with even the most heavily used basin having tapped only 7.2 percent of its available power.
Despite having all the advantages on paper, waterwheels lost the race. As Andreas Malm illustrates in his superb study, Fossil Capital (which I’m going to review in more detail later), the reasons had very little to do with numbers, trends or technology, and much more to do with difficult-to-quantify (or -predict) factors such as ready access to a compliant workforce (the steam engine permitted siting factories in cities) and insurmountable coordination problems between potential hydropower beneficiaries. By the way, the latter problem, as Malm also notes, bedevils many if not most proposed theoretical schemes for 100% renewable energy systems, while the renewable plans themselves are eerily reminiscent of the grand plans of the 1820s water power advocates. Of course, since energy debates (just like most other technological debates) are generally obsessed with novelty and ignorant of the history of even their own field, these worrying similarities are ignored entirely.
Unfortunately, even the best predictors, carefully considering the problem from all angles, have a considerable failure rate; even more unfortunately, we can’t know in advance who is going to be proven right. The fact that every coin-flipping tournament produces a winner does not mean that reliable coin-flippers exist. Likewise, the accuracy of a person’s past predictions does not, by itself, tell us much about how reliable her current estimates are. Sadly, we lack a rigorous system for tracking expert predictions and assessing and improving their accuracy, and until such systems are in place (if ever), I would advise taking all predictions about the future with a significant grain of salt.
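To see why a tournament winner proves nothing, here is a quick simulation of the coin-flipping argument above – a sketch with made-up numbers (1,000 pundits, 20 binary predictions each, everyone guessing at random):

```python
# A toy simulation of the coin-flipping tournament: many forecasters,
# all guessing at random, will reliably produce a few "stars".
# The population and prediction counts are made-up illustrations.
import random

random.seed(42)
n_forecasters = 1000   # hypothetical field of pundits
n_predictions = 20     # binary right/wrong predictions per pundit

scores = [sum(random.random() < 0.5 for _ in range(n_predictions))
          for _ in range(n_forecasters)]

print("Average record: %.1f/20 correct" % (sum(scores) / n_forecasters))
print("Best record:    %d/20 correct" % max(scores))
# The average hovers around 10/20, as expected -- but the best
# "forecaster" typically scores 16 or 17 out of 20, an 80-85% hit
# rate achieved with zero skill.
```

Pick the winner after the fact, and you have an apparently stellar track record that predicts nothing whatsoever about the next twenty flips.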
Literature cited
Gordon, R. (1983). Cost and Use of Water Power during Industrialization in New England and Great Britain: A Geological Interpretation. Economic History Review, 36(2), 240–259.
Malm, A. (2016). Fossil Capital: The Rise of Steam Power and the Roots of Global Warming. London: Verso Books.