Why we don’t have the luxury of saying no to low-carbon energy, in one chart

Science Magazine

Reproduced as a public service from Anderson & Peters (2016), Science 354(6309), pp.182-183.

I’ve long maintained that the climate crisis is so acute that humanity simply does not have the luxury of picking and choosing which low-carbon energy sources we’d use. That option was foreclosed perhaps two decades ago, but the idea that we’ll lick climate change with only our favorite technologies dies hard.

Two recent studies highlight an important yet almost totally ignored problem with current climate plans and show why energy cultism must end. In short, these studies (and others like them) show just how much our plans depend on magical CO2-sucking technology conveniently appearing, and suggest that the Earth’s ability to sequester carbon on its own will decrease if we’re able to decrease atmospheric CO2 levels, causing a need for more active measures.

However, for the short term the more important of the two, and therefore the one I’m going to focus on in this post, is this paper from the hallowed pages of Science (one of the two “gold-plated” scientific publications in the world). In it, Kevin Anderson and Glen Peters dissect the assumptions that have gone into the climate models and scenarios the world’s leaders believe might deliver us from the excesses of runaway warming. What they find is alarming, to say the least: with a few outlier exceptions, these models will not work without so-called negative emission technologies that suck carbon dioxide from the atmosphere. Without them, the Paris target of 1.5°C (necessary to save small island states and many low-lying, poor countries from drowning) is right out, and even the more conservative 2°C is looking very unlikely.

The problem is, these technologies don’t actually exist. As the authors note in their paper, two decades of research have failed to produce a viable, economical technology for the easiest use case: removing carbon dioxide from fossil fuel power plants. This failure bodes ill for the more difficult but, for climate plans, absolutely necessary use case: removing carbon dioxide from bioenergy power plants. This largely theoretical technology, known as BECCS (bioenergy with carbon capture and storage) – burning biomass while capturing and storing its carbon dioxide emissions – is supposed to be widely used from 2030 onwards. But there’s very little indication it’ll be ready by then – if ever.

Even if the technology works, we’d still be faced with the problem of finding enough biomass to feed its voracious appetite. We’ve long criticized existing environmental organizations for energy plans that blithely assume we can find as much new arable land for our energy plantations as is currently used for our most important staple crop, wheat. Similar problems abound in more “official” climate scenarios: the two authors helpfully calculate that we’d need the total land area of one to two Indias just to grow the feedstocks, note that the logistics of such an undertaking (equivalent to up to half of the total global primary energy consumption) aren’t considered, and conclude that even if we make it work somehow, the biodiversity loss could equal that projected from 2.8°C warming.

I’m not an expert but can forecast that if a plan requires harnessing a subcontinent or two, there could be some difficulties ahead.

Negative emission technologies such as BECCS are used in the climate scenarios because they are easy fixes for a very difficult problem. However, the magnitude of our reliance on them is rarely communicated to decision-makers and the public, partly because climate scenarios are reported using net carbon emissions only. But what if BECCS and other fantasy technologies fail to deliver, as they very well may do?

Then we’re in deep doo-doo. And the depth of the hole we’ll find ourselves in is greatly influenced by the existence of lamentable energy cults. Even at this late hour, there are very influential persons arguing that we shouldn’t use – or won’t need – technology X or technology Y for emission reductions.

This, too, is fantastical thinking. It’s an energy cult straight from the trenches of the 1980s energy debates, and it’s woefully out of date when the world is headed towards four or more degrees of warming. All technologies have their challenges, and if history is anything to go by (see my previous piece on historical energy transitions here), all technologies will suffer from problems and issues the optimist boosters won’t see in advance. Renewable energy revolutions have already shown they’re not as quick or easy as some believed – in case we failed to learn the exact same lesson from the stalled nuclear energy revolution of the 1960s. I predict the BECCS “revolution” will also suffer from a variety of unforeseen (or, more properly, foreseen but ignored in the initial optimism) problems, and that its rollout will not be as easy as the official scenarios believe.

The bottom line, therefore, is clear: until we’ve actually demonstrated in practice that we can get deep enough emission cuts with a given set of solutions, we should be extremely wary of opposing any potential partial solution to the climate crisis. These, by the way, include degrowth, downshifting and other “social” solutions – we shouldn’t think only technology will or even can help us out here, although we shouldn’t discount technological solutions either.

We don’t have a plan or planet B, nor time to concoct one: we have one shot and one shot only to make this thing work, so let’s make sure we throw everything we’ve got at this mess and hope and pray at least some solutions work. Either get in on the act or get out of the way, and stop trying to derail projects that might help, even if you don’t love the solution.

PS. I’d strongly suggest everyone follow Kevin Anderson (Twitter: @KevinClimate) and Glen Peters (@Peters_Glen), and listen to their many recorded presentations about the scale of the climate/energy problem. (This one is a good start.) As I’ve often noted, anyone who thinks the climate/energy problem is easy to lick doesn’t really understand the problem.

PPS. Following a great suggestion from an actual anthropologist (@NuclearAnthro, follow him as well!), I’ve corrected this article so that it talks of “energy cults” rather than “energy tribalism.” I like the word “cult” more (IÄ FTAGN!), and “tribalism” has some rather unfortunate, not to say inaccurate connotations. I suggest all energy/climate peeps switch over as well.  





The problem is, islands are what we have now – some thoughts on Stewart Brand’s essay “Rethinking Extinction”

If you want to read an article that simultaneously enlightens, delights and gives hope, you could do much worse than to read this excellent Aeon essay on extinctions penned by one of the titans of the environmental movement – Stewart Brand.

Brand provides a fascinating counterargument to oft-heard discourse about species extinctions and the “Sixth Mass Extinction” now caused by humans. He argues that biodiversity is in fact increasing dramatically, and has been doing so for the last 200 million years (see the chart below). Despite the sometimes horrendous damage humans can inflict upon the environment and the undeniable plight of many species, we are not destroying the environment as a whole. Instead, “the frightening extinction statistics that we hear” are largely about small island ecosystems that comprise only 3% of the Earth’s surface, but are the site of 95% of all bird, 90% of all reptile, and 60% of all mammal extinctions since 1600. (The island ecosystems have not, by and large, collapsed as a result – they’ve evolved to a different form.) Besides, these extinctions have already happened, since most vulnerable species are already gone.


The fossil record shows biodiversity has been increasing for 200 million years. Chart courtesy of Wikimedia and the original Aeon article.

Regrettable as they are, these local losses have very little effect on the overall ecological health of the planet. Brand argues that since continents and oceans are much larger, it is unlikely even major die-offs or habitat destructions would collapse the ecosystems there. Species can move to new locations, and if a species dies, it leaves a niche free for some other species to exploit – and there is some evidence that die-offs are actually accelerating global evolutionary rates. Islands have been a special case because of their simpler ecosystems, which is also a major reason why they’re the best studied: they’ve been used as laboratories.

However, the fly in this ointment might be that all we have left may be islands. This was the opinion of the Finnish academician and conservation biologist Ilkka Hanski, who devoted his distinguished career to the study of threatened animal and plant species. Based on his experience of habitat patchiness, developed in part on innumerable small islands in the Finnish archipelago, he developed a theory of “island ecology” on mainlands. (More accurately, “metapopulation theory for fragmented landscapes” – PDF link.) Even though continents are indubitably larger, human activity has in effect turned the remaining habitats into islands: small patches of nature amidst a sea of paved roads, power lines, cities, farms, mines, factories and wastelands.

Hanski, whose untimely death in May is sorely mourned, repeatedly stressed how even seemingly healthy populations that on paper have enough square kilometers to roam may in fact teeter precariously on the brink of extinction because their habitats have been fragmented through human activity. This is a major issue that isn’t adequately addressed in many simpler measures of environmental health, and which may well resist quantification into a simple metric.

Nevertheless, habitat fragmentation is a real problem, and while there are some ways to mitigate it (for example, roads may be built with tunnels or bridges that let animals across), it seems clear we simply cannot continue our encroachment into natural spaces without a risk of causing major damage even to continental ecosystems. The nature of this beast is that impacts are nonlinear: habitat fragmentation may go on for quite a while without many noticeable ill effects, but after some threshold is reached, a cascading collapse might well result. Worse, habitat fragmentation also hinders the mechanisms Brand hopes will help restore ecosystems after collapse of keystone species. Species may migrate and there may be a species capable of filling the hole, but what if they can’t move because their remaining habitats are isolated islands in the human-built world?
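The nonlinearity is easy to illustrate with a toy site-percolation model – my own sketch, not anything from Hanski’s actual metapopulation models. Remove habitat cells at random from a grid, and the largest connected patch shrinks gently at first, then collapses abruptly once enough habitat is gone:

```python
import random

def largest_patch(p, n=60, seed=1):
    """Fraction of an n*n landscape covered by the largest connected
    habitat patch, when each cell independently remains habitat with
    probability p (the rest converted to roads, farms, etc.)."""
    rng = random.Random(seed)
    habitat = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if habitat[i][j] and not seen[i][j]:
                # flood-fill one contiguous patch
                stack, size = [(i, j)], 0
                seen[i][j] = True
                while stack:
                    x, y = stack.pop()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and habitat[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            stack.append((u, v))
                best = max(best, size)
    return best / (n * n)

for p in (0.9, 0.7, 0.6, 0.5, 0.4):
    print(f"habitat left: {p:.0%}  largest connected patch: {largest_patch(p):.0%}")
```

With 90% of habitat left, nearly everything is one connected patch; with 40% left, the largest patch is a tiny fraction of the grid, even though plenty of habitat cells still exist in isolation. The sharp transition in between is the percolation threshold – a crude but vivid analogue of the cascading-collapse worry above.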

I cannot claim to know biology in more than the most rudimentary fashion, but Hanski’s writings have influenced me quite a bit. Fears of habitat fragmentation are one of the reasons I disagree with traditional environmentalists, as their energy scenarios seem to pay lip service at most to this problem. While all environmental organizations agree that ecosystem degradation is a major problem, they nevertheless see no problem in announcing or supporting grand plans that would harness vast areas of the world solely for energy production. As this image shows, such plans are likely to result in significant added habitat fragmentation, even considering that some buildup can be done in areas already disturbed by human activities.


The simple, unavoidable fact is that less dense energy means more ecosystem fragmentation per energy unit generated. Image from our book Climate Gamble, based on actual projects.

Some may think bringing up energy in a discussion about ecosystem damage is tedious, but I disagree. These things really are interconnected, and it is very hard to conserve and protect Earthly life if, at the same time, we must gird the Earth with the harness of power grids, power farms, and energy plantations.


A Response to Lawrence, Sovacool, and Stirling. (Reblogged)

In the following post, Nicholas Thompson provides a very thorough examination of the much-publicized study that sought to “prove” that a commitment to nuclear power slows down CO2 emission reductions. Well, it turns out the paper suffers from a basic math error – among other problems. Correcting the error turns the conclusions upside down and shows that countries with active nuclear policies achieve on average better emission reductions, but I have a suspicion these corrections will not be reported as widely as the original paper was. Nevertheless, here goes:

A few months ago I read a paper, “Nuclear energy and path dependence in Europe’s ‘Energy union’: coherence or continued divergence?” and after reading it,…

Source: A Response to Lawrence, Sovacool, and Stirling.


Justifying liberalism and socialism without God (a commentary to Yuval Harari’s “Sapiens”)

There’s no need to invoke a belief in a supernatural deity in order to believe that all humans are equally important and should be treated with as much equality as possible. In fact, striving for equitable treatment for all is one of the few solid conclusions we can draw from all of philosophy.

I’ve been reading Yuval Harari’s bestseller Sapiens: A Brief History of Humankind. While I’ve been enjoying the book so far, I’m also somewhat let down by Harari’s apparent dislike of liberalism and “socialist humanism” (as he calls movements that seek greater equality for all humans). In a particularly egregious passage, Harari argues that neither liberalism nor humanism can be justified without a belief in a supreme being (pages 260 and 270 in particular in the Finnish translation). According to Harari, without eternal souls and a creator god who made all humans equal, liberals and humanists “have great trouble” explaining what’s so special about individual members of Homo sapiens. Therefore, without God, there’s really no justification for trying to treat people equally.

I’m fairly certain Harari is wrong. While there’s no denying that the Christian tradition has influenced humanism, I for one find no need to believe in souls or creation to think that individual humans are special, have the same intrinsic worth, and should be treated equally as far as possible. The reason for doing so is perhaps best formulated by the philosopher John Rawls in his rightly famous principle, the Veil of Ignorance.

The Veil of Ignorance is a thought experiment that helps answer the question of what a just society is, and it’s deceptively simple: what kind of society would you prefer if you didn’t know in advance where, and with what endowments – such as looks, intelligence, health, or inherited wealth – you would be born?

Put simply, this is “just” a reformulation of the Golden Rule (treat others as you’d like them to treat you). While the Golden Rule is most famously known from the Christian tradition, it has been developed by non-religious thinkers as well. The Golden Rule and the Veil of Ignorance are not religious per se, and require no belief in the supernatural to follow. Nevertheless, these two rules provide a sound basis – in my opinion, the soundest I’ve yet found – for making decisions that involve other people.

A desire for equality and a confirmation of intrinsic human worth flow directly from these two maxims. However we wish to implement them, the end result more or less resembles what Harari believes follows only (or even mostly) from a belief in undying souls that are equally important.

For example, if we healthy people think of the society we’d prefer were we born not with our current health endowment but with some significant disability, we’d most likely prefer a society where even the most grievously disabled are cared for, valued, and provided with the necessities for a life worth living. Similarly, if we didn’t know in advance where, and with what inherited wealth (societal or family), we’d be born, surely we should prefer a society where income differentials are moderate and even very bad luck of birth doesn’t render us destitute?
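For the quantitatively minded, this preference can be put into a toy model – my own illustration; Rawls himself argued for a maximin rule, while Harsanyi derived expected utility from a similar setup. With diminishing returns to income, a chooser behind the veil prefers the equal society even when average wealth is identical:

```python
import math
import statistics

# Two stylized societies with the same average income (illustrative numbers).
equal = [40_000] * 10
unequal = [5_000, 10_000, 15_000, 20_000, 25_000,
           35_000, 50_000, 60_000, 80_000, 100_000]

def expected_utility(incomes):
    # Concave (diminishing-returns) utility: an extra euro matters
    # more to the poor than to the rich.
    return statistics.mean(math.log(x) for x in incomes)

def maximin(incomes):
    # Rawls's own decision rule: judge a society by its worst-off member.
    return min(incomes)

print(statistics.mean(equal), statistics.mean(unequal))  # same average wealth
print(expected_utility(equal), expected_utility(unequal))
print(maximin(equal), maximin(unequal))
```

Under both decision rules – expected utility with a concave utility function, and maximin – the equal society wins, with no gods or souls anywhere in the model.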

Likewise for the Golden Rule: if I want people to treat me equitably when I’m down on my luck, it’s probably best to try to treat people who are down on their luck equitably.

No need to think about gods or souls; no need even for a “right” or “wrong.” What’s needed are only one desire and one belief: first, a desire to make a difference in the world, and second, a belief that environmental factors have an influence on a person’s life. And the latter, it seems, is a fairly well-justified assumption.


Dear Scots; vote again, we want toll-free haggis



Energy transitions: is everything different this time?

The debate about whether the transition to low-carbon energy will be faster or slower than previous energy transitions somewhat misses the point. The real question is whether this time everything is different, and whether the low-carbon energy revolution will be complete enough – and there, history suggests some very sobering answers.

Last week, the estimable David Roberts a.k.a. @drvox wrote an interesting and optimistic article arguing that while previous energy transitions have been protracted affairs, the current clean-energy transition might be faster. I advise all of you to read the piece, and in general whatever Mr. Roberts writes: in response to critics such as energy polymath Vaclav Smil, who point to the historical record and argue that energy transitions tend to take decades, he makes some very good points about factors that truly could help speed up the clean-energy revolution. Moreover, broadly similar sentiments are very common in energy discussions. Typically, they are expressed by energy and climate optimists, particularly those who argue that the necessary energy revolution – required to stave off the worst effects of climate change while bringing reliable energy to the billions now without it – can be achieved using renewable energy technologies alone.

In fact, in my experience such techno-optimism is a major factor underpinning the renewables-only optimists’ positions. Furthermore, optimism about the coming energy revolution (any day now! Prepare your green banners and save roof space for solar PV kits!) serves as a major fig leaf that allows states to avoid acting more forcefully on climate: if a revolution is about to happen anyway, then a technological rabbit-from-the-hat miracle will solve this thorny issue without anyone having to ask difficult questions about, for example, whether we humans and other Earthly life exist here for the purposes of the financial economy, or whether it should be the other way around.

The question, “can the clean-energy revolution deliver”, is therefore of some importance. Mr. Roberts argues it could: energy transitions have been slow but they don’t have to be. In his view, energy technologies are now developing far more rapidly than before.

He ascribes this development speed to energy technologies becoming smaller and to information replacing hardware in both the design and use of energy resources. According to him, there are now energy options at all scales, ranging from kilowatt-sized home power plants to gigawatt-scale industrial facilities, and this enables innovation to be spread across dozens or hundreds of parties instead of a handful of utilities. As a result, smaller technologies iterate and improve faster.

At the same time, better design tools and an improved understanding of materials and technology enable energy system designers to create more efficient or cheaper energy technologies, while the software revolution drives the development of the system as a whole and enables novel approaches, such as the aggregated use of multiple distributed batteries as one “super battery” able to meet demands that individual, distributed batteries cannot.

All these arguments are familiar to most people who’ve followed the energy debate closely enough, and all of them are true enough. The problem is that we’ve been here before, yet the energy transitions have nevertheless been slow and incomplete affairs.

That damnable S-curve strikes again

One of the major findings in the study of technology has been that practically all technologies go through somewhat similar lifecycles. Initial discoveries – in the words of noted technology economist Brian Arthur1, the “harnessing of a phenomenon” for the first time – that in theory enable a particular technology to be contemplated are generally followed by a long period of quiescence, as the embryonic technology is confined to lab benches and the minds of visionaries. Improvement is slow, but it happens; and once the technology improves enough, or the environment changes sufficiently that alternatives lose their competitive edge, the first users begin to tentatively introduce the technology in specific niches where its benefits outweigh its inevitable drawbacks. In some cases, such niche adoption fosters the further development of the technology, and the improvements may result in the technology becoming applicable to other niches. The process continues (a technologist hopes), and sometimes it triggers a take-off, when the technology is suddenly being tested for just about every imaginable use, just to see whether it might be profitable in such settings. At the same time, the number of adopters tends to grow dramatically, driving what for a while looks like an exponential growth curve.


Figure 1: Idealized S-curve, showing initial slow adoption, followed by a period of nearly exponential growth and a plateau. One should note that at every point of the curve, one can find experts using a ruler to make forecasts about the technology’s future prospects.

However, every technology seen so far has also eventually reached a plateau of slow and steady growth – or no growth worth mentioning. Simultaneously, the profuse variety of competing designs and different use cases tends to get pruned down to one or a few so-called “dominant designs” and dominant use cases. The interesting experimentation more or less dies out, the industry consolidates, and innovation slows down. A major reason for this seems to be that benefits from innovating follow the law of diminishing returns: in the initial stages of a technology’s life cycle, improvement opportunities abound and are fairly easy to grasp. But as the technology matures, smaller and smaller improvements are wrung out with greater and greater expenditure. This, in a nutshell, is the highly condensed and simplified version of what is known as logistic S-curve theory, named after the languid “S” shape of the curve that plots technology adoption over time. As an example, I shall use what in some circles is today seen as an antiquated, almost laughable technology: nuclear power.
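The “ruler” problem from Figure 1’s caption can be sketched numerically; the logistic parameters below are purely illustrative. Extrapolating the recent growth rate linearly from the middle of the S badly overshoots the eventual plateau:

```python
import math

def logistic(t, K=100.0, r=0.25, t0=40.0):
    """Adoption level at time t: capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# During the near-exponential middle phase, forecasting "with a ruler"
# means projecting the latest growth increment indefinitely forward.
t = 45
recent_growth = logistic(t) - logistic(t - 1)      # last period's growth
ruler_forecast = logistic(t) + 20 * recent_growth  # 20 more periods of the same
actual = logistic(t + 20)                          # what the S-curve delivers

print(f"ruler forecast for t+20: {ruler_forecast:.0f}")
print(f"actual logistic value:   {actual:.0f}")
```

With these numbers the ruler forecast sails well past the 100-unit capacity the curve can never exceed, while the actual trajectory has already flattened out near the plateau – exactly the mistake made at every point of the curve in Figure 1.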

From unlimited promise to unlimited disappointment

The first real patents for nuclear reactors for energy generation date from the Manhattan Project, from about 1944 and 1945. Even earlier, visionaries had envisioned a future powered by the radioactive rays of radium or similar compounds, and during the war, the U.S. authorities actively investigated several science fiction writers on suspicion that they had learned details of the then-secret Manhattan Project. However, the harnessing of fission for energy-generating purposes really began with the submarine reactors of the mid-1950s. In submarines, the budding technology had found a niche uniquely suited to it: a small power plant that could deliver humongous amounts of energy – enough even to wrest breathable oxygen from seawater – from a tiny fuel source while emitting no noxious gases was precisely what was needed for a true submarine, as opposed to the primitive “under-water boats” that had to surface regularly to charge their batteries and refresh their air supplies. The development program paid for by the U.S. and Soviet navies greatly accelerated the development of the chosen technology in particular: the light water reactor.

Meanwhile, largely due to Cold War propaganda pressures, President Eisenhower had announced the Atoms for Peace program in 1953, promising the war-weary, frightened and divided world that the atomic swords of Armageddon would be forged into nuclear ploughshares of prosperity and cheap energy for all. In the heady years that followed the Atoms for Peace initiative and the first Geneva nuclear conference in 1955, newfangled atomic energy was all the rage. Scientists and the popular press alike poured out suggestions for potential applications of this seemingly miraculous energy source and painted glowing visions of a world where want itself would be eliminated by the unlimited power liberated from uranium or thorium. Both were seen to hold great potential: even the Finnish Agrarian Party, a party not known even today for its innovativeness, discussed the potential of thorium and uranium in its 1962 party program.


Figure 2. “City of Future” from the Finnish weekly magazine Seura, mid-1950s. Powered by a single underground atomic power station in the exact center of the circular city. (Thanks to Esko Pettay for this gem.)

A technological explosion reminiscent of the Cambrian explosion in biology followed: by one count, there were nearly one thousand potential reactor or power source designs, and in 1955, about a hundred of these were thought to hold technical or economic promise2. The designs ranged from small radiothermal generators producing some kilowatts to large power stations envisaged to produce perhaps 200 or even 400 megawatts of electricity, while numerous solutions for space heating and for industrial and process heat were also proposed. These were investigated using the state-of-the-art tools and knowledge of the day, including the massive use of electronic computers and extremely novel methods for analyzing materials. Few popular accounts of atomic energy development were complete without a mention of the wondrous “scientificity” and unsurpassed rationality of the development process, and laboratory tools featured prominently in almost every pictorial account.


Figure 3. Typical illustration in atomic power articles: Nuclear physicist unlocking the secrets of the atom with state of the art R&D equipment, which I just have to assume goes “Ping”. From the same Seura article as the previous image.

Even though the number of potential designs dwindled as investigations proceeded, ten years later, in 1965, there were still about ten potential reactor designs and a huge diversity of suppliers. For example, the directory of nuclear equipment suppliers attached to the July 1965 issue of the EuroNuclear periodical (p. IG-14) listed five suppliers for complete commercial or prototype fast reactors for power production, 13 for complete multi-purpose reactors, and 29 for complete electric power plants – in addition to numerous others offering prototypes, research reactors, laboratory equipment, processing plants, and less-than-complete deliveries.

For countries large and small, being a part of this energy revolution was more than just a practical matter. Even for Finland, barely out of the privations of the war and rationing, much of its energy supply still reliant on horses (yes, really!) and very much what we’d euphemistically call a “developing country” these days, taking part in the “atomic era” was as much a matter of national pride as it was a decision about energy policy. Finland, cautioned learned observers, would be left behind if it did not partake in this miraculous wonder energy source of the future. After all, nuclear energy in its various forms was to be the power source that would bring cheap electricity to faraway places not presently served by power grids of any sort, and enable poor countries to “leapfrog” the suddenly antiquated systems of yesteryear.3

All this may sound eerily familiar. I’m working with a group of researchers on the history of nuclear power, and my research seeks to compare the energy rhetoric of the atomic era to the rhetoric now used with renewable energy. My preliminary analysis suggests that similarities are, frankly, astounding, as are the energy scenarios proposed. Suffice to say that key questions in many scenarios up to late 1970s were how many nuclear power stations would be built to cover the entire energy demand by the year 2000, not whether they would or should be built. By the way, we’re still collecting material – if you know of some good sources, please get in touch!

Detailing the sorry saga of nuclear energy that followed in reality is beyond the scope of this blog post, but to recap the most important point (those more interested are directed to e.g. R. Cowan’s classic 1990 study4): by 1970, the light water reactor had more or less already won the race. Originally developed for shipboard use, it had a number of shortcomings compared to other potential civilian designs. But it had a major advantage: it was available, and subsidized by the state. Furthermore, thanks to the Bomb, there was an existing supply line for uranium, whereas thorium aficionados would have had to build their own. These advantages, and the speed with which countries all over the world rushed to the technology, practically ensured that the light water reactor became the dominant design it remains even today. Innovation slowed down, and nuclear physics courses no longer drew the crowds they used to. The industry consolidated, and today the serious suppliers of nuclear power reactors can be counted on the fingers of one (mutated) hand. Practically every design they offer is a variation of the light water reactor, even though there are some promising signs that innovation may be beginning to happen again.

Amidst all this, the growth of nuclear energy flatlined. At one point, it had grown almost explosively – in fact, faster than what renewables, even combined, have so far achieved over similar time periods. In absolute terms, the transition was extremely rapid. During that period of expansion, there was no shortage of very serious and genuine experts who boldly proclaimed that the growth observed so far would continue into the far future. According to some, by the year 2000 even oil wells would be shuttered, because the energy they produced simply could not compete.


Figure 4a. First 50 years of energy transitions in absolute terms (exajoules generated per year).



Figure 4b. Past energy transitions, as share of total energy consumption. Data for both figures from Smil (2010), with thanks to Dr. Aki Suokko. Finnish readers in particular should check out his excellent blog on energy, economy and the environment, Palautekytkentöjä.

The more things change, the more they stay the same?

I won’t skirt further around the obvious issue: there are some extremely striking similarities between what happened with nuclear power and what is now happening with renewables. Some of the more notable ones include

  • A great and popular enthusiasm for “wonder energy” of future
  • Extremely positive outlook for its future prospects and rosy promises of a future exclusively powered by this one source alone
  • An interlocking set of intellectual, political and commercial interests that helped reinforce the faith in this energy solution
  • The use of state of the art design and development tools and the top experts of the day; at one point, working in nuclear physics was the prestigious career for the scientifically minded
  • An explosion of potential designs, followed by narrowing down based on technological and economic criteria and technical experience (how many different designs for large wind turbines there are these days?)
  • A major growth spurt in size to capture economies of scale (again, visible in wind turbines, and very likely going to be visible in solar energy as well – witness Ivanpah for example)
  • A consolidation in the industry (happening or already happened with wind turbines, and possibly happening with PV panels and batteries, as fabrication plants are extremely expensive facilities)
  • Diminishing returns for innovation (probably already happened with wind turbines)

Thus armed, one could construct a perfectly valid counterargument to the points raised by Mr. Roberts and other energy optimists: when compared to contemporary alternatives, almost all the features that are now supposed to speed up the clean energy revolution were also present during the early phases of the nuclear era. Granted, software was less of a business back then; but nuclear plants did co-evolve with the broader electricity system and made use of some fairly advanced technology in their day. For example, the first large-scale energy storage systems were pioneered with nuclear power!

It is also true that nuclear power had its unique challenges, although the extent to which these affected decision-making is less clear. On the other hand, variable renewables do suffer from certain drawbacks that nuclear power didn’t have. Their production is inherently variable yet strongly correlated across installations (that is, solar panels all produce during the day and none produce at night; a similar problem applies, to a lesser extent, to wind turbines, as weather patterns can cover large areas). This makes profitable grid integration more and more difficult as renewable penetration increases. Their energy density is low, meaning that large areas need to be appropriated for their use (even though dual use is often a possibility), and, arguably, they can still be fairly expensive.
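Why correlated production is such a headache can be illustrated with a back-of-the-envelope calculation (all numbers are hypothetical, chosen only for illustration): when the outputs of N generating units swing together, the system-wide fluctuation grows in proportion to N, whereas independent fluctuations partially cancel and grow only with the square root of N.

```python
# Toy sketch of why correlated renewable output is hard to integrate.
# All numbers are hypothetical illustrations, not measured values.
import math

N = 100          # number of generating units
SIGMA = 1.0      # output standard deviation of a single unit (MW, assumed)

# Perfectly correlated units (e.g. solar panels under the same sky):
# their fluctuations add linearly, so the system swings N times harder.
sigma_correlated = N * SIGMA

# Statistically independent units: variances add, so the combined
# standard deviation grows only with sqrt(N) and swings partially cancel.
sigma_independent = math.sqrt(N) * SIGMA

print(sigma_correlated)    # → 100.0
print(sigma_independent)   # → 10.0
```

With a hundred units, the correlated system fluctuates ten times more than the independent one – which is why geographically concentrated solar or wind capacity stresses the grid far more than the same capacity of uncorrelated sources would.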

Furthermore, more optimistic analysts tend to overlook the fact that developments in grid flexibility and energy storage do not necessarily help variable renewables alone.

Given that large-scale energy storage systems have been pioneered and successfully used precisely in the context of stable, dispatchable baseload power sources, the silent yet extremely common assumption that the development of storage systems will usher in a renewable revolution is somewhat puzzling. In fact, as one recent study5 noted, large-scale, scalable energy storage could actually increase emissions, in the U.S. for example. Why? Because cheap storage would allow dispatchable baseload plants to store their excess production when electricity is cheap and sell it when it’s expensive, thus boosting their profitability. At the same time, because the marginal cost of variable renewable production is so low, the periods when these sources actually produce energy would be precisely those periods when the price paid for electricity is very low. Similar impacts would be seen if demand flexibility changes the demand curve sufficiently.
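The arbitrage mechanism can be sketched with toy numbers (entirely hypothetical, not taken from the Hittinger & Azevedo study): a coal baseload plant charges storage when prices are low and discharges at the daytime peak; since round-trip losses must be made up by extra coal generation, the trade is profitable for the plant yet adds emissions.

```python
# Toy arbitrage sketch (all numbers hypothetical): a dispatchable coal
# plant uses grid-scale storage to shift cheap night-time output to
# expensive daytime hours, and burns extra coal to cover storage losses.

ROUND_TRIP_EFF = 0.75   # fraction of stored energy recovered (assumed)
COAL_CO2 = 0.9          # tonnes CO2 per MWh of coal generation (assumed)
PRICE_NIGHT = 25.0      # $/MWh when the plant charges storage
PRICE_DAY = 60.0        # $/MWh when storage discharges

def arbitrage(mwh_stored):
    """Profit ($) and extra emissions (t CO2) from shifting mwh_stored."""
    mwh_delivered = mwh_stored * ROUND_TRIP_EFF
    profit = mwh_delivered * PRICE_DAY - mwh_stored * PRICE_NIGHT
    # Round-trip losses are made up by extra coal generation:
    extra_generation = mwh_stored - mwh_delivered
    extra_emissions = extra_generation * COAL_CO2
    return profit, extra_emissions

profit, extra_co2 = arbitrage(100.0)  # shift 100 MWh through storage
print(f"profit: ${profit:.0f}, extra CO2: {extra_co2:.1f} t")
# → profit: $2000, extra CO2: 22.5 t
```

The plant pockets the price spread, the grid burns more coal: exactly the perverse outcome the study warns about when cheap bulk storage arrives before the baseload fleet is decarbonized.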

I’m tempted to think that the source of confusion here is that large-scale storage, demand flexibility, or most probably a mixture of both are necessary but not sufficient conditions for truly large-scale (that is, the scale we need for climate mitigation) renewable adoption. Since a true renewable revolution is likely to require such developments, many optimists have become confused and think that if such developments happen, then the renewable revolution must also happen. But there are no guarantees this is indeed the case. It is also perfectly possible – perhaps even too likely – that such developments will help baseload plants too, perhaps even to the extent that the relative competitive position of variable renewables versus fossil fuel baseload does not change unduly.

And this is one of the major blind spots in today’s rather ahistorical energy discourse: too many people seem to ignore the fact that all technologies are developing simultaneously.

Humans are very prone to suffering from what is known as “availability bias:” we give more weight to information and events we’ve observed personally. Proponents of a given type of technology tend to follow news about developments in that technology, and more or less ignore news from other sectors. In such a setting, it is easy to become convinced that competing technologies are standing still while one’s favorite technology is developing in leaps and bounds. But in reality, the same types of advanced design tools, materials, and software Mr. Roberts touts as unique to today’s clean energy revolution are being applied to fossil fuel technologies as well. While it’s probably true that fossil technologies are far more mature and innovation there is harder, the precariousness of the competitive edge of renewables may mean that not much innovation is needed to, essentially, maintain the status quo.

This, I believe, was one of the mistakes made by the energy optimists of the 1960s. Enamored as they were with nuclear technology, they failed to notice that other technologies were developing as well. Similarly, I believe that developments in other energy technologies were a major reason why the renewable revolution did not begin in the 1930s or even earlier, even though – for example – wind power was studied seriously back then.

For example, the first megawatt-scale wind turbine was built in 1941. One book in my collection, dating from 1963, expertly discusses the pros and cons of wind and solar power projects; the issues, e.g. variability, were the same back then, and with only a slight updating of the language, the discussion could easily be recycled to cover the current renewable energy debate. And the earliest mention I’ve been able to locate of humanity being “soon” powered “from the unlimited rays of the Sun” is from a 1913 article about a large-scale experimental solar power project. Some other forecasts, and the actual reality, were recently collated by analyst Michael Cembalest of J.P. Morgan:


Figure 5. The share of US primary energy from renewable sources, and some notable forecasts. Data from EIA, listed authors, and JPMAM; compiled, with image, by Michael Cembalest, JPMorgan Chase & Co. (2016)

What remains to be seen is the outcome the different drivers and pressures will ultimately produce. Will the renewable revolution exceed all prior energy transformations by truly supplanting, not just adding to, existing energy sources? Or will it follow the path so far taken by every other energy transition and reach a plateau long before supplying even the majority of the world’s energy needs? Since we absolutely must quit fossil fuels fairly soon, this is what scares me far more than the rapidity or slowness of the revolution. Most developed countries absolutely must have a fairly clean energy system by 2050, a mere 34 years from today. Even if the renewable revolution well exceeds the more pessimistic estimates and reaches a 50% share of total energy consumption by that time, it is not enough. And history shows that energy transitions tend to stall sooner.

(As an aside, I heartily recommend that everyone watch this lecture by the esteemed Prof. Kevin Anderson, detailing why most climate/energy forecasts are in fact hopelessly and systematically optimistic.)

In fact, one thing I wonder about nuclear history is whether a slower energy transition might have been a good thing: perhaps we wouldn’t have become locked into state-subsidized light water reactors alone, and perhaps some of the problems caused by the rush to this technology – including insufficient safety measures, and the distrust- and resistance-breeding arrogance nuclear boosters exhibited towards the revolution’s doubters – might have been avoided.

Past history does not guarantee future performance, and it is possible that the optimists are right: perhaps there is something entirely different about renewable energy technologies, or about the socio-political-economic environment in which they are being built. But “this time is different” has been the mantra of the overconfident throughout recorded history – sufficiently so that there is an excellent book by that title. That book explores why people don’t seem to learn the lessons of eight centuries of history and keep repeating follies that predictably lead to economic disasters and disappointments.

As one review of the book noted, “this time is different” are the four most dangerous words in finance. Only time will tell whether the same will apply to energy transitions.


  1. Arthur, B. W. (2009). The Nature of Technology: What it is and how it evolves. New York: Free Press.
  2. Särkikoski, T. (2011). Rauhan atomi, sodan koodi: Suomalaisen atomivoimaratkaisun teknopolitiikka 1955-1970. (The technopolitics of Finnish atomic power decision.) PhD thesis published in Historical Studies from the University of Helsinki XXV, Helsinki.
  3. Särkikoski, T. (2011), ibid.
  4. Cowan, R. (1990). Nuclear Power Reactors: A Study in Technological Lock-in. The Journal of Economic History, 50(03), 541. http://doi.org/10.1017/S0022050700037153
  5. Hittinger, E. S., & Azevedo, I. M. L. (2015). Bulk Energy Storage Increases United States Electricity System Emissions. Environmental Science & Technology, 49(5), 3203–3210. http://doi.org/10.1021/es505027p.



What is Ecomodernism? A lecture

Almost forgot! In case you want to hear the undersigned mangle the language of the Bard and talk about ecomodernism, look no further!

This is an hour-long lecture I delivered in English (here with Finnish subtitles) to a packed audience in January at the Arcadia International Bookshop, Helsinki. (BTW, an excellent – nay, mandatory – destination if you ever find yourself in the “White Daughter of the Baltic.”) Many thanks to Kaj Luukko for his superb work in filming the presentation and doing all the post-production, including embedding slides from my presentation. Kaj’s blog “Gaia” was and is one of my inspirations, and it’s practically required reading for anyone interested in environmental issues and capable of handling some Finnish.
