Hey Greenpeace, could you find us Finns a warm place to live in?


Pictured: Finland. Not pictured: radiation levels exceeding those Greenpeace deems “emergency radiological situation” and “an unacceptable radiation risk.” (Picture credit: SeppVei/Wikimedia)

A recent Greenpeace news release leads to an inescapable conclusion: we Finns need to be evacuated immediately, because the radiation hazards of living in Finland exceed those encountered in the Fukushima evacuation zones. I therefore humbly ask Greenpeace to find a place for 5.5 million Finns, or at the very least for those 549 000 of us who now have to live in an irradiated wasteland where annual radiation doses are at least twice what Greenpeace deems an “emergency radiological situation” and “an unacceptable radiation risk” in Japan. If possible, could you also find us a place that’s warm and without slush?

According to Greenpeace’s press release, “2017/02/21 Greenpeace exposes high radiation risks in Fukushima village as government prepares to lift evacuation order”, radiation levels measured in Iitate village would equate to an annual dose of 2.5 millisieverts (mSv/a), and levels as high as 10.4 mSv/a have been measured indoors. Since millisieverts are a measure of radiation hazard that already accounts for the differences between radiation sources and between exposure pathways (e.g. internal or external), we can use these measurements to directly compare the risks of living in Iitate to the risks of living in Finland. The comparison is simple: for the purposes of radiation hazard, a higher millisievert count means a greater risk.

According to Finnish estimates, the 5.5 million people living in Finland are at a greater risk than inhabitants of Iitate, receiving on average 3.2 millisieverts per year.


The mean annual radiation dose for Finnish people. Source: Finnish Radiation Safety Authority (STUK), Muikku et al. (2014) p. 6.

But this is not the whole truth, oh no! In many places in Finland, actual radiation doses are far higher than that. The ice ages scraped our soil down to bedrock, and that bedrock contains considerable quantities of uranium. One of its decay products is an odorless, invisible and radioactive gas known as radon. With little soil above to hold it, radon tends to rise into the air and collect within our dwellings. As the pie chart above shows, radon and its decay products are a major factor in the radiation dose of an average Finn, but radon exposures vary widely, from almost zero to as high as 340 (yes, three hundred and forty) millisieverts per year (Muikku et al. 2014, p. 12).

According to measurements conducted by the Finnish Radiation Safety Authority (STUK), about 549 000 Finns receive at least 5 millisieverts per year from radon and other sources. Of those, perhaps 70 000 receive annual doses that exceed the highest dose Greenpeace managed to measure at Iitate (10.4 mSv/a) (Muikku et al. 2014, p. 15). (Note: it is unclear whether radon might in fact account for the high indoor radiation doses Greenpeace measured in Iitate. It is well known from Finland and other areas with high radon concentrations that without very good basement ventilation, radon can easily collect in houses and result in very high dose rates.)
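Because sieverts already weight for radiation type and exposure pathway, the whole comparison above reduces to simple arithmetic. A minimal sketch using the dose figures cited in this post (nothing here beyond those figures):

```python
# Annual effective doses in millisieverts per year (mSv/a), as cited above.
iitate_average = 2.5      # Greenpeace's estimated annual dose for Iitate
iitate_max_indoor = 10.4  # highest indoor level Greenpeace measured
finland_average = 3.2     # Finnish national average (Muikku et al. 2014)

# Sieverts are directly comparable: a higher dose means a higher risk.
print(finland_average > iitate_average)              # True: average Finn exceeds Iitate's average
print(round(finland_average / iitate_average, 2))    # 1.28: 28% higher
```

The same one-liner logic underlies the 549 000 figure: anyone receiving 5 mSv/a or more gets at least twice Iitate's average dose.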

Even though extensive studies have failed to find any clear link between these dose rates and the incidence of health problems (a link likely exists, but is so weak that a clear connection cannot be established), it should by now be clear that if anyone deserves an evacuation because of radiation hazards, it is we Finns. (See also the picture at the top of this post.)

Preferably to somewhere warm.

The Fukushima disaster was a needless tragedy that tore entire communities apart. It is despicable for any organization to prolong this tragedy and exploit people’s understandable fears in order to propagate an outdated, probably disastrous energy policy that puts opposition to nuclear power front and center, even as the evidence of the dangers of runaway climate change grows clearer by the day. It is especially despicable to use utterly misleading propaganda, as Greenpeace currently does, to solicit donations.


Muikku et al. (2014). Suomalaisten keskimääräinen efektiivinen annos. STUK publication A259. (PDF link)


Stall warning for renewable energy?


A model that estimated the plateauing of nuclear and hydropower to within 20 percent of reality suggests that, absent a technological breakthrough, the growth of new renewable energy – that is, wind and solar – will saturate and end when these new power sources, taken together, amount to no more than about ten percent of the world’s energy supply. This is the startling and fearsome conclusion of a recent study by Hansen et al. (2016).

I previously wrote at length about that nemesis of optimistic prognosticators, the technology S-curve, and noted that there is no reason to believe the renewable energy revolution won’t be subject to the same forces that have stalled every previous energy revolution before it was complete. In that article, I showed how the still unmatched initial growth spurt of nuclear power convinced many reasonable observers until the late 1970s that the age of cheap atomic energy was inevitable and that other energy sources would simply vanish before this juggernaut of unlimited power and potential. The similarities to today’s discussion and hype about the potential of renewable energy sources, largely based on their relatively rapid initial growth rates, are direct and worrisome: the nuclear revolution entered the steady phase of the S-curve and stalled long before being complete, and there are many signs the renewable revolution is in danger of stalling as well.

Now, Hansen et al. provide some further evidence for my claims. They used a simple logistic model (“S-curve”) to estimate the final plateau of various energy sources, including hydropower in Europe and nuclear power globally. Using data from the growth years of these power sources, they found that the logistic model predicted the ultimate extent of both nuclear and hydropower generation to within 20 percent of reality. Then, applying the same model to similar data on the recent growth of renewable energy, and factoring in optimistic estimates until 2020, they concluded that a similar flattening of growth rates would occur with renewables by about 2030, resulting in global power generation of about 1.8 terawatts (TW) at most.

Because global energy use exceeds 17 TW at the moment and is projected to increase until 2050 at least, this presents a very stark warning to everyone interested in stopping dangerous climate change and weaning the world from the scourge of fossil fuels. The idea that we can stop dangerous climate change, and particularly the idea that we only need renewable energy and energy efficiency to do so, are almost entirely predicated on the assumption that renewable energy growth will be exponential rather than logistic, and/or that the plateau of slow growth will take decades to achieve. However, every technology has followed the logistic S-curve in the past, and Hansen et al. note that the observed data fits the logistic model better than it fits an exponential one.
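Why is it so hard to tell the two growth assumptions apart early on? The standard logistic function makes the point concretely. The sketch below uses Hansen et al.'s headline 1.8 TW plateau, but the midpoint and rate parameters are my own illustrative choices, not values from the paper:

```python
import math

def logistic(t, plateau, midpoint, rate):
    """Logistic (S-curve) growth: saturates at `plateau`."""
    return plateau / (1 + math.exp(-rate * (t - midpoint)))

def exponential(t, scale, t0, rate):
    """Unbounded exponential growth, matched to the logistic's early tail."""
    return scale * math.exp(rate * (t - t0))

# Illustrative parameters: plateau 1.8 TW (Hansen et al.'s headline figure),
# midpoint and growth rate chosen freely for the sketch.
PLATEAU, MID, RATE = 1.8, 2022, 0.3
for year in (2000, 2015, 2030, 2050):
    print(year,
          round(logistic(year, PLATEAU, MID, RATE), 3),
          round(exponential(year, PLATEAU, MID, RATE), 3))

# Well before the midpoint the two curves are nearly indistinguishable --
# which is exactly why rapid initial growth alone cannot tell us whether
# a technology is on an exponential or a logistic trajectory.
```

Only data near and past the inflection point distinguishes the curves, and by then the plateau is already locked in.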

This is potentially a very serious issue indeed.


Logistic fit for energy sources growth, from Hansen et al. (2016)

Hansen et al.’s warning echoes what we’ve been saying for some time now: there is a troubling slowdown in new renewable energy installations in precisely those countries and regions that have been the most advanced in this respect. In other words, RE installations are slowing down in the places that have installed the most RE – long before the installation rates, let alone the total capacities, that decarbonisation requires have been achieved. These slowdowns could be an early signal that the renewable revolution is stalling, although it’s still too early to say for certain. Renewable energy also faces some unique challenges, chief among them probably the tendency of additional installations to cannibalise the revenue streams of all similar generators once penetration roughly equals the average capacity factor of the generators in question. (See this excellent treatise by Alex Trembath and Jesse Jenkins for more detail about this problem.)


Solar PV installation rates in forerunner Europe took a plunge after a peak in 2010, even though global installation rates increased. This could be a troubling signal. Figure from our book, Climate Gamble (see sidebar).

It’s still too early to say for certain whether Hansen et al. are right. I sincerely hope they’re wrong, and that the renewable energy revolution continues as the optimists hope. This may well be possible, if – and that’s a major if – energy markets are restructured to truly value low-carbon energy, and price decreases for both renewable generators and storage systems continue unabated. Still, we’re likely to need more subsidies (implicit or explicit) for our energy systems, not fewer, and as the authors note, the trend is unfortunately towards phasing out subsidies.

Despite fervently hoping that the authors are seriously in error, I cannot, in good conscience, ignore the extremely troubling similarities to previous hype cycles about energy revolutions, nor the potential early warning signals suggesting that the renewable revolution isn’t going to be such smooth sailing as its proponents often claim. We need more effort to promote clean energy, and more alternatives. Consider that even if Hansen et al.’s prediction turns out to be wrong by a factor of 7, we’d still require more than just wind and solar power. We ignore warning signs such as this at our peril.

As a bonus: my previous article about S-curves did not include data about the growth of new renewable energy sources, but thanks to a helpful table in the aforementioned study, here it is. It represents an estimate of total power generated by each source during the first 30 years of its expansion, i.e. installed power corrected by (estimated) capacity factor. The capacity factors used are as follows: hydro, 0.8; solar, 0.1; wind, 0.27; nuclear, 0.5 from 1965 to 1980, rising to 0.8 by 2015. You can access the data from this Google Sheet and view the interactive version of the graph below here. As you can see, the nuclear energy revolution is still unprecedented (particularly if we looked at growth relative to existing generation), although wind power growth shows some signs of catching up.


Actual power generation (installed capacity * capacity factor) from different energy sources during up to 30 years of expansion. Data from Hansen et al. (2016), capacity factor estimates my own.
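The correction described above is nothing more than installed capacity times capacity factor. A minimal sketch using the capacity factors listed in the text; the installed-capacity numbers below are made-up placeholders for illustration, not the actual data from the linked sheet:

```python
# Capacity factors from the text; installed capacities (GW) are hypothetical
# placeholders -- the real figures are in the linked Google Sheet.
capacity_factor = {"hydro": 0.8, "solar": 0.1, "wind": 0.27, "nuclear": 0.8}
installed_gw = {"hydro": 1000, "solar": 300, "wind": 500, "nuclear": 400}

# Average power actually generated (GW) = nameplate capacity * capacity factor.
actual_gw = {src: installed_gw[src] * capacity_factor[src] for src in installed_gw}
print(actual_gw)
```

Note how, in this illustration, 300 GW of solar nameplate capacity delivers only 30 GW on average, while 400 GW of nuclear delivers 320 GW: this is why comparisons of raw installed capacity across sources are so misleading.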


Hansen, J. P., Narbel, P. A., & Aksnes, D. L. (2016). Limits to growth in the renewable energy sector. Renewable and Sustainable Energy Reviews, 70 (October 2016), 769–774. https://doi.org/10.1016/j.rser.2016.11.257


Climate shouldn’t be used as a political sledgehammer

A real problem hindering the fight against climate change is that for too many, climate change is just another blunt rhetorical instrument with which to hammer in their favored policies.

As this article discusses, it would be utterly naive to believe that noted critics of capitalism, for instance, could have somehow impartially studied the climate change problem and only then concluded that preventing dangerous climate change requires the downfall of the current capitalist world order. In the Finnish context, it shouldn’t be surprising if people who’ve made their careers promoting bioenergy promote bioenergy as a (or even THE) solution to climate change, no matter what science has to say about the subject. The same of course applies to those steeped in mainstream economic thought: no one should be surprised if a devout free marketeer sees climate change as a problem best left to the markets to solve.

However, the problem is probably particularly acute for those who actually want to prevent dangerous climate change. Far too many organizations and individuals who promote climate change awareness also have a deep, long-standing vested interest in, or attachment to, particular solutions. In the case of environmental NGOs, it is only fair to say that their commitment to a world shorn of “excessive consumption” and powered only by renewable energy has far longer roots than their commitment to fighting climate change. As this article notes, conservatives are naturally very suspicious when these organizations have now just happened to find a serious global problem that just happens to vindicate the policies they had proposed for decades before climate change became a mainstream issue.

It is for this reason that I’ve long said one of the most powerful symbolic messages the traditional environmental organizations could send would be a statement saying that we now need all potential solutions to this problem, not just those the organization has happened to promote for decades. This is a credibility issue for the whole climate fight. Over the years I’ve heard numerous climate skeptics and outright deniers ask a simple question: if climate change is the existential problem the environmental NGOs claim it is, how come they still oppose nuclear power so vehemently?

If environmental NGOs really believe what they’re saying about nuclear energy – that it’s too costly and generally uncompetitive against renewables – they have nothing to lose by issuing such a statement. If they’re right, then nuclear is on its way out regardless of what they say about it. Such a statement would, however, go a long way towards emphasizing the urgency of the climate fight, and just might convince some of those who aren’t motivated to act because they feel the leftie environmentalists are just using a made-up threat to push their pet politics.

DISCLAIMER: I’m definitely a liberal-leftie environmentalist myself – but I try to figure out how to make environmental issues matter in politics. I’m also involved in the fledgling ecomodernist movement, which just might provide a home for those concerned about environmental issues but incapable of acting within the boundaries of traditional environmentalism.


I’ll also ban travel to the United States.

Dear friends in the United States and elsewhere,

With regret I must inform you that I cannot in good conscience attend any academic or other events in the United States. Obviously, I can’t consider any job offers either.

This decision pains me because ever since I was about six, I’ve held the United States and her citizens in great regard, and I have always enjoyed my visits there. However, I have no desire to support, aid or abet the proto-fascist regime now trying to take power in that once-great country. Even more so, I cannot simply pretend nothing happened while my colleagues and fellow humans are so unfairly discriminated against. Among other outrages, just last weekend five PhD students from Aalto University, my own academic home, were denied entry to the United States for a long-planned study trip.

Furthermore, I shall direct my academic and other work towards publications outside the United States. I currently have one manuscript forthcoming in a US-based journal, and I shall not withdraw it at this late stage; however, I shall prioritize outlets whose taxes do not support a proto-fascist regime for all further work. I shall also endeavour to shift the focus of our research group towards non-US outlets. It is high time for top scholarship to return to Europe, from where the predecessors of Mr. Trump once drove it away. Fortunately, we are now in a position to offer similar sanctuary to scholars from other countries, and if you or anyone you know needs help relocating, please let me know. Right now Finland is unfortunately not the best place to do research due to budget cuts, but other European countries might be.

I understand these actions will have little impact on actual politics and may greatly harm my career prospects. However, to quote another scholar, Stu Marshall, I’d rather have a conscience than a career.


Atomic mail rockets, and how monocausal predictions are particularly dangerous

The new, shining age of information delivery was briefly at hand on June 8th, 1959. A Regulus cruise missile – designed for delivering a nuclear warhead to a Soviet city or port – landed neatly at the naval base at Mayport, Florida, twenty-two minutes after being launched from a U.S. Navy submarine. Instead of a city-busting “bucket of Sun”, its payload bay held 3,000 letters; the first “missile mail” had arrived. According to the Postmaster General of the United States, who witnessed the missile’s landing, “before man reaches the moon, mail will be delivered within hours from New York to California, to Britain, to India or Australia by guided missiles. We stand on the threshold of rocket mail.”

While the Regulus experiment was no more than a publicity stunt advertising the missile’s accuracy to the Soviets and the world, rocket mail was a staple of futurist predictions about soon-to-be-realized innovations until the late 1960s. According to the Wikipedia article on the subject, rockets or artillery shells were proposed as mail-delivery systems from as early as 1810. After a widely publicized lecture by rocket pioneer Hermann Oberth in 1927, at a time when rockets were little more than experimental toys, the United States ambassador to Germany even discussed the practical legalities of transatlantic rocket mail in anticipation of a future service. Even though transatlantic rocket mail never materialized, individual inventors and hobbyists kept testing small mail rockets until the advent of intercontinental missiles and the space age seemed to place rocket mail just around the corner.

In hindsight, the logic behind such predictions is easy to understand. From the early 1800s, the speed of transport had increased relentlessly. As muddy roads and horse-drawn wagons gave way to railroads, and sailing ships, always subject to the vagaries of weather, were supplanted by steamships, rates of travel increased significantly. With the coming of the aeroplane, speeds increased further and faster still. In the 1950s, airspeed records were broken monthly, and almost all forecasters expected the rapid increase in airplane speed to continue until most flying – both commercial and military – happened well above the speed of sound. Rockets and missiles would be the ultimate expression of this trend, capable of traversing the world at Mach 11 or more and hauling time-critical payloads (nuclear warheads, mail, rescue specialists riding manned missiles) to their destinations.


Furthermore, mail delivery had historically provided an important impetus for increasing rates of travel. Regular steamship service across the Atlantic owed much to the desire to speed up mail delivery, and the earliest true commercial use of airplanes was likewise mail flying. It was not unreasonable to believe that rockets, too, would find civilian use in the same role. As they were only linear extrapolations from history, 1950s forecasts of rocket mail were not (as far as I’m aware) controversial in the slightest.


Atomic mail rockets, soon irradiating a postal office near you. Thanks to @NuclearAnthro for the picture, original source unknown.

Of course, in hindsight we also know that this didn’t happen. There are no regular postal missile runs, and many see the very idea as just another ridiculous example of the 1950s-era belief in the inevitable march of science and technology.

To me, however, rocket mail provides another cautionary tale about predictions of technology. Those confident that rocket mail would soon be a thing were not stupid: they made intelligent predictions about future technologies based on observed trends in history. Faster delivery of mail was better than slower delivery, and previously all faster methods of transportation had been used to deliver mail, sometimes – as in the case of steamships and air mail – despite considerable initial costs and difficulties. The advances in aerospace technology during the 1950s made missiles ubiquitous, and the Space Age, the ever-present threat of missiles carrying nuclear annihilation, and the attendant media coverage ensured that rockets and missiles were never far from anyone’s mind. In fact, it seems reasonable to believe that the expert forecasters were particularly influenced by the availability of information about developments in rocketry and missiles: after all, these were the state-of-the-art technologies of the day, whose development the forecasters generally followed with great interest. In this environment, where experts could witness technological advances on a daily basis, an expert extrapolation based practically solely on trends in the rate of travel and the general desirability of faster mail service was a no-brainer.

A generous interpretation of subsequent events would be that the experts were not wrong in the fundamentals, only in the particulars. Faster communication was indeed much desired, but it took the form of electronic communications. Instant was even better than fast, and e-mail eventually replaced mail in all time-critical communications. Packages still require physical handling, but bulky items are not well suited for rocket delivery, nor can rockets cross the last mile from the sorting center to the customer (or post office). As with passenger transport – another area where faster, faster, faster was once believed to be the single trend worth following – “fast enough” turned out to be enough for people at large. (And before you ask: delivery drones have very little in common with rocket mail, which was always about crossing great distances from one sorting center to another, not from sorting center to customer.)

Why the fuss, then?

But we have a problem if we let forecasters off the hook so easily while still permitting their visions to guide politics and other important decisions. Despite the long history of failed predictions based on a single variable or world-explanation, from Marxism to neoclassical economic theory, many people still desire to predict things from a single variable or cause. While the fall of communism discredited Marxist world-explanations (even to too great an extent; Marx’s ideas retain considerable value as an analytical tool, but like any other tool, Marxism falters badly when it is the sole tool in the toolkit), monocausal explanations using crude economic theory are still all the rage. In the energy debate, which I follow closely, fundamentally monocausal explanations are so common that they only rarely raise eyebrows. Take any sufficiently long comment section under a debate piece about energy, and the probability of finding predictions about the energy markets, or even the future of the world, based on e.g. the Hubbert curve, EROEI, energy density, the price trend of photovoltaics, or the general “inevitability” of sustainable energy approaches unity.

Sometimes predictions prove accurate, but I believe this happens mostly by accident: given enough projections about the future (and enough interpretative ambiguity or weasel words), some projections are bound to be proven correct, at least if one squints just right. The great authority on expert predictions, Philip Tetlock, has concluded that most “experts” have on average no better than a 50/50 success rate, and even the very best predictors he’s been able to find are wrong about 25% of the time. Most notably, Tetlock has pretty much proven that “experts” infatuated with a single or very few “key trends” or world-explanations – monocausal predictors, or in Tetlock’s words, “hedgehogs” with One Big Idea – are invariably the least accurate, often with significantly worse success rates than flipping a coin would produce.

When I consider the possible biases arising from groupthink, obsession with novelty and wishful thinking, all of which are extremely prevalent in the intersection of the tech industry and the technocratic green movement from where the more optimistic energy predictions emanate, my confidence in most energy predictions drops dramatically. After all, predictions are hard, and energy predictions notoriously so: detailing all the past failures would be tiresome, but suffice it to say that in the 1950s, nuclear fusion (note: fusion, not fission) was supposed to deliver “power too cheap to meter” in the very near future. Likewise, atomic energy was often supposed to make even oil obsolete by 2000 – but as the eminent energy historian Vaclav Smil once noted, the most reliable energy forecasts have been those that expected little change from the present.

Our assessments of expert predictions are further impaired because failed predictions are generally airbrushed from history books, unless played up for laughs. To take just one example, quite a few people in the 1820s and 1830s – just as the steam engine was making its great leap forward – sincerely believed that the prime mover of the future would be water, enhanced and made more reliable by ingenious hydraulic engineering works (including “pumped hydro” reservoirs powered by windmills) spanning entire British counties. These visionaries argued their case with passion and had plenty of evidence to support their predictions: after all, steam engines were easily the more expensive type of prime mover for the burgeoning factories, the quality (“smoothness”) of power delivered by water wheels was unsurpassed, water power avoided the polluting smoke that was already choking some localities, and the great possibilities of water power were only beginning to be tapped (Malm, 2016). Gordon’s meticulous reconstruction of the hydrological history of Great Britain (1983) estimated that even under very conservative assumptions, most English river basins in 1838 – squarely on the cusp of the steam engine’s great takeoff – were still practically unused, with even the most used basin having tapped only 7.2 percent of its available power.

Despite having all the advantages on paper, waterwheels lost the race. As Andreas Malm shows in his superb study, Fossil Capital (which I’m going to review in more detail later), the reasons had very little to do with numbers, trends or technology, and much more to do with difficult-to-quantify (or -predict) factors such as ready access to a compliant workforce (the steam engine permitted siting factories in cities) and insurmountable coordination problems between potential hydropower beneficiaries. The latter problem, as Malm also notes, bedevils many if not most proposed theoretical schemes for 100% renewable energy systems, while the renewable plans themselves are eerily reminiscent of the grand plans of the 1820s water power advocates. Of course, since energy debates (like most other technological debates) are generally obsessed with novelty and ignorant of the history of even their own field, these worrying similarities are ignored entirely.

Unfortunately, even the best predictors carefully considering the problem from all angles have a considerable failure rate; even more unfortunately, we can’t know in advance who is going to be proven right. Even though there is always going to be a winner in every coin-flipping tournament, it does not mean reliable coin-flippers exist. Likewise, accuracy of a person’s past predictions does not, by itself, tell much about how reliable her current estimates are. Sadly, we lack a rigorous system of tracking expert predictions and assessing and improving their accuracy, and until such systems are in place (if ever), I would advise taking all predictions about the future with significant grains of salt.

Literature cited

Gordon, R. (1983). Cost and Use of Water Power during Industrialization in New England and Great Britain: A Geological Interpretation. Economic History Review, 36(2), 240–259.

Malm, A. (2016). Fossil Capital: The Rise of Steam Power and the Roots of Global Warming. London: Verso Books.


100% renewables and 100% nuclear are both practically impossible

I’ve been following with interest how some nuclear power advocates suggest that building anything other than nuclear power sidetracks us from the climate goals. These advocates claim that variable, non-dispatchable renewables will ultimately be incapable of delivering a deeply decarbonized energy system, and that we therefore shouldn’t waste time or money building them, because we’d only have to replace them with dispatchable sources of low-carbon power – which in practice means nuclear.

It should be noted that this is very much a minority position among those who support nuclear power as a weapon in the climate fight. The anti-nuclear establishment’s most common strawman argument against nuclear supporters is to claim we want to solve climate change with nuclear only. This is not true, and there are only a handful of pro-nuclear activists who see no need at all for anything else. However, the 100% nuclear argument is not as far-fetched as it may seem at first, and before arguing why I don’t quite believe in it I must first explain what it is all about.

The key issue at the root of the problem is that variable renewables begin to cannibalize their own profitability long before the energy system will be decarbonized. As a rule of thumb, increasing the market share (penetration) of variable renewable energy sources (VREs) beyond their capacity factor will be increasingly difficult.

Why? Because when the market share of a VRE begins to equal its capacity factor, the occasional full output of all these VRE generators – say, during a sunny or windy day – will begin to exceed the total demand on the electricity grid.


Figure 1: Production from wind and solar in Germany + Austria on one ordinary day if solar generation is increased 4x and wind generation 8x from current. Black line denotes demand.

Because of the way electricity is valued, this will cause the price of all electricity produced during such times to fall close to the lowest marginal cost of production – which is close to zero for VREs. At worst, inflexibilities in grid operation may mean that producers have to pay users to consume the excess electricity. At this “inflection point,” as John Morgan has dubbed it here, adding more of the said VRE will no longer be economic. If, for example, grid electricity is almost free when the sun shines, where is the incentive to install additional solar panels?
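The inflection-point mechanism can be made concrete with a toy model. Everything below is a deliberately simplified illustration – flat demand, a single VRE that swings between zero and full nameplate output – not real grid data:

```python
# Toy model: flat demand and one VRE fleet whose output varies between
# zero and full nameplate capacity, with a given capacity factor.
DEMAND = 100.0          # arbitrary units, assumed constant for simplicity
CAPACITY_FACTOR = 0.25  # e.g. wind

def penetration(installed_capacity):
    """Average share of demand met: capacity * capacity factor / demand."""
    return installed_capacity * CAPACITY_FACTOR / DEMAND

def surplus_at_full_output(installed_capacity):
    """Excess production when the whole fleet runs at nameplate output."""
    return max(0.0, installed_capacity - DEMAND)

# At penetration == capacity factor, installed capacity equals peak demand:
# full output exactly meets demand, and every further unit of capacity
# produces surplus (near-worthless) power whenever the fleet runs flat out.
cap = DEMAND                            # capacity equal to peak demand
print(penetration(cap))                 # 0.25 -- equals the capacity factor
print(surplus_at_full_output(cap))      # 0.0 -- the inflection point
print(surplus_at_full_output(cap * 2))  # 100.0 -- half of full output is surplus
```

This is why the rule of thumb ties the economic limit of a VRE's market share to its capacity factor: past that point, additional output increasingly arrives exactly when it is worth the least.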


Figure 2: What electricity can be sold at non-zero prices. Note in particular how little solar electricity has any value at all. For more thorough discussion, see my post here.

(See also here the excellent essay by Jesse Jenkins and Alex Trembath examining this very problem. I’ve also written about the issue here.)

The problem stems from the fact that realistic capacity factors for wind may remain below 40%, and for solar, 15% is already optimistic at northern latitudes. Furthermore, these figures aren’t truly additive: because wind turbines often produce while the sun is shining and vice versa, we cannot simply add up the capacity factors of various energy sources to determine the maximum total share the electricity grid can economically manage. It may turn out that in large-scale grids, a 50% total market share for VREs already presents substantial practical difficulties. Smaller – that is, national – grids could easily exceed this, provided they have neighbors willing and able both to absorb occasional excess production and to provide backup when the weather isn’t cooperating. However, high VRE penetration in one region would also mean that its neighbors cannot economically build as much VRE themselves.

The limit will also shift depending on how much energy storage and demand management we can assume. In theory, with economic and scalable energy storage (or perfect demand flexibility), the problem could be solved easily. However, it is still unknown whether we can rely on these technologies developing the way we want – and there may be other difficulties that reduce the overall limit, such as existing low marginal cost generation, problems in building enough transmission lines, and so forth. Nevertheless, whether the economic limit is 40, 50, 60 or 70% is not very relevant. As long as the limit is not close to 100%, the question remains: where do we get the rest?
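The diminishing returns of adding VRE capacity without storage can be sketched with a crude simulation. The availability profiles below are entirely made up (random numbers with roughly plausible means), so treat this only as an illustration of why capacity additions serve less and less demand as penetration grows.

```python
# Toy illustration: without storage, each doubling of VRE capacity serves
# a smaller additional share of demand, because excess output is curtailed.
# Availability profiles are crude random assumptions, not real data.
import random

random.seed(42)
HOURS = 8760
demand = [100.0] * HOURS  # flat demand in MW (illustrative)

# Solar produces only during "day" hours; wind is random around a 30% mean.
solar_cf = [max(0.0, random.gauss(0.5, 0.2)) if h % 24 in range(8, 17) else 0.0
            for h in range(HOURS)]
wind_cf = [min(1.0, max(0.0, random.gauss(0.3, 0.25))) for h in range(HOURS)]

def useful_share(wind_mw, solar_mw):
    """Fraction of annual demand met by VRE; excess is curtailed (no storage)."""
    served = sum(min(d, w * wind_mw + s * solar_mw)
                 for d, w, s in zip(demand, wind_cf, solar_cf))
    return served / sum(demand)

for scale in (1, 2, 4, 8):
    print(scale, round(useful_share(100 * scale, 100 * scale), 2))
```

Running this shows the served share creeping toward, but never reaching, 100%: the first doubling of capacity adds far more useful energy than the last one, which is the economic wall described above.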

It is very likely that if we exclude hydropower, which cannot realistically be expanded in most developed countries, the least-cost option for covering the rest of the demand will be burning stuff – and there aren’t that many carbon-neutral fuels to go around, not even if all the waste and all the sustainable biomass are burned for energy. The likely end result will be blown carbon budgets and a failure to prevent dangerous climate change.

Thus, say the 100% nuclear advocates (who typically do not believe that energy storage and/or demand management will develop fast enough to matter), shouldn’t we focus on low-carbon power plants that deliver power up to 90% of the time? And if we have low-carbon power that delivers with an 80–90% capacity factor, then variable sources do not really add anything we actually need from an emissions standpoint. Aren’t wind turbines and solar panels therefore superfluous, and isn’t the “all of the above” energy strategy doomed from the beginning?

Why do we nevertheless need all of the above?

I don’t buy the 100% nuclear argument, and in the following I try to present several reasons why. Due to the complexity of the issues, the following four points are necessarily more like thought experiments than anything you should take as gospel, and I’d value your feedback.

1. All low-carbon energy reduces fuel burn and conserves hydro reservoirs.

The first reason why I support all low-carbon energy sources right now is that the world energy system still runs mostly on fossil fuels. This is true even for the electricity system, which is in theory the easiest to decarbonize.

Therefore, with some important local exceptions, all new low-carbon energy can reduce fuel burn and conserve water in hydro reservoirs, which in turn helps to reduce fuel burn during peak demand. In most places, we are still so far from the point where new low-carbon energy will not reduce burning that the question of which low-carbon energy source we should build is largely academic. Anything helps.

2. Totally discounting technological change is unwise.

The problems of large variable renewable market shares outlined above are real. However, they may be solvable with a combination of partial solutions. So far, the pessimistic analyses I’ve seen have focused on some proposed solution, typically energy storage, and proceeded to “disprove” its feasibility by quoting high prices, low energy returns (EROI), lack of critical raw materials, or similar limitations.

These cautionary messages have their place, but just as very few people are claiming we should solve the climate/energy problem with nuclear alone, very few people are saying some single solution would solve the problem of intermittency.

More likely and more feasible is a combination of solutions. Many different forms of energy storage, from batteries to gas generators to heat and cold storage, will help; some demand response will do its bit; grid interconnections and “smart” grids do theirs; and optimizing renewable energy generators for maximum capacity factor instead of maximum annual energy production will raise the rule-of-thumb limit described above. (For solar panels, orientation can make quite a difference in capacity factor, while wind turbines can be optimized for low wind speeds at the expense of annual generation. The reason this hasn’t been widely done yet is that many subsidy schemes reward peak generation, not firmness of supply.)
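The wind-turbine trade-off mentioned above can be sketched numerically. In this toy model (simplified cubic power curve, Weibull-distributed winds, all parameters invented for illustration), derating the generator on a fixed rotor loses some annual energy but raises the capacity factor considerably:

```python
# Toy sketch of optimizing a wind turbine for capacity factor: keep the
# rotor fixed and derate the generator. All numbers are made-up assumptions.
import random

random.seed(1)

def power_mw(wind, rotor_gain, rated_mw):
    # Simplified cubic power curve, capped at rated power
    # (cut-in and cut-out speeds ignored for simplicity).
    return min(rated_mw, rotor_gain * wind ** 3)

# One year of hourly wind speeds (m/s), roughly Weibull-distributed
winds = [random.weibullvariate(7.0, 2.0) for _ in range(8760)]

for rated in (3.0, 2.0):  # same rotor, generator derated from 3 MW to 2 MW
    energy = sum(power_mw(w, 0.004, rated) for w in winds)  # MWh/year
    cf = energy / (rated * len(winds))
    print(f"{rated} MW generator: {round(energy)} MWh, CF {cf:.2f}")
```

The derated machine produces fewer megawatt-hours per year but delivers them far more steadily – exactly the kind of firmness current subsidy schemes mostly fail to reward.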

Furthermore, price trends in e.g. batteries are making the more pessimistic assumptions outdated even as we speak. Battery prices may not follow the most optimistic trends, but there is still clearly room for improvement, and such improvements are happening. The same goes for EROI concerns – and in any case, EROI is a problematic metric for deciding whether we should install some particular source or not.

That said, we should be careful not to draw the opposite conclusion – that very real technical issues will be solved just because solving them would be very beneficial for society. Demand alone does not guarantee a technological solution; this is one of the key takeaways from my forthcoming PhD thesis, which examines the technological response to resource scarcities. We shouldn’t discount the capabilities of our scientists and engineers, nor the power of suitable incentives, but we also shouldn’t rely on them.

3. Electricity price crashes build necessary niches for critical decarbonization technologies.

Related to point #2, I actually believe that we need electricity price volatility to develop the necessary components for a truly decarbonized energy system.

Decarbonizing the current electricity system alone is relatively easy, and in developed countries, it might be possible solely with renewables or solely with nuclear. However, electricity accounts for only about a third of global carbon emissions. To get at the rest, we urgently need technologies that can reduce fossil fuel use and emissions in other sectors. Transport is particularly important, but other sectors, such as construction and industry, also need to be decarbonized.

The simplest and, probably, the most reliable way of getting from here to there is by electrifying just about everything. That is, we need to learn how to use electricity, either directly or indirectly, for tasks that now need fossil fuels. For example, we need to learn how to transport people and goods with electricity, and how to make iron with it.

Most of these tasks are likely to require technologies that, in essence, convert electricity to some other form of energy or to some other material, or transfer demand for electricity from one moment to another. For example, we might turn towards battery-powered electric cars, and make iron in a hydrogen process where the hydrogen is separated from ordinary water via electrolysis. Likewise, we could design heaters and coolers to operate only when electricity is plentiful.
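To get a feel for the scale of electrified iron-making, here is a back-of-the-envelope calculation. The stoichiometry (Fe2O3 + 3 H2 → 2 Fe + 3 H2O) is standard chemistry, but the assumed electrolysis efficiency of 50 kWh per kilogram of hydrogen is a rough round figure of my own choosing:

```python
# Back-of-the-envelope: electricity needed to make one tonne of iron via
# hydrogen direct reduction (Fe2O3 + 3 H2 -> 2 Fe + 3 H2O).
# The electrolysis figure is an assumed round number, not a measured value.

M_FE = 55.85   # molar mass of iron, g/mol
M_H2 = 2.016   # molar mass of hydrogen, g/mol
ELECTROLYSIS_KWH_PER_KG_H2 = 50.0  # assumed, including typical losses

def kwh_per_tonne_iron():
    mol_fe = 1_000_000 / M_FE   # moles of Fe in one tonne
    mol_h2 = 1.5 * mol_fe       # stoichiometry: 3 H2 per 2 Fe
    kg_h2 = mol_h2 * M_H2 / 1000
    return kg_h2 * ELECTROLYSIS_KWH_PER_KG_H2

print(round(kwh_per_tonne_iron()))  # roughly 2700 kWh per tonne of iron
```

Under these assumptions, reducing a tonne of iron ore takes on the order of a few megawatt-hours of electricity – a demand that could, in principle, be timed to coincide with cheap electricity.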

At the same time, such technologies could help us solve the problem of electricity overproduction. Given cheap enough methods for storing electricity for later use – the hydrogen route seems one of the more promising ones, either directly or after conversion to methane – the problems outlined in the introduction evaporate. Excess electricity production would simply be absorbed in storage systems, to be released when production flags.

But the problem right now is that these storage systems are prohibitively costly. Likewise, electrification of fossil fuel-demanding industrial activities proceeds only slowly, because electric alternatives are usually more expensive.

To get the prices of these technologies down to where we need them to be, we are very likely to need substantial innovation. This, in turn, is likely to require what is known as “niches” in technological transitions literature. Niches are specific market conditions or other safe havens for innovations to occur and be improved before having to compete with more established technologies. Absent such niches, it is unlikely that even good ideas can be developed to mature enough levels.

Quite obviously, technologies that require us to store or convert electricity would benefit from a niche where electricity is very cheap – or where the grid operator might actually pay users to consume excess power.

This is increasingly the situation in grids characterized by a large share of variable renewables or of relatively inflexible (for economic reasons, mind you) nuclear. What’s more, I suspect that the increase in solar power in particular will be crucial to the emergence of low-cost electricity niches: household solar PV is fast becoming a fairly attractive proposition, particularly in markets where transmission costs are high. New houses in particular may soon come with solar panels as standard, and even though they might generate excess electricity, the avoided transmission costs alone may make them profitable – at least until the transmission cost structure changes to account for this. Such installations could, over the next ten years or so, produce regular gluts of low-priced electricity that help energy storage, conversion and demand management technologies to grow.
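The transmission-cost point can be made concrete with invented numbers. Self-consumed rooftop generation displaces the full retail price (energy plus grid fees and taxes), while exported generation earns only the wholesale spot price:

```python
# Illustrative rooftop PV economics: self-consumed kWh avoid the full
# retail price, exported kWh earn only the spot price. All prices invented.

SPOT = 0.04          # EUR/kWh, assumed wholesale price
TRANSMISSION = 0.05  # EUR/kWh, assumed grid fees and taxes
RETAIL = SPOT + TRANSMISSION

def annual_value(generation_kwh, self_consumed_share):
    self_used = generation_kwh * self_consumed_share
    exported = generation_kwh - self_used
    return self_used * RETAIL + exported * SPOT

print(annual_value(5000, 0.3))  # 30% self-consumed
print(annual_value(5000, 0.0))  # everything exported at spot
```

With these made-up figures, a household consuming even a modest share of its own output captures substantially more value per kWh than the grid pays for exports – which is why rooftop PV can be privately profitable even while depressing wholesale prices for everyone else.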

I believe we need these low-carbon technologies no matter what the 2050 electricity grid will look like. Even though nuclear power hits the capacity factor wall much later than variable renewables, a 100% nuclear energy system would also need energy storage, conversion, and demand management. Therefore, having regular electricity price crashes has its upsides as well.

A caveat worth keeping in mind, however, is that an electricity glut may be used for other purposes as well. It is quite conceivable that alongside, and perhaps instead of, storage and conversion, we’re simply going to see electricity being used more when it is cheap, for needs we don’t even know we have right now. An example I’ve toyed with is that cheap electricity might well be used to mine bitcoins; this could be a much more profitable activity than storing the electricity for later use, particularly if the electricity grid is firmed up by the least-cost solutions – fossil fuels. And this last, very real possibility is to my mind the most important reason to support all low-carbon energy sources now.

4. Both 100% RE and 100% nuclear by 2050 are politically and technologically unrealistic pipe dreams.

The recent U.S. elections showed that climate change is still one of the least motivating political issues in most developed countries. In our environmentalist bubble, where we’ve been sharing climate change news and worries for the last decade or so, this may sound all wrong. But this is simply the reality we seem to live in.

Going either 100% renewable or 100% nuclear would require an enormous public program to build what would be, in effect, nationalized electricity grids. This would be necessary simply because the required build rates are so immense that there is no hope of achieving them on a commercial basis alone. In the case of a nuclear industry that has been allowed to wither, the required build rates for 100% nuclear would present significant bottlenecks in manufacturing capability alone. Furthermore, the economics of low-carbon power would require state intervention even for major decarbonization of a national electricity grid. This has been done before with nuclear in France, and to a lesser extent in other countries, but the only real justification for going this route today would be avoiding dangerous climate change.

Compared to other scenarios, 100% nuclear has an additional problem: nuclear is by far the most hated energy source there is. That this hate and fear are based on decades of misinformation is regrettable, but immaterial. Mobilizing opposition to any plan to build an energy system that’s even mostly nuclear would be laughably easy. To be fair, right-wing populists today have begun to oppose renewable energy, and wind power in particular, with the same blatant disregard for facts, logic and research that the shrillest anti-nuclear advocates perfected and weaponized in the 1980s, but this does little to make nuclear more popular, even though some do support nuclear simply to spite “the Greens”. Those who continue the fight against low-carbon energy sources they dislike are studiously ignoring the elephant in the room: the most likely alternatives, and the largest beneficiaries of this fight, are fossil fuels. Meanwhile, the fight between low-carbon sources takes our time, saps our resources and results in personal animosities between those genuinely concerned about climate change. All these would be regrettable outcomes of infighting in the best of times; today, they are deadly threats.

My interpretation of the world is that particularly right now, we environmentalists do not have the slightest hope of building the 2050 energy system we might really want. But we might have some hope to influence decisions so that we can reduce emissions instead of increasing them wildly. In other words, when we see proposals that might help, we should rally to help them come to fruition.

Barring several technological miracles, the Paris target of 1.5°C, tenuous from the beginning, is almost certainly unachievable now that the United States has set its might against climate change mitigation. But we need to remind ourselves that this fight isn’t about 2°C or bust: every tenth of a degree counts, as there are degrees in any catastrophe.


There is now a far-right environmental movement, and I welcome it

I predicted a year ago that we’re going to see more diversity in the global environmental movement, because there are people who actually care about the environment but cannot buy into traditional, heavily Leftist and Western-centric environmentalism – or are not even welcome there.
Just after I wrote about the dire need for pragmatic environmentalism, I learned that the French far right now has its own environmental organization. I don’t agree with their policies, but at least they have one thing going for them: they support nuclear power, which allowed France to reach, already in 1989, the climate targets Germany hopes to achieve by 2050 – despite great increases in energy consumption. Even if this movement opposes global climate treaties, simply keeping nuclear power operational could very possibly do more good than anything the traditional French environmentalists could realistically achieve.

CO2 emissions per capita and the fastest 10-year emission reduction rates in four countries, alongside means used to achieve them. Data from CDIAC carbon database and BP World Energy Outlook, figure from our book Climate Gamble: is Anti-Nuclear Activism Endangering our Future? 

This is going to be a test for those actually interested in protecting the environment rather than fighting a political proxy war through environmental issues: can we work together on the issues where we agree, despite there being many issues where we can’t possibly agree? And can we, coming from the more traditional environmental mindset, learn to negotiate about our desires instead of demanding that the rest of the world accept the set of solutions we set in stone in the 1980s and haven’t really bothered to adjust since?
This fragmentation of environmentalism is, in my opinion, the inevitable result of increasing environmental awareness. Besides the political differences that prevent many people in Western countries from joining, traditional environmentalism hasn’t really caught on in the wider world. There is a good reason for this: for all its talk of inclusiveness and global responsibility, the traditional environmental movement is extremely Western-centric in just about everything. Its values and attitudes may simply not speak to people in other parts of the world as they do to some of us here in the West. The rest of the world needs, and will have, its own forms of environmentalism, just as different political groups are likely to form their own environmental organizations. All this can lead either to the death and fragmentation of environmentalism, or to a broad coalition of people who disagree on many things but might agree that we need to, for example, wean ourselves off fossil fuels. I believe we need the latter if we want to win the climate fight, but I fear the traditional environmentalists are only going to denounce all new approaches as heresy of the highest order.
I believe it would be foolish to dismiss these new environmental movements as mere tools or trolls. The far right has always had a strong Romantic attachment to “blood and soil” and “unspoiled nature”, and it’s no secret that some former Nazis and fascists were involved in the founding of the modern Green movement – though claims that the Green movement is somehow fascist because of these connections are just paranoid fantasy. But such people still exist, even though they’ve been driven out of the traditional environmental movement, and there genuinely are people who hold political ideals I, for one, find odious, but who nevertheless care about our common environment.
The environmental problems we face are so severe that this is no time for making more enemies than absolutely necessary. At this point, if the Devil himself were to increase the global low-carbon energy supply or reduce emissions, I would find it in myself to say something nice about the old fellow.