Graphic of the week: The great “80% of world’s energy could be generated from renewables” fallacy

Is a future without fossil fuels and without nuclear truly feasible?

In 2011, the Intergovernmental Panel on Climate Change (IPCC) released its Special Report on Renewable Energy Sources and Climate Change Mitigation, or SRREN. The report sought to determine the potential contribution of renewable energy sources to the mitigation of climate change. It reviewed the results from 164 individual scenarios derived from 16 different models, all developed since 2006.

For some, the report's conclusions were sobering: nearly half of the scenarios suggested that renewable energy sources might contribute no more than 27% of the world's energy supply by 2050 (see Chapter 10, p. 794). Even when counting only the most aggressive scenarios – the ones where atmospheric CO2 concentrations are stabilized to less than 400 parts per million – the median estimate of the world's renewable energy supply in 2050 was somewhat less than 250 exajoules; in other words, about half of global primary energy consumption today.
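To put these figures in perspective, here is a quick back-of-the-envelope check in Python. The ~540 EJ figure for current global primary energy consumption is my own ballpark assumption (BP and IEA statistics for the early 2010s land in that range); it is not a number taken from the report.

```python
# Quick sanity check of the SRREN figures quoted above.
# ASSUMPTION: current global primary energy consumption is roughly 540 EJ/year
# (BP/IEA statistics for the early 2010s are in this ballpark).
current_primary_energy_ej = 540.0

# Median renewable supply in 2050 across the most aggressive (<400 ppm) scenarios,
# as quoted from SRREN Chapter 10.
median_renewables_2050_ej = 250.0

share_of_today = median_renewables_2050_ej / current_primary_energy_ej
print(f"Median 2050 renewable supply = {share_of_today:.0%} of today's consumption")
# -> about 46%, i.e. roughly half of today's primary energy use

# The "80% renewables" headline rests on 2 scenarios out of 164:
print(f"Share of scenarios behind the claim: {2 / 164:.1%}")  # -> 1.2%
```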

The report is by no means overly critical of renewables. For example, its rather loose definition of renewable energy includes "traditional" biomass, a major cause of deforestation, and the feasibility of the various scenarios is never assessed: no evaluation whatsoever is conducted to determine which of the 164 scenarios might be realistic and which might not be. Nevertheless, the results show clearly that renewable energy sources are highly unlikely to be enough for meaningful climate change mitigation, even if energy efficiency improves by leaps and bounds.

It is therefore very instructive to note how various anti-nuclear groups have chosen to portray the results. Without exception – and perhaps misled by SRREN's extremely skewed press release – every anti-nuclear organization I've so far researched tells you that IPCC SRREN "proves" or "shows" that a future powered completely by renewables is entirely possible.

For just one example, take this statement from Greenpeace:

"According to the IPCC's 'Special Report on Renewable Energy Sources and Climate Change Mitigation' (SRREN), by harnessing just 2.5% of viable renewable energy sources, with currently available technologies, its possible to provide up to 80% of the world's energy demand by 2050." (Source)

This, my friends, is cherry-picking pure and simple. What anti-nuclear activists don't tell you is that only two of the 164 scenarios could in any way be construed to suggest anything like that. What's more, the 80% claim comes from Greenpeace's own Energy Revolution scenario. One could take issue with the fact that it was largely prepared with data and "assistance" from the renewable industry lobby group EREC, but industry biases and other oddities aside, this scenario assumes that world energy use actually decreases from current levels – even as the world's population grows to 9 or 10 billion and there are no signs whatsoever that developing nations are willing to forgo the fruits of a high-energy lifestyle.

This is so important that it bears repeating: at most, 1.2% of the scenarios find that renewable sources alone might supply something less than 80% of the energy the world uses today. The remaining 98.8% aren't so sanguine.

And, again, that's at current consumption levels, even though the broad consensus is that the world's primary energy demand will increase greatly over the next few decades.

Sure, technically speaking the anti-nuclear organizations are correct: it might be possible to derive 80% of the world's energy from purely renewable sources, in the "best case" scenario. But omitting crucial details and the gist of the report is a wanton act of cherry-picking and demagoguery; it's the exact equivalent of claiming that CO2 emissions aren't harmful – if all the known and suspected feedback mechanisms turn out to cancel the effects of increased emissions.

What the SRREN report actually shows is that in practice, renewables alone are highly unlikely to be enough, fast enough, to avert the coming climate catastrophe. Similar results abound; to pick just one example, consider the recent modeling work undertaken for a book by Mark Lynas. It shows rather clearly that even the aforementioned Greenpeace/EREC Energy Revolution scenario would be highly likely to fail to keep global warming below 2°C – even if it were executed in full, without fail, starting today.

So, here’s the question: what should we call those who cherry-pick the climate science to suit their political agenda?


Graphic of the week: Comparing land use of wind and nuclear energy

Land use footprint of wind and nuclear power generation

Nuclear energy is often claimed to be an environmentally harmful technology, especially when contrasted with renewables such as wind power.

However, these claims are rarely accompanied by proper sources. This may be because comparisons based on actual science do not really support such blanket statements. To take just a few examples, a range of studies, including the IPCC's assessments, have consistently found nuclear energy to be among the least carbon-intensive methods of energy generation, with lower lifecycle emissions than even solar photovoltaics. Similarly, most life cycle assessments have found that nuclear energy requires far less material – such as steel and concrete – per unit of energy produced than even renewables (see e.g. Weißbach et al. 2013).

This graphic compares another component of ecosystem damage potential: the land use footprint. It is well known that ecosystem degradation and destruction due to increased land use is, alongside climate change, one of the greatest threats to Earth's environmental well-being. Therefore, solutions that reduce our environmental footprint are desirable.

The graphic most likely underestimates the footprint of wind power while overestimating nuclear energy's footprint. This is because I deliberately ignored material requirements (except uranium), used the most environmentally destructive uranium mining method (open cast mining), overestimated uranium requirements by a factor of at least four, used the most optimistic assumptions regarding wind energy production, and ignored the effects of variability. The latter would, after a certain level of wind energy production is reached (with current technology, perhaps at 20-30% of electricity generation), require perhaps two or three times the number of plants presented here to produce the same level of service, or the building of significant backup plants and/or energy storage facilities. If material requirements are accounted for, wind power has a 3-10 times larger materials and mining footprint than nuclear (see e.g. the supplementary material for the aforementioned Weißbach et al. 2013).
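For readers who want to play with the comparison themselves, below is a minimal Python sketch of how a land-footprint-per-TWh comparison can be set up. Every input number is a placeholder assumption of mine (e.g. ~2-3 W/m² average power density for onshore wind, a few km² for a plant site plus mining); none of them are the exact figures behind the graphic.

```python
# Illustrative land-footprint comparison per unit of electricity produced.
# NOTE: every number below is a placeholder assumption, not a figure taken
# from the graphic itself.

HOURS_PER_YEAR = 8760

def wind_land_km2_per_twh(power_density_w_per_m2=2.5):
    """Rough land area (km^2) of wind farm needed to deliver 1 TWh per year.

    power_density_w_per_m2: average delivered power per unit of total farm
    area; ~2-3 W/m^2 is a commonly cited ballpark (assumption).
    """
    twh_per_m2_per_year = power_density_w_per_m2 * HOURS_PER_YEAR / 1e12
    return (1.0 / twh_per_m2_per_year) / 1e6  # m^2 -> km^2

def nuclear_land_km2_per_twh(plant_site_km2=4.0, plant_twh_per_year=8.0,
                             mine_km2_per_twh=0.1):
    """Rough land area (km^2) per TWh/year for a plant site plus uranium mining.

    All three parameters are placeholder assumptions, not figures from the graphic.
    """
    return plant_site_km2 / plant_twh_per_year + mine_km2_per_twh

print(f"Wind:    ~{wind_land_km2_per_twh():.0f} km^2 per TWh/year")
print(f"Nuclear: ~{nuclear_land_km2_per_twh():.1f} km^2 per TWh/year")
```

Swapping in different assumptions (say, a lower wind power density for sparser sites, or a larger mining footprint) changes the absolute numbers, but the order-of-magnitude difference tends to persist.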

I also selected a relatively dense wind farm with a short electricity interconnector (the thickest line connecting the three wind farms in each segment of the graphic). In addition, I did not draw those parts of the access roads that were evidently used for other purposes as well, e.g. public highways.

As wind power generation increases and locations close to existing power lines and already disturbed by human presence are used up, developers must turn their attention to ever more remote sites. These entail longer connectors and more access roads, sometimes encroaching on existing wildlife sanctuaries. Connectors and roads also fragment habitats, and may therefore contribute more to ecosystem damage than one might assume from simply counting the area they occupy.

The alternative, offshore wind, does not need access roads, but it will still disturb marine ecosystems if not sited properly.

However, please note that none of the above should be construed as an argument against wind farms or renewable energy in general. Compared to fossil fuels, they are still far less destructive to health and the environment, and proper siting can alleviate many of the potential hazards. My only aim is to show that the claim "nuclear harmful – renewables benign" is far less clear-cut than it appears.

As always, you are free to spread this graphic as you see fit.


On the relationship between regulation, technological change and competitiveness

Translation of my presentation at the 38th Ilmansuojelupäivät (the annual Finnish air protection conference) in Lappeenranta, Finland, 20 August 2013

If we deconstruct the panel's topic, "is environmental protection a threat or an opportunity for a country's competitiveness," sooner or later we will end up considering the question of whether tighter environmental regulation would help or even force firms to develop novel ways of working – novel technologies, in the broad sense of the word – that others would be willing to pay for.

Of course, this is far from being the only possible way through which environmental protection could in principle affect a country’s competitiveness; to mention just a single example, a case could be made that tight regulations preserve the quality of the environment and therefore make a country an attractive location for skilled professionals. However, because I have neither space, time nor competence to deal with these kinds of questions, I shall concentrate on the effects that regulatory constraints may have on technology — and on the effects technology has on the regulations.

In theory, environmental protection and economic activity can be combined precisely through "incentivizing" regulation that forces beneficial changes in techniques and operating procedures. For two decades now, the favorite theory trotted out in support of this thesis has been the so-called "Porter hypothesis," which maintains that tight regulation pays for itself not just through enhanced environmental protection, but also through improved goods and services. The logic behind this hypothesis, developed by Michael Porter and others in the early 1990s (e.g. Porter and van der Linde, 1995), is that necessity is a good motivator that makes firms develop technologies they otherwise wouldn't bother developing. If the regulation is properly incentivizing, innovations will flourish and competitiveness will soar, more than offsetting the costs of regulation.

As is the case with so many other beautiful theories, the Porter hypothesis is seductive, simple, and very likely not correct. Despite over two decades of research, empirical evidence for net positive effects of tightening environmental regulation on the economy as a whole, or even on the success of specific sectors, remains slim, and what little evidence there is tends to be problematic to say the least. In general, noteworthy tightening of regulatory constraints, compared to e.g. regulation elsewhere, can promote individual firms and, in some cases, specific sectors, but there is little evidence of broader effects. From the economy's viewpoint, the evidence is therefore both positive and negative: on the one hand, regulation has only a small positive effect, but on the other, the negative effects of regulation on e.g. industry profitability also tend to be small. Unfortunately, the same applies to reductions of environmental pollutants.

Before taking a stab at explaining why this is so, and what we might do about it, I'm going to be explicit about what I'm not claiming. It is clear that tightening regulations and political pressure create additional incentives and pressures for firms to develop their technologies. It is also clear that, on occasion, these pressures and incentives can aid the development of successful innovations and even novel industries. Furthermore, regulatory constraints are not the only way regulators can influence technological change; regulation can also increase incentives by e.g. subsidizing novel innovations. Finally, there is no doubt that changes in the operating environment are bound to benefit some firms and penalize others. Anecdotal evidence is abundant, and our all-too-human tendencies to seek simple reasons for complicated developments, to remember successes, and to brush failures under the carpet easily create the impression that the plural of "anecdote" is "data."

Nevertheless, it is at the very least uncertain whether innovations developed as a response to regulatory constraints are, on average, net positive developments. Economic theory suggests that this is possible, given very specific conditions (Mohr, 2002). Unfortunately, research has failed to find much evidence of these conditions being anything but rare occurrences. Because the Porter hypothesis requires that firms systematically leave profitable improvement opportunities unexplored – not just that this may happen – the lack of empirical evidence is not surprising.

The problem with combining economic activity and growth with environmental protection through technological development is that sticks and carrots are not the only, or necessarily even very powerful, forces affecting technological change. It is only a slight exaggeration to say that the discourse is often dominated by a conviction that technological solutions to environmental problems require only proper incentives: if the incentives are in place, any problem will be solved. In a certain sense this is true, as long as the definition of a "solution" is kept broad enough to fit ocean liners through, and no time limit for the solution is specified. Outside semantic hair-splitting contexts, it is clear that certain problems remain unsolved even though solutions would clearly have extremely significant practical and economic value. A trivial example is a way to locally alter gravity; less trivial but no less valuable examples include a cheap and scalable method for storing electricity, a cheap, clean source of energy, and a fast method for convincing the majority of the world's population to make great economic sacrifices in the name of environmental protection.

The aforementioned problems remain unsolved, and there are no guarantees that solutions even exist, even though sticks and carrots are decidedly plentiful. A moment's reflection reveals why this is so: even if these solutions were technologically possible, we lack the "building blocks" required to construct them. All of our tools and techniques are built upon our existing toolkit and knowledge, much like the building blocks of a pyramid. In practice, we cannot realize any solution if we lack the blocks needed to build it. On the other hand, the history of technology from stone axes to Facebook tells us that once the pieces are in place, a technology will be developed very rapidly, almost always by multiple independent inventors. Hardly any single causal factor has much effect on the speed of technological change; to pick an extreme example from the realm of pure ideas, one sympathetic biographer was forced to conclude that Einstein's contribution was to advance physics by ten years, at most.

In short, what is invented is only rarely affected by constraints. In my own PhD research, I have studied one famous Finnish invention, the flash smelting of copper. Developed as a response to a serious electricity shortage after the Second World War, this technology had a great effect on metallurgy once it was broadly adopted, starting in the 1960s. Prior literature has repeatedly claimed that the key causal factor behind this invention was the electricity shortage, which required Outokumpu to develop a completely novel method for copper smelting; however, less ink is spilled over the fact that the technology in question had first been described 80 years earlier and even patented a full half-century before Outokumpu's experiments, and that a body of existing patents and research literature actually caused great problems for Outokumpu when it tried to patent the invention in the United States. Even less discussed today is the fact that another company, Inco of Canada, operating practically without constraints of any kind, developed a notably better technology a full month before Outokumpu. The inevitable conclusion is that the undoubted later success of Outokumpu's invention has much more to do with factors other than the constraint, and that the invention would have been made – in a better form, although perhaps not by Outokumpu – even without the constraint. Based on this and other cases, I therefore side with the mathematician Alfred North Whitehead in claiming that instead of necessity being the mother of invention, "necessity is the mother of futile dodges" is closer to the truth.

If the constraint caused by extreme post-war scarcity did not, in this and the other cases I've researched so far, have a noticeable effect on the content of technological change, what hope remains for the constraints we can effect through regulation? The answer, in my opinion, is "very little." Recent research has strengthened the view that regulatory constraints, at least in democratic societies, are born out of a sort of negotiation process. The desire for brevity forces me to cut various corners, but the gist of the matter is that because the material standard of living, and improvements thereof, remains extremely important to many voters – more so during times of economic difficulty – regulators must of necessity consider the impact regulatory decisions may have on the economy. If a decision would have a significant negative impact on the economy, the decision will be altered. This dynamic, dubbed the "iron law" of climate policy by Roger Pielke Jr., is visible in nearly every regulatory process that aims to ease environmental problems; if it's not visible, there is reason to believe that the problem is not overly large or difficult.

What kinds of regulation, then, would have a significant negative impact? In principle, every regulatory constraint that cannot be met cheaply and relatively speedily with technologies that are available "off the shelf." Reading proposed and enacted environmental regulations, one will sooner or later run into concepts such as "Best Available Technology" (BAT). The performance of these technologies largely defines the constraints – e.g. on emissions, energy consumption, and so on – set by the regulation. I'd go so far as to wager that if one attempts to explain the dismayingly slow progress of environmental protection, a key culprit would be the lack of sufficiently cheap technology for satisfying tighter constraints. It is, in fact, usually more accurate to say that technology changes regulations than to say that regulations change technology.

When technology exists and is sufficiently cheap, environmental regulation can proceed rapidly. In these cases, regulatory changes promote the spread of existing (if, perhaps, so far poorly commercialized) technology. My personal hunch is that the majority of the research literature finding positive relationships between the enactment of tight environmental regulation and technological change has, by accident or by design, studied these kinds of cases. However, it does not necessarily follow that regulation promotes the development of novel technologies as such; a more accurate description of the relationship might be that regulation may promote the spread of existing technologies. The difference may seem small to someone who does not spend hours studying technological change, but it is significant: insofar as solving environmental problems requires us to develop completely novel technologies, it is not clear that regulations are the best vehicle for promoting this development.

What conclusions should one draw from the above? First, one should note that all hope is not lost, not even for the purpose of improving competitiveness. In principle, it is possible that a country has a disproportionate share of firms and industries whose technologies would benefit from tighter regulation. Tightening regulation undoubtedly benefits the relative competitive position of these industries, and even if the development of novelty is not affected, the adoption of existing technology may be sped up significantly.

Second, even though individual regulations may have little to no effect, in the long run the trend is hopefully different. By enacting constraining regulation, the polity sends a message that certain activities are considered inappropriate; over time, expanding the domain of inappropriateness with small steps such as these may lead to changes that would be inconceivable if they had to be taken all at once.

Third, even if necessity may not be the mother of invention, regulatory constraints can provide the final push for technologies and firms that are not quite ready yet. As an example, even though the invention of flash smelting would have happened without Outokumpu, the constraint very probably caused precisely Outokumpu to be among the inventors – and to be in a position to profit from the invention later. Profiting from such inventions is a different matter, however, and no regulation can force firms to do so.

References

Arthur, W. B. (2009). The Nature of Technology: What It Is and How It Evolves. New York: Free Press.

Korhonen, J. M., & Välikangas, L. (Forthcoming). Constraints and Ingenuity: The Case of Outokumpu and the Development of Flash Smelting in the Copper Industry. In B. Honig, J. Lampel, & I. Drori (Eds.), Handbook of Research on Organizational Ingenuity and Creative Institutional Entrepreneurship. Cheltenham: Edward Elgar.

Mohr, R. D. (2002). Technical Change, External Economies, and the Porter Hypothesis. Journal of Environmental Economics and Management, 43(1), 158–168.

Porter, M. E., & van der Linde, C. (1995). Toward a new conception of the environment-competitiveness relationship. Journal of Economic Perspectives, 9(4), 97–118.

Roediger-Schluga, T. (2004). The Porter Hypothesis And The Economic Consequences Of Environmental Regulation: A Neo-Schumpeterian Approach. Cheltenham: Edward Elgar.


“Graph” of the Week: What happens if nuclear waste repository leaks?

Lately, I've been spending some time reading through reports on nuclear waste management. What is striking is how conservative the calculations seem to be; for example, the report by Posiva (the Finnish company responsible for the world's first civilian nuclear waste repository, Onkalo) goes to almost absurd lengths when calculating the highest possible dose for the single worst-affected individual ten thousand years from now.

For example, the calculations assume that the person in effect spends all of his or her days – from birth to death – on the single most contaminated one-square-meter plot around the repository; eats nothing but the most contaminated food available, with a diet that maximizes radionuclide intake; and drinks only the most contaminated water and nothing else. The figure – 0.00018 millisieverts per year – also assumes that the copper canisters housing the spent fuel pellets begin to leak after a mere 1,000 years.

And still, the worst-case figure amounts to the dose one would get from eating about two bananas.

I'm fully prepared to accept that many surprising things could happen, and that we cannot be certain of what happens 10,000 years from now; but given the figures here, and the way they're derived, I'm fairly confident that the likelihood of anyone receiving doses that actually pose some real danger, even in the long term (say, more than 10 mSv per year – which might just produce enough cancer cases to be visible in a statistical sense), is remote indeed.
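For what it's worth, the banana comparison is easy to reproduce. The "banana equivalent dose" of roughly 0.1 microsieverts is a common rule-of-thumb figure and my own assumption here, not something taken from the Posiva report.

```python
# Putting the Posiva worst-case dose into perspective.
# ASSUMPTION: one "banana equivalent dose" is ~0.1 microsieverts, a commonly
# used rule-of-thumb figure (not from the Posiva report itself).
worst_case_dose_msv_per_year = 0.00018   # from the Posiva assessment
banana_equivalent_dose_msv = 0.0001      # ~0.1 uSv per banana (assumption)
statistically_visible_msv = 10.0         # threshold suggested in the text

bananas = worst_case_dose_msv_per_year / banana_equivalent_dose_msv
margin = statistically_visible_msv / worst_case_dose_msv_per_year
print(f"Worst-case dose = about {bananas:.1f} bananas per year")   # ~1.8 bananas
print(f"That is ~{margin:,.0f}x below a 10 mSv/year dose")         # ~56,000x
```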

You can read the Posiva Biosphere Assessment report (be warned, it’s 192 pages of rather technical text) in English here:

http://www.posiva.fi/files/1230/POSIVA_2010-03web.pdf

As always, corrections and comments are highly appreciated. You’re also free to distribute the image as you see fit – just provide a link to this page.


“Graph” of the Week: Fukushima tritium leak in context

The "massive" tritium leak to the sea from Fukushima since 2011 equals the tritium content of about 22 to 44 self-luminous EXIT signs.

More info about exit signs here: http://www.dep.state.pa.us/brp/radiation_control_division/tritium.htm
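For the curious, here is the arithmetic presumably behind the comparison, as a small Python sketch. Both input figures – the 20-40 TBq estimate for tritium leaked to the sea and the ~25 curies of tritium per self-luminous exit sign – are my own assumptions, not values printed on the graphic.

```python
# Arithmetic behind the exit-sign comparison; the input figures below are
# assumptions of mine, not values taken from the graphic itself.
CI_TO_TBQ = 0.037  # 1 curie = 0.037 terabecquerels

leak_low_tbq, leak_high_tbq = 20.0, 40.0  # assumed estimate of tritium leaked to sea since 2011
tritium_per_sign_tbq = 25 * CI_TO_TBQ     # assumed ~25 Ci of tritium per self-luminous exit sign

print(f"Equivalent to {leak_low_tbq / tritium_per_sign_tbq:.0f}"
      f"-{leak_high_tbq / tritium_per_sign_tbq:.0f} exit signs")  # ~22-43 signs
```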


The stagnation of clean energy, with more detail

Inspired by Mark Lynas's new book "Nuclear 2.0" and Roger Pielke Jr's excellent post "Clean Energy Stagnation," here's my graph of the week. It's the same graph used by Pielke fils, showing clean energy as a share of the world's primary energy supply since 1965. The data used, BP's Statistical Review of World Energy, is the same as well. What I've added is coloring to show which technologies are doing the heavy lifting – and which are, well, perhaps not doing as much as the hype would lead you to believe.

 

EDIT 7.4.2014: Now with 100% more information – separated biomass, wind and solar in the graph. 

Share of low-carbon energy from total energy supply. Based on BP statistics (2013).

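For anyone who wants to reproduce the graph, a minimal sketch of the calculation is below, assuming the BP data has been exported into a tidy CSV; the file name and column names are hypothetical placeholders, since the actual BP workbook spreads the fuels across separate sheets.

```python
# Minimal sketch: low-carbon share of world primary energy from BP data.
# ASSUMPTION: the BP Statistical Review data has been exported to a tidy CSV
# with columns: year, fuel, mtoe. The file and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("bp_primary_energy_mtoe.csv")

low_carbon = ["nuclear", "hydro", "wind", "solar", "other_renewables"]

total = df.groupby("year")["mtoe"].sum()                          # all primary energy
clean = df[df["fuel"].isin(low_carbon)].groupby("year")["mtoe"].sum()

share = (clean / total * 100.0).rename("low_carbon_share_pct")

share.plot(title="Low-carbon share of world primary energy (%)")
plt.show()
```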


Mark Lynas's "Nuclear 2.0": The case for a Grand Alliance of Low Carbon

As a PhD researcher and an environmentalist who is deeply concerned about the utter lack of progress in reducing CO2 emissions, I have found it dismaying to follow how the mainstream environmental movement has been spending money and energy on fighting the one technology that has actually – and not just in projections printed in glossy brochures – decarbonized entire nations. The result so far, as recent statistics show, is utter stagnation in the share of energy the world generates from clean sources: we are actually no closer to a carbon-free society than we were in 1990, despite all the hype surrounding various renewable energy schemes, and total global emissions only keep on growing.

Mark Lynas's new Kindle Single e-book, "Nuclear 2.0: Why a Green Future Needs Nuclear Power," is therefore a welcome, well-grounded argument in support of the view that we simply do not have the luxury of picking and choosing only those carbon abatement options we have an ideological preference for. As modeling undertaken specifically for the book shows, "all of the above" – including not just renewables but nuclear and carbon capture, as well as efficiency improvements and shifts in consumption patterns where possible – is the only strategy that offers even a hope of limiting global warming to less than 2°C, and it seems to be the only strategy with a realistic chance of keeping the warming below 4°C. Interestingly, even the Greenpeace "Energy Revolution," by far the most optimistic of the 164 renewable energy scenarios considered by the IPCC in its 2011 SRREN report, is not enough even if executed perfectly. If the most optimistic of already optimistic non-nuclear proposals cannot do the job even when everything goes according to the blueprint, then perhaps the plan needs to be changed.

Lynas tackles the usual talking points offered against nuclear power with great skill and verve, supporting his claims with a decent list of references. For someone new to the subject, this is probably the best and most concise overview of the debate and of what the actual science shows, for example, about the risks of radiation. While there is little new for those who have researched the subject themselves, the book is still valuable as an overview, for its discussion of the origins of the anti-nuclear movement, and for interesting details about the coal plants that have followed a "no" to nuclear. Despite all the nice words about being against both nuclear and coal, the sad fact is that anti-nuclear activists have unwittingly made themselves "useful fools" for the fossil fuel industry; you may not have known, for example, that Germany is one of the only countries in Europe still building and opening new coal-fired power stations. To paraphrase a late Finnish president, it seems that if you moon at uranium, you simultaneously bow to coal. (Incidentally, one coal plant missing from the examples is the Finnish Meri-Pori coal-fired station, built immediately after the "no" vote on new nuclear in 1993; interestingly, it has almost the same power rating as the proposed reactor. As for the claims that the German coal plants have nothing to do with the most recent nuclear shutdown, these are casuistry of the highest order, cleverly omitting the fact that most of the plants were approved during the earlier nuclear phase-out decision.)

As Lynas shows, much of the anti-nuclear activism is grounded in misconceptions, poor science, and even blatant disinformation, spread by well-meaning but ideologically blinkered activists. The problem for these activists is that the scientific consensus does not, by and large, agree with their views. Radiation is a carcinogen, but a fairly weak one; Fukushima's casualties will come from fear, not from radiation; and, when compared by impacts per unit of energy produced, nuclear power is actually by far the least deadly of any energy source ever employed by humans. The length of the book prevents detailed discussion, and one could take some issue with certain phrasings, such as the claim that there is no convincing evidence of a statistically significant correlation between cancer incidence and radiation exposures of less than 100 mSv (some recent studies contest this), but overall the book is at the very least a good starting point.

Lynas is also careful to point out that nuclear power is not a panacea, and that the good qualities of nuclear are no reason to shun renewables – where they are appropriate. While Lynas glosses over some of the rather formidable problems with large-scale renewables (grid-scale energy storage being one of the most pertinent), this is a highly commendable position, partly because renewables do have their own significant merits and are in many cases excellent choices, and partly because they are (at least for a while) much more acceptable to the general public than new nuclear power stations. An engineering-only analysis might show that nuclear alone (using novel fourth-generation reactors) could easily power the world, and with much less environmental impact than today's power sources, but politics is a different matter. And in any case, cooperation is more likely to produce results than infighting over which exact low-carbon technology should be promoted. It is heartening to read that this is what the UK branches of Greenpeace and Friends of the Earth seem to be tacitly doing: a recent joint statement calling for "low-carbon unity" to take carbon almost completely out of the UK's electricity system by 2030 included the British nuclear and renewables industries as well as the growing carbon capture and storage trade group. Such a grand alliance may be our last, best hope for saving a planet fit for human habitation.

This is perhaps the most important environmental book of the year, and one of the most important of recent years as well. It is highly recommended to anyone with an interest in pressing environmental issues, it should be required reading for politicians, and one hopes that a print version will become available, just so that one could hand copies out to interested parties on occasion.
