The “massive” tritium leak to sea from Fukushima since 2011 equals the tritium content of about 22 to 44 self-luminescent EXIT signs.
More info about exit signs here: http://www.dep.state.pa.us/brp/radiation_control_division/tritium.htm
Inspired by Mark Lynas’s new book “Nuclear 2.0″ and Roger Pielke Jr’s excellent post “Clean Energy Stagnation,” here’s my graph of the week. It’s the same graph used by Pielke fils, showing clean energy as a share of the world’s primary energy supply since 1965. The data, BP’s Statistical Review of World Energy, is the same as well. What I’ve added is just coloring to show which technologies are doing the heavy lifting – and which are, well, perhaps not doing as much as the hype would lead you to believe.
As a PhD researcher and an environmentalist deeply concerned about the utter lack of progress in reducing CO2 emissions, I have found it dismaying to follow how the mainstream environmental movement has been spending money and energy on fighting the one technology that has actually – and not just in projections printed in glossy brochures – decarbonized entire nations. So far the result, as recent statistics show, is utter stagnation in the share of energy the world generates from clean sources: we are no closer to a carbon-free society than we were in 1990, despite all the hype surrounding various renewable energy schemes, and total global emissions just keep on growing.
Mark Lynas’s new Kindle Single e-book, “Nuclear 2.0: Why a Green Future Needs Nuclear Power,” is therefore a welcome, well-grounded argument in support of the view that we simply do not have the luxury of picking and choosing only those carbon abatement options we have an ideological preference for. As modeling specifically undertaken for the book shows, “all of the above,” including not just renewables but nuclear and carbon capture as well as efficiency improvements and shifts in consumption patterns (where possible), is the only strategy that offers even a hope of limiting global warming to less than 2°C, and it seems to be the only strategy with a realistic chance of keeping the warming below 4°C. Interestingly, the Greenpeace “Energy Revolution,” by far the most optimistic of the 164 renewable energy scenarios considered by the IPCC in its 2011 SRREN report, is not enough even if executed perfectly. If the most optimistic of already optimistic non-nuclear proposals cannot do the job even when everything goes according to the blueprint, then perhaps the plan needs to be changed.
Lynas tackles the usual talking points offered against nuclear power with great skill and verve, supporting his claims with a decent list of references. For someone new to the subject, this is probably the best and most concise overview of the debate and of what the actual science shows, for example, about the risks of radiation. While there is little new for those who have researched the subject themselves, the book is still valuable as an overview, for its discussion of the origins of the anti-nuclear movement, and for interesting details about coal plants that have followed a “no” to nuclear. Despite all the nice words about being against both nuclear and coal, the sad fact is that anti-nuclear activists have unwittingly made themselves “useful fools” for the fossil fuel industry; you may not have known, for example, that Germany is one of the few countries in Europe still building and opening new coal-fired power stations. To paraphrase a late Finnish president, it seems that if you moon at uranium, you simultaneously bow to coal. (Incidentally, one coal plant missing from the examples is the Finnish Meri-Pori generating station, built immediately after a “no” vote on new nuclear in 1993. Interestingly, it has almost the same power rating as the proposed reactor. As for the claims that the German coal plants have nothing to do with the most recent nuclear shutdown, these are casuistry of the highest order, cleverly omitting the fact that most of the plants were approved during the earlier nuclear phase-out decision.)
As Lynas shows, much of anti-nuclear activism is grounded in misconceptions, poor science, and even blatant disinformation, spread by well-meaning but ideologically blinkered activists. The problem for these activists is that the scientific consensus does not, by and large, agree with their views. Radiation is a carcinogen, but a fairly weak one; Fukushima’s casualties will come from fear, not from radiation; and, when compared by impact per unit of energy produced, nuclear power is actually by far the least deadly energy source humans have ever employed. The length of the book prevents detailed discussion, and one could take issue with certain phrasings, such as the claim that there is no convincing evidence of a statistically significant correlation between cancer incidence and radiation exposures of less than 100 mSv (some recent studies contest this), but overall the book is at the very least a good starting point.
Lynas is also careful to point out that nuclear power is not a panacea, and the good qualities of nuclear are no reason to shun renewables – where they are appropriate. While Lynas glosses over some of the rather formidable problems with large-scale renewables (grid-scale energy storage being one of the most pertinent), this is a highly commendable position, partly because renewables have their own significant merits and are in many cases excellent choices, and partly because they are (at least for a while) much more acceptable to the general public than new nuclear power stations. An engineering-only analysis might show that nuclear alone (using novel fourth-generation reactors) could easily power the world, and with much less environmental impact than today’s power sources, but politics is another matter. In any case, cooperation is more likely to produce results than infighting over which exact low-carbon technology should be promoted. It is heartening to read that this is what the UK branches of Greenpeace and Friends of the Earth seem to be tacitly doing: a recent joint statement calling for “low-carbon unity” to take carbon almost completely out of the UK’s electricity system by 2030 included the British nuclear and renewables industries as well as the growing carbon capture and storage trade group. Such a grand alliance may be our last, best hope for saving a planet fit for human habitation.
This is perhaps the most important environmental book of the year, and one of the most important of recent years as well. It is highly recommended to anyone with an interest in pressing environmental issues, it should be required reading for politicians, and one hopes that a print version will become available, if only so that copies could be handed out to interested parties on occasion.
The Russian nuclear regulator has licensed AKME-Engineering to build nuclear power plants.
Hilarity aside, the proposed lead-bismuth cooled SVBR-100 reactor looks quite promising. For one thing, it can use its uranium fuel much more efficiently than current reactor types, and it leaves behind less waste that is also far shorter-lived.
An old adage tells us that necessity is the mother of invention. But if necessity were the prime mover of invention, why, then, are there so many really nifty technologies – say, antigravity – that would be obviously useful, yet that no one has invented?
What I’ve been doing for these last few years as a PhD student has essentially been an attempt to Science the hell out of that question: is necessity really the mother of invention? To put it very briefly, it turns out that necessity is the mother of inventors, but not of inventions as such. And the key takeaway is this: don’t count on something being invented just because it would be very handy if someone invented it.
For those interested in the longer version, I’ve been studying several technologies that have been born (or have been reported to have been born, to be more precise) out of necessity. One of the cases I’ve studied most intently so far concerns the development of copper smelting in the immediate post-war period. That period saw the development of a revolutionary new copper smelting process, called “flash smelting,” which greatly decreased energy use while increasing the productivity of copper smelters. Replacing earlier furnaces within 20-30 years or so (quite a short time in the metals industry), flash smelting and its derivatives now produce the overwhelming majority of all primary copper. What’s more, the process has been adapted to other metals as well. In some accounts, flash smelting has even been hailed as one of the greatest (if not the greatest) metallurgical breakthroughs of the 20th century – a century that was by no means short of major breakthroughs in metallurgy, from the oxygen steelmaking process to minimills to various leaching processes.
To briefly recap the “traditional” account (e.g. Särkikoski 1999, Kuisma 1985): in 1945 Outokumpu, then a small, state-owned copper mining company founded to exploit a rich copper deposit discovered in Eastern Finland in 1910, was in deep trouble. Just before the war, it had completed what was then the world’s largest electric copper smelting furnace. Had the Second World War not intervened, Outokumpu would likely have been all too happy with its shiny piece of equipment (like the somewhat comparable Boliden copper company in neighboring Sweden), capable of turning out copper at a respectable if not record-breaking 12,000 tons per year.
Unfortunately, as we all know, war did intervene. In 1945, the war in which the Finns came a good second but the Soviets still won finally ground to a halt. In the armistice, the Soviet Union made the Finns an offer they couldn’t refuse: the pesky Finns not only had to pay staggering war reparations for the crime of being so intransigent, but they also had to cede large tracts of Eastern Finland to the USSR, including two large hydropower plants that had previously supplied about a third of Finland’s electricity.
This is not a good thing for a company that is left holding the world’s largest electric furnace. Skyrocketing electricity prices left Outokumpu with essentially two options: one, it could use its leverage as an extremely important export earner and war reparations supplier to get cheaper rates, or import credits for alternative fuels such as coal. Or, two, it could do something no one had done before and invent a way of smelting copper that would need no external source of energy.
Usually, options such as these are expressed only as snarky rhetorical devices. Yeah, right, go ahead and invent a way to smelt copper without using energy! But in this case, that’s just what Outokumpu did: in just three months or so, they started building what later and somewhat unimaginatively became known as the Outokumpu flash furnace. Using heat generated by sulfur within the ore itself to melt it, in essence burning the ore instead of fuel, and utilizing some neat tricks to recycle the heat, they did just what the sarcastic commentator above might have suggested: starting from February 1947, they smelted copper with unprecedented energy efficiency.
Cue interested foreign buyers and, as they say, the rest is history: even today, the Outokumpu method accounts for more than 50% (and possibly more than 70%, depending on whether you ask Outokumpu’s staff) of the world’s primary copper.
The problem with the above account is that it doesn’t really tell the whole story. Sure, the essentials are there: Outokumpu had a major problem with electricity prices – a serious necessity if anything – and in response it did develop energy-efficient flash smelting speedily enough. What the account leaves out are the parallel developments elsewhere and, in particular, the prehistory of the invention – such as the fact that the method was already patented in 1897. Or the fact that Outokumpu’s method was actually only good enough; it was not even the best idea Outokumpu’s own engineers had considered, it just happened to be an idea they could execute right away.
A closer look, which you may find in my forthcoming publications :), shows that flash smelting was the nearly inevitable end result of a century or so of development. The first recorded mention of the idea goes back to 1866, and the basics of the process were well known to metallurgists by the end of the 19th century. By 1935 at the latest, just about everyone, it seems, knew that the sulfur in copper ore could be burned (and in some smelters, actually was burned), and that the key to success was eliminating the heat lost in outflowing hot gases. Even successful experiments (and, inevitably, some unsuccessful ones) had been concluded, and they showed no insuperable difficulties. What kept the world’s copper smelters from loosening their purse strings – to the tune of the $30 million in today’s dollars it took Outokumpu to realize the invention – seems to have been first the Great Depression, and then the war. In the first case, the economic slump created overcapacity; in the second, production rather than experimentation was the priority, and once the war was over, overcapacity was again the problem. Neither situation is very conducive to huge and fundamentally uncertain investments.
But some did experiment and invest. Inco of Canada, the Free World’s nickel supplier (or a nasty monopolist that deviously controlled 90% of the non-Communist world’s supply, depending on your point of view), was particularly well placed to do so. Thanks to its monopoly on what has sometimes been called the most strategic of strategic metals – very soon, it was selecting which customers had the honor of receiving its wares – it was doing brisk business and raked in, as pure profit, about twice as much mazuma as Outokumpu could put on its entire “revenues” line. With such wherewithal, it had conducted extensive R&D since 1906, and before the war its laboratories had – among other things – done research on how to finally crack flash smelting. In 1936, a young PhD, T.E. Norman, published his thesis, in which he proved conclusively that flash smelting was possible if heat losses could be minimized. He suggested that instead of burning copper ore in plain air as previous patents had described – losing much of the energy to heating the non-reactive nitrogen, which makes up some 79% of the stuff and goes straight up the chimney – why not spice up the mixture with some (or a lot of) oxygen?
Norman’s suggestion was not a great leap of insight either. What oxygen could do to molten metals was well known to metallurgists already in the 19th century; the problem had been how to produce the gas in industrial quantities. After that particular problem was cracked in the early 1900s, after the equipment became cheap and reliable enough not to bankrupt metals producers, and after the furnace equipment was made strong enough to withstand the sometimes spectacular (when viewed from a safe distance) energetics, oxygen was indeed taken up with gusto: it is no wonder that another strong candidate for the “metallurgical invention of the 20th century” is the use of oxygen in steelmaking.
And almost pure oxygen is what Inco’s engineers promptly used once the war was over and running the existing plants at full tilt, consequences be damned, was no longer Priority A-1. In principle, they took a traditional “reverberatory” furnace and stuck oxygen pipes into it, actually managing to start up their pilot plant a month before Outokumpu did. However, Inco then suffered a delay in building the full-scale plant because the monopolistic supplier of oxygen generators felt entitled to charge whatever it pleased and to deliver whenever it pleased. With a notable lack of self-reflection but no lack of ironic potential, Inco’s engineers rued such dastardly abuse of market power and turned to a smaller supplier instead. After some complications, the production-scale Inco Flash Furnace (apparently, naming was never a strong point in metallurgical engineers’ training) finally roared to life in 1952, three years after Outokumpu’s full-scale furnace had started its first smelting “campaign.”
Poured from the same tap?
Outokumpu had in fact seriously considered oxygen as well: as noted earlier, its advantages for oxidizing ores were perfectly obvious to any metallurgist with half a brain, and Outokumpu’s engineers certainly had more than that. There were just two problems: one, oxygen generators needed plentiful electricity, which, as mentioned, was precisely the item Outokumpu was short of. Two, one does not simply walk out and buy oxygen generators in a Europe bombed, shelled, and generally pillaged halfway back to the Stone Age. No supplier could be found to deliver one on anything resembling a schedule, so, no oxygen for the poor Finns. Instead, Outokumpu’s furnace used a complicated heat-exchanging apparatus to capture the heat and recycle it to preheat the incoming air. Perhaps predictably, such contraptions were tricky at best and outright unserviceable at worst, and as soon as fuel oil became available, the heat exchanger was unceremoniously ditched. For all concerned, it was easier to burn a little fuel oil to preheat the air, even though doing so cost the furnace its nominal title of energy independence.
In the end, the fuel oil substitution and other factors meant that Outokumpu’s method was neither the most energy efficient nor the most productive. As the figure below shows, both honors go to Inco, whose furnace was more productive and needed less external energy per ton of ore concentrate treated (although what little it used had to be supplied as electricity). Outokumpu’s productivity and efficiency approached Inco’s levels only after Outokumpu “introduced” (i.e. copied) oxygen injection in the 1970s. As a metallurgy textbook published as late as 1976 states, the commercial success of Outokumpu’s method was “somewhat surprising,” because “[...] it appears that the Inco process is the better from both a technical and economic point of view” (Biswas and Davenport, 1976, p. 170).
But Outokumpu’s furnace was good enough. It did smelt copper without lots and lots of electricity or other fuels. It was somewhat simpler, if one discounts the heat exchanger, and, most importantly, it was available. The Inco method was, in fact, so good that Inco did not want to license it to anyone – especially because the method was particularly suitable for smelting not just copper but also nickel, the stuff Inco was made of. Selling technology was small beans, too: the first license (including detailed design work and supervision) netted Outokumpu just $1.3 million in modern currency in 1956. In the same year, Inco’s revenues from its vast nickel empire, centered on the astonishingly productive Sudbury mine in Canada, were a robust $3,700 million. No wonder Inco’s bosses did not feel the need to market their furnace, unlike Petri Bryk, Outokumpu’s managing director since 1953. Under Bryk’s leadership, Outokumpu sold its furnace to whoever wanted to buy one. It may seem surprising, but commerce was slow to start: after the first sale to Japan in 1953 (completed in 1956), the next Outokumpu furnaces had to wait until the 1960s. Smelters were reluctant to part with their paid-for furnaces as long as they didn’t have to, and novelty certainly increased the perception of risk. Only after air quality standards and emission limits became widespread in the 1960s, and after the 1973 oil crisis temporarily made competing low-emission furnaces (mostly electric ones) uncompetitive, did sales really take off.
Perhaps not entirely coincidentally, Bryk also happened to be the first-named inventor of Outokumpu’s furnace, and he reaped a hefty second income (in fact, significantly larger than his salary) from license fees.
Takeaway: what do necessities give birth to?
The above is a short history of the triumph of Outokumpu flash smelting. As is common with technology, the best technology did not “win”; the most available one did. Also, in common with almost any technological artefact you care to name, the “radical” invention was not so much a leap of genius as an almost logical culmination of a long history of development. The technology came when the time was right: like the capstone of a pyramid, it could slide into its place only after all the supporting technological building blocks – in this case, notably certain ore concentration processes, the understanding of thermodynamics, and cryogenic oxygen production – had been placed first.
If Outokumpu had not been around to “invent” the flash furnace, Inco was still there. And even if Inco had not shared it with the world, it is more than likely that any one of at least half a dozen other copper companies with the means, motive, and opportunity to invent the furnace would have done so. In the absence of external shocks such as the Great Depression and the world war, it seems likely that the flash furnace would have been taken into use at least several years earlier. What’s more, most flash furnaces would probably have used oxygen injection right from the start if the war had not ruined the industries of most of the world (with the exception of North America). The original Outokumpu furnace with its heat exchanger was more properly an anomaly: an “ersatz” technology cobbled together to solve a particular problem, but not all that competitive once the situation normalized.
In short, the shortage of electricity – the necessity – mothered an inventor, not an invention. The invention was “out there,” known to metallurgists and almost ready to be taken into use; what Outokumpu did in about three months in late 1945 was essentially to confirm that this invention-to-be was in theory suitable for the type of ore their mine was producing. After that was confirmed, all that remained was to build the thing.
The key takeaway from this tale is twofold. For scholars like me, it nicely illustrates the almost deterministic nature of technological change: once the right pieces are in place, some inventions are almost inevitable. This deterministic view is by no means unchallenged, but several key pieces of evidence – perhaps most notably the fact that simultaneous independent discovery is the norm rather than the exception – strongly suggest that technology is a far more independent (or even bloody-minded) beast than we might hope. Technology, in a sense, has a life of its own, if a symbiotic one, and it operates under its own logic; we humans may hope to guide it somewhat, but our symbiotic relationship with it makes the task almost as realistic as hoping to rule over our mitochondria. Further, detailed autopsies of technological change, such as this one, suggest that the delineation of innovations into “incremental” and “radical” types is an artificial, post hoc conclusion at best and a meaningless complication at worst. Instead, innovations – when viewed in context – seem to be very incremental improvements; it’s just that sometimes these improvements reach a kind of tipping point, where something that was not previously feasible suddenly, and sometimes dramatically, becomes feasible.
For those outside the ivory tower and concerned with actual life and times, this study and others like it suggest caution toward plans that assume certain technological developments. One should be extremely wary of claims – implicit as well as explicit – of technology being developed in response to some specific necessity. In particular, the development path of technologies that require substantial initial investments (like copper furnaces) can probably be predicted with a fair degree of accuracy by looking at the state of the art and the concepts being proposed right now. If there is no strong agreement among specialists as to whether a certain technology is feasible – many energy storage schemes come to mind – one should be wary of assuming that it really is, of when it will be, and of what it will cost.
Counterexamples may abound, of course. For example, one may note that many if not most knowledgeable observers claimed space travel to be impossible up until the 1950s. However, a more careful look may be in order; one famous example of what has long been called a mistaken prediction, uttered by Astronomer Royal Richard Woolley in 1956 – a year before Sputnik – begins: “All this talk about space travel is utter bilge, really.”
But what most people using this as an example of hidebound conservatism and lack of imagination don’t know is that the quote continues: “It would cost as much as a major war just to put a man on the moon.”
Not too shabby a prediction after all, eh?
Biswas, A. K., & Davenport, W. G. (1976). Extractive Metallurgy of Copper (1st ed.). Oxford, New York: Pergamon.
Davenport, W. G., King, M., Schlesinger, M., & Biswas, A. K. (2002). Extractive Metallurgy of Copper (4th ed.). Oxford: Pergamon.
Kuisma, M. (1985). Outokumpu 1910-1985: Kuparikaivoksesta suuryhtiöksi. Forssa: Outokumpu.
Särkikoski, T. (1999). A Flash of Knowledge. How an Outokumpu innovation became a culture. Espoo and Helsinki: Outokumpu and Finnish Society for History of Technology.
Previously, I made the case for why space aliens, or ETIs (short for ExtraTerrestrial Intelligences), would probably not exterminate humanity out of fear of future competition from us: (a) interstellar warfare is an uncertain business at best, and (b) if we postulate a civilization so advanced that it could destroy us with impunity, it would not seem to benefit from doing so.
Now the argument has been sharpened considerably, thanks to a publishing process that culminated in my article being accepted to Acta Astronautica (Korhonen 2013). I encourage you to get your hands on the paper, either directly from Acta or, if you’re so inclined, via the manuscript version in the arXiv repository.
For those who don’t wish to read the whole paper (I can’t imagine why you wouldn’t, but I’m an accommodating person), here’s the gist of the argument. I make six assumptions about the nature of interstellar warfare, detailed in the paper.
According to these assumptions, the only surefire way of preventing another civilization from ever attacking yours seems to be the complete destruction of said civilization, if not of the species itself. Unfortunately (from the attacker’s viewpoint, that is), creating absolutely effective first-strike weapons and gathering the timely intelligence required for their use will always be massively more complicated than creating effective deterrent weapons. After all, a deterrent needs only to threaten unacceptable damage to the attacker; if the marginal cost of an attack rises along with the desired amount of damage, an attack causing severe damage may be orders of magnitude simpler and cheaper than an attack that eliminates the species.
But what is an unacceptable level of damage? This is the tricky part, and I acknowledge I don’t have perfect answers. No one does. However, our history strongly suggests that as civilizations become more technologically advanced, their tolerance for death and destruction diminishes greatly. This trend has continued for centuries, and there seem to be no good reasons to believe it would reverse as technology advances. Several reputable studies – some written by former high-level nuclear weapons specialists and military officers – make a compelling case that today, deterrence is achieved by being able to destroy about ten random cities. Even when rounding the numbers up, this suggests that the damage threatened by a credible deterrent is – very rough estimate here – on the order of 0.1 × total loss. Even smaller numbers have been put forward, and it is difficult to believe that human decision-makers would willingly lose even a single city for any conceivable political gain.
Now, then, how do we achieve this level of deterrence? Consider that any interstellar attack is likely to cause severe damage to the target civilization. As an example, a rather primitive 1970s reference design for an interstellar probe – the Daedalus – could, with just a programming change, deliver some 146 gigatons of kinetic energy to any largish target (planets, moon bases, battle stations the size of a small moon) within 25 light-years. When one compares this to the approximate explosive energy of the entire world’s nuclear arsenals (ca. 6.5 gigatons), one may conclude that a hit from even a single interstellar probe would be Very Bad News Indeed.
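A figure of this magnitude can be sanity-checked with simple kinetic-energy arithmetic. The sketch below is my own back-of-the-envelope check, not the paper’s calculation; the roughly 1,000-tonne impact mass is an assumption for illustration, and at 0.12 c the non-relativistic formula is off by only about one percent.

```python
# Back-of-the-envelope check of the kinetic-impact figure.
# ASSUMPTION (mine, not from the text): ~1,000 tonnes arriving at 0.12 c.
C = 2.998e8              # speed of light, m/s
J_PER_GT_TNT = 4.184e18  # joules in one gigaton of TNT equivalent

mass_kg = 1.0e6          # assumed impact mass: 1,000 tonnes
v = 0.12 * C             # impact velocity, m/s

kinetic_gt = 0.5 * mass_kg * v**2 / J_PER_GT_TNT
print(f"{kinetic_gt:.0f} Gt TNT")  # ~155 Gt -- the same order as the quoted 146 Gt
```

The exact number depends almost entirely on the assumed mass at impact; the point is only that the order of magnitude checks out.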
Even an advanced civilization might have problems defending against such an attack. The sluggish Daedalus, coming in at a measly 12% of the speed of light (0.12 c), would in theory be detectable with Hubble Space Telescope-level sensors at about 10 to 20 Astronomical Units (AU) – that is, if the sensor happens to be pointing in the right direction, and if the near-total lack of apparent motion (which is what most detection systems rely on) does not foil the detection. Detection at such ranges would give only some 24 hours of warning before impact. Is that long enough for an interception? And consider that considerably faster and smaller (i.e. harder to detect) probes may be possible.
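The warning time follows directly from the detection range and the probe’s speed; a quick check of the arithmetic, using the figures quoted above:

```python
# Warning time for a 0.12 c probe first detected at 10-20 AU.
AU = 1.496e11   # metres per Astronomical Unit
C = 2.998e8     # speed of light, m/s
v = 0.12 * C    # probe velocity, ~3.6e7 m/s

for dist_au in (10, 20):
    hours = dist_au * AU / v / 3600
    print(f"detected at {dist_au} AU -> {hours:.0f} h of warning")
# detection at 20 AU gives roughly a day of warning; at 10 AU, only about half a day
```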
What, then, is the likelihood that we (or some other civilization) could launch such a retaliation? I simplified the argument to four probabilities, detailed in the paper.
Even if the attacker is 95% certain of each and every variable, the joint probability of a successful attack is only 0.815; in other words, the complement – the probability of retaliation – is an uncomfortably high 0.185. I leave it to individual readers to judge, given the arguments in the paper (e.g. that it is impossible to be sure whether “primitive” radio transmissions are really the sign of a primitive technological civilization, the equivalent of a renaissance fair, or a deliberate lure intended to attract hostile civilizations), whether certainties of 95% are achievable across interstellar distances. Considering that even one hit may cause 90%+ losses, attacking someone simply because they may be a threat in the future would therefore seem to be a losing strategy.
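The joint-probability arithmetic is easy to reproduce. The four-variable structure is from the paper; the 95% figure is the illustrative certainty used above, and the calculation assumes the four probabilities are independent:

```python
# Probability that a first strike succeeds when each of the four
# variables is known with 95% certainty, assuming independence.
p_each = 0.95
n_vars = 4

p_success = p_each ** n_vars
p_retaliation = 1.0 - p_success
print(f"P(success) = {p_success:.3f}, P(retaliation) = {p_retaliation:.3f}")
# P(success) = 0.815, P(retaliation) = 0.185
```

Note how quickly the joint probability decays: at 90% certainty per variable, the chance of a clean strike drops to about 0.66.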
The good news, therefore, is that attempts to contact extraterrestrials are likely to be low-risk affairs. The bad news is that waiting for an answer may take a long time, as the ETIs might be afraid to take any action that could be misinterpreted as hostile. For example, sending a high-velocity interstellar probe and shooting a high-velocity interstellar kill vehicle look exactly the same to the recipient…
This should also be remembered when discussing our own interstellar missions. Whoever has the capability to send interstellar probes to someone else’s home system also has the capability to hit that system with interstellar kill vehicles. We must make sure that our exploratory efforts, if and when they happen, are not mistaken for attacks!
Thanks to Markku Karhunen and to two anonymous referees of Acta – their comments and suggestions improved the story considerably!
Korhonen, J. M. (2013). MAD with Aliens? Interstellar deterrence and its implications. Forthcoming in Acta Astronautica. DOI 10.1016/j.actaastro.2013.01.016 – manuscript available at arXiv: http://arxiv.org/abs/1302.0606
The article manuscript is published in arXiv repository according to Elsevier’s publishing policy regarding permitted scholarly posting.
The previous post had a simple graph showing electricity generation in four sample countries. Two of the countries had chosen to embrace nuclear energy, while two are known as the champions of renewable energy (wind, solar, biomass, and wave power).
The statistics can be used to tell different stories. To illustrate, here’s a graph of “clean” electricity production in five countries (Finland was added due to local interest). The thick, solid line indicates all clean electricity production, i.e. hydro, nuclear, wind, solar, biomass, wave and so forth. The dotted, thin line indicates everything in the above except nuclear.
Interesting, isn’t it?
The next graph is even more so. This time, the solid lines again indicate all clean electricity sources, but the thin lines include only “renewables,” i.e. non-hydro renewable electricity generation.
If one wants to tell a story where Germany and Denmark are clean energy champions, one can use the thick solid lines. If the narrative is the growth rate of non-hydro renewables, one can start the graph from about 1990 and ignore the solid lines altogether.
If one wants to tell which countries and which methods have been the most successful in decarbonising electricity, the lower figure may tell a more truthful story.
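For what it’s worth, the series in these graphs are easy to derive from the generation statistics. Here is a minimal sketch in plain Python; the numbers are made-up toy values for illustration, not the actual IEA figures:

```python
# Deriving the "all clean" (thick solid), "clean minus nuclear" (thin dotted)
# and "non-hydro renewables" series from per-source generation data.
# Toy values for illustration only, NOT real statistics; generation in TWh.
rows = [
    {"year": 2010, "hydro": 12.0, "nuclear": 22.0, "wind": 0.3,
     "solar": 0.1, "biomass": 10.0, "total": 80.0},
    {"year": 2011, "hydro": 13.0, "nuclear": 22.5, "wind": 0.5,
     "solar": 0.2, "biomass": 10.5, "total": 82.0},
]
CLEAN_SOURCES = ("hydro", "nuclear", "wind", "solar", "biomass")

for r in rows:
    clean = sum(r[s] for s in CLEAN_SOURCES)    # thick solid line
    clean_ex_nuclear = clean - r["nuclear"]     # thin dotted line (first graph)
    renewables = clean_ex_nuclear - r["hydro"]  # thin line (second graph)
    print(r["year"], clean, clean_ex_nuclear, renewables, clean / r["total"])
```

Which of the three series one plots – and which year one starts from – is exactly the storytelling choice discussed above.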
The data is from IEA statistics at World Bank website. For telling your own stories, here’s the collated data in .xls format.
I take absolutely no responsibility for the data: it should be correct, but check the originals before making a fool of yourself :).