Modeling Societal Collapse as a Result of Stingy Support for PhD Students

Editor’s note: the recent publication of a study about the coming collapse of civilizations has most probably encouraged the author to submit this work for review. Due to its nature, we felt it best to present it to the whole world for thorough peer review.

A large number of explanations have been proposed to account for the collapse of various historical societies. As Tainter (2014) recently argues, earlier scholars ascribed collapses to elite mismanagement, class conflict, and peasant revolts, while the increased emphasis on environmental issues has inspired modern scholars to suggest that societies have collapsed due to the depletion of critical resources, such as soil and forests. Most recently, some studies have suggested that inequality and elite resource consumption lead to societal collapse (Motesharrei et al., in press).

However, these explanations do not focus on what seems to us to be the most pertinent explanation for the collapse of these early societies. It is common knowledge that earlier societies – examples of which include the Roman Empire, the Mesopotamian civilization, and the Maya civilization – did not possess a modern scientific establishment. As a result, we argue, these societies could not escape their predicaments by developing novel technologies in order to curb elite mismanagement, reduce environmental impacts, or defuse class conflict via widespread Netflix subscriptions.

In particular, the environmental literature is well aware of the potential of technology for reducing environmental impacts. From Ehrlich and Holdren (1971), we know that the human impact on the environment can be described by the following equation:

I = P × A × T
where I stands for Impact, P for Population, A for Affluence, and T for Technology. Improvements in technological efficiency can reduce resource intensity, and therefore reduce the variable T (see also Chertow, 2000). Technological change can even allow us to substitute between different resources, as was the case when the whale oil business was outcompeted by mineral oils.
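As a toy illustration of the identity (all numbers below are hypothetical, chosen only to show the mechanics), a fall in T can outweigh growth in both P and A:

```python
# Toy IPAT calculation. All numbers are hypothetical and chosen only
# to illustrate the identity I = P * A * T.

def impact(population, affluence, technology):
    """Environmental impact as the product of the three IPAT factors."""
    return population * affluence * technology

baseline = impact(population=8e9, affluence=10_000, technology=0.5)
improved = impact(population=9e9, affluence=12_000, technology=0.2)

# More people and more affluence, yet lower total impact,
# because the technology factor T fell from 0.5 to 0.2.
print(improved < baseline)  # True
```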

Therefore, it stands to reason that improvements in technology could allow us to postpone or even forestall a societal collapse. While “technology” is somewhat difficult to operationalize, a useful proxy may be found in the share of the population holding the highest academic degree – the doctorate. As an example, the countries ranking highest in PhD graduates per capita (as of 2010, in descending order: Switzerland, Sweden, Germany, Finland, Austria, Denmark) typically score highly in various environmental sustainability and related indicators. It may therefore be surmised that PhD education may help societies forestall a coming collapse.

To test this intuition, we develop a simplified computer model. Following the lead of recent studies, we base this model on the well-known Lotka-Volterra or predator-prey model. The additions to this model are as follows: besides renaming the variables (i.e. P, “population,” and N, “nature”), we add a variable T or “technology” to represent the stock of useful knowledge available for forestalling collapse – e.g. the variety of computer games available for amusing the lower social strata.

The growth of this variable is dictated by the equation

dT/dt = c · Q_PhD · P

where Q_PhD is a constant denoting the share of PhD students in the population and c is a constant. An increase in T will diminish the environmental impact of the population, or the degradation of nature dN_d, as follows:

dN_d = a · P / T

where a is a constant. Finally, we initialize the system with a fixed initial stock of Nature (N_init), and adjust the renewal of natural resources dN_r so that it depends on the current stock of Nature and will not cause the stock of Nature to exceed its original maximum:

dN_r = b · N · (1 − N / N_init)
where b is a constant. A system dynamics diagram of the model can be found in Figure 1 below. The model has been implemented on NetLogo (Wilensky, 1999), based on a predator-prey model by Wilensky (2005).
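For readers without NetLogo at hand, the dynamics can be sketched with a simple Euler integration in Python. Note that the specific functional forms used here (knowledge growth c·Q_PhD·P, degradation a·P/T, logistic-style renewal b·N·(1 − N/N_init)) and all parameter values are illustrative assumptions, not the exact NetLogo implementation:

```python
# Minimal Euler-integration sketch of the population/nature/technology
# model. Functional forms and parameters are assumptions for
# illustration, not the exact NetLogo implementation.

def simulate(q_phd, steps=1000, dt=0.1,
             a=1.0, b=0.5, c=0.1, n_init=100.0):
    P, N, T = 1.0, n_init, 1.0              # population, nature, technology
    history = []
    for _ in range(steps):
        growth = 0.05 * P * (N / n_init)    # population grows with nature
        decline = 0.02 * P                  # baseline mortality
        dN_d = a * P / T                    # degradation, curbed by T
        dN_r = b * N * (1 - N / n_init)     # renewal, capped at N_init
        dT = c * q_phd * P                  # PhD students build knowledge
        P = max(P + (growth - decline) * dt, 0.0)
        N = max(N + (dN_r - dN_d) * dt, 0.0)
        T = T + dT * dt
        history.append((P, N, T))
    return history

# Scenario 1: no PhD support -> technology never grows, nature degrades.
# Scenario 3: generous support -> T grows and degradation stays bounded.
no_phd = simulate(q_phd=0.0)
generous = simulate(q_phd=0.5)
```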


Figure 1. System dynamics diagram of the model.


The results of three representative scenarios are shown below. In Scenario 1, the society chooses not to support PhD students at all (Q_PhD = 0). As can be seen from Fig. 2, this forces the society into a cycle of growth and collapse, with a downward trend in the population peaks. We posit that such a scenario – a lack of support for PhD students – is as likely a cause for the collapse of historical civilizations as any single reason so far suggested.


Figure 2: Cyclical collapse of a society that does not support PhD students.

In Scenario 2, the society chooses to support PhD students rather stingily (Q_PhD = 0.05). Figure 3 shows the results: in this case, the population suffers several collapses before the stock of technology allows it to transcend the limits of the natural stocks.


Figure 3: A cycle of booms and busts, followed by breakout, experienced by a society that offers only limited support to PhD students.

Finally, in Scenario 3, the society supports PhD students generously (Fig. 4). Collapse is avoided entirely, allowing the population to grow nearly exponentially.


Figure 4: Near-exponential growth in a society that generously supports PhD students.


Based on the model described above, we conclude that collapses can occur in a limited system. However, collapse can be avoided with judicious policies. In particular, the model suggests that generous support for PhD students is essential for achieving this goal. We therefore implore policy-makers to adjust their policies accordingly; a stipend of 4000 € per month should suffice.


The NetLogo model can be downloaded here. NetLogo itself can be freely downloaded here.


Chertow, M. R. (2000). The IPAT Equation and Its Variants. Journal of Industrial Ecology 4 (4): 13–29. doi:10.1162/10881980052541927.

Ehrlich, P. R., & Holdren, J. P. (1971). Impact of Population Growth. Science 171 (3977): 1212–1217. JSTOR 1731166.

Motesharrei, S., Rivas, J., & Kalnay, E. (in press). Human and Nature Dynamics (HANDY): Modeling Inequality and Use of Resources in the Collapse or Sustainability of Societies.

Tainter, J. (2014). Commentary on the Motesharrei et al. paper, given to Keith Kloor.

Wilensky, U. (2005).  NetLogo Wolf Sheep Predation (System Dynamics) model.  Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.

Wilensky, U. (1999). NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.

Posted in Simulations | 4 Comments

Design against climate change – suggestions for a project?

They say crowdsourcing is the thing nowadays, so let’s try it out.

A friend in California, who happens to be an excellent industrial designer, has for quite some time been interested in climate change and related issues. Recently, he asked me for recommendations for projects that seek to make the world a better place in this regard and that could use his skills and, perhaps, some investment.

I thought this was an excellent idea: the value of good design can be incalculable, especially in products and services that are supposed to be used by actual humans. And this guy is seriously good at design – as in “one of the best in the world.” He also happens to be very smart, very nice, and extremely passionate about what he’s doing. I can guarantee that with his expertise, passion and contacts in play, any project or startup will have a significantly higher chance of making a real dent in the world.

But! Off the top of my head, I couldn’t say which project might be a good fit for his skills and interests. Therefore, I ask thee, my dear readers: do you have any ideas for projects for him?

So, please spread the word and put forward suggestions, either in the comments section below, on Twitter (@jmkorhonen), or through e-mail. I’ll make sure the suggestions go forward!

Posted in Uncategorized | 1 Comment

Graphic of the Week: What’s the required build rate for a sustainable energy system?


One aspect of energy systems that’s largely ignored is the ultimate sustainable capacity that can be achieved with a given rate of installation. Accustomed as we are to news about renewables breaking installation records, we may overlook the fact that these installations will eventually reach the end of their lives and need to be replaced – and that, at some point, the entire installation effort will be devoted to nothing but maintaining the current production level.

Calculating this equilibrium level can be a bit tricky, so I did the maths for you. Or, more to the point, I ordered my NetLogo to do so; you may download this wonderful (and free!) simulation environment here, and the .nlogo file I used here. Feel free to use and extend it for your own purposes.

The results are shown in the graph above. It shows the ultimate equilibrium production level reached for 1 GW of annually installed capacity, as a function of capacity factor and plant lifetime in years. In other words, it tells you what kind of an energy system you can have if you are able to install a gigawatt of generating capacity every year from here to eternity.
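The arithmetic behind the graph is straightforward: at equilibrium the installed fleet equals the annual build rate times the plant lifetime (every year one cohort retires and one is built to replace it), and annual energy is fleet capacity times capacity factor times the hours in a year. A sketch of the calculation (my own reimplementation, not the .nlogo file itself):

```python
def equilibrium_twh(build_gw_per_year, lifetime_years, capacity_factor):
    """Steady-state annual generation (TWh) sustained by a constant build rate.

    At equilibrium, the fleet equals build rate * lifetime, because each
    year one cohort of plants retires and a new cohort replaces it.
    """
    fleet_gw = build_gw_per_year * lifetime_years
    return fleet_gw * capacity_factor * 8760 / 1000  # GW * h/yr -> TWh/yr

# Germany's 2012 record solar build rate (CF ~0.1, 25-year lifetime):
print(equilibrium_twh(7.6, 25, 0.1))  # ~166 TWh; the post's 173 TWh
                                      # presumably reflects slightly
                                      # different input assumptions
```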

What you may instantly notice about the graph is the importance of plant lifetime. The longer the lifetime, the less often you need to build new plants simply to replace old ones. Therein may lie a problem: the lifetime of the most popular and anticipated renewable generators (solar PV and wind) is likely to be around 25 years. What this means is that the acclaimed record-high installation rates achieved lately – for example, Germany’s 2012 record of about 7.6 GW of solar in a single year – are simply not enough. Even if this rate could be sustained indefinitely, the equilibrium production in Germany (where the capacity factor of solar PV hovers around 0.1) would be about 173 terawatt hours, or TWh.

Last year, German wind power installations amounted to about 2.3 GW, yielding an equilibrium level of about 160 TWh (assuming an average capacity factor of 0.3, which may be a tad high). Solar slumped to 3.6 GW, which yields an equilibrium at about 82 TWh. As primary energy consumption in Germany runs to about 3800 TWh annually (which is used, among other things, to produce ca. 590 TWh of electricity), one may be excused for exhibiting symptoms of panic.

Furthermore, given that the current German government is bent on reducing and eliminating renewable subsidies, it seems rather unlikely that even this poster boy of the renewable revolution will see similar installation rates again any time soon – if ever. Even farther off is the day when installation rates soar to the heights required for decarbonization: just producing current electricity consumption sustainably from renewable sources would require an annual installation rate of (say) 10 GW/a of solar and 5 GW/a of wind. Can even these rates be achieved – and sustained? Perhaps. Perhaps not.

And if one wanted to replace fossil fuels in other uses of energy, such as transport fuels and sources of process heat, one would need to generate anything between, say, 1000 and 3800 TWh per year. One thousand terawatt hours could be sustained by installing (for example) 20 GW of solar and 8 GW of wind. Per year, every year. For 3800 TWh, the required installation rates jump to 76 GW of solar and 30.4 GW of wind.

Finally, it should be noted that the above computations give the physical maximum that can be produced. They completely ignore important difficulties, such as whether the power is produced at the time it’s demanded. If it is not (and it will not be when production depends on the weather), options such as energy storage are required – causing energy losses and necessitating further increases in build rates. As an example, well-informed renewable boosters have proposed alleviating the problems inherent in variable renewable sources by simply building so many generators that enough of them operate at any given moment. Commonly seen estimates for this “overbuild” range from 2 to more than 10; that is, we may need 2 to 10 times as many renewable generators as the best case above would suggest.

Anyone willing to bet whether we’re ever going to see build rates that can sustain even a 2x overbuild in an industrialized economy?

POSTSCRIPT: What about nuclear?

In this calculation, nuclear power has two great virtues. First, nuclear plants last at least twice as long as renewable generators – new plants are designed for 60 years, and it seems possible to extend that to 80. Second, overbuild is much less of an issue when you have a generator that typically produces power 80–90% of the time.

If the Germans were to utterly reverse their nuclear exit (fat chance, I know) and instead build approximately one new gigawatt-class reactor per year, nuclear power would eventually stabilize at about 426.3 TWh. Combined with current renewable build rates, this would result in an equilibrium level of some 666 (\,,/) TWh of carbon-free electricity per year. Not enough to decarbonize fully, but significantly better than the current trajectory.

Posted in Energy, Economy and the Environment, Infographics, Simulations | 7 Comments

Graphic of the Week: The hidden “fuels” of renewable energy


Figure 2 from Vidal, Goffé & Arndt (2013:896) shows the demand for some raw materials based on the WWF’s prediction of wind and solar energy production reaching 25 000 TWh by 2050. Open and closed symbols correspond to different volumes of raw material required to construct different types of photovoltaic panels.

It is well known that there is no such thing as a free lunch. However, it is somewhat less known that there is no such thing as free energy, either.

Despite all the hoopla about new renewable energy sources being “free” and “practically unlimited” in the sense that no one owns the Sun or the wind, the fact remains that in order to harness these energies, we need an immense construction effort. This, unfortunately, is neither free nor unrestricted in the material sense. As the above graph, taken from a recent commentary by Vidal, Goffé & Arndt in Nature Geoscience (2013), shows, projected renewable energy deployments would very soon outstrip the current global production of several key materials. By the authors’ estimates, if we are to follow the lead of renewables-only advocates, renewable energy projects would consume the entire annual copper, concrete and steel production by 2035 at the latest, annihilate aluminum by around 2030, and gobble up all the glass before 2020.

Certainly, material efficiency can improve greatly, substitutes can be found, and production can be increased. Nevertheless, the scale of the challenge is nothing less than daunting: the authors also provide a handy overview of material requirements per installed capacity, from which I calculated a range of figures for energy production.

If we compare renewable energies to that other low-carbon alternative, nuclear power, per energy unit produced, wind and solar electricity production requires

  • 16-148 times more concrete
  • 57-661 times more steel
  • 43-819 times more aluminum
  • 16-2286 times more copper
  • 4000-73600 times more glass.

(The figures assume a lifetime of 20-30 years for renewables and 60 years for nuclear, and the following capacity factors: wind 0.3, solar PV 0.15, CSP 0.4, nuclear 0.8.)
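The conversion from material-per-capacity to material-per-energy is a one-liner: divide tonnes per installed GW by the lifetime generation of that GW. The material intensities below are placeholders, not the actual values from Vidal & Arndt; only the formula is the point:

```python
def tonnes_per_twh(tonnes_per_gw, capacity_factor, lifetime_years):
    """Material intensity per unit of lifetime energy from 1 GW installed."""
    lifetime_twh = capacity_factor * 8760 * lifetime_years / 1000
    return tonnes_per_gw / lifetime_twh

# Placeholder material intensities -- substitute the paper's figures.
wind = tonnes_per_twh(tonnes_per_gw=120_000, capacity_factor=0.3,
                      lifetime_years=25)
nuclear = tonnes_per_twh(tonnes_per_gw=60_000, capacity_factor=0.8,
                         lifetime_years=60)
print(wind / nuclear)  # the kind of per-TWh ratio quoted in the list above
```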

In a very real sense, these materials can be thought of as the “fuels” or “consumables” of renewables. Without doubt, many of these materials can be recycled to an extent, but the required volumes inevitably mean that any substantial increases in renewable energy generation require corresponding increases in virgin production. Furthermore, not everything can be or will be recovered, and in any case, building the infrastructure for renewable energy generation will sequester huge amounts of steel, aluminum and copper over the lifespan of the generators.

But wait! Aren’t I forgetting something – namely, the fuel that nuclear fission uses, and the huge underground caverns required for the disposal of the waste? Indeed, so here’s the second graphic of the day: a rough estimate of mining requirements for various energy sources, per megawatt hour produced.


Calculated after Vidal & Arndt (2013) and various sources for mining requirements. Uranium mining is assumed to take place at the poorest primarily-uranium-producing mines (ore grade 0.1%); other materials are computed using average ore grades and average recycling levels (30% for steel, 10% for concrete, 22% for aluminum, 35% for copper). Geological repository mining requirements are estimated according to Posiva reports.

You may note that nuclear energy’s estimate – and that’s what these are, estimates – is dominated by uranium mining. I deliberately used the low-end value for uranium ore grade, and omitted both in-situ leaching and byproduct mining operations, which would decrease the mining requirement considerably. In fairness, I did the same for the other materials, although some appreciable amounts of iron and copper are recovered as byproducts. I also omitted the high-end estimate for solar PV, because it would have messed up the graphic: the total runs to a staggering 611 kg of mining operations per MWh produced.

The figure is likely to be biased in favor of renewables, as I’ve omitted rare earths from the discussion. As shown in e.g. Öhrlund (2011), rare earth metals (used in e.g. permanent magnets and solar photovoltaic panels) may pose a bottleneck for renewable expansion. Mining these relatively rare (hence the name) elements is a messy business, which could easily inflate the “materials backpack” renewable energies have to carry around. Furthermore, the figure does not account for backup power systems, grid expansion or energy storage – all significant building projects that are especially important for renewable energy.


Vidal, O., Goffé, B., & Arndt, N. (2013). Metals for a low-carbon society. Nature Geoscience, 6(11), 894–896. doi:10.1038/ngeo1993

Vidal, O., & Arndt, N. (2013). Metals for a low-carbon society: Supplementary Information. Nature Geoscience, 6(11), 15–17. doi:10.1038/ngeo1993

Öhrlund, I. (2011). Future Metal Demand from Photovoltaic Cells and Wind Turbines – Investigating the Potential Risk of Disabling a Shift to Renewable Energy Systems. European Parliament, Science and Technology Options Assessment. Brussels.

Posted in Energy, Economy and the Environment, Infographics, Scarcities and constraints | 26 Comments

Space system “Shuttle,” part of USA’s nuclear attack arsenal?


The story of the white elephant colloquially known as the Space Shuttle is familiar to most students of the history of technology. The shuttle was originally touted as a cheap way to access space: being mostly reusable, it would have done for space travel what the DC-3 did for air travel, i.e. open up space for large-scale exploration and exploitation.


Of course, we all know how that promise fared against reality. Instead of the envisioned 50 or so annual launches (which might actually have covered the program’s staggering cost), shuttles went up perhaps six times a year. There simply were not enough payloads looking for space access, and refurbishing the shuttle always took longer than early analyses had assumed. However, the shuttle had been sold to Congress on a launch schedule that even its ardent supporters believed unrealistic. Therefore, the shuttle remained on the agenda for largely political reasons, possibly because of fears that if it were cancelled, there would be nothing else to loft NASA’s astronauts into orbit. In the end, the “cheap” and reusable space access turned out to be (probably) less safe and far more expensive than using expendable, throwaway boosters would have been.


However, the Shuttle provoked interesting reactions back in the day. Since the name of the game on both sides of the Cold War was paranoia about the adversary’s intentions, every pronouncement and every program was pored over with a magnifying glass by unsmiling men in drab offices. When the U.S. announced the Space Shuttle, Soviet analysts naturally went to work. However, it soon became apparent to them that the launch schedule NASA had advertised – over 50 launches per year – was hopelessly optimistic. The Soviets, being no slouches in the rocketry department, could not fathom why NASA wanted to build a complex, reusable spaceplane instead of simply using more tried and reliable expendable launch vehicles (Garber, 2002:16).


But there seemed to be one customer for the shuttle that would not mind the cost or the complexity. 


Eager to sell the shuttle as the only space access the United States would need, NASA had teamed up with the U.S. Air Force. The Air Force was responsible for launching all U.S. defense and intelligence satellites, and if NASA could tell Congress that the Air Force, too, could use the shuttle, then NASA had extra political leverage to extract funds to build one. It was immaterial that the military did not really have a requirement for a shuttle: what was apparently far more important was that NASA could thereby insulate the shuttle from the political charge that it was just a step towards human exploration of Mars, or a permanent space station. Both of these were exactly what some people at NASA wanted it to be, but they also happened to be directions that President Nixon had rejected as too expensive in 1971 (Garber, 2002:9–13).

Therefore, the shuttle design requirements expanded to include this political shielding. It took the form of payload bay size (designed to accommodate spy satellites of the time) and, more importantly, “cross-range capability.” The Air Force wanted the option of sending the shuttle into an orbit around the Earth’s poles; scientifically, this is a relatively uninteresting orbit, but for reconnaissance satellites that sweep the Earth’s surface, it’s ideal. The military also wanted the option of capturing an enemy satellite and returning after just one orbit, quickly enough to escape detection (Garber, 2002:12).

However, this requirement caused a major problem. Because the Earth rotates under the spacecraft, after one orbit the launch site would have moved approximately 1800 kilometers to the east. If the craft were to return to base after one orbit, instead of waiting in orbit until the base again rotated underneath it, it would have to be able to fly this “cross-range” distance “sideways” after re-entering the atmosphere (Garber, 2002:12).


In the end, NASA designed a spacecraft with the required cross-range capability. This meant large wings, which added weight and complexity, which in turn decreased the payload, which in turn required more powerful engines, which in turn made everything more complicated… (In all fairness, for various good reasons, NASA might have designed a relatively similar shuttle even without the Air Force requirements. However, it seems that the requirement had at least some effect on the cost and complexity of the shuttle.)



Because all this was public knowledge, the analysts in the Soviet Union rejoiced. A spacecraft that could launch from Vandenberg Air Force Base, fly a single polar orbit, and then return stealthily to its base could be nothing else than a weapon in disguise. It was immaterial that few if any analysts could figure out why such an expensive craft was being built: obviously, the capitalist aggressor must have discovered something that justified the huge expense. An analysis by Mstislav Keldysh, head of the Soviet National Academy of Sciences, suggested that the Space Shuttle existed in order to lob huge, 25-megaton nuclear bombs from space directly onto Moscow and other key centers (Garber, 2002:17). The real danger was that the shuttle could do this by surprise: there would be little to no warning from early warning radars, and no defense.


To date, there is no evidence whatsoever that such a mission was even seriously considered. The reason the Space Shuttle was built was politics; there was no hidden agenda (at least, not one envisioned by the Soviets). But this paranoid line of thinking did leave a legacy, or two.

One of the legacies was the Soviet “Buran” shuttle program. Apparently, Buran got built, and largely resembled the U.S. shuttle, simply because the Soviets could not understand why the United States was wasting so much money on the Space Shuttle; however, Buran really was a weapon, with a planned capability to drop up to 20 nuclear bombs from orbit.


Another legacy is the image above. Taken from a 1986 Soviet civil defense booklet, it illustrates the “nuclear attack arsenal of the USA.” Prominently portrayed alongside the MX missile is the “space system ‘Shuttle.’” In other words, the Soviets were so certain that the white elephant was simply a weapon in disguise that they printed it in a recognition guide!


Many thanks to NAJ Taylor and Alex Wellerstein for bringing this to my attention, and to whoever so kindly provided the scans of this booklet in the first place.



References

– info about Buran’s combat role; see also the Astronautix entry on Buran.

Garber, S. J. (2002). Birds of a Feather? How Politics and Culture Affected the Designs of the U.S. Space Shuttle and the Soviet Buran. Master’s thesis, Virginia Tech.

– Hi-res scan of a 1986 Soviet civil defense booklet.

Posted in History of technology, Nuclear energy & weapons, SETI, Aliens & Space | 4 Comments

Graphic of the Week: How to reduce emissions fast enough?


Three countries and one energy policy that have achieved the required CO2 emission reduction rates – and one that hasn’t

According to most estimates, we really are running out of time for the required CO2 emission reductions. Even if we were to achieve peak emissions by 2016, we’d still need global emission reduction rates of around 3% per year – all the way to 2050.
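As a sanity check on what “around 3% per year until 2050” compounds to (assuming, as above, a 2016 peak):

```python
# Compound effect of a sustained 3% annual emission cut from a 2016 peak.
rate = 0.03
years = 2050 - 2016          # 34 years of sustained decline
remaining = (1 - rate) ** years
# remaining is roughly 0.355: about a 65% total cut from peak emissions.
print(remaining)
```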

Fortunately, such rates may be achievable. In fact, three countries have been close – by accident.

The sad state of current climate policy is nowhere as evident as in the fact that the fastest emission reductions have been achieved without any climate policy at all. Sweden, Belgium and France all achieved extremely rapid (compared to anything else, that is) rates of decarbonization as a result of their other energy policies. Compared to the generally acknowledged leader of climate policy, Germany, their success is remarkable – or, if you like, the lack of success from Germany is highly disturbing.

Anti-nuclear advocates counter with the claim that Germany has been exemplary in reducing its emissions from 1990 levels. This may be so, but such a claim ignores two very salient facts. The first is that these other nations achieved their substantial reductions before 1990; comparing their emission reductions since 1990 to Germany’s is therefore fundamentally unfair.

The second and even more salient point is that much of Germany’s vaunted achievement is due to the so-called “wallfall” effect. In 1990, Germany had just been unified, and the former East Germany still operated a number of awesomely inefficient and polluting coal plants and factories whose emissions counted towards the German 1990 totals. These were for the most part closed or extensively modernized in the five years following unification. A study by the respected Fraunhofer Institute in Germany put the impact of these wallfall reductions at around 50% of all emission reductions achieved between 1990 and 2000, and 60% of energy-related emission reductions.

While I welcome any emission reductions irrespective of how they’re achieved (well, almost), there is no valid reason to ascribe wallfall reductions to any climate policy – and hence no valid reason to use Germany’s performance as proof of its climate policy without first removing the substantial wallfall effect. And when these one-off windfalls are removed from the equation, the performance of Germany’s policies looks, quite frankly, rather dismal.

Here’s a hint to politicians: if your policy is repeatedly outperformed by a lack of policy, it might be time to consider alternatives.

PS. These figures were inspired by Global Carbon Project research (PDF link). The slight differences in reduction percentages (e.g. Sweden 4.5% in GCP) are due to the fact that I used per capita emissions, while Global Carbon Project used total emissions. Emission data is from CDIAC; figures for Germany before 1990 are combined East & West Germany totals. 

Posted in Energy, Economy and the Environment, Infographics, Nuclear energy & weapons | 5 Comments

Graphic of the week: The great “80% of world’s energy could be generated from renewables” fallacy

Is a future without fossil fuels and without nuclear truly feasible?

In 2011, the Intergovernmental Panel on Climate Change (IPCC) released its Special Report on Renewable Energy Sources and Climate Change Mitigation, or SRREN. The report sought to determine the potential contribution of renewable energy sources to the mitigation of climate change. It reviewed the results from 164 individual scenarios derived from 16 different models, all developed since 2006.

For some, the report’s conclusions were sobering: nearly half of the scenarios suggested that renewable energy sources might contribute no more than 27% of the world’s energy supply by 2050 (see Chapter 10, p. 794). Even counting only the most aggressive scenarios – the ones where atmospheric CO2 concentrations are stabilized at less than 400 parts per million – the median estimate of the world’s renewable energy supply in 2050 was somewhat less than 250 exajoules; in other words, about half of global primary energy consumption today.

The report is by no means overly critical of renewables. For example, its rather loose definition of renewable energy includes “traditional” biomass, a main cause of deforestation, and the feasibility of the various scenarios is never assessed in the report: no evaluation whatsoever is conducted to determine which of the 164 scenarios might be realistic and which might not. Nevertheless, the results show clearly that renewable energy sources are highly unlikely to be enough for meaningful climate change mitigation, even if energy efficiency advances by leaps and bounds.

It is therefore very instructive to note how various anti-nuclear groups have chosen to portray the results. Without exception, and perhaps misled by SRREN’s extremely skewed press release, every anti-nuclear organization I’ve so far researched tells you that IPCC SRREN “proves” or “shows” that a future powered completely by renewables is entirely possible.

For just one example, take this statement from Greenpeace:

“According to the IPCC’s ‘Special Report on Renewable Energy Sources and Climate Change Mitigation’ (SRREN), by harnessing just 2.5% of viable renewable energy sources, with currently available technologies, its possible to provide up to 80% of the world’s energy demand by 2050.” (Source)

This, my friends, is cherry-picking pure and simple. What anti-nuclear activists don’t tell you is that only two of the 164 scenarios could in any way be construed to suggest anything like that. What’s more, the 80% claim comes from Greenpeace’s own Energy Revolution scenario. One could take issue with the fact that it’s largely prepared with data and “assistance” from the renewable industry lobby group EREC, but industry biases and other oddities aside, this scenario assumes that world energy use actually decreases from current levels – even as the world’s population grows to 9 or 10 billion, and there are no signs whatsoever that developing nations are willing to forgo the fruits of a high-energy lifestyle.

This is so important that it bears repeating: at most, 1.2% of the scenarios find that it might be possible to generate even 80% of the energy the world uses today from renewable sources alone. The remaining 98.8% aren’t so sanguine.

And, again, that’s at current consumption levels, when the broad consensus is that the world’s primary energy demand will increase greatly over the next few decades.

Sure, technically speaking the anti-nuclear organizations are correct: it might be possible to derive 80% of the world’s energy from purely renewable sources, in the “best case” scenario. But omitting crucial details and the gist of the report is a wanton act of cherry-picking and demagoguery; it’s the exact equivalent of claiming that CO2 emissions aren’t harmful – if all the known and suspected feedback mechanisms turn out to cancel the effects of increased emissions.

What the SRREN report actually shows is that in practice, renewables alone are highly unlikely to be enough, fast enough, to avert the coming climate catastrophe. Similar results abound; to pick just one example, consider the recent modeling work undertaken for a book by Mark Lynas. It shows rather clearly that even the aforementioned Greenpeace/EREC Energy Revolution scenario would be highly likely to fail to keep global warming below 2°C – even if the scenario were executed in full, without fail, starting today.

So, here’s the question: what should we call those who cherry-pick the climate science to suit their political agenda?

Posted in Energy, Economy and the Environment, Infographics, Nuclear energy & weapons | 5 Comments