Finnish Greens propose burning more forests as alternative to more nuclear

Here in Finland, the energy conversation has heated up considerably in recent months. This, of course, is a good thing: energy is very likely one of the key questions of the 21st century.

The discussion largely revolves around whether or not to build new nuclear power, namely the Fennovoima power plant, as short-sighted as that discussion by itself is. Predictably, the Green Party is dead set against any new nuclear, and has lately latched onto Fennovoima’s Russian connection as a key argument why. Make no mistake: this is just an argument of convenience, used simply because many Finns hold the Russians in less than stellar regard.

Yesterday, the Greens published their own idea of how to provide energy without Fennovoima’s AES-1200 reactor. That reactor is supposed to come on-line around 2025 (although I have my doubts as to whether the funding can be found) and provide between 9 and 10 TWh of almost carbon-free electricity per year. Since even most of the die-hard anti-nuclear Greens acknowledge that combating climate change will require massive additions of carbon-free energy, opposing massive additions of carbon-free energy presents something of a problem.

To get out of the dilemma, the Greens now propose the following measures, to be completed by 2025:

  • The addition of about 2-2.5 TWh of wind power, on top of the 9 TWh already committed to in the government’s Energy and Climate Strategy (2013).
  • Electricity co-generation from wood-based biomass, to be increased by 2 TWh per year (i.e. 6 TWh of thermal power).
  • Solar PV to be increased by 1 TWh per year.
  • Replacement of electric heating by wood-based pellet heating to the tune of 1 TWh per year.
  • Additional energy efficiency measures above and beyond those already mandated, so that approximately 4.7-5.4 TWh of electricity per annum are saved. The savings are to be achieved in the domestic and service sectors.

The targets, as such, are not wildly ambitious and are probably realizable, although there is surprisingly little in the report about costs: renewables are simply assumed to be competitive on their own even when the average electricity price remains well below 40 €/MWh. But the real problems are twofold. First, as predicted, much of the increase in power generation (in fact, the largest single component) comes from biomass burning. It is arithmetically impossible to reduce emissions by promoting burning, at least for as long as carbon capture and storage isn’t commonplace – which it won’t be by 2025.

Second, the report is entirely mum on the key issue: what if the aforesaid energy efficiency measures were taken in addition to building Fennovoima’s reactor?

There is no law of nature that prevents energy efficiency measures from being taken together with nuclear energy. In fact, the Swedes (whom the report lauds) have done exactly that. It is clear that if the energy efficiency measures were taken together with a 9 TWh increase in carbon-free energy generation, we would be farther along the decarbonization track than if the addition of carbon-free energy remains at 2.5 TWh and we increase wood burning by 7 TWh (thermal).

A climate warrior’s solution would have been to push for energy efficiency measures in addition to the nuclear power plant. Perhaps they could even have been made a condition for the plant’s acceptance. But the Greens’ dogmatic opposition to nuclear has robbed us of this chance.

There is also the problem of where, exactly, the biomass will come from. Even the extremely pro-forestry, pro-farming Center Party has recently recognized that the Energy and Climate Strategy will likely exhaust the easily available, relatively carbon-neutral (i.e. perhaps not much worse than natural gas) feedstocks of forestry residues. That is a source of no more than 15 TWh thermal – and now we would need 7 TWh more. Not to mention that recent research suggests that even the former figure may actually increase Finland’s emissions over the short and medium term, even compared to coal burning!

One could also note that the issue is very likely not whether to achieve climate goals with nuclear power or with energy efficiency and renewables. Recent reports from the IPCC indicate that we very probably need nuclear and renewables and energy efficiency and carbon capture, and even that will be a close shave. Squandering valuable renewable resources and energy savings simply to oppose a particular form of low-carbon technology is more than irresponsible: it is a reckless gamble with the safety of future generations.

PS. The party that has claimed “finlandization” due to the Rosatom deal has no problem proposing, in the same report, that energy taxes should be reoriented to promote the burning of cleaner fuels instead of coal. Such as natural gas. Which happens to come entirely from Russia, which isn’t stored in Finland, and which has no plausible alternative source of supply in the medium term.


Graphic of the Week: Having too much and too little renewables – at the same time

One of the touted benefits of renewable energy is that it pushes down the price of electricity when the wind blows or the sun shines. Besides lowering energy bills, this kills the profitability of traditional “baseload” power plants – i.e. those burning coal or splitting atoms – and forces them to close, to the rejoicing of all.

Or so we’ve been told. But can there be too much of a good thing?

This is a problem where I would value your input, my dear reader. Please share your thoughts on whether I’ve made any mistake or overly simplified the situation, and what could be done to correct this problem. But first, let me explain.

In order to understand the possible problem, look first at Fig. 1 below. The graph shows a rather typical weekday load in the combined German/Austrian electricity grid (26.3.2014, to be precise: source). From the nighttime low, the load increases during the day and falls back late in the evening. You may also note the contributions of wind (blue) and solar (yellow) power.

Image

Figure 1: Total electricity production in the combined German/Austrian electricity network, 26.3.2014.

In Fig. 2, the demand is unchanged (black), but renewable production has been increased: by a factor of eight for wind power and by a factor of four for solar power. Such a scheme could, theoretically, deliver some 75% (or more – see below) of electricity from low-carbon sources. This would be a great accomplishment, even though France and Sweden did better already in the 1980s. (Note that in reality, increased penetration would slightly flatten the shape of the production peaks, due to increased geographical diversity. But that is not a large enough effect to matter for the purposes of this experiment.)

Image

Figure 2: Production when solar PV generation is increased 4 times and wind production 8 times from current levels. The black curve denotes demand.

As you may note, there are times when power production greatly exceeds demand. In terms of production “lost,” this in itself is not a great problem: some 13% of total solar production and about 10% of wind production rise above demand, and are either wasted or need to be stored for future use. Many renewable energy advocates – particularly those advocating renewables-only energy scenarios – would stop here and point out that overproduction is not a terribly big problem: a loss of 10-15% of daily production may be acceptable if the prices of renewable energy sources continue to fall.
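The overproduction shares above boil down to summing the production that lies above the demand curve. The sketch below demonstrates the accounting with made-up hourly profiles – not the actual German/Austrian data behind Fig. 2 – so the number it prints is purely illustrative.

```python
def curtailed_fraction(production, demand):
    """Fraction of total production lying above the demand curve,
    i.e. energy that must be wasted or stored."""
    excess = sum(max(p - d, 0.0) for p, d in zip(production, demand))
    total = sum(production)
    return excess / total if total else 0.0

# Toy 24-hour profiles in GW (invented, not real market data).
demand = [55, 53, 52, 52, 54, 58, 64, 70, 73, 75, 75, 74,
          73, 72, 71, 70, 70, 71, 73, 71, 67, 63, 59, 56]
solar  = [0, 0, 0, 0, 0, 1, 4, 10, 18, 25, 30, 32,
          31, 28, 23, 16, 8, 3, 1, 0, 0, 0, 0, 0]
wind   = [6, 6, 7, 7, 6, 6, 5, 5, 5, 6, 6, 7,
          7, 7, 8, 8, 8, 7, 7, 6, 6, 6, 6, 6]

# Scale as in Fig. 2 (solar x4, wind x8) and check the combined surplus.
renewables = [4 * s + 8 * w for s, w in zip(solar, wind)]
print(f"curtailed share: {curtailed_fraction(renewables, demand):.0%}")
```

Attributing the combined surplus separately to solar and wind (the post’s 13%/10% split) additionally requires an allocation rule, which this sketch leaves out.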

But it may get worse. Much worse.

In a deregulated electricity market, the price of electricity is effectively set by the marginal cost of the last generator needed to meet demand. The marginal cost of producing electricity with solar PV and wind turbines is close to zero.

Therefore, what will happen once renewable production exceeds total demand?

Every renewable generator wants to make money from selling electricity. In other words, every producer wants to be in the group of producers that gets a chance to sell their electricity. Furthermore, every generator has an incentive to sell at almost any price, as long as it at least covers their very low marginal costs. Once production exceeds demand, every producer must lower their asking price to the lowest level they can bear, because otherwise they would be left out of the market: the next-door neighbor can always undercut any asking price higher than marginal cost. This creates a bidding war – a race to the bottom, where the bottom is a few cents per megawatt hour, or even less. In other words, the moment production exceeds demand, the price of electricity collapses.
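A toy merit-order dispatch makes the mechanism concrete. This is a sketch under simplified assumptions (a uniform clearing price, bids at marginal cost, no strategic behavior), and the offer list is invented for illustration.

```python
def clearing_price(offers, demand):
    """Merit-order clearing: sort offers by marginal cost and return
    the cost of the last unit needed to meet demand.
    offers: list of (marginal_cost_eur_per_mwh, capacity_gw) tuples."""
    served = 0.0
    for cost, capacity in sorted(offers):
        served += capacity
        if served >= demand:
            return cost
    raise ValueError("not enough capacity to meet demand")

# Invented example: 50 GW of near-zero-cost renewables plus thermal plants.
offers = [(0.01, 50.0), (30.0, 40.0), (60.0, 30.0)]

print(clearing_price(offers, 70.0))  # thermal still sets the price
print(clearing_price(offers, 45.0))  # renewables alone cover demand
```

As soon as the zero-cost block alone covers demand, the clearing price drops to (near) zero for every megawatt hour sold in that hour – which is the collapse described above.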

Unfortunately, if there is excess production, it is not just the excess that is worthless. Unless I’m mistaken, practically all production during hours when a) there is excess production and b) the marginal cost of that production is close to zero is nearly worthless. Figure 3 shows what will happen.

Image

Figure 3: Electricity that can be sold at more than the near-zero marginal cost of solar/wind generation. The black curve denotes demand.

During daylight hours and (in this case) in the evening, electricity is basically given away for free. In fact, under some circumstances the price can be negative: you may get paid for using electricity. This is because the electric grid fails just as easily when generation exceeds the load as when it falls short of it. Therefore, the grid operator may sometimes have to offload excess electricity to anyone who can waste it.

What’s the impact? About 20% of wind power production is worthless, which is bad in itself. But it is the day-only solar that really takes the hit: a massive 74% of total solar output must be handed out for pennies. In the worst case, someone has to pay to get rid of the production.

You may imagine what this does to profitability.

The renewable boosters are right about one thing, though: this does wreak havoc on the profitability of existing plants as well. The sad thing is that they are still needed: note the still-black areas in Figure 3. This means that conventional plants, too, will almost certainly need subsidies simply to keep them in reserve. Furthermore, they still account for some 25% of total daily production. If that is met by fossil fuels, there is practically no chance that climate targets can be achieved.

What happens then?

That was the theory, but how will it play out in practice? Here are some potential scenarios; feel free to add yours in the comments section below.

1. The most obvious solution might be to store the excess electricity for later use, perhaps by converting it to synthetic methane. If all the electricity produced in Fig. 2 could be losslessly stored and recovered, some 86% of daily electricity might be covered from renewable sources. (The real figure would, of course, be much lower.) The problem here is that storage technologies are still very much under development. Much depends on their progress: for the reasons outlined above, storage is a far more important component of a sustainable energy system than advances in low-cost renewable generation. Given sufficient amounts of sufficiently advanced (and cheap) storage, we would have no problem managing large low-carbon penetrations – both renewable and nuclear. In reality, however, there probably will not be storage options that are scalable enough at a low enough price.
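The lossless-storage idealization can be checked with a few lines of code: all surplus goes into an unlimited store and comes back out at a chosen round-trip efficiency. The profiles and the 70% efficiency figure below are assumptions for illustration; only the accounting is the point.

```python
def renewable_share_with_storage(production, demand, efficiency=1.0):
    """Share of demand coverable when surplus can be stored and
    discharged later (unlimited capacity, single period, greedy)."""
    direct = sum(min(p, d) for p, d in zip(production, demand))
    surplus = sum(max(p - d, 0.0) for p, d in zip(production, demand))
    unmet = sum(max(d - p, 0.0) for p, d in zip(production, demand))
    covered = direct + min(surplus * efficiency, unmet)
    return covered / sum(demand)

# Invented four-period profiles (GW); compare lossless storage with a
# 70% round-trip efficiency (synthetic methane would likely be worse).
production = [10.0, 120.0, 80.0, 5.0]
demand = [60.0, 70.0, 75.0, 65.0]
print(renewable_share_with_storage(production, demand, 1.0))
print(renewable_share_with_storage(production, demand, 0.7))
```

Even this crude accounting shows why the 86% figure is an upper bound: any realistic round-trip efficiency pushes the achievable share down.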

2. The costliest option would probably be to continue paying subsidies to renewable energy generators irrespective of whether their electricity is needed or not. The benefit-cost ratio of this option seems remarkably poor, given that increased penetration will swiftly increase the ratio of zero-price electricity to paid-for electricity.

3. Under a free market, the renewable energy revolution will, for all intents and purposes, stop dead in its tracks once peak production regularly begins to catch up with total demand. No amount of foreseeable cost reductions will make solar PV a competitive energy source when 75% (or more) of its production must be given away for free. Figure 4 below shows one potential scenario, where solar production increases 3.3 times and wind power 5 times from current levels. In this case, conventional energy sources must cover 60% of electricity demand. This is totally incompatible with any scientifically credible electricity decarbonization plan – not to mention the decarbonization of the energy system as a whole. Furthermore, the profitability of conventional plants will be poor or nonexistent, necessitating heavy subsidies, while the high share of low-marginal-cost production will nevertheless drive down electricity prices during peak production hours, necessitating still more subsidies for renewable generators. This is the worst-case scenario: punishing subsidies all round, and no worthwhile climate progress. Sadly, it is also the most likely one, in my opinion.

Image

Figure 4: “Sustainable” increase in renewable generation. In financial terms, that is.

4. Demand management, i.e. smoothing of the load curve, will not help much. In particular, solar PV installations will hit a wall no matter how much demand management there is.

5. Supergrids may help somewhat, if excess electricity can be exported to geographically faraway locations and, conversely, excess production from those locales can be used to firm up the grid. However, Germany and Austria together already form a rather large locale, and it is an open question whether adding more Central European countries to the grid would do much good. To time-shift the solar production peak by three hours in both directions (where it could really make an impact) would require building an equivalent amount of solar capacity in Russia and the Canary Islands, and everywhere in between. The chances of the former occurring in the current geopolitical climate are remote, to say the least.

6. One possible solution would be for the producers to gang up and share the loss, i.e. curtail a percentage of everyone’s production whenever it threatens to exceed demand and crash prices. Alternatively, the government could force the producers to do so. This could alleviate the problem somewhat, although prices would still suffer during peak production hours. Unfortunately, such cartels are probably illegal and run counter to the ethos of distributed generation; furthermore, they would produce an irresistible incentive to undercut the agreed-upon price target, because any one producer would gain more by selling 100% of her electricity at half the price than by selling 25% of it at full price. This would, over time, still drive prices towards marginal cost.

7. In the longer term, the availability of free electricity will stimulate innovation in electricity use. Insofar as it goes towards energy storage technologies, or towards technologies that can adjust their operation to absorb peak renewable production and thereby reduce demand for dirty energy, that is a good thing. But there are no guarantees that the cheap energy will only be used in ways that offset dirty energy production elsewhere: it is just as likely to be used simply to increase economic activity during hours when electricity can be wasted on just about anything.

For example, one conceivable end result might be a sharp decrease in the price of aluminium: aluminium smelters could conceivably be operated relatively profitably as peak-production absorbers, and once the major energy cost is effectively taken away, the price of aluminium will drop. As a result, it will be used in more and more applications, thus stimulating demand for more aluminium. This has obvious benefits, but from the viewpoint of environmental protection, increased (virgin) production may be a step backwards.

In conclusion, we may very well end up having too much of a good thing. And this is something that bears remembering the next time someone tells you that renewable overproduction is not a problem, or that renewables are reducing electricity prices and making existing plants uncompetitive. Or applauds when 50% (or some other figure) of daily electricity production is met from renewable sources.


Modeling Societal Collapse as a Result of Stingy Support for PhD Students

Editor’s note: the recent publication of a study about the coming collapse of civilizations has most probably encouraged the author to submit this work for review. Due to its nature, we felt it best to present it to the whole world for a thorough peer review.

A large number of explanations have been proposed to account for the collapse of various historical societies. As Tainter (2014) recently argues, earlier scholars ascribed collapses to elite mismanagement, class conflict, and peasant revolts, while the increased emphasis on environmental issues has inspired modern scholars to suggest that societies have collapsed due to the depletion of critical resources, such as soils and forests. Most recently, some studies have suggested that inequality and elite resource consumption lead to societal collapse (Motesharrei et al., in press).

However, these explanations do not focus on what seems to us to be the most pertinent explanation for the collapse of these early societies. It is common knowledge that the earlier societies – examples of which include the Roman Empire, the Mesopotamian civilization, and the Maya civilization – did not possess a modern scientific establishment. As a result, we argue, these societies could not escape their predicaments by developing novel technologies in order to curb elite mismanagement, reduce environmental impacts, or defuse class conflict via widespread Netflix subscriptions.

In particular, the environmental literature is well aware of the potential of technology for reducing environmental impacts. From Ehrlich and Holdren (1971), we know that the human impact on the environment can be described by the following equation:

I = P × A × T

where I stands for Impact, P for Population, A for Affluence, and T for Technology. Improvements in technological efficiency can reduce resource intensity, and therefore reduce the variable T (see also Chertow, 2000). Technological change can even allow us to substitute between different resources, as was the case when the whale oil business was outcompeted by mineral oils.

Therefore, it stands to reason that improvements in technology could allow us to postpone or even forestall a societal collapse. While “technology” is somewhat difficult to operationalize, a useful proxy may be found in the share of the population holding the highest academic degree – the doctorate. As an example, the countries ranking highest in PhD graduates per capita (as of 2010, in descending order: Switzerland, Sweden, Germany, Finland, Austria, Denmark) typically score highly in various environmental sustainability and related indicators. It may therefore be surmised that PhD education may help societies forestall a coming collapse.

To test this intuition, we develop a simplified computer model. Following the lead of recent studies, we base this model on the well-known Lotka-Volterra, or predator-prey, model. The additions to this model are as follows: besides renaming the variables (P, “population,” and N, “nature”), we add a variable T, “technology,” to represent the stock of useful knowledge available for forestalling collapse – e.g. the variety of computer games available for amusing the lower social strata.

The growth of this variable is dictated by the equation

Image

where Q_PhD is a constant denoting the share of PhD students in the population and c is a constant. An increase in T will diminish the environmental impact of the population, i.e. the degradation of nature dN_d, as follows:

Image

where a is a constant. Finally, we initialize the system with a fixed initial stock of Nature (N_init), and adjust the renewal of natural resources dN_r so that it depends on the current stock of Nature and will not cause the stock of Nature to exceed its original maximum:

Image

where b is a constant. A system dynamics diagram of the model can be found in Figure 1 below. The model has been implemented on NetLogo (Wilensky, 1999), based on a predator-prey model by Wilensky (2005).

Image

Figure 1. System dynamics diagram of the model.
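Since the model’s equations are only shown as images, here is a hedged Python re-sketch of the general idea. Every functional form and constant below is my own guess, chosen merely to reproduce the qualitative behavior described (technology grows with the PhD share and dampens environmental degradation); it is not the author’s NetLogo code.

```python
def simulate(q_phd, steps=500, a=0.05, b=0.1, c=0.02,
             n_init=100.0, p0=5.0):
    """Discrete-time toy: nature N, population P, technology T.
    All functional forms are assumptions for illustration."""
    N, P, T = n_init, p0, 0.0
    history = []
    for _ in range(steps):
        dT = c * q_phd * P                       # assumed: tech grows with PhDs
        degradation = a * P / (1.0 + T)          # assumed: tech cuts impact
        renewal = b * N * (1.0 - N / n_init)     # assumed: capped regrowth
        dP = 0.05 * P * (N / n_init) - 0.03 * P  # assumed: growth tied to nature
        N = max(N + renewal - degradation, 0.0)
        P = max(P + dP, 0.0)
        T += dT
        history.append((N, P, T))
    return history

# With q_phd = 0 the technology stock never grows; with q_phd > 0 it does,
# which is the qualitative contrast between Scenarios 1 and 3.
no_phd = simulate(0.0)
funded = simulate(0.2)
```

This is only meant to show the shape of such a model; the actual dynamics in Figs. 2-4 come from the .nlogo file linked in the Appendix.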

Results

The results of three representative scenarios are shown below. In Scenario 1, the society chooses not to support PhD students at all (Q_PhD = 0). As can be seen from Fig. 2, this forces the society into a cycle of growth and collapse, with a downward trend in the population peaks. We posit that such a scenario – a lack of support for PhD students – is as likely a cause for the collapse of historical civilizations as any single reason suggested so far.

Image

Figure 2: Cyclical collapse of a society that does not support PhD students.

In Scenario 2, the society chooses to support PhD students rather stingily (Q_PhD = 0.05). Figure 3 shows the results: in this case, the population suffers several collapses before the stock of technology allows it to transcend the limits of the natural stocks.

Image

Figure 3: A cycle of booms and busts, followed by breakout, experienced by a society that offers only limited support to PhD students.

Finally, in Scenario 3, the society supports PhD students generously (Fig. 4). Collapse is avoided entirely, allowing the population to grow nearly exponentially.

Image

Figure 4: Near-exponential growth in a society that generously supports PhD students.

Conclusions

Based on the model described above, we conclude that collapses can occur in a limited system. However, collapse can be avoided with judicious policies. In particular, the model suggests that generous support for PhD students is essential for achieving this goal. We therefore implore policy-makers to adjust their policies accordingly; a stipend of 4,000 € per month should suffice.

Appendix

The NetLogo model can be downloaded here. The NetLogo itself can be freely downloaded from here.

References

Chertow, M. R. (2000). The IPAT Equation and Its Variants. Journal of Industrial Ecology 4 (4): 13–29. doi:10.1162/10881980052541927.

Ehrlich, Paul R.; Holdren, John P. (1971). Impact of Population Growth. Science 171 (3977): 1212–1217. JSTOR 1731166.

Motesharrei, S., Rivas, J., and Kalnay, E. (forthcoming). Human and Nature Dynamics (HANDY): Modeling Inequality and use of Resources in the Collapse or Sustainability of Societies. In press.

Tainter, J. (2014). Commentary on the Motesharrei et al. paper, to Keith Kloor. https://blogs.discovermagazine.com/collideascape/2014/03/21/judging-merits-media-hyped-collapse-study/

Wilensky, U. (2005). NetLogo Wolf Sheep Predation (System Dynamics) model. http://ccl.northwestern.edu/netlogo/models/WolfSheepPredation(SystemDynamics). Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.

Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.


Design against climate change – suggestions for a project?

They say crowdsourcing is the thing nowadays, so let’s try it out.

A friend in California, who happens to be an excellent industrial designer, has for quite some time been interested in climate change and related issues. Recently, he asked me for recommendations for projects that seek to make the world a better place in this regard and that could use his skills and, perhaps, some investment.

I thought this was an excellent idea: the value of good design can be incalculable, especially in products and services that are supposed to be used by actual humans. And this guy is seriously good at design – as in “one of the best in the world.” He also happens to be very smart, very nice, and extremely passionate about what he does. I can guarantee that with his expertise, passion and contacts in play, any project or startup will have a significantly higher chance of making a real dent in the world.

But! Off the top of my head, I couldn’t say which project might be a good fit for his skills and interests. Therefore, I ask thee, my dear readers: do you have any ideas for projects for him?

So, please spread the word and put forward suggestions, either in the comments section below, on Twitter (@jmkorhonen), or through e-mail (jmkorhonen@gmail.com). I’ll make sure the suggestions go forward!


Graphic of the Week: What’s the required build rate for a sustainable energy system?

Image

One aspect of the energy system that is largely ignored is the ultimate sustainable capacity that can be achieved with a given rate of installation. Accustomed as we are to news about renewables breaking installation records, we may overlook the fact that these installations will eventually reach the end of their lives and need to be replaced – and that, at some point, the entire installation effort will be devoted to nothing but maintaining the current production level.

Calculating this equilibrium level can be a bit tricky, so I did the maths for you. Or, more to the point, I ordered my NetLogo to do so; you may download this wonderful (and free!) simulation environment here, and the .nlogo file I used here. Feel free to use and extend it for your own purposes.

The results are shown in the graph above. It shows the ultimate equilibrium production level reached for 1 GW of annually installed capacity, as a function of capacity factor and plant lifetime in years. In other words, it tells you what kind of an energy system you can have if you are able to install a gigawatt of generating capacity every year from here to eternity.
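The steady state has a simple closed form: if you add B gigawatts every year and each plant lives L years, the standing fleet settles at B × L gigawatts. A minimal sketch (the 8760 hours per year and the example parameters are the obvious assumptions):

```python
def equilibrium_twh(build_rate_gw, lifetime_years, capacity_factor,
                    hours_per_year=8760):
    """Steady-state annual production (TWh) for a constant build rate:
    the fleet stabilizes at build_rate * lifetime GW of capacity."""
    fleet_gw = build_rate_gw * lifetime_years
    return fleet_gw * capacity_factor * hours_per_year / 1000.0

# 1 GW/a of solar-like capacity (CF 0.1, 25-year life):
print(equilibrium_twh(1.0, 25, 0.1))  # ~22 TWh/a at equilibrium
```

For 7.6 GW/a of German solar at CF 0.1 and a 25-year lifetime, this closed form gives about 166 TWh – in the same ballpark as the 173 TWh quoted below, with the difference presumably due to slightly different lifetime or capacity-factor assumptions in the .nlogo model.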

What you may instantly note about the graph is the importance of plant lifetime. The longer the lifetime, the less often you need to build new plants simply to replace old ones. Therein may lie a problem: the lifetime of the most popular and anticipated renewable generators (solar PV and wind) is likely to be around 25 years. What this means is that the acclaimed record-high installation rates achieved lately – for example, Germany’s 2012 record of about 7.6 GW of solar in a year – are simply not enough. Even if this rate could be sustained indefinitely, the equilibrium production in Germany (where the capacity factor of solar PV hovers around 0.1) would be about 173 terawatt hours, or TWh.

Last year, German wind power installations amounted to about 2.3 GW, yielding an equilibrium level of about 160 TWh (assuming an average capacity factor of 0.3, which may be a tad high). Solar slumped to 3.6 GW, which yields an equilibrium of about 82 TWh. As Germany’s primary energy consumption is about 3800 TWh annually (which is used, among other things, to produce ca. 590 TWh of electricity), one may be excused for exhibiting symptoms of panic.

Furthermore, given that the current German government is bent on reducing and eliminating renewable subsidies, it seems rather unlikely that even this poster boy of the renewable revolution will see similar installation rates again any time soon – if ever. Even more distant seems the day when installation rates soar to the heights required for decarbonization: just to produce electricity sustainably from renewable sources would require an annual installation rate of, say, 10 GW/a for solar and 5 GW/a for wind. Can even these rates be achieved – and sustained? Perhaps. Perhaps not.

And if one wanted to replace fossil fuels in other uses of energy, such as transport fuels and process heat, one would need to generate anything between, say, 1000 and 3800 TWh per year. One thousand terawatt hours could be sustained by installing, for example, 20 GW of solar and 8 GW of wind capacity. Per year, every year. For 3800 TWh, the required installation rates jump to 76 GW of solar and 30.4 GW of wind.

Finally, it should be noted that the above computations give the physical maximum that can be produced. They completely ignore important difficulties, such as whether the power is produced at the time when it is demanded. If this is not the case (and it will not be, when production is dependent on the weather), options such as energy storage are required – causing energy losses and necessitating further increases in build rates. As an example, well-informed renewable boosters have proposed to alleviate the problems inherent in variable renewable sources by simply building so many generators that enough of them operate at any given moment. Commonly seen estimates for this “overbuild” range from 2 to more than 10; that is, we may need 2 to 10 times as many renewable generators as the best case above would suggest.
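Inverting the same equilibrium relation gives the build rate needed for a given production target, including an overbuild factor. In the example, 438 TWh is my back-calculated solar share of the 20 GW solar + 8 GW wind mix above (CF 0.1, 25-year lifetime assumed), not a figure from the post itself.

```python
def required_build_rate_gw(target_twh, lifetime_years, capacity_factor,
                           overbuild=1.0):
    """Annual build rate (GW/a) needed to sustain target_twh per year,
    scaled by an overbuild factor for weather-dependent sources."""
    return (target_twh * 1000.0 * overbuild
            / (lifetime_years * capacity_factor * 8760.0))

# Solar share of the 1000 TWh example, with and without 2x overbuild:
print(required_build_rate_gw(438.0, 25, 0.1))                 # ~20 GW/a
print(required_build_rate_gw(438.0, 25, 0.1, overbuild=2.0))  # ~40 GW/a
```

The overbuild factor simply multiplies the build rate, which is why even a modest 2× overbuild doubles an already daunting construction program.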

Anyone willing to bet whether we’re ever going to see build rates that can sustain even a 2x overbuild in an industrialized economy?

POSTSCRIPT: What about nuclear?

In this calculation, nuclear power has two great virtues. First, nuclear plants last at least twice as long as renewable generators – new plants are designed for 60 years, and it seems possible to extend that to 80. Second, overbuild is much less of an issue when you have a generator that typically produces power 80-90% of the time.

If the Germans were to utterly reverse their nuclear exit (fat chance, I know) and instead build approximately one new gigawatt-class reactor per year, nuclear power would eventually stabilize at about 426.3 TWh. Combined with current renewable build rates, this would result in an equilibrium level of some 666 (\,,/) TWh of carbon-free electricity per year. Not enough to decarbonize fully, but significantly better than the current trajectory.


Graphic of the Week: The hidden “fuels” of renewable energy

Image

Figure 2 from Vidal, Goffé & Arndt (2013: 896) shows the demand for some raw materials based on the WWF’s prediction of wind and solar energy production reaching 25 000 TWh by 2050. Open and closed symbols correspond to the different volumes of raw material required to construct different types of photovoltaic panels.

It is well known that there is no such thing as a free lunch. However, it is somewhat less known that there is no such thing as free energy, either.

Despite all the hoopla about new renewable energy sources being “free” and “practically unlimited” in the sense that no one owns the Sun or the wind, the fact remains that in order to harness these energies, we need an immense construction effort. This, unfortunately, is neither free nor unrestricted in the material sense. As the graph above, taken from a recent commentary by Vidal, Goffé & Arndt in Nature Geoscience (2013), shows, projected renewable energy deployments would very soon outstrip the current global production of several key materials. By the authors’ estimates, if we follow the lead of renewables-only advocates, renewable energy projects would consume the entire annual production of copper, concrete and steel by 2035 at the latest, annihilate aluminum production by around 2030, and gobble up all the glass before 2020.

Certainly, material efficiency can improve greatly, substitutes can be found, and production can be increased. Nevertheless, the scale of the challenge is nothing less than daunting: the authors also provide a handy overview of material requirements per unit of installed capacity, from which I calculated a range of figures per unit of energy produced.

If we compare renewable energies to that other low-carbon alternative, nuclear power, then per unit of energy produced, wind and solar electricity production requires:

  • 16-148 times more concrete
  • 57-661 times more steel
  • 43-819 times more aluminum
  • 16-2286 times more copper
  • 4000-73600 times more glass.

(The figures assume a lifetime of 20-30 years for renewables and 60 years for nuclear, and the following capacity factors: wind 0.3, solar PV 0.15, CSP 0.4, nuclear 0.8.)
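The conversion behind these ratios is straightforward: divide the material needed per installed MW by the lifetime energy production of that MW. A sketch of the method, using made-up placeholder tonnages for illustration rather than the actual figures from Vidal, Goffé & Arndt:

```python
def tonnes_per_twh(tonnes_per_mw, capacity_factor, lifetime_years,
                   hours_per_year=8766):
    # Lifetime energy production of one installed MW, in MWh.
    lifetime_mwh = capacity_factor * hours_per_year * lifetime_years
    # Material per unit of energy produced; 1 TWh = 1e6 MWh.
    return tonnes_per_mw / lifetime_mwh * 1e6

# Hypothetical illustration (NOT the paper's data): if a wind plant
# needed 120 t of steel per MW and a nuclear plant 60 t per MW, the
# per-energy ratio would be:
wind = tonnes_per_twh(120, capacity_factor=0.3, lifetime_years=25)
nuclear = tonnes_per_twh(60, capacity_factor=0.8, lifetime_years=60)
print(round(wind / nuclear, 1))  # -> 12.8
```

Note how the ratio is much larger than the 2:1 ratio of the raw per-MW tonnages: the shorter lifetime and lower capacity factor of the renewable plant multiply through.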

In a very real sense, these materials can be thought of as the “fuels” or “consumables” of renewables. Without doubt, many of these materials can be recycled to an extent, but the required volumes inevitably mean that any substantial increases in renewable energy generation require corresponding increases in virgin production. Furthermore, not everything can be or will be recovered, and in any case, building the infrastructure for renewable energy generation will sequester huge amounts of steel, aluminum and copper over the lifespan of the generators.

But wait! Aren’t I forgetting something, namely the fuel that nuclear fission uses, and the huge underground caverns required for the disposal of the waste? Indeed, so here’s the second graphic of the day: a rough estimate of mining requirements for various energy sources, per megawatt-hour produced.

Calculated after Vidal & Arndt (2013) and various sources for mining requirements. Uranium mining is assumed to take place at the poorest primarily uranium-producing mines (ore grade 0.1%); other materials are computed using average ore grades and average recycling levels (30% for steel, 10% for concrete, 22% for aluminum, 35% for copper). Geological repository mining requirements are estimated according to Posiva reports.

You may note that nuclear energy’s estimate – and that’s what these are, estimates – is dominated by uranium mining. I deliberately used the low-end value for uranium ore grade, and omitted both in-situ leaching and byproduct mining operations, which would decrease the mining requirement considerably. In fairness, I did the same for the other materials, although some appreciable amounts of iron and copper are recovered as byproducts. I also omitted the high-end estimate for solar PV, because that would have messed up the graphic: the total runs to a staggering 611 kg of mining operations per MWh produced.
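A back-of-the-envelope version of the uranium side of this estimate, assuming (my assumption for illustration, not necessarily the exact input behind the graphic) a commonly cited ballpark of roughly 180 tonnes of natural uranium per gigawatt-year of electricity:

```python
def uranium_ore_kg_per_mwh(nat_u_tonnes_per_gwe_year=180.0, ore_grade=0.001):
    # ~180 t of natural uranium per GWe-year is a commonly cited
    # light-water reactor ballpark (an assumption here); 0.1% is the
    # low-end ore grade used for the graphic above.
    mwh_per_gwe_year = 1000 * 8766                  # 1 GWe running for a year
    u_kg_per_mwh = nat_u_tonnes_per_gwe_year * 1000 / mwh_per_gwe_year
    return u_kg_per_mwh / ore_grade                 # kg of ore moved per MWh

print(round(uranium_ore_kg_per_mwh(), 1))  # ~20.5 kg of ore per MWh
```

Even at this deliberately pessimistic ore grade, the result stays far below the 611 kg/MWh high-end estimate for solar PV mentioned above.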

The figure is likely biased in favor of renewables, as I’ve omitted rare earths from the discussion. As shown in e.g. Öhrlund (2011), rare earths (metals used in e.g. permanent magnets and solar photovoltaic panels) may pose a bottleneck for renewable expansion. Mining these relatively rare (hence the name) elements is a messy business, which could easily add a great deal to the “materials backpack” renewable energies have to carry around. Furthermore, the figure does not account for backup power systems, grid expansion or energy storage – all of which are significant building projects that are especially important for renewable energy.

References

Vidal, O., Goffé, B., & Arndt, N. (2013). Metals for a low-carbon society. Nature Geoscience, 6(11), 894–896. doi:10.1038/ngeo1993

Vidal, O., & Arndt, N. (2013). Metals for a low-carbon society: Supplementary Information. Nature Geoscience, 6(11), 15–17. doi:10.1038/ngeo1993

Öhrlund, I. (2011). Future Metal Demand from Photovoltaic Cells and Wind Turbines – Investigating the Potential Risk of Disabling a Shift to Renewable Energy Systems. European Parliament, Science and Technology Options Assessment. Brussels.

Posted in Energy, Economy and the Environment, Infographics, Scarcities and constraints | 14 Comments

Space system “Shuttle,” part of USA’s nuclear attack arsenal?

[Image: page from a 1986 Soviet civil defense booklet depicting the “nuclear attack arsenal of the USA”]

The story of the white elephant colloquially known as the Space Shuttle is familiar to most students of the history of technology. The shuttle was originally touted as a cheap way to access space: being mostly reusable, it was supposed to do for space travel what the DC-3 did for air travel, i.e. open up space for large-scale exploration and exploitation.

Of course, we all know how that promise fared against reality. Instead of the envisioned 50 or so annual launches (which might actually have covered the program’s staggering cost), shuttles went up perhaps six times a year. There simply were not enough payloads looking for space access, and refurbishing the shuttle always took longer than early analyses had assumed. However, the shuttle had been sold to Congress on a launch schedule that even its ardent supporters believed unrealistic. Therefore, the shuttle remained on the agenda for largely political reasons, possibly because of fears that if it were cancelled, there would be nothing else to loft NASA’s astronauts into orbit. In the end, the “cheap” and reusable space access turned out to be (probably) less safe and far more expensive than using expendable, throwaway boosters would have been.

However, the Shuttle provoked interesting reactions back in the day. Since the name of the game on both sides of the Cold War was paranoia about the adversary’s intentions, every pronouncement and every program was pored over with a magnifying glass by unsmiling men in drab offices. When the U.S. announced the Space Shuttle, Soviet analysts naturally went to work. It soon became apparent to them that the launch schedule NASA had advertised – over 50 launches per year – was hopelessly optimistic. The Soviets, being no slouches in the rocketry department, could not fathom why NASA wanted to build a complex, reusable spaceplane instead of simply using tried and reliable expendable launch vehicles (Garber, 2002:16).

But there seemed to be one customer for the shuttle that would not mind the cost or the complexity. 

Eager to sell the shuttle as the only space access the United States would need, NASA had teamed up with the U.S. Air Force. The Air Force was responsible for launching all U.S. defense and intelligence satellites, and if NASA could tell Congress that the Air Force, too, would use the shuttle, then NASA had extra political leverage to extract the funds to build one. It was immaterial that the military did not really have a requirement for a shuttle: far more important, apparently, was that NASA could thereby insulate the shuttle from the political charge that it was just a step towards human exploration of Mars, or a permanent space station. Both of these were exactly what some people at NASA wanted it to be, but they also happened to be directions that President Nixon had rejected as too expensive in 1971 (Garber, 2002:9-13).

Therefore, the shuttle design requirements expanded to include this political shielding. It took the form of payload bay size (designed to accommodate the spy satellites of the time) and, more importantly, “cross-range capability.” The Air Force wanted the option of sending the shuttle into an orbit around the Earth’s poles; scientifically, this is a relatively uninteresting orbit, but for reconnaissance satellites that sweep the Earth’s surface, it’s ideal. The military also wanted the option of capturing an enemy satellite and returning after just one orbit, quickly enough to escape detection (Garber, 2002:12).

However, this requirement caused a major problem. Because the Earth rotates under the spacecraft, after one orbit the launch site would have moved approximately 1800 kilometers to the east. If the craft is to return to base after one orbit, instead of waiting in orbit until the base again rotates underneath it, it has to be able to fly this “cross-range” distance “sideways” after re-entering the atmosphere (Garber, 2002:12).
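The magnitude of the shift is easy to estimate: it is the fraction of a day one orbit takes, times the Earth’s circumference at the launch site’s latitude. A rough sketch (assuming a 90-minute orbit and Vandenberg’s roughly 34.7°N latitude; it lands in the same ballpark as the ~1800 km figure, which depends on the exact orbit and geometry):

```python
import math

def ground_shift_km(orbital_period_min=90.0, latitude_deg=34.7):
    # How far a launch site rotates eastward during one orbit: the
    # equatorial circumference, scaled by the fraction of a sidereal
    # day the orbit takes, and by cos(latitude) for the site's
    # smaller circle of rotation.
    equatorial_circumference_km = 40075.0
    sidereal_day_min = 1436.0
    shift_at_equator = equatorial_circumference_km * orbital_period_min / sidereal_day_min
    return shift_at_equator * math.cos(math.radians(latitude_deg))

print(round(ground_shift_km()))  # ~2065 km at Vandenberg's latitude
```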

In the end, NASA designed a spacecraft with the required cross-range capability. This meant large wings, which added weight and complexity, which decreased the payload, which required more powerful engines, which made everything more complicated… (In all fairness, for various good reasons, NASA might have designed a relatively similar shuttle even without the Air Force requirements. However, it seems the requirement had at least some effect on the cost and complexity of the shuttle.)

Because all this was public knowledge, the analysts in the Soviet Union rejoiced. A spacecraft that could launch from Vandenberg Air Force Base, fly a single polar orbit, and then return stealthily to its base could be nothing other than a weapon in disguise. It was immaterial that few if any analysts could figure out why such an expensive craft was being built: obviously, the capitalist aggressor must have discovered something that justified the huge expense. An analysis by Mstislav Keldysh, head of the Soviet Academy of Sciences, suggested that the Space Shuttle existed in order to lob huge, 25-megaton nuclear bombs from space directly onto Moscow and other key centers (Garber, 2002:17). The real danger was that the shuttle could do this by surprise: there would be little to no warning from early warning radars, and no defense.

To date, there is no evidence whatsoever that such a mission was even seriously considered. The Space Shuttle was built because of politics; there was no hidden agenda (at least, not the one envisioned by the Soviets). But this paranoid line of thinking did leave a legacy, or two.

One of the legacies was the Soviet “Buran” shuttle program. Apparently, Buran got built and largely resembled the U.S. shuttle simply because the Soviets could not understand why the United States was wasting so much money on the Space Shuttle; however, Buran really was a weapon, with a planned capability to drop up to 20 nuclear bombs from orbit. 

Another legacy is the image above. Taken from a 1986 Soviet civil defense booklet, it illustrates the “nuclear attack arsenal of the USA.” Prominently portrayed alongside the MX missile is the “space system ‘Shuttle.’” In other words, the Soviets were so certain that the white elephant was simply a weapon in disguise that they printed it in a recognition guide!

Many thanks to NAJ Taylor and Alex Wellerstein for bringing this to my attention, and to whoever so kindly provided the scans of this booklet in the first place.

References

http://www.buran.su/ – info about Buran’s combat role. See also Astronautix entry on Buran: http://www.astronautix.com/craft/buran.htm

Garber, S. J. (2002). Birds of a Feather? How Politics and Culture Affected the Designs of the U.S. Space Shuttle and the Soviet Buran. Master’s thesis, Virginia Tech. Retrieved from http://scholar.lib.vt.edu/theses/available/etd-01282002-104138/unrestricted/birdsfinalcomplete4.pdf

http://bunker-datacenter.com/plakat.go/ – Hi-res scan of a Soviet 1986 civil defense booklet

Posted in History of technology, Nuclear energy & weapons, SETI, Aliens & Space | 2 Comments