Space system “Shuttle,” part of USA’s nuclear attack arsenal?

[Image: a page from a 1986 Soviet civil defense booklet depicting the “nuclear attack arsenal of the USA”]

The story of the white elephant colloquially known as the Space Shuttle is familiar to most students of the history of technology. The shuttle was originally touted as a cheap way to access space: being mostly reusable, it was supposed to do for space travel what the DC-3 did for air travel, that is, open up space for large-scale exploration and exploitation.

Of course, we all know how that promise fared against reality. Instead of the envisioned 50 or so annual launches (which might actually have covered the program’s staggering cost), shuttles went up perhaps six times a year. There simply were not enough payloads looking for space access, and refurbishing the shuttle always took longer than early analyses had assumed. The shuttle had, after all, been sold to Congress on a launch schedule that even its ardent supporters believed unrealistic. It nevertheless remained on the agenda, largely for political reasons – possibly because of fears that if it were cancelled, there would be nothing else to loft NASA’s astronauts into orbit. In the end, the “cheap” and reusable space access turned out to be (probably) less safe and far more expensive than using expendable, throwaway boosters would have been.

However, the Shuttle provoked interesting reactions back in the day. Since the name of the game on both sides of the Cold War was paranoia about the adversary’s intentions, every pronouncement and every program was pored over with a magnifying glass by unsmiling men in drab offices. When the U.S. announced the Space Shuttle, Soviet analysts naturally went to work. It soon became apparent to them that the launch schedule NASA had advertised – over 50 launches per year – was hopelessly optimistic. The Soviets, being no slouches in the rocketry department, could not fathom why NASA wanted to build a complex, reusable spaceplane instead of simply sticking with tried and reliable expendable launch vehicles (Garber, 2002:16).

But there seemed to be one customer for the shuttle that would not mind the cost or the complexity.

Eager to sell the shuttle as the only space access the United States would need, NASA had teamed up with the U.S. Air Force. The Air Force was responsible for launching all U.S. defense and intelligence satellites, and if NASA could tell Congress that the Air Force, too, would use the shuttle, NASA had extra political leverage to extract the funds to build one. It was immaterial that the military did not really have a requirement for a shuttle: what was apparently far more important was that NASA could thereby insulate the shuttle from the political charge that it was just a step towards human exploration of Mars or a permanent space station. Both of these were exactly what some people at NASA wanted it to be, but they also happened to be directions that President Nixon had rejected as too expensive in 1971 (Garber, 2002:9-13).

Therefore, the shuttle’s design requirements expanded to include political shielding. This took the form of payload bay size (designed to accommodate the spy satellites of the time) and, more importantly, “cross-range capability.” The Air Force wanted the option of sending the shuttle into an orbit over the Earth’s poles; scientifically, this is a relatively uninteresting orbit, but for reconnaissance satellites that sweep the Earth’s surface it is ideal. The military also wanted the option of capturing an enemy satellite and returning after just one orbit, quickly enough to escape detection (Garber, 2002:12).

However, this requirement caused a major problem. Because the Earth rotates under the spacecraft, after one orbit the launch site would have moved approximately 1,800 kilometers to the east. If the craft were to return to base after a single orbit, instead of waiting in orbit until the base rotated underneath it again, it would have to be able to fly this “cross-range” distance “sideways” after re-entering the atmosphere (Garber, 2002:12).
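
As a rough back-of-the-envelope check, one can estimate how far the launch site drifts eastward during a single orbit. The sketch below uses round numbers of my own choosing – a 90-minute orbit and Vandenberg’s latitude of roughly 35°N – and lands at about 2,000 kilometers, the same order of magnitude as the figure cited above; the exact cross-range requirement also depends on the orbit and the re-entry flight path.

```python
import math

# Rough, assumed numbers only -- this shows the order of magnitude, nothing more.
earth_circumference_km = 40075      # circumference at the equator
sidereal_day_min = 1436             # time for one full rotation of the Earth
orbital_period_min = 90             # typical low-Earth-orbit period (assumed)
launch_site_latitude_deg = 34.7     # roughly Vandenberg AFB (assumed)

# Eastward shift of the launch site during one orbit, scaled down by latitude.
shift_at_equator_km = earth_circumference_km * orbital_period_min / sidereal_day_min
shift_at_site_km = shift_at_equator_km * math.cos(math.radians(launch_site_latitude_deg))

print(f"The launch site moves roughly {shift_at_site_km:.0f} km east per orbit")
```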


In the end, NASA designed a spacecraft with the required cross-range capability. This meant large wings, which added weight and complexity, which in turn decreased the payload, which in turn required more powerful engines, which in turn made everything more complicated… (In all fairness, for various good reasons, NASA might have designed a relatively similar shuttle even without the Air Force requirements. However, it seems that the requirement had at least some effect on the cost and complexity of the shuttle.)


Because all this was public knowledge, the analysts in the Soviet Union rejoiced. A spacecraft that could launch from Vandenberg Air Force Base, fly a single polar orbit, and then return stealthily to its base could be nothing else than a weapon in disguise. It was immaterial that few if any analysts could figure out why such an expensive craft was being built: obviously, the capitalist aggressor must have discovered something that justified the huge expense. An analysis by Mstislav Keldysh, head of the Soviet Academy of Sciences, suggested that the Space Shuttle existed in order to lob huge, 25-megaton nuclear bombs from space directly onto Moscow and other key centers (Garber, 2002:17). The real danger was that the shuttle could do this by surprise: there would be little to no warning from early warning radars, and no defense.

To date, there is no evidence whatsoever that such a mission was even seriously considered. The Space Shuttle was built because of politics; there was no hidden agenda (at least, not the one the Soviets envisioned). But this paranoid line of thinking did leave a legacy, or two.

One of those legacies was the Soviet “Buran” shuttle program. Apparently, Buran was built, and largely resembled the U.S. shuttle, simply because the Soviets could not understand why the United States was wasting so much money on the Space Shuttle. Buran, however, really was a weapon, with a planned capability to drop up to 20 nuclear bombs from orbit.

Another legacy is the image above. Taken from a 1986 Soviet civil defense booklet, it illustrates the “nuclear attack arsenal of the USA.” Prominently portrayed alongside the MX missile is the “space system ‘Shuttle.’” In other words, the Soviets were so certain that the white elephant was simply a weapon in disguise that they printed it in a recognition guide!

Many thanks to NAJ Taylor and Alex Wellerstein for bringing this to my attention, and to whoever so kindly provided the scans of this booklet in the first place.


References

http://www.buran.su/ – info about Buran’s combat role. See also Astronautix entry on Buran: http://www.astronautix.com/craft/buran.htm

Garber, S. J. (2002). Birds of a Feather? How Politics and Culture Affected the Designs of the U.S. Space Shuttle and the Soviet Buran. Master’s thesis, Virginia Tech. Retrieved from http://scholar.lib.vt.edu/theses/available/etd-01282002-104138/unrestricted/birdsfinalcomplete4.pdf

http://bunker-datacenter.com/plakat.go/ – Hi-res scan of a Soviet 1986 civil defense booklet


Graphic of the Week: How to reduce emissions fast enough?

Three countries and one energy policy that have achieved the required CO2 emission reduction rates – and one that hasn’t

According to most estimates, we really are running out of time for the required CO2 emission reductions. Even if we were to achieve peak emissions by 2016, we’d still need global emission reduction rates of around 3% per year – all the way to 2050.
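
To see what such a rate means in practice, here is a minimal sketch of how a sustained 3% annual cut compounds over time, assuming (as above) that global emissions peak in 2016:

```python
# A minimal sketch: what a sustained 3% annual emission cut adds up to by 2050,
# assuming emissions peak in 2016 and then fall by 3% every single year.
peak_year, end_year, annual_cut = 2016, 2050, 0.03

remaining_share = (1 - annual_cut) ** (end_year - peak_year)
print(f"Emissions in 2050: {remaining_share:.0%} of the 2016 peak "
      f"(a total reduction of about {1 - remaining_share:.0%})")
# -> roughly 35% of the peak remains, i.e. a cut of nearly two thirds in 34 years
```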

Fortunately, such rates may be achievable. In fact, three countries have been close – by accident.

The sad state of current climate policy is nowhere as evident as in the fact that the fastest emission reductions have been achieved without any climate policy at all. Sweden, Belgium and France all achieved extremely rapid (that is, compared to anything else) rates of decarbonization as a result of their other energy policies. Compared to the generally acknowledged leader of climate policy, Germany, their success is remarkable – or, if you like, Germany’s lack of success is highly disturbing.

Anti-nuclear advocates counter with the claim that Germany has been exemplary in reducing its emissions from 1990 levels. This may be so, but such a claim ignores two very salient facts. The first is that these other nations achieved their substantial reductions before 1990; comparing their post-1990 emission reductions to Germany’s is therefore fundamentally unfair.

The second and even more salient point is that much of Germany’s vaunted achievement is due to the so-called “wallfall” effect. In 1990, Germany had just been reunified, and the former East Germany still operated a number of awesomely inefficient and polluting coal plants and factories whose emissions counted towards the German 1990 totals. These were for the most part closed or extensively modernized in the five years following reunification. A study by the respected Fraunhofer Institute in Germany put the impact of these wallfall reductions at around 50% of all emission reductions achieved between 1990 and 2000, and at 60% of the reductions in energy-related emissions.

While I welcome any emission reductions irrespective of how they’re achieved (well, almost), there is no valid reason to ascribe wallfall reductions to climate policy – and therefore no valid reason to use Germany’s performance as proof of its climate policy without first removing the substantial wallfall effect. And when these one-off windfalls are removed from the equation, the performance of Germany’s policies looks, quite frankly, rather dismal.

Here’s a hint to politicians: if your policy is repeatedly outperformed by the lack of any policy, it might be time to consider alternatives.

PS. These figures were inspired by Global Carbon Project research (PDF link). The slight differences in reduction percentages (e.g. Sweden’s 4.5% in GCP) are due to the fact that I used per capita emissions, while the Global Carbon Project used total emissions. Emission data are from CDIAC; figures for Germany before 1990 are combined East and West German totals.
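
For readers wondering how such reduction rates can be computed in the first place, below is a sketch of one common approach: the compound average annual change in per-capita emissions between two years. The numbers are made-up placeholders rather than actual CDIAC data, and I make no claim that this is exactly the method the Global Carbon Project used.

```python
# Compound average annual change in per-capita emissions between two years.
# The example values are placeholders for illustration, not real CDIAC data.
def average_annual_change(start_value, end_value, years):
    """Return the compound average annual rate of change between two values."""
    return (end_value / start_value) ** (1.0 / years) - 1

# e.g. per-capita emissions falling from 10 to 6 tonnes of CO2 over 15 years
rate = average_annual_change(10.0, 6.0, 15)
print(f"{rate:.1%} per year")   # about -3.3% per year
```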


Graphic of the week: The great “80% of world’s energy could be generated from renewables” fallacy

Is a future without fossil fuels and without nuclear truly feasible?

In 2011, the Intergovernmental Panel on Climate Change (IPCC) released its Special Report on Renewable Energy Sources and Climate Change Mitigation, or SRREN. The report sought to determine the potential contribution of renewable energy sources to the mitigation of climate change. It reviewed the results from 164 individual scenarios derived from 16 different models, all developed since 2006.

For some, the report’s conclusions were sobering: nearly half of the scenarios suggested that renewable energy sources might contribute no more than 27% of the world’s energy supply by 2050 (see Chapter 10, p. 794). Even when counting only the most aggressive scenarios – the ones where atmospheric CO2 concentrations are stabilized below 400 parts per million – the median estimate of the world’s renewable energy supply in 2050 was somewhat less than 250 exajoules, or about half of global primary energy consumption today.

The report is by no means overly critical of renewables. For example, its rather loose definition of renewable energy includes “traditional” biomass, the main cause of deforestation, and the feasibility of the various scenarios is never assessed: no evaluation whatsoever is conducted to determine which of the 164 scenarios might be realistic and which might not be. Nevertheless, the results show clearly that renewable energy sources are highly unlikely to be enough for meaningful climate change mitigation, even if energy efficiency improves by leaps and bounds.

It is therefore very instructive to note how various anti-nuclear groups have chosen to portray the results. Without exception – and perhaps misled by SRREN’s extremely skewed press release – every anti-nuclear organization I have so far researched tells you that the IPCC SRREN “proves” or “shows” that a future powered completely by renewables is perfectly possible.

For just one example, take this statement from Greenpeace:

“According to the IPCC’s ‘Special Report on Renewable Energy Sources and Climate Change Mitigation’ (SRREN), by harnessing just 2.5% of viable renewable energy sources, with currently available technologies, its possible to provide up to 80% of the world’s energy demand by 2050.” (Source)

This, my friends, is cherry-picking pure and simple. What anti-nuclear activists don’t tell you is that only two of the 164 scenarios could in any way be construed to suggest anything like that. What’s more, the 80% claim comes from Greenpeace’s own Energy Revolution scenario. One could take issue with the fact that it was largely prepared with data and “assistance” from the renewable industry lobby group EREC, but industry biases and other oddities aside, this scenario assumes that world energy use actually decreases from current levels – even as the world’s population grows to 9 or 10 billion and there are no signs whatsoever that developing nations are willing to forgo the fruits of a high-energy lifestyle.

This is so important that it bears repeating: at most 1.2% of the scenarios – two out of 164 – find that renewable sources alone might deliver even somewhat less than 80% of the energy the world uses today. The remaining 98.8% aren’t so sanguine.

And, again, that’s at current consumption levels, when the broad consensus is that the world’s primary energy demand will increase greatly over the next few decades.

Sure, technically speaking the anti-nuclear organizations are correct: it might be possible to derive 80% of the world’s energy from purely renewable sources, in the “best case” scenario. But omitting the crucial details and the gist of the report is a wanton act of cherry-picking and demagoguery; it is the exact equivalent of claiming that CO2 emissions aren’t harmful – provided that all the known and suspected feedback mechanisms turn out to cancel the effects of increased emissions.

What the SRREN report actually shows is that in practice, renewables alone are highly unlikely to be enough, fast enough, to avert the coming climate catastrophe. Similar results abound; to pick just one example, consider the recent modeling work undertaken for a book by Mark Lynas. It shows rather clearly that even the aforementioned Greenpeace/EREC Energy Revolution scenario would be highly likely to fail to keep global warming below 2°C – even if it were executed in full, without fail, starting today.

So, here’s the question: what should we call those who cherry-pick the climate science to suit their political agenda?


Graphic of the week: Comparing land use of wind and nuclear energy

Land use footprint of wind and nuclear power generation

Nuclear energy is often claimed to be an environmentally harmful technology, especially when contrasted with renewables such as wind power.

However, these claims are rarely accompanied by proper sources. This may be because comparisons based on actual science do not really support such blanket statements. To take just a few examples, a range of studies, including the IPCC’s assessments, have consistently found nuclear energy to be among the least carbon-intensive methods of energy generation, surpassing even solar photovoltaics. Similarly, most life cycle assessments have found that nuclear energy uses far less material – such as steel and concrete – per unit of energy produced than even renewables do (see e.g. Weißbach et al. 2013).

This graphic compares another component of ecosystem damage potential: the land use footprint. It is well known that ecosystem degradation and destruction due to increased land use is, alongside climate change, one of the greatest threats to the Earth’s environmental well-being. Therefore, solutions that reduce our environmental footprint are desirable.

The graphic most likely underestimates the footprint of wind power while overestimating nuclear energy’s footprint. This is because I deliberately ignored material requirements (except uranium), used the most environmentally destructive uranium mining method (open-cast mining), overestimated uranium requirements by a factor of at least four, used the most optimistic assumptions regarding wind energy production, and ignored the effects of variability. The latter would, after a certain level of wind energy penetration is reached (with current technology, perhaps 20-30% of the electricity grid), require perhaps two or three times the number of plants presented here to produce the same level of service, or the building of significant backup plants and/or energy storage facilities. If material requirements are accounted for, wind power has a 3-10 times larger materials and mining footprint than nuclear (see e.g. the supplementary material for the aforementioned Weißbach et al. 2013).
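
To give a sense of the kind of back-of-the-envelope arithmetic that underlies a comparison like this, here is a minimal sketch. Every figure in it – turbine size, capacity factors, land area per turbine, reactor site and mining areas – is an illustrative assumption of my own, not a number taken from the graphic itself.

```python
# Back-of-the-envelope land areas needed to produce the same annual electricity
# from wind and from nuclear. All numbers are illustrative assumptions.
HOURS_PER_YEAR = 8760
target_twh = 8.0                   # annual electricity output to match, in TWh

# Wind farm: 3 MW turbines, 30% capacity factor, roughly 0.3 km2 of farm area
# per turbine (spacing, pads, access roads, interconnector).
turbine_mw, wind_cf, km2_per_turbine = 3.0, 0.30, 0.3
n_turbines = target_twh * 1e6 / (turbine_mw * wind_cf * HOURS_PER_YEAR)
wind_area_km2 = n_turbines * km2_per_turbine

# Nuclear: a 1000 MW reactor at 90% capacity factor, ~5 km2 for the plant site
# plus a deliberately generous ~10 km2 of open-cast uranium mining for its fuel.
reactor_mw, nuclear_cf, site_km2, mining_km2 = 1000.0, 0.90, 5.0, 10.0
n_reactors = target_twh * 1e6 / (reactor_mw * nuclear_cf * HOURS_PER_YEAR)
nuclear_area_km2 = n_reactors * (site_km2 + mining_km2)

print(f"Wind:    {n_turbines:.0f} turbines, roughly {wind_area_km2:.0f} km2")
print(f"Nuclear: {n_reactors:.1f} reactor(s), roughly {nuclear_area_km2:.0f} km2")
```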

I also selected a relatively dense wind farm with a short electricity interconnector (the thickest line connecting the three wind farms in each segment of the graphic). In addition, I did not draw those parts of the access roads that were evidently used for other purposes as well, e.g. public highways.

As wind power generation increases and the locations close to existing power lines and already disturbed by human presence are used up, developers must turn their attention to ever more remote sites. These entail longer connectors and more access roads, sometimes encroaching on existing wildlife sanctuaries. Connectors and roads also fragment biomes, and may therefore contribute more to ecosystem damage than one might assume from simply counting the area they occupy.

The alternative, offshore wind, does not need access roads, but it will still disturb marine ecosystems if not sited properly.

However, please note that none of the above should be construed as an argument against wind farms or renewable energy in general. Compared to fossil fuels, they are still far less destructive to health and the environment, and proper siting can alleviate many of the potential hazards. My only aim is to show that the reality behind the claim “nuclear harmful – renewables benign” is far more complicated than it appears.

As always, you are free to spread this graphic as you see fit.


On the relationship between regulation, technological change and competitiveness

A translation of my presentation at the 38th Ilmansuojelupäivät in Lappeenranta, Finland, 20 August 2013

If we deconstruct the topic of the panel – “is environmental protection a threat or an opportunity for a country’s competitiveness?” – sooner or later we end up considering whether tighter environmental regulation would help, or even force, firms to develop novel ways of working – novel technologies, in the broad sense of the word – that others would be willing to pay for.

Of course, this is far from the only way in which environmental protection could in principle affect a country’s competitiveness; to mention just a single example, a case could be made that tight regulations preserve the quality of the environment and therefore make a country an attractive location for skilled professionals. However, because I have neither the space, time nor competence to deal with these kinds of questions, I shall concentrate on the effects that regulatory constraints may have on technology – and on the effects technology has on regulations.

In theory, environmental protection and economic activity can be combined precisely through “incentivizing” regulation that forces beneficial changes in techniques and operating procedures. For two decades now, the favorite theory trotted out in support of this thesis has been the so-called “Porter hypothesis,” which maintains that tight regulation pays for itself not just through enhanced environmental protection, but also through improved goods and services. The logic behind this hypothesis, developed by Michael Porter and others in the early 1990s (e.g. Porter and van der Linde, 1995), is that necessity is a good motivator that makes firms develop technologies they otherwise wouldn’t bother developing. If the regulation is incentivizing in just the right way, innovations will flourish and competitiveness will soar, more than offsetting the costs of regulation.

As is the case with so many other beautiful theories, the Porter hypothesis is seductive, simple, and very likely not correct. Despite over two decades of research, empirical evidence for net positive effects of tightening environmental regulation on the economy as a whole, or even on the success of specific sectors, remains slim, and what little evidence there is tends to be problematic, to say the least. In general, a noteworthy tightening of regulatory constraints, compared to e.g. regulation elsewhere, can promote individual firms and, in some cases, specific sectors, but there is little evidence of broader effects. From the economy’s viewpoint, the evidence is therefore both positive and negative: on the one hand, regulation has only a small positive effect, but on the other, the negative effects of regulation on e.g. industry profitability also tend to be small. Unfortunately, the same applies to reductions of environmental pollutants.

Before taking a stab at explaining why this happens, and what we might do about it, I’m going to be explicit about what I’m not claiming. It is clear that tightening regulations and political pressure create additional incentives and pressures for firms to develop their technologies. It is also clear that, on occasion, these pressures and incentives can help the development of successful innovations and even novel industries. Furthermore, regulatory constraints are not the only way regulators can influence technological change; regulators can also increase incentives by e.g. subsidizing novel innovations. Finally, there is no doubt that changes in the operating environment are bound to benefit some firms and penalize others. Anecdotal evidence is abundant, and our all-too-human tendencies to seek simple reasons for complicated developments, to remember successes, and to brush failures under the carpet easily create the impression that the plural of “anecdote” is “data.”

Nevertheless, it is at the very least uncertain whether innovations developed as a response to regulatory constraints are, on average, net positive developments. Economic theory suggests that this is possible, given very specific conditions (Mohr, 2002). Unfortunately, research has failed to find much evidence of these conditions being anything but rare occurrences. Because the Porter hypothesis requires that firms systematically leave profitable improvement opportunities unexplored – not just that this may happen – the lack of empirical evidence is not surprising.

The problem with combining economic activity and growth with environmental protection through technological development is that sticks and carrots are not the only, or necessarily even very powerful, forces affecting technological change. It is only a slight exaggeration to say that the discourse is often dominated by a conviction that technological solutions to environmental problems require only the proper incentives: if the incentives are in place, any problem will be solved. In a certain sense this is true, as long as the definition of a “solution” is kept broad enough to fit ocean liners through, and no time limit for the solution is specified. Outside semantic hair-splitting contexts, it is clear that certain problems nevertheless remain unsolved, even though solutions would clearly have extremely significant practical and economic value. A trivial example is a way to locally alter gravity; less trivial but no less valuable examples include a cheap and scalable method for storing electricity, a cheap and clean source of energy, and a fast method for convincing the majority of the world’s population to make great economic sacrifices in the name of environmental protection.

The aforementioned problems remain unsolved, and there are no guarantees that solutions even exist, even though sticks and carrots are decidedly plentiful. A moment’s reflection reveals why this is so: even if these solutions were technologically possible, we lack the “building blocks” required to construct them. All of our tools and techniques are built upon our existing toolkit and knowledge, much like the building blocks of a pyramid. In practice, we cannot realize any solution if we lack the blocks we need to build it. On the other hand, the history of technology from stone axes to Facebook tells us that once the pieces are in place, a technology will be developed very rapidly, almost always by multiple independent inventors. Hardly any single causal factor has much effect on the speed of technological change; to pick an extreme example from the realm of pure ideas, one sympathetic biographer was forced to conclude that Einstein’s contribution was to advance physics by ten years, at most.

In short, what is invented is only rarely affected by constraints. In my own PhD research, I have studied one famous Finnish invention, the flash smelting of copper. Developed as a response to a serious electricity shortage after the Second World War, this technology had a great effect on metallurgy once it was broadly adopted, starting in the 1960s. The prior literature has repeatedly claimed that the key causal factor behind this invention was the electricity shortage, which forced Outokumpu to develop a completely novel method for copper smelting; far less ink is spilled over the fact that the technology in question had first been described 80 years earlier and even patented a full half-century before Outokumpu’s experiments, and that the body of existing patents and research literature actually caused great problems for Outokumpu when it tried to patent the invention in the United States. Even less discussed today is the fact that another company, Inco of Canada, operating practically without constraints of any kind, developed a notably better technology a full month before Outokumpu. The inevitable conclusion is that the undoubted later success of Outokumpu’s invention has much more to do with factors other than the constraint, and that the invention would have been made – in a better form, although perhaps not by Outokumpu – even without the constraint. Based on this and other cases, I therefore side with the mathematician Alfred North Whitehead in holding that rather than necessity being the mother of invention, “necessity is the mother of futile dodges” is closer to the truth.

If the constraint caused by extreme post-war scarcity did not, in this and the other cases I’ve so far researched, have a noticeable effect on the content of technological change, what hope remains for the constraints we can effect through regulation? The answer, in my opinion, is “very little.” Recent research has strengthened the view that regulatory constraints, at least in democratic societies, are born out of a sort of negotiation process. The desire for brevity forces me to cut various corners, but the gist of the matter is that because the material standard of living, and improvements thereof, remains extremely important to many voters – all the more so during times of economic difficulty – regulators must of necessity consider the impact regulatory decisions may have on the economy. If a decision would have a significant negative impact on the economy, the decision will be altered. This dynamic, dubbed the “iron law” of climate policy by Roger Pielke Jr., is visible in nearly every regulatory process that aims to ease environmental problems; if it is not visible, there is reason to believe that the problem is not overly large or difficult.

What kinds of regulation, then, would have a significant negative impact? In principle, every regulatory constraint that cannot be met cheaply and relatively speedily with technologies that are available “off the shelf.” Reading proposed and enacted environmental regulations, one will sooner or later run into concepts such as “Best Available Technology” (BAT). The performance of these technologies largely defines the constraints – e.g. emission limits, energy consumption, and so on – set by the regulation. I’d go so far as to wager that if one attempts to explain the dismayingly slow progress of environmental protection, a key culprit would be the lack of sufficiently cheap technology for satisfying tighter constraints. It is, in fact, usually more accurate to say that technology changes regulations than to say that regulations change technology.

When technology exists and is sufficiently cheap, environmental regulation can proceed rapidly. In these cases, regulatory changes promote the spread of existing (if perhaps so far poorly commercialized) technology. My personal hunch is that the majority of the research literature finding positive relationships between the enactment of tight environmental regulation and technological change has, by accident or by design, studied these kinds of cases. It does not necessarily follow, however, that regulation promotes the development of novel technologies as such; a more accurate description of the relationship might be that regulation may promote the spread of existing technologies. The difference may seem small to someone who does not spend hours studying technological change, but it is significant: insofar as solving environmental problems requires us to develop completely novel technologies, it is not clear that regulations are the best vehicle for promoting that development.

What conclusions should one draw from the above? First, all hope is not lost, not even for the purpose of improving competitiveness. In principle, it is possible that a country has a disproportionate share of firms and industries whose technologies would benefit from tighter regulation. Tightening regulation undoubtedly improves the relative competitive position of these industries, and even if the development of novelty were not affected, the adoption of existing technology may be sped up significantly.

Secondly, even though individual regulations may have little to no effect, in the long run the trend is hopefully different. By enacting constraining regulation, the polity is sending a message saying that certain activities are considered inappropriate; over time, increasing the domain of inappropriateness with small steps such as these may lead to changes that would be inconceivable if they had to be taken at once.

Third, even if necessity may not be the mother of invention, regulatory constraints can provide the final push for technologies and firms that are not quite ready yet. As an example, even though the invention of flash smelting would have happened without Outokumpu, the constraint very probably caused precisely Outokumpu to be among the inventors — and in a position to profit from the invention later. However, profiting from such inventions is a different matter, and no regulation can force firms to do so.

References

Arthur, W. B. (2009). The Nature of Technology: What It Is and How It Evolves. New York: Free Press.

Korhonen, J. M., & Välikangas, L. (Forthcoming). Constraints and Ingenuity: The Case of Outokumpu and the Development of Flash Smelting in the Copper Industry. In B. Honig, J. Lampel, & I. Drori (Eds.), Handbook of Research on Organizational Ingenuity and Creative Institutional Entrepreneurship. Cheltenham: Edward Elgar.

Mohr, R. D. (2002). Technical Change, External Economies, and the Porter Hypothesis. Journal of Environmental Economics and Management, 43(1), 158–168.

Porter, M. E., & van der Linde, C. (1995). Toward a new conception of the environment-competitiveness relationship. Journal of Economic Perspectives, 9(4), 97–118.

Roediger-Schluga, T. (2004). The Porter Hypothesis And The Economic Consequences Of Environmental Regulation: A Neo-Schumpeterian Approach. Cheltenham: Edward Elgar.


“Graph” of the Week: What happens if nuclear waste repository leaks?

Lately, I’ve been spending some time reading through reports on nuclear waste management. What is striking is how conservative the calculations seem to be; for example, the report by Posiva (the Finnish company responsible for the world’s first civilian nuclear waste repository, Onkalo) goes to almost absurd lengths when calculating the highest possible dose for the single worst-affected individual ten thousand years from now.

For example, the calculations assume that this person in effect spends all of his or her days – from birth to death – on the single most contaminated one-square-meter plot around the repository; eats nothing but the most contaminated food available, with a diet that maximizes radionuclide intake; and drinks only the most contaminated water, and nothing else. The resulting figure – 0.00018 millisieverts per year – also assumes that the copper canisters in which the spent fuel is housed begin to leak after a mere 1,000 years.

And still, this worst-case figure amounts to a dose comparable to what one would get from eating about two bananas.
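
The banana comparison is easy to check, if one assumes the commonly quoted “banana equivalent dose” of roughly 0.1 microsieverts per banana:

```python
# Worst-case annual dose from the Posiva report versus an assumed banana
# equivalent dose of ~0.1 microsieverts (0.0001 mSv) per banana.
worst_case_msv_per_year = 0.00018
banana_dose_msv = 0.0001

print(worst_case_msv_per_year / banana_dose_msv)   # ~1.8, i.e. about two bananas
```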

I’m fully prepared to accept that many surprising things could happen, and that we cannot be certain of what happens 10,000 years from now. But given the figures here, and the way they are arrived at, I have some confidence that the likelihood of people receiving doses that pose some real danger even in the long term (say, more than 10 mSv per year – which might just produce enough cancer cases to be visible in a statistical sense) is remote indeed.

You can read the Posiva Biosphere Assessment report (be warned, it’s 192 pages of rather technical text) in English here:

http://www.posiva.fi/files/1230/POSIVA_2010-03web.pdf

As always, corrections and comments are highly appreciated. You’re also free to distribute the image as you see fit – just provide a link to this page.


“Graph” of the Week: Fukushima tritium leak in context

The “massive” tritium leak to the sea from Fukushima since 2011 equals the tritium content of about 22 to 44 self-luminescent EXIT signs.

More info about exit signs here: http://www.dep.state.pa.us/brp/radiation_control_division/tritium.htm
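
For those who want to check the arithmetic, here is a rough sketch. Both inputs are assumptions on my part: self-luminescent exit signs typically contain up to about 25 curies of tritium each, and published estimates of the tritium leaked to the sea between 2011 and 2013 have been on the order of 20–40 terabecquerels.

```python
# Rough sanity check: leaked tritium expressed as a number of EXIT signs.
# Assumptions: up to ~25 curies of tritium per sign, 20-40 TBq leaked in total.
CURIE_TO_TBQ = 0.037
tritium_per_sign_tbq = 25 * CURIE_TO_TBQ        # ~0.93 TBq per sign

for leak_tbq in (20, 40):
    print(f"{leak_tbq} TBq is roughly {leak_tbq / tritium_per_sign_tbq:.0f} exit signs")
# -> about 22 and 43 signs, consistent with the 22-44 range above
```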
