Previously, I made the case that space aliens, or ETIs (short for ExtraTerrestrial Intelligences), would probably not exterminate humanity out of fear of future competition from us, because a) interstellar warfare is an uncertain business at best, and b) if we postulate a civilization so advanced that it could destroy us with impunity, it would not seem to benefit from doing so.
Now the argument has been sharpened considerably, thanks to a publishing process which culminated in my article being accepted to Acta Astronautica (Korhonen 2013). I encourage you to get your hands on the paper, either directly from Acta or, if you’re so inclined, via the manuscript version at the arXiv repository.
For those with shorter attention spans who don’t wish to read the whole paper (I can’t imagine why you wouldn’t, but I’m an accommodating person), here’s the gist of the argument. I make six assumptions about the nature of interstellar warfare, namely,
- All civilizations have a concept of risks and benefits, i.e. they do not simply act randomly.
- A civilization that does not need to fear retaliation has little need to destroy other civilizations. We humans could destroy all the anthills we find without having much to fear, but are we doing so just because they may be our competitors one day?
- There are practical limits to technological development. At some point, civilizations cannot greatly reduce their vulnerability to attack through technological improvements; even if our laughably primitive weapons won’t get through, some weapons will.
- No defense can be guaranteed to be 100% successful 100% of the time. See above.
- No attack can be guaranteed to be 100% successful. Even the most sophisticated attack will have a non-zero probability of failing to achieve its complete objectives. It also seems that the marginal cost incurred by the attacker increases rapidly as it tries to increase the destructiveness of the attack: sure, killing off 50% of humans may be trivial, but killing all humans may not be!
- Verification of peaceful intentions over interstellar distances may be difficult. Regardless of assurances, there is little that can be done – short of a physical visit – to verify the truth of any statement a civilization’s representatives may make.
According to these assumptions, the only surefire way of preventing another civilization from attacking your civilization, ever, seems to be the complete destruction of the said civilization, if not the said species. Unfortunately (from the attacker’s viewpoint, that is), creating absolutely effective first strike weapons and gathering timely intelligence required for their use will always be massively more complicated than creating effective deterrent weapons. After all, the deterrent needs only to threaten unacceptable damage to the attacker; if the marginal cost of the attack rises along with the desired amount of damage, an attack causing severe damage may be orders of magnitude simpler and cheaper than an attack that eliminates the species.
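To make the rising-marginal-cost intuition concrete, here is a purely illustrative toy model. It is not from the paper: the functional form `cost(f) = c0 * f / (1 - f)` is my own assumption, chosen only because it diverges as the desired destruction fraction approaches totality.

```python
# Toy model (illustrative assumption, not from the paper): the relative cost
# of an attack that destroys a fraction f of the target civilization,
# diverging as f -> 1 (complete extermination becomes arbitrarily expensive).
def attack_cost(fraction, c0=1.0):
    """Relative cost of destroying the given fraction of a target."""
    return c0 * fraction / (1.0 - fraction)

for f in (0.5, 0.9, 0.99, 0.999):
    print(f"destroy {f:.1%}: relative cost {attack_cost(f):.0f}")
```

Under this (assumed) cost curve, each additional “nine” of destruction multiplies the cost roughly tenfold: severe damage is cheap, extermination is not, which is exactly why a deterrent is so much easier to build than a disarming first strike.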
But what is an unacceptable level of damage? This is the tricky part, and I acknowledge I don’t have perfect answers. No one does. However, our history strongly suggests that as civilizations become more technologically advanced, their tolerance for death and destruction diminishes greatly. This trend has continued for centuries, and there seem to be no good reasons to believe it would be reversed as technology advances. Several reputable studies – some written by former high-level nuclear weapons specialists and military officers – make a compelling case that today, deterrence is achieved by being able to destroy about ten random cities. Even when rounding up the numbers, this suggests that the expected value of a credible deterrent is – very rough estimate here – on the order of 0.1 × total loss. Even smaller numbers have been put forward, and it is difficult to believe that human decision-makers would willingly lose even a single city for any conceivable political gain.
Now, then, how do we achieve this level of deterrent? Consider that any interstellar attack is likely to cause severe damage to the target civilization. As an example, a rather primitive 1970s reference design for an interstellar probe – the Daedalus – could, with just a programming change, deliver some 146 gigatons of kinetic energy to any largish target (= planets, moonbases, battle stations the size of a small moon) within 25 lightyears. When one compares this to the approximate explosive energy of the entire world’s nuclear arsenals (ca. 6.5 gigatons), one may conclude that a hit from even a single interstellar probe would be Very Bad News Indeed.
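The 146-gigaton figure is easy to sanity-check with a back-of-the-envelope calculation. The impacting mass used below is an assumption on my part – roughly 940 tonnes, on the order of the Daedalus second-stage structure plus its payload; the exact figure depends on the design variant:

```python
# Back-of-the-envelope check of the ~146 Gt kinetic-impact figure.
C = 2.998e8              # speed of light, m/s
GT_TNT = 4.184e18        # joules per gigaton of TNT equivalent

mass_kg = 940e3          # ASSUMED impacting mass (~second stage + payload)
v = 0.12 * C             # Daedalus cruise velocity, 0.12 c

kinetic_energy = 0.5 * mass_kg * v**2   # classical KE; relativistic correction
gigatons = kinetic_energy / GT_TNT      # at 0.12 c is under one percent
arsenal_ratio = gigatons / 6.5          # vs. ~6.5 Gt of world nuclear arsenals

print(f"~{gigatons:.0f} Gt TNT, about {arsenal_ratio:.0f}x the world's nuclear arsenals")
```

With those assumptions the impact energy works out to roughly 145 gigatons – consistent with the figure quoted above, and over twenty times the world’s combined nuclear arsenals.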
Even an advanced civilization might have problems defending against such an attack. The sluggish Daedalus, coming in at a measly 12% of the speed of light (0.12 c), would be theoretically detectable with Hubble Space Telescope-level sensors at about 10 to 20 Astronomical Units (AU) – that is, if the sensor happens to be pointing in the right direction, and if the lack of apparent motion – which is what most detection systems rely on – doesn’t throw it off. Detection at such ranges would give only some 12 to 24 hours of warning before impact. Is this long enough for an interception? And consider that considerably faster, smaller (more difficult to detect) probes may be possible.
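The warning time follows directly from the detection range and the closing velocity quoted above:

```python
# Warning time for a 0.12 c impactor detected at 10-20 AU.
AU = 1.496e11            # metres per astronomical unit
C = 2.998e8              # speed of light, m/s
v = 0.12 * C             # closing velocity of the probe

for distance_au in (10, 20):
    hours = distance_au * AU / v / 3600
    print(f"detected at {distance_au} AU -> about {hours:.0f} h of warning")
```

Detection at 10 AU leaves roughly half a day; even the optimistic 20 AU case leaves less than a day to mount an interception.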
What, then, is the likelihood that we (or some other civilization) can launch such a retaliation? I simplified the argument to four probabilities, namely
- P(identified), which is the probability that an attacking civilization’s military intelligence can identify the victim’s essential centers of gravity correctly,
- P(hit), which is the probability that the identified targets are hit with sufficient force,
- P(destroyed), that once hit, the ability of the victim to retaliate is destroyed permanently, and
- P(no witnesses), that there are no other civilizations which might detect such unprovoked attacks and see them as causes of alarm, precipitating their pre-emptive strike against the attacker.
Even if the attacker is 95% certain of each and every single variable, the joint probability of a successful attack is only about 0.815; in other words, the complement – the probability of retaliation – is an uncomfortably high 0.185. I leave it to the individual readers to judge, given the arguments in the paper (e.g. that it’s impossible to be sure whether “primitive” radio transmissions are really the sign of a “primitive” technological civilization, the equivalent of a renaissance fair, or a deliberate lure intended to attract hostile civilizations), whether certainties of 95% are achievable across interstellar distances. Considering that even one hit may engender 90%+ losses, it would therefore seem that attacking someone simply because they may be a threat in the future would be a losing strategy.
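The joint-probability arithmetic is just the product of the four (assumed independent) conditions:

```python
# Joint probability of a successful first strike: four independent conditions,
# each satisfied with 95% certainty.
p_each = 0.95
p_success = p_each ** 4      # P(identified) * P(hit) * P(destroyed) * P(no witnesses)
p_retaliation = 1 - p_success

print(f"P(attack succeeds) = {p_success:.3f}")      # 0.815
print(f"P(retaliation)     = {p_retaliation:.3f}")  # 0.185
```

Note the model assumes the four probabilities are independent; correlated failures would make the attacker’s prospects even worse.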
The good news, therefore, is that attempts to contact extraterrestrials are likely to be low-risk affairs. The bad news is that the wait for an answer may be long, as the ETIs might be fearful of taking any actions that might be misinterpreted as hostile acts. For example, sending a high-velocity interstellar probe and launching a high-velocity interstellar kill vehicle look the same to the recipient…
This should also be remembered when discussing our own interstellar missions. If a civilization has the capability to send interstellar probes to someone else’s home system, it also has the capacity to hit that system with interstellar kill vehicles. We must make sure that our exploratory efforts, if and when they happen, are not mistaken for attacks!
Thanks to Markku Karhunen and to two anonymous referees of Acta – their comments and suggestions improved the story considerably!
Korhonen, J. M. (2013). MAD with aliens? Interstellar deterrence and its implications. Acta Astronautica, 86, 201–210. https://doi.org/10.1016/j.actaastro.2013.01.016 (manuscript available at arXiv: http://arxiv.org/abs/1302.0606)
The article manuscript is published in arXiv repository according to Elsevier’s publishing policy regarding permitted scholarly posting.