There has been some debate in sciencey & science-fictionish circles about whether searching for and contacting extraterrestrial aliens is really such a great idea. No less a luminary than Stephen Hawking recently warned humanity about the possible dangers of phoning E.T. (as if humans would listen…), and contact with hostile aliens has understandably been a staple of science fiction since the genre's beginning.
In fiction, aliens are usually depicted lusting after our planet/water/bodies/female bodies (cross off as appropriate). There are some reasons to believe these fears may be exaggerated: as long as there are much easier pickings remaining in, say, the Asteroid Belt or the Oort cloud, a species capable of traveling across interstellar distances does not seem very likely to need much of anything from the bottom of the deep gravity well also known as the Earth. Curiosity items and biological/historical information, perhaps, but wars do not seem to be the optimal method for obtaining those. Trading would be much easier for all concerned – and let's face it, put a bit of glassbeadanium from a hyper-advanced civilization on offer, and you'll get all the museums and collectors on Earth lining up for a chance to trade whatever treasures and/or employees they have for it.
However, some more recent works, such as The Killing Star by Pellegrino and Zebrowski (1995), depict a rather more dystopian universe. These authors have thoroughly understood what is known as Jon's Law: any interesting space drive (i.e. anything capable of interstellar flight on at least somewhat reasonable timescales) is also, by necessity, an immensely powerful weapon of mass destruction.
Why? Because interstellar travel requires pretty humongous expenditures of energy. And that energy needs to be controlled very carefully; as one thinker puts it, how would you like to have the captain of the Exxon Valdez skippering a tramp freighter with an antimatter drive?
Consider just a rather simple, down-to-Earth example: a beamed-propulsion space probe proposed by the noted hard sci-fi author and rocket scientist (no, really!) Robert L. Forward (1984, 1996). He believed that using only technologies currently being developed, and a rather modest outlay of funds compared to, say, the Olympics, humanity could soon send a 785-ton interstellar probe hurtling through the cosmos at the rather brisk pace of 50% of light speed, or 0.5 c. Now, what happens if this small probe just happens to have a slight brush with a planet?
It goes boom. Big time: some 2600 gigatons, to be fairly precise. To get a sense of scale, one estimate puts the entire nuclear arsenal of the entire Earth at a somewhat firecracker-y 6.4 gigatons, give or take some decimals.
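If you want to check the arithmetic yourself, the standard relativistic kinetic-energy formula E = (γ − 1)mc² gets you there. A quick sketch (the masses and the 0.5 c figure are from Forward's proposal; the TNT conversion of 4.184 × 10¹⁸ J per gigaton is the usual convention):

```python
import math

C = 299_792_458.0          # speed of light, m/s
J_PER_GIGATON = 4.184e18   # TNT equivalent: 1 gigaton = 4.184e18 joules

def relativistic_ke_gigatons(mass_kg: float, beta: float) -> float:
    """Kinetic energy E = (gamma - 1) * m * c^2, in gigatons of TNT."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2 / J_PER_GIGATON

# Forward's 785-ton probe at 0.5 c:
print(relativistic_ke_gigatons(785_000, 0.5))    # ~2600 gigatons
# The larger 7850-ton ship at the same speed:
print(relativistic_ke_gigatons(7_850_000, 0.5))  # ~26 000 gigatons
```

Note that kinetic energy scales linearly with mass but very non-linearly with speed – push the same probe to 0.9 c and γ − 1 jumps from about 0.15 to about 1.3.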
I don’t know for sure what 2600 gigatons of flaming relativistic death will do to a planet, but I’m fairly certain it’d be bad news all round. There is a reason why techno-jargon for these things is Relativistic Kill Vehicle, or RKV. (Yay, acronyms!)
And did I mention that stopping a chunk of metal coming at you at 0.5 c seems somewhat "challenging," no matter what kind of technology you use? Even if the probe can be hit, much of it would simply break up – and instead of a single planet-shattering blast, you'd get the energy equivalent of roughly 400 all-out nuclear wars delivered within a second. Not exactly good news, that. In fact, if one wants to relativistically fry an inhabited planet, the optimal approach would most likely be to break the RKV into smaller fragments well before impact. Harder to detect, harder to intercept, more chances of hitting the planet and something important on it, and less energy wasted on gigantic fireballs that promptly exit the atmosphere. Also, a longer baseline for distributed sensors, meaning better targeting capability.
That, my friends, was a single small probe from a pretty sub-standard civilization that has barely learned how to fly up to space and not burn up on the way down. What if the attacker uses three such probes? Or twenty? Or the rather larger 7850-ton ships Forward believed we could also have relatively quickly? (26 000 gigatons per ship – enough to punch sizable holes in a planet's crust.)
In short, it seems that Attack will always be more effective than any Defense. The Attack Will Always Get Through, and when it does, the results will be spectacular. (From a safe vantage point, that is – say, the next star system over.)
So – either interstellar travel is not really feasible, in which case we have little to fear from the E.T.s and the discussion of whether SETI is a Good Idea is largely moot, or it is feasible, and we have the aforementioned problem. In the latter case, any aliens could be hell-bent on killing us off sooner rather than later, simply because:
- Their survival will be more important than our survival; if an alien species has to choose between them and us, they won’t choose us.
- Wimps don’t become top dogs; no species makes it to the top by being passive.
- They will assume that the first two laws apply to us. (Pellegrino & Zebrowski 1995, p. 115)
In other words, we could be under threat simply because we may be a threat one day. This is a sobering thought: if it is possible that we may become a threat, wouldn't it be rational to exterminate us sooner rather than later?
Fortunately for us, there is one thing working in our favor: the alien attackers do not know whether we can already retaliate. Yes, they might fry us with their RKVs or Alien Death Zappers (ADZs, for MOAR acronyms!), but are they sure we are not hiding any of our own in the Asteroid Belt, for example? Really sure?
Of course, we know we don't have those things. But think about an alien civilization picking up the first TV broadcasts from the Earth – incidentally, they'd probably see Hitler opening the 1936 Olympics, which may not be the best introduction to our species unless they're really into uniforms, but let's leave that for now. Could they really piece together enough information from our 1950s soap operas, observations of Earth from a distance, et cetera, to conclusively rule out that this species is capable of building RKVs or something even nastier? What if – let's say – the grainy broadcasts are all part of some futuristic version of a Renaissance Fair, or a religious ritual? What if it's a trap?
And if they then send their killer probes, what will happen during the 200–2000+ years the probes will need to reach us? (It seems likely there are no advanced civilizations within 100 light years of Earth, although one must always be ready to be surprised.) Are they really, really sure we won't by then have technologies, if not to defeat the attack, at least to respond in kind? After all, the aforementioned planet killers are likely to be within our reach during this century, and any space habitats in out-of-the-way places like the Asteroid Belt would have a good chance of surviving the initial attack.
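That 200–2000 year window is just the transit time: a probe cruising at 0.5 c needs two years per light year. And the attackers get to act only one light-travel-time after our first broadcasts reach them, which stretches the total timeline even further. A toy calculation (the distances are illustrative assumptions; acceleration and deceleration phases are ignored):

```python
def warning_and_arrival_years(distance_ly: float, cruise_speed_c: float = 0.5):
    """Years after our first broadcast until (a) the aliens hear it and
    (b) a probe launched immediately afterwards arrives here.
    Acceleration time is ignored; cruise speed is a fraction of c."""
    hear = distance_ly                               # radio travels at c
    arrive = hear + distance_ly / cruise_speed_c     # plus probe transit
    return hear, arrive

# A hypothetical attacker 100 light years out, and one 1000 light years out:
print(warning_and_arrival_years(100))    # (100, 300.0)
print(warning_and_arrival_years(1000))   # (1000, 3000.0)
```

So even the nearest plausible attacker has to bet on what our technology will look like two to three centuries after the signals they are reacting to were sent.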
Furthermore, are they absolutely certain we're not talking to any other aliens? It would be somewhat suspicious if the last message from the Earth screamed about a relativistic attack, and the recipients would very likely want to lock 'n' load some RKVs or ADZs of their own. Yes, third-party aliens are unlikely – but if you have found one intelligent species, that's an existence proof that others are not impossible.
In short, it would seem that any civilization that has any reason to be afraid of us would also be afraid enough of us that it's far from certain they would really want to hit us first. In other words, they would be deterred from attacking. Yes, they might be very concerned if humans got their dirty hands on relativistic space probes, and they might take a dim view of us practicing parallel parking with those vehicles in their neighborhood. But concerned enough to launch a preventive attack? Hardly.
After all, we've been there ourselves, and we're still around to tell the tale. From the late 1940s, a group of very eminent minds – including John von Neumann, widely considered one of the greatest mathematicians ever – became increasingly concerned with what they saw as the relentless logic of nuclear war. If there is a non-zero probability of a nuclear war, they reasoned, it is logically only a matter of time before one breaks out. And given the trend towards ever larger numbers of nuclear weapons, a war in the far future would be far more devastating than a war today. So, they and their followers in the military argued, let's bomb the nasty Commies before they do the same unto us. One example comes from a 1954 briefing to President Eisenhower by a U.S. Joint Chiefs of Staff advance study group: the U.S., they said, should
“…deliberately precipitat[e] war with the USSR in the near future… before the USSR could achieve a large enough thermonuclear capability to be a real menace to [the] Continental U.S.”
(Kaku & Axelrod 1987: 101)
Another air staff study from the period concluded that anyone calling for restraint and relying on retaliation in the event of nuclear attack, i.e. not advocating surprise attack, was a
“…pseudo-moralist who insists that [the U.S.] must accept this catastrophe.”
(Kaku & Axelrod 1987: 100; Rosenberg 1983: 196)
Substitute Aliens for the U.S., Humans for the USSR, and RKVs for thermonuclear weapons, and there you go. Von Neumann et al.'s logic was sound in theory, but in practice these things are fortunately not so simple. It is quite widely acknowledged now that the logic of deterrence – the inevitable retaliation – made deliberate nuclear wars pretty much impossible, although accidental wars remain threats enough on their own (as I've written before). I see no real reason why deterrence wouldn't work in interstellar relations: if hitting us is seen as a strategy for ensuring the continued survival of the alien species, and if any humans survive long enough to hit back – even if retaliation takes hundreds of years to find and reach the perpetrators – the strategy will be deeply flawed. Simply put, the negative payoff from a not-100%-successful attack will be so large, and the likelihood of a 100% successful attack so low, that ensuring the survival of the species is not a good reason to try to kill off another. (Yes, there may be other reasons, but these, too, will be at least somewhat deterrable.)
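The deterrence argument can be caricatured as a one-line expected-value calculation. All the numbers below are made up purely for illustration – the point is only the shape of the inequality: when retaliation is catastrophic and the gain from a clean first strike is modest, attacking has a worse expected payoff than doing nothing unless total success is essentially guaranteed.

```python
def expected_payoff_of_attack(p_total_success: float,
                              gain_if_clean: float,
                              loss_if_retaliation: float) -> float:
    """Expected payoff of a preventive strike.

    p_total_success     : probability the strike wipes out ALL potential retaliators
    gain_if_clean       : payoff of a perfectly successful strike (threat removed)
    loss_if_retaliation : payoff when even one survivor hits back (your own
                          biosphere gets the RKV treatment)
    All inputs are illustrative assumptions, not estimates.
    """
    return (p_total_success * gain_if_clean
            + (1.0 - p_total_success) * loss_if_retaliation)

# Modest gain, catastrophic loss: even a 99% chance of total success
# leaves the attack a worse bet than sitting still (payoff 0).
print(expected_payoff_of_attack(0.99, gain_if_clean=1.0,
                                loss_if_retaliation=-1000.0))  # negative
```

With these toy numbers, attack only beats doing nothing once the chance of a perfectly clean strike exceeds about 99.9% – exactly the kind of certainty that habitats hiding in some other species' Asteroid Belt make unattainable.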
But that’s enough qualitative rationalization for now. The next installment of this post will detail the history of One Million Interstellar Wars, fought by yours truly (it’s amazing what you can do with telepresence these days). With that, I’ll show quantitatively why preventive attacks are generally not a good idea – except, perhaps, in a few well-defined cases. Also: what we can do to avoid painting a bullseye on the Earth.