A recent critical essay by Kevin Kelly on the prospects of “strong” artificial intelligence (AI) has sparked some debate between those who believe that the emergence of a superhuman artificial intelligence is only a matter of time, and those – like myself – who are somewhat more skeptical. Kelly’s essay touches on many salient points I’ve been pondering for quite some time now, and makes the case far more elegantly than I have been able to: the AI myth rests on what is likely a false premise (that intelligence is something that can be both disembodied and measured on a single scale), and because everything in nature involves trade-offs, there is no actual reason to believe that humans aren’t already quite close to the maximum for human-like intelligence. The essay covers many other important points that tend to get ignored in the techno-optimistic (or pessimistic? I guess it depends on the point of view) discourse about artificial intelligences, and I heartily suggest you read the whole essay.
However, there seems to be one further piece of evidence suggesting that strong general-purpose artificial (machine) intelligence may be extremely difficult if not impossible to construct. As far as I know, this evidence has not been widely discussed, but it might be useful nevertheless, so here goes:
If strong AIs are almost inevitable (as many futurists claim), why haven’t we seen an alien one yet?
This is, of course, a variation of the classic Fermi Paradox: if intelligent aliens exist, where are they?
I’m not going to wade very deeply into the voluminous debate this seemingly innocent question has sparked. Suffice it to say that even under very conservative assumptions about the emergence of spacefaring species, the speed of space travel and the colonization of star systems, the Milky Way ought to be positively brimming with alien civilizations. The galaxy is old, and in a cosmic blink of an eye an ambitious species could conceivably have colonized all the star systems it desired.
Recently, however, we’ve come to learn that starfaring might not be as easy as we once assumed. As Kim Stanley Robinson put it in Aurora, his excellent fictional treatment of the subject, it is quite possible that (complex) life is for all intents and purposes a planetary phenomenon, fundamentally unsuited to the demands of living and thriving in worlds it hasn’t co-evolved with. Fleshy meat-sacks like us are just not very well suited for the rigors of vacuum, zero gravity and hard radiation. What’s more, even if these fairly straightforward and linear hazards can be mitigated, it seems extremely, profoundly difficult to construct self-contained miniature ecosystems that can keep humans healthy and happy for essentially indefinite periods of time. Complexity theory in fact suggests that while constructing such habitats might be possible (if very hard), making them self-sustaining to the extent required for subluminal travel between the stars may be forever out of our reach. (I suggest you read what Kim Stanley Robinson has to say about the topic here.)
What is perhaps an obvious solution, therefore, is to engineer an intelligence that is better adapted to the interstellar void. I remain agnostic about whether that might be feasible at some point in the future: there seems to be no fundamental reason to think it impossible, but I wouldn’t hold my breath waiting regardless. My suspicion is that such an intelligence would in effect be an engineered version of the baseline Earth-human, adapted to life in space but still essentially biological.
However, in theory, space would be the perfect environment for a true machine intelligence. A machine intelligence should not care overmuch about those pesky biological limitations that may forever doom (or bless) baseline humanity to inhabit this one world only. Vacuum and low gravity might in fact be the preferable environment for such a being: outside a corrosive atmosphere, with abundant sunlight to capture for energy and the vast wealth of the asteroid belt to mine, what limitations would a machine intelligence have to suffer? It seems quite straightforward to assume that if machine intelligences arise at all, they would sooner or later make their way to space – the final frontier. A machine intelligence in space would be free to expand almost exponentially, building automated factories or true self-replicating von Neumann machines. And for a machine intelligence, the vast gulf between the stars might be tolerable to cross – if only to seed another star system with von Neumann machines. That, by the way, is the fastest and most cost-effective way to explore the whole galaxy, and it would only require the AI to successfully launch one self-replicating star probe – hardly a task beyond the capabilities of projected superhuman AIs.
In fact, many knowledgeable observers believe that if humanity is ever to contact intelligent aliens, the actual contact is most likely to happen through intelligent machines. There is even a possibility that such machines already exist in our solar system, but for some reason or another they haven’t chosen to make themselves visible to us. A machine intelligence might be watching – but it might also be deeply uninterested in the comings and goings of “biologicals” deep within the gravity well and corrosive, unpredictable atmosphere of one of the planets. But how long would it remain uninterested? Remember, if even one alien super-AI had become interested in space exploration with inexpensive von Neumann probes, it could seed all the star systems in the galaxy within half a million years – provided that it takes 500 years for a self-replicator to construct a copy of itself.
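Just to sanity-check that half-million-year figure, here is a back-of-the-envelope sketch in Python. The 500-year replication time is the figure used above; the star count, galactic diameter and probe cruise speed are my own rough assumptions, so treat the output as an order-of-magnitude estimate rather than a prediction.

```python
# Rough estimate of how long a fleet of self-replicating probes would need
# to visit every star system in the Milky Way. Only the 500-year replication
# time comes from the text above; the other figures are assumptions.

import math

STARS_IN_GALAXY = 4e11         # assumed ~400 billion star systems
REPLICATION_TIME_YEARS = 500   # figure used in the essay
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way disc
PROBE_SPEED_FRACTION_C = 0.2   # assumed cruise speed, as a fraction of c

# Each generation doubles the fleet, so covering every star system
# needs only about log2(N) generations of replication.
generations = math.ceil(math.log2(STARS_IN_GALAXY))
replication_years = generations * REPLICATION_TIME_YEARS

# Travel dominates: time to cross the galactic disc at the assumed speed.
travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C

total_years = replication_years + travel_years
print(f"replication generations needed: {generations}")
print(f"time spent replicating: ~{replication_years:,.0f} years")
print(f"time spent travelling:  ~{travel_years:,.0f} years")
print(f"total: ~{total_years:,.0f} years")
```

The point of the arithmetic is that exponential replication makes the manufacturing side essentially free: fewer than forty generations of probes suffice to cover hundreds of billions of star systems, so the overall timescale is dominated by travel time across the galactic disc – which, under these assumptions, lands in the same half-million-year ballpark.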
Remember also that it is entirely plausible that alien civilizations just as capable as ours arose at least a billion years before our time.
There is, of course, another possibility. The simplest explanation for the fact that we haven’t seen even one alien AI is that truly strong artificial intelligence – one that transcends the drawbacks of its biological originators – is for all intents and purposes impossible to construct. After all, as Fermi originally noted, the galaxy (not to mention the universe) is both vast and old, and there seems to be little reason to assume humans have been the first species to seriously think about building an artificial intelligence. (This hunch seems to find quite a bit of support from the recent discoveries of extrasolar planetary systems.) And an AI-driven, self-replicating von Neumann probe should have been able to reach every planet in the galaxy in practically no time at all, cosmically speaking, even if travel between the stars takes thousands of years.
The practical impossibility of a superhuman AI would explain why we haven’t noticed any signs of an artificial intelligence. It would also explain why we haven’t seen any signs of a runaway AI: as far as we can tell, in the admittedly tiny sphere of the universe we are practically able to observe in sufficient detail, there are no megaprojects of the kind many AI enthusiasts caution a demented AI might construct. (I believe a person who is obsessed with the dangers of a strong AI is just as much an AI enthusiast as one who believes such an AI would usher in an era of unprecedented prosperity and even eternal life.)
It might well be that the continuing absence of either human-built or alien AI eventually validates much of Kevin Kelly’s criticism in the essay mentioned in the introduction: a priori, it seems just as believable to assume that humans are already fairly high up on the general intelligence ladder and hence difficult to improve upon, if we use human intelligence as a measuring stick (which we probably shouldn’t, but that’s a different discussion). It may well be that constructing an intelligence significantly more intelligent than we are will be fundamentally impossible, because of the unavoidable trade-offs and drawbacks that are likely to be inherent in such a complex system. And it is entirely possible that the current computational paradigm is fundamentally incapable of even replicating human thought processes, except at speeds far slower than what actually happens in the human brain – and, hence, it is very possible that more transistors and faster computation, which many blithely assume will eventually overtake human brains, will never actually produce an emulation that can outcompete a human.
It may well be that not just complex life but human-like intelligence, too, is an essentially planetary, or at least biological, phenomenon, and as long as we don’t see an alien AI probe bearing towards us, this conclusion is just as likely as the other conclusions made about artificial intelligence – a concept of significant, even religious, power that nevertheless does not exist.
And, as far as the Fermi paradox goes, my hunch is that a combination of two things explains most of the question: complex life is difficult to sustain outside the ecosystem it co-evolved with, and we aren’t observing von Neumann messengers or machine civilizations because the necessary general-purpose toolmaking intelligence is also a very hard thing to sustain outside the fundamentally biological substrate that is the only medium we know it can reside in. Of course, we haven’t been looking up in sufficient detail for very long, and I may be proven very wrong.
You’ve missed a possibility – that space-faring technological life has a galaxy-scale niche, and the first species to emerge will certainly fill that niche, lacking competition. So we are a candidate to fill that niche in our galaxy, if we survive.
You might enjoy David Brin’s novel Existence, which explores these issues, but I won’t spoil it for you by elaborating, as his position emerges gradually through a very entertaining story.