Our Final Invention, by James Barrat, is an antidote to Kurzweil’s optimism. The dour, serious man in the extensive “Singularity 1-on-1” interview below has written a sober, pessimistic book about the dangers of AI.
The book is very well researched, and the topic is perhaps the most important in the history of our species. Our survival could be on the line, and we should do all we can to protect ourselves even if the threat is unlikely. Barrat does not consider the threat of AI unlikely; rather, he considers our destruction almost inevitable. Yet the defenses he proposes against this scenario come late in the book and are not dealt with in detail. I will present them here, as I think they are the most important part of the book. However, my summary is by no means exhaustive of what is covered in the book.
Barrat starts with a thought experiment called the “busy child” and returns to it throughout the book. The “busy child” is essentially a vivid narrative description of a “hard takeoff,” in which the first self-aware, human-level AI immediately explodes past the level of human intelligence and escapes out into the world.
The busy child’s progression happens through a very logical, even inevitable-seeming, series of events. As soon as a human-level AI is created, it is able to look at its own code, make improvements, and make itself smarter, perhaps by tweaking a few algorithms. Next, by taking advantage of those improvements, it creates the next set of improvements even faster. This is an “intelligence explosion” as described by I. J. Good, and what makes it frightening in Barrat’s eyes is that it is assumed to happen very quickly, in fact nearly instantaneously.
In reality, an instantaneous “hard takeoff” is not a foregone conclusion. As Kurzweil describes below, just because the work of improving the AI could hypothetically be done by a computer does not mean it will be done instantly.
Improving technology is hard work, and an AI will probably be created by a team of many people building upon the work of many, many more. Adding one more being of equal intelligence will not cause the process to explode overnight. The smartest single computer scientist on the team that built the Jeopardy!-playing computer Watson would probably take decades to improve it working alone, even if he were twice as smart as he currently is; and in the “busy child” hypothetical we are only adding one more mind of equal intelligence.
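The disagreement above really comes down to unverifiable assumptions about the speed of each improvement cycle. A toy model makes this concrete (the functional form and the numbers here are my own illustrative assumptions, not Barrat’s or Kurzweil’s):

```python
# Toy model of I. J. Good's "intelligence explosion" (illustrative only).
# Assumptions: each improvement cycle multiplies intelligence by a fixed
# factor, and smarter agents finish cycles faster (cycle time scales as 1/I).

def time_to_threshold(gain_per_cycle, threshold=1000.0, base_cycle_time=1.0):
    """Total elapsed time for a self-improving agent, starting at
    human level (intelligence = 1.0), to reach the threshold."""
    intelligence, elapsed = 1.0, 0.0
    while intelligence < threshold:
        elapsed += base_cycle_time / intelligence  # each cycle is quicker
        intelligence *= gain_per_cycle
    return elapsed

# Large per-cycle gains give a "hard takeoff": the total time converges.
print(round(time_to_threshold(gain_per_cycle=2.0), 3))   # 1.998
# Tiny per-cycle gains stretch the same climb out roughly 50x longer.
print(round(time_to_threshold(gain_per_cycle=1.01), 1))
```

The Kurzweil-style objection amounts to denying the 1/I speed-up: if each cycle stays slow because improvement is engineering work done by teams, the same climb takes hundreds of full-length cycles instead of compressing toward a constant.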
Another problem with the assumption that the “busy child” will instantly become a superhuman intelligence is that it is inconsistent with another of the fears Barrat emphasizes throughout the book. Barrat frequently points out that an AI might be a “black box” type of system, because it will be built at least partially through a self-organizing process like a neural net rather than through what we think of as ordinary programming, such as a series of “if this, then do that” statements.
In this Barrat is almost certainly correct. The programmers who created Watson, which is far simpler than a human-level AI, cannot tell you why Watson gets some questions correct and others incorrect. This uncertainty is a cause for concern, to be sure, as it adds unpredictability to what is created. But consider: how easy is a runaway intelligence explosion going to be in such a scenario? If we cannot fully understand the machine’s code, there is no reason to believe an AI of equal intelligence to us will be able to understand it either. It is not a matter of just going in and tweaking a few lines of code to improve Watson. Watson’s knowledge base was built by learning: going out on the internet and reading documents. Of course, Watson reads orders of magnitude faster than a human, and we will eventually improve on its processes, but its “black box” complexity will slow the process.
On the other hand, an intelligence surpassing ours will of course not simply be more of the same, or a sped-up human; it could be different in kind in ways we cannot even conceive.
Eliezer Yudkowsky hypothesizes, in the chart below, that what we think of as the wide gap in intelligence between a random fool and a genius is nothing compared to the gaps between species and between humans and potential AIs.
One thing I’ve noticed about Yudkowsky and his cadre, many of whom seem as convinced as Barrat that we are doomed, is that they seem to assume that humans have gaping holes in their thinking: irrationalities and by-products of our evolutionary past that hold humans back so effectively that a machine without them could rapidly blow past us. We do have these flaws, but while things like the availability bias are cute, I don’t think they hold back the collective advance of science and technology that much. In fact, the availability bias probably does less harm than good in many cases. Also, humans work together in efficient systems, and that is what an AI would be up against. It would not be up against a lone human, laden with biases and operating outside her domain of expertise. We have built systems like peer review and the scientific method, and we have specialization and cooperation on a global scale. Then again, it is ultimately very difficult to gauge the degree to which human rationality could be fundamentally flawed at its core.
Defenses against AI
Late in the book, Barrat discusses defenses against an AI, and they were not given as much attention as I would have liked. Here is a list of the defenses I saw in the book; I cannot do justice to such an important topic. I have placed them in order of effectiveness, in my opinion. To my recollection, Barrat does not express an opinion on the strength of these defenses.
1. Build It As Fast as Possible
AI researcher Ben Goertzel presented what I think is the most realistic and effective way to prevent the type of disaster this book warns of, which is to build the AI quickly. The “busy child” runaway-AI scenario depends on the AI having the hardware to recursively grow its mind into; building AI early, before that hardware is abundant, limits the explosion. I’m not sure Goertzel himself even presented this as a defense, but I do believe that our values have a greater chance of being preserved if things happen in the primitive present, where humans still have hands on some of the levers of society.
2. Apoptotic Computing
A second defense is to build safeguards into computer systems through something called apoptotic computing. This would essentially give AI researchers a kill switch, allowing them to destroy any AI that became too powerful. This might be an effective tool, but to build a self-destruct module into a sentient being is prohibitively unethical in my eyes.
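As a rough sketch of the idea (my own illustration of a default-to-death pattern, not code from the book), an apoptotic system can be framed as an agent that terminates itself unless an external overseer keeps actively renewing its permission to live:

```python
import time

# Sketch of "apoptotic" (default-to-death) computing: the agent dies by
# default and survives only while an overseer keeps renewing its lease.
class ApoptoticAgent:
    def __init__(self, lease_seconds):
        self.lease_seconds = lease_seconds
        self.lease_expires = time.monotonic() + lease_seconds

    def heartbeat(self):
        """Called by the external overseer to renew permission to run."""
        self.lease_expires = time.monotonic() + self.lease_seconds

    def step(self):
        """One unit of work; refuses to run once the lease has lapsed."""
        if time.monotonic() >= self.lease_expires:
            raise SystemExit("lease expired: apoptosis")
        # ... do useful work here ...

agent = ApoptoticAgent(lease_seconds=0.05)
agent.step()          # fine while the lease is fresh
time.sleep(0.1)       # the overseer goes silent
# agent.step() would now raise SystemExit
```

The point of the pattern is that survival, not death, is what requires an active signal; of course, a sufficiently capable AI might try to fake or disable the heartbeat, which is one reason to doubt that a kill switch alone would suffice.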
3. Develop a Science of AI
A third defense is to develop a science of AI, as is currently being pursued by Steve Omohundro. This is certainly a laudable and important project; it has value even in a non-apocalyptic world. It is a vague and difficult task, however, and in my opinion may not directly come to bear as a defense against AI. Such a science would be comparable in complexity to human psychology, which has not yet come to fully understand humans or to guard against their violent tendencies. To the extent we have peace, in my opinion, it derives more from the politics of power than from psychology. Perhaps, by analogy, a system of checks and balances, a politics of AI, would be more effective. Also, engineering historically tends to take advantage of phenomena before we have understood them completely, so I think this approach will come too late.
4. Build friendly AI
Yudkowsky recommends building a friendly AI that would also protect humans from other AIs. Hypothetically this seems like a possible defense, although it could have the unintended consequence of reducing humans to a pet-like existence, because allowing us to grow further in intellect would be dangerous. I think it’s an unlikely defense because, unless there is a massive sea change in the population’s priorities, the funds directed towards this type of AI will not be competitive with funds directed towards other projects.
5. Merge with Computers
I see this as a likely scenario. Our machines are shaped naturally by market forces to work for us and to enhance us. The smartphone is the dominant form of technology as of this blog post, and it is in many ways a brain extender. Brain-computer interfaces are the future. However, as Moravec writes in “Robot: Mere Machine to Transcendent Mind,” we will always be “second-class robots,” and as Hugo de Garis articulates in the clip below, the robotic portion of ourselves may come to dominate. This may not be different in effect from an AI apocalypse, but I still feel much better about it. If you watch the clip, I do not agree with his conclusion that the woman has “killed the baby.”
One last thing about this book, not particularly relevant to anything else, was somewhat annoying and unproductive. Barrat devotes some space in his book to criticizing the “singularitarians,” whom he describes as mostly male, in their 20s and 30s, and “childless.” He compares their thought process to religion for all the obvious reasons: they think people will live forever in a utopia, similar, I guess, to the Christians he is probably comparing them to. As someone who is not religious himself but bears no animosity towards religion, I think most people’s dislike of it comes from the perception that it discourages critical thinking, not that it gives people false hope and optimism. But Barrat actually attributes logical thinking to the “singularitarians” as one of the “purifying factors,” stretching the religion metaphor well past its breaking point. I guess his reason for doing this is that he feels their perspective on the important issue of a rapidly approaching AI is colored by a sanguine optimism. He said something to the effect of “I get off the bus at immortality,” both in an interview on Singularity 1-on-1 and in the book. He isn’t really willing to discuss the subject of reversing the progressive damage we call aging. He is of the opinion that even thinking about something so appealing opens a door to irrationality and bias. Needless to say, he is wrong. It would make just as much sense to refuse to talk about the dangers of AI because, as soon as you postulate an apocalypse, you can’t view the subject of AI rationally. I don’t begrudge him his opinion in this section, only the rude way he presents it. For example, he writes in places about how old Kurzweil looks, as if that has any relevance at all other than as a cheap shot.
To the extent he dealt with aging, Barrat seemed to think that technological progress towards an AI was more obvious than progress towards ending aging. He’s right; it is more obvious. However, Barrat takes the position that life extension is a silly fantasy while writing an entire book that basically assumes a superintelligence orders of magnitude greater than the human race is at our fingertips. Something orders of magnitude smarter than us in the way Barrat describes, coupled with other tools, could almost certainly end aging; whether what this implies makes him uncomfortable, or reminds him of something that makes him uncomfortable, doesn’t matter. One thing about Barrat’s singularity is that it is more explicitly negative than Kurzweil’s is explicitly positive. If there is any chance an AI is compatible with human values, it can bring us many good things; if there is no chance, I don’t know what the point of this book was.
This book would probably best be viewed as a call to arms or a wake-up call for humanity. It is certainly not a battle plan for defense and has no thesis as to how best to protect ourselves. The book is well worth reading even if you aren’t into sober accounts of our impending doom and would rather read science fiction. It covers some interesting things I have not mentioned. The various AI companies, some of them operating in stealth mode, were an interesting topic. Also, the discussion of a quant firm’s AI that, in an effort to improve its financial models, creates an entire Matrix-like simulated reality to make better trades was particularly interesting to me, because I think this universe is a simulation, but that is a subject for another time.