There are trucks traveling around NYC announcing that everybody should go indoors because they are spraying for West Nile virus. It feels very Orwellian to hear them go by. Personally, the idea of spraying harsh chemicals that could have long-term effects all over an environment shared by millions of people does not, on balance, make sense to me. However, the logic of government is that you take action in a visible way. If cancer rates increase in 20 years, nobody will blame the current administration. Remote risks are not on their radar, and we get decisions forcibly made for us that we would not make on our own. I would not spray my own yard for fear of West Nile virus.
How often do you hear that Ray Kurzweil’s predictions, or singularity predictions generally, are bullshit because some other predictions in the past were bullshit? I see this argument more often than any other, although it’s not really an argument so much as a quick way of dismissing the idea. The second most common argument is that people who think there will be a technological singularity are believers in a new “religion.” Both the argument from past failed predictions and the “singularity is a religion” argument are reasoning by analogy.
I was confused when I first read that Elon Musk had argued against reasoning by analogy. (For the record, I do not think he ever said this in the context of the singularity, and I do not know whether he thinks the singularity will happen.) I was surprised by Musk’s criticism because reasoning by analogy is so common and so pervasive that I had almost thought there was no way to reason without it. I realize now that reasoning by math, or as Musk says by “first principles,” is at least one other way to reason. It is perhaps the opposite of reasoning by analogy and, in my opinion, superior.
No doubt a huge variety of people have been making predictions for many years with little success. Speculating on the future of technology is interesting. I submit that few of the people who made predictions in the past had a rigorous methodology, although they no doubt had their rationalizations. Yet people dismiss the predictions of someone like Kurzweil or Moravec because of the failed predictions of other people, which had nothing to do with theirs. It would make more sense to hold people accountable for their own track record, and Kurzweil’s is not bad at all. The larger point, however, is that dismissing Kurzweil’s predictions this way is reasoning by analogy, while by contrast Kurzweil’s argument is from first principles. Since at least Gordon Moore’s paper (the observation may have been made even sooner), we have had an observed trend of exponential growth in the semiconductor industry. Kurzweil extended this trend through five paradigms of growth in computing. Kurzweil further notes that an exponential trend can either continue or die out, identifies the actual physical constraints that would eventually bring such a trend to an end, and observes that there is still a lot of room at the bottom. This leads to the conclusion that a singularity will occur. Kurzweil’s argument for the singularity is based entirely on first-principles reasoning, in the same way Elon Musk reasons from first principles in the battery example in the video.
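To make the first-principles flavor of the argument concrete, here is a toy extrapolation of a steady doubling trend. This is my own illustration, not Kurzweil's actual model; the two-year doubling period and the unit baseline are assumptions for the example.

```python
# Toy first-principles extrapolation of an exponential trend
# (illustrative only; the doubling period is an assumed parameter).

def doublings(start, years, doubling_period=2.0):
    """Extrapolate a quantity forward assuming it doubles every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Starting from a baseline of 1 unit of compute-per-dollar,
# 40 years of steady doubling yields 2**20, roughly a million-fold increase.
print(doublings(1, 40))  # 1048576.0
```

The point of reasoning this way is that the conclusion follows from an observed growth rate and known physical limits, not from a comparison to someone else's failed forecast.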
Another type of person, having never heard of the singularity and wanting to see whether it can be easily dismissed so that thinking about it can be avoided, hears it promising eternal life and all sorts of fantastic things and instantly dismisses it as a religion. We don’t have time to evaluate all the crazy things people say, so it’s actually understandable that people avoid the question when they can. The simple fact of the matter is that reasoning by analogy is weak and will often lead to the wrong answer. Many things are similar in one way but not others. Kurzweil predicts greatly extended life (“eternal life” is a bit of a straw man), but for very different reasons than religions do. When things are close, analogies are even worse. For example, take the obviously wrong statement “a dog named Rover is a cat.” Rover has fur, is a mammal, has DNA, and lives with humans. Cats have fur, are mammals, have DNA, and live with humans. Therefore Rover is a cat. Reasoning from first principles about what a dog is would not lead to this mistake, but it’s easy to make a bad analogy that would, if you don’t have a clear picture of what a dog is and cannot derive it from first principles. This example may sound contrived, but children actually make this mistake often.
Reading this over, I realize I’ve used a few analogies in just this short blog post. Analogies neither can nor should be completely avoided. However, Musk’s simple point has opened my eyes, and everywhere I look I now see “bad analogies.” His case for reasoning by first principles has been one of the most insightful and helpful things I’ve learned in recent memory.
Our Final Invention by James Barrat is an antidote to Kurzweil’s optimism. The dour, serious man in the extensive “Singularity 1-on-1” interview below has written a sober and pessimistic book about the dangers of AI.
The book is very well researched, and the topic is perhaps the most important in the history of our species. Our survival could be on the line, and we should do all we can to protect ourselves even if the threat is unlikely. Barrat does not consider the threat of AI unlikely; rather, he considers our destruction almost inevitable. Yet the defenses he proposes against this scenario come late in the book and are not dealt with in detail. I will summarize them here, as I think they are the most important part of the book, though my summary is by no means exhaustive of what the book covers.
Barrat starts with a thought experiment called the “busy child” and returns to it throughout the book. The “busy child” is essentially a vivid narrative description of a “hard take-off” in which the first self-aware human level AI immediately explodes past the level of human intelligence and escapes out into the world.
The busy child’s progression happens through a very logical, even inevitable-seeming series of events. As soon as a human-level AI is created, it is able to look at its own code, make improvements, and make itself smarter, perhaps by tweaking a few algorithms. Next, taking advantage of those improvements, it creates the next set of improvements even faster. This is an “intelligence explosion” as described by I.J. Good, and what makes it frightening in Barrat’s eyes is that it is assumed to happen very quickly, in fact nearly instantaneously.
In reality, an instantaneous “hard take-off” is not a foregone conclusion. As Kurzweil describes below, just because the work of improving the AI could hypothetically be done by a computer does not mean it will be done instantly.
Improving technology is hard work, and an AI will probably be created by a team of many people building upon the work of many, many more. Adding one more being of equal intelligence will not cause the process to explode overnight. The smartest single computer scientist on the team that built Watson, the Jeopardy-playing computer, would probably take decades to improve it working alone, even if he were twice as smart as he currently is, and in the “busy child” hypothetical we are only adding one more mind of equal intelligence.
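The arithmetic behind this objection can be sketched with a toy model. This is my own back-of-envelope illustration, not anything from the book: assume research progress is roughly proportional to the number of human-level minds working on it, so adding one more mind to a team of N speeds things up by only (N + 1) / N.

```python
# Toy model (my own assumption, not Barrat's or Kurzweil's): progress rate is
# proportional to the number of human-equivalent researchers on the project.

def speedup(team_size, added_minds=1):
    """Relative speedup from adding `added_minds` human-equivalent researchers."""
    return (team_size + added_minds) / team_size

# One extra human-level AI added to a 25-person team gains only 4%.
print(speedup(25))  # 1.04
# Only a lone researcher would see her pace double.
print(speedup(1))   # 2.0
```

Under this (admittedly crude) assumption, a single human-level AI joining a large, well-staffed field barely changes the overall rate of progress, which is why the overnight explosion is not automatic.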
Another problem with the assumption that the “busy child” will instantly become a superhuman intelligence is that it is inconsistent with another of the fears Barrat emphasizes throughout the book. Barrat frequently points out that an AI might be a “black box” type system, because it will be built at least partially through a self-organizing system like a neural net rather than through what we think of as ordinary programming, such as a series of “if this, then do that” statements.
In this Barrat is almost certainly correct. The programmers who created Watson, which is far simpler than a human-level AI, cannot tell you why Watson gets some questions right and others wrong. This uncertainty is a cause for concern, to be sure, as it adds unpredictability to what is created. But consider: how easy is a runaway intelligence explosion going to be in such a scenario? If we cannot fully understand the machine’s code, there is no reason to believe an AI of equal intelligence will be able to understand it either. It is not a matter of just going in and tweaking a few lines of code to improve Watson. Watson’s knowledge base was built by learning: going out on the internet and reading documents. Of course Watson reads orders of magnitude faster than a human, and we will eventually improve on its processes, but its “black box” complexity will slow the process.
On the other hand an intelligence surpassing ours will of course not simply be more of the same or a sped up human and could be a difference in kind in ways we cannot even conceive.
Eliezer Yudkowsky hypothesizes in the chart below that what we think of as the wide gap in intelligence between a random fool and a genius is nothing compared to the gaps between species, or between us and potential AIs.
One thing I’ve noticed about Yudkowsky and his cadre, many of whom seem as convinced as Barrat that we are doomed, is that they seem to assume humans have gaping holes in their thinking: irrationalities and by-products of our evolutionary past that hold humans back so effectively that a machine without them could rapidly blow past us. We do have these, but while things like the availability bias are cute, I don’t think they hold back the collective advance of science and technology that much. In fact, the availability bias probably does more good than harm in many cases. Also, humans work together in efficient systems, and that is what an AI would be up against. It would not be up against a lone human with biases operating outside her domain of expertise. We have built systems like peer review and the scientific method, and we have specialization and cooperation on a global scale. Then again, it is ultimately very difficult to gauge the degree to which human rationality could be fundamentally flawed at its core.
Defenses against AI
Late in the book, Barrat discusses defenses against an AI, and they are not given as much attention as I would have liked. Here is a list of the defenses I saw in the book; I cannot do justice to such an important topic. I have ordered them by effectiveness in my opinion. To my recollection, Barrat does not express an opinion on the strength of these defenses.
1. Build It As Fast as Possible
AI researcher Ben Goertzel presented what I think is the most realistic and effective way to prevent the type of disaster this book warns of: build the AI quickly. The “busy child” runaway-AI scenario depends on the AI having spare hardware to recursively grow its mind into. I’m not sure Goertzel himself presented this as a defense, but I do believe our values have a greater chance of being preserved if things happen in the primitive present, where humans still have their hands on some of the levers of society.
2. Apoptotic Computing
A second defense is to build safeguards into computer systems through something called apoptotic computing. This would essentially give AI researchers a kill switch to destroy any AI that became too powerful. It might be an effective tool, but building a self-destruct module into a sentient being is prohibitively unethical in my eyes.
3. Develop a Science of AI
A third defense is to develop a science of AI, as is currently being pursued by Steve Omohundro. This is certainly an extremely laudable and important project; it has value even in a non-apocalyptic world. It is a vague and difficult task, however, and in my opinion may not directly bear on a defense against AI. Such a science would be comparable in complexity to human psychology, which has not come to completely understand humans or to guard against their violent tendencies. To the extent we have peace, in my opinion, it derives more from the politics of power than from psychology. Perhaps, by analogy, a system of checks and balances or a politics of AI would be more effective. Also, engineering historically tends to take advantage of phenomena before we have understood them completely, so I think this approach will come too late.
4. Build friendly AI
Yudkowsky recommends building a friendly AI that also protects humans from other AIs. Hypothetically this seems like a possible defense, although it could have the unintended consequence of reducing humans to a pet-like existence, because allowing us to grow further in intellect would be dangerous. I think it’s an unlikely defense, because unless there is a massive sea change in the population’s priorities, the funds directed toward this type of AI will not be competitive with funds directed toward other projects.
5. Merge with Computers
I see this as a likely scenario. Our machines are naturally shaped by market forces to work for us and to enhance us. The smartphone is the dominant form of technology as of this blog post, and it is in many ways a brain extender. Brain-computer interfaces are the future. However, as Moravec writes in “Robot: Mere Machine to Transcendent Mind,” we will always be “second-class robots,” and as Hugo de Garis articulates in the clip below, the robotic portion of ourselves may come to dominate. This may not be different in effect from an AI apocalypse, but I still feel much better about it. If you watch the clip: I do not agree with his conclusion that the woman has “killed the baby.”
One last thing about this book, not particularly relevant to anything else, was somewhat annoying and unproductive. Barrat devotes some space to criticizing the “singularitarians,” whom he describes as mostly male, in their 20s and 30s, and “childless.” He compares their thought process to religion for all the obvious reasons: they think people will live forever in a utopia, similar, I guess, to the Christians he is probably comparing them to. As someone who is not religious himself but bears no animosity toward religion, I think most people’s dislike of it comes from the perception that it discourages critical thinking, not that it gives people false hope and optimism. But Barrat actually attributes logical thinking to the singularitarians as one of the “purifying factors,” stretching the religion metaphor well past its breaking point. I guess his reason for doing this is that he feels their perspective on the important issue of a rapidly approaching AI is colored by a sanguine optimism. He said something to the effect of “I get off the bus at immortality,” both in an interview on Singularity 1-on-1 and in the book. He isn’t really willing to discuss the subject of reversing the progressive damage we call aging. He is of the opinion that even thinking about something so appealing opens the door to irrationality and bias. Needless to say, he is wrong. It would make just as much sense to refuse to talk about the dangers of AI on the grounds that as soon as you postulate an apocalypse you can’t view the subject rationally. I don’t begrudge him his opinion in this section, only the rude way he presents it. For example, he writes in places about how old Kurzweil looks, as if that has any relevance at all other than as a cheap shot.
To the extent he dealt with aging, Barrat seemed to think that technological progress toward an AI is more obvious than progress toward ending aging. He’s right; it is more obvious. However, Barrat takes the position that life extension is a silly fantasy while writing an entire book that basically assumes a superintelligence orders of magnitude greater than the human race is at our fingertips. Something orders of magnitude smarter than us in the way Barrat describes, coupled with other tools, could almost certainly end aging, whether or not what this implies makes him uncomfortable or reminds him of something that does. One notable thing about Barrat’s singularity is that it is more explicitly negative than Kurzweil’s is explicitly positive. If there’s any chance an AI is compatible with human values, it can bring us many good things; if there is no chance, I don’t know what the point of this book was.
This book is probably best viewed as a call to arms or a wake-up call for humanity. It is certainly not a battle plan for defense and has no thesis as to how best to protect ourselves. The book is well worth reading even if you aren’t into sober accounts of our impending doom and would rather read science fiction. It covers some interesting things I have not mentioned. The various AI companies, some operating in stealth mode, were an interesting topic. Also, the discussion of a quant firm’s AI, built to improve its financial models by creating an entire Matrix-like simulated reality in order to make better trades, was particularly interesting to me, because I think this universe is a simulation. But that is a subject for another time.
You might think you’ve seen it all in Silicon Valley, but unless you’ve come back from lunch to find a 45-year-old legendary VC performing the “rites of Venus” on a 23-year-old intern in the meeting room of your startup, you haven’t. Let me explain. I was hired as the fourth employee of a small messaging/dating app company. The aforementioned VC was an angel investor in the company, and he got me the job. The company was started by a young couple out of Stanford and a former Googler. Things were going relatively well; we were burning through our initial investment, but we had a few solid backers. That’s when the VC, let’s call him Jason, called me up and told me he wasn’t going to invest further in the company. I begged him to tell me why, but he wouldn’t. Later I found out from the male co-founder, let’s call him Ken. His girlfriend had slept with the third co-founder. He told me that he couldn’t even look at them and that he couldn’t help but be jealous (it’s human nature), so the startup was finished. That last part stuck in my mind. I had just finished reading a book called “Sex at Dawn” that called into question the idea of male jealousy as an inevitable consequence of evolution.
I tracked down Jason at a bar I knew he frequents and told him that all I asked was that he come on the weekend retreat we had planned. I thought if I could just get everyone together, there was a chance I might be able to save the startup. I put together a deck and gave the best presentation I’ve ever done. For 45 minutes I went on about anthropological examples, biological examples, cross-cultural experience, the Mosuo, partible paternity, social experiments, risk taking, and Plato’s Republic. I was fighting not just for the startup but for a whole new way of looking at social relationships. Here is the beautiful part: because everyone assumed the startup was done, I had everyone together, and we spent the next 8 hours discussing my presentation.
Six months later, the startup is stronger than ever. I don’t think I can fully explain what we’ve created if you haven’t read Sex at Dawn; let’s just say we’ve expanded on what Google did in not wanting people to leave work to get food. You’ve heard of work spouses? We have become an extremely tightly knit group. Jason, who used to come by rarely, now stops by at least twice a week. His mentorship is invaluable. We have just hired our 9th, 10th, and 11th employees; let’s call them Gretchen, Melissa, and Iyelllen. We still aren’t profitable, but I’m more confident than ever that we will be. At the very least, we are no longer hemorrhaging users. I don’t think this could work for everyone; we had a young, open-minded group of people, and it was like the perfect storm. I have deliberately hidden the name of the company and its members, so I hope you all will respect the anonymity the internet provides, but I do plan to write a book about the experience some day if the startup fails and there is interest.
This is one of the best books on the singularity I have ever read. Set in a fictional company that is very clearly intended to represent Google, it takes the reader on a plausible, fascinating, and entertaining trip. It reminded me of a talk by Jaan Tallinn in that it encourages looking at the world from the perspective of data centers.
Essentially, it presents a highly plausible way in which AI could arise. The open-minded perspective the author brings is reminiscent of Isaac Asimov.
Hats off to William Hertling on a fantastic first novel.
At $2.99 for the kindle version it might be the best money you ever spent. Cheaper than a cup of coffee which I’m sure the author would appreciate…
These two works of fiction go further than any others I have read in imagining what a post-singularity future would be like. They are both also available for free online. I have criticisms of both books but I wholeheartedly recommend them both.
Metamorphosis of Prime Intellect depicts a hard take-off scenario in which a strong AI emerges in the near future. Accelerando depicts a soft take-off with less emphasis on strong AIs; instead, it takes the reader through a future in which change happens gradually, albeit quickly and exponentially, and themes of mind uploading and augmented human intelligence play a more prominent role.
Thematically, Metamorphosis of Prime Intellect focuses, to a fault in my opinion, on the hypothetical problem of what it would be like if we had everything. Kurzweil hinted at this theme in The Age of Spiritual Machines, recounting an episode of The Twilight Zone in which a man dies and goes to heaven, which turns out to be a casino where he always wins. Eventually bored with always winning, the man asks to go “to the other place” and is told that he is in “the other place.” Personally, I do not think we define our lives by pain and problems. We can create new games and new art, have new experiences, and will probably never run out of things to learn. I think we are better off without real problems; in a world where, for instance, there is no war, we could still play war video games if we are so inclined. I think imagining that having everything would be bad is far off base. Nevertheless, this book is very interesting and creates a plausible-seeming future world that is definitely worth reading.
Note that this story has extreme sex and violence that might offend some readers.
Here is a link to a free full-text version of The Metamorphosis of Prime Intellect.
Accelerando stretches the limits of human imagination and to my mind embodies the theme that we have no idea what to expect post-singularity. I especially enjoyed the early parts of this book and overall found it very enjoyable. As the novel progresses it gets more incomprehensible but I suppose that might be the point.
Here is a link to the free full text of Accelerando. (I didn’t know it was free and purchased it on kindle myself)
The “singularity,” as the analogy is sometimes formulated, is supposed to be a point beyond which we cannot see. Maybe such a point exists, but I still find it fascinating to speculate about what life will be like just before the singularity. It’s hard, and maybe even impossible, to make predictions beyond a certain point, but I see no harm in trying. Anyway, I should add that neither book claims to be predicting anything, and even predicting the future before the singularity occurs is inherently very hard.
I posted this online because you do not have your email available (at least from what I saw, you’re only reachable by social media).
I write to express my disappointment with the way you portrayed the concept of price gouging on This Week in Startups.
“Price gouging” is often maligned but it is a good thing for the following two reasons:
1. It sends a signal to service providers to provide goods where they are needed.
2. It causes consumers to ration goods, and thus allocates resources more effectively at times when this is of utmost importance.
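The two mechanisms above can be sketched with a stylized linear supply-and-demand model. The numbers here are entirely my own illustration, not from any real market: the point is only that a capped price leaves a shortage, while the higher market-clearing price both draws in supply and rations demand.

```python
# Stylized linear market (illustrative parameters only):
# quantity demanded falls as price rises; quantity supplied rises with price.

def market(price, demand_intercept=100, demand_slope=2, supply_slope=3):
    """Return (quantity demanded, quantity supplied) at a given price."""
    demanded = max(demand_intercept - demand_slope * price, 0)
    supplied = supply_slope * price
    return demanded, supplied

# At a capped pre-crisis price of 10, demand outstrips supply: a 50-unit shortage.
print(market(10))  # (80, 30)
# At the market-clearing "gouging" price of 20, the shortage disappears.
print(market(20))  # (60, 60)
```

In this toy model the unpopular higher price is precisely what eliminates the empty shelves.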
I understand that some people have their sensibilities offended by pricing going up in times of crisis. However, not everyone shares this emotional reaction. I do not wish to question the legitimacy of an emotional response but I hope that you understand that by perpetuating an emotional response to this you make crises worse in the future for people like myself who live in New York.
Economists broadly agree that anti-price-gouging laws and anti-price-gouging sentiment make crises worse. The inconsistency between the way many people feel and the reality of the situation has been addressed by both NPR and Slate.
Sadly, I think those who criticize price gouging are like advocates of the death penalty: they feel emotionally justified but are on the wrong side of history. In any event, all the best.
This book is primarily a biography of Zuckerberg and the company he founded. To a lesser extent, it explores the social implications of the company and tries to draw general business lessons from its culture. In this way it is different from What Would Google Do. This book is less about the changing technological landscape and less about the singularity, so I won’t spend a ton of time on it, but I would recommend it as an interesting book about one of the great business success stories of our time. It paints Zuckerberg in a more favorable and probably more honest light than the doubtless more widely seen movie “The Social Network.”
Without a doubt Neal Stephenson produced a masterpiece with this novel. So many interesting concepts ranging from religion, psychology, politics, technology and many more are explored and the novel is extremely well written.
Most famously, this book introduced the idea of the “metaverse,” an entire world in software that rivals the real world in realism. The closest real-world analog we have today is Second Life. Second Life lacks realism today, but there is plenty of reason to believe we are heading in that direction as our technology improves exponentially.
In the past few years, voice recognition has gotten a lot better; this has sort of snuck up on us without much media attention. As we build toward more metaverse-like technology, the same thing might happen. Teleconferencing services like GoToMeeting and toys like the Kinect pull the user into a virtual environment. They are hints of things to come.
This book also deals with what is now called transhumanism, although the term isn’t used. The way he explores this topic is imaginative and compelling. Among the great fiction books reviewed on this site, this one may be the best.
I stumbled on this book while looking for info on the new Google Glasses on YouTube. I enjoy this type of science fiction. Like Rainbows End, it deals with technology that is not far in the future but should be available in less than 5 years. This might be the first time in history that we can read interesting science fiction about technology so close at hand. The people in this video loved the book; if you click the video, you will see them raving about it. The book was only $1 and short, which was very cool, and it was not a bad read. Its central technological insight seemed to be the superiority of contact lenses over glasses, which is kind of obvious but interesting to see explored in fiction.
The book made me laugh out loud at times because I’d seen the video. I kept thinking that the main character, Stewart, was a stand-in for the author, and that the main character’s significant other, Kimi, was a stand-in for the author’s significant other, Margie. There’s a whole motif of violence against women and general resentment toward society running through Plenzes pretty hardcore, and I couldn’t help but think it must have been a little awkward for the author’s significant other to read it. Anyway, if you like your fiction bite-sized, click the Amazon wheel somewhere on the page, spend $1 on this, and I’ll earn a fraction of a penny or something.