“People have shown they are quite willing to give up privacy for convenience”

This is often said, for example in the clip below. The video itself is by conspiracy theorists, who actually mention chemtrails, but most of the discussion is of Elon Musk’s proposed neural lace.

I do not think people have shown a willingness to give up privacy for convenience. I think people care quite a bit about privacy; that is why they put up with the enormous inconvenience of typing with their thumbs when speaking commands into the phone would be much more convenient. What people have shown is a willingness to give up a theoretical, intangible sort of privacy for an incredible convenience. That is what the guy in the video is talking about, as is everyone who repeats this line. Google gives you access to the world’s knowledge, and email lets you instantly reach anyone anywhere. We know that in theory the NSA could read everything we type into a search engine or email, unless we use Tor or PGP, which few do. But this invasion is mostly theoretical and intangible. We don’t get calls from the FBI saying, “You typed that in? You weirdo.” We know that government agencies don’t have time to look at all the data they collect, and we don’t expect anyone will ever read our messages unless we type something truly terrible, which most people have no interest in doing.

And if you did care, on principle alone, to stop this theoretical, intangible privacy invasion, you would have to make an enormous sacrifice. You would literally be reduced to sending letters and going to the library in order to communicate and access the world’s knowledge. You could use PGP and Tor, but by the very act of doing so you would shine a light on yourself, likely still lose your privacy, and possibly be subjected to an invasion of privacy that is no longer theoretical.

One final point: AI will eventually allow for a non-theoretical invasion of privacy. A machine like IBM’s Watson will one day be able to read all our emails and search queries with human-level reading comprehension.

Is there an uncanny valley for intelligence?

Wikipedia defines the uncanny valley as “the hypothesis that human replicas that appear almost but not exactly like real human beings elicit feelings of eeriness and revulsion among some observers.” https://en.wikipedia.org/wiki/Uncanny_valley So when it comes to how we judge human appearance, if one thing isn’t quite right, it spoils the whole picture.

Watching the clip below from Ben Goertzel made me wonder if there might be an uncanny valley for intelligence. If there is one thing a human can’t do, he or she is considered stupid. If he or she excels at only one thing, we call him or her an “idiot savant.” Perhaps this is one reason the current progress in AI is underestimated. If so, it will continue to be underestimated right up until machines exceed our abilities in everything.

I really don’t get Max Tegmark’s argument against the simulation hypothesis

This post refers to the video clip below. His problem with the simulation hypothesis seems to be that if you accept that you are in a simulation, you can make the same argument that you are in a simulation of a simulation, and so on, which he treats as a reductio ad absurdum because the regress could go on forever. This doesn’t seem right to me at all. You could have simulations within simulations, but wouldn’t the computational resources required expand geometrically with each level? You could argue that you are likely in a simulation of a simulation, but it would be orders of magnitude less likely that you are in a simulation of a simulation of a simulation, simply because simulating a universe is not computationally free, even if a real universe appears to have the resources to simulate a vast number of them.
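To make the geometric-expansion point concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the total compute available at each nesting depth is some fixed fraction of the compute at the depth above it (the `reinvest_fraction` parameter is my own made-up number, not anything from Tegmark’s talk). Under that assumption the share of simulated observers at each depth falls off geometrically.

```python
# Toy model (my own assumption, not Tegmark's): every simulation runs inside
# its host, so the total compute available to all universes at nesting depth
# k+1 can be at most a fixed fraction of the compute at depth k. If simulated
# observers cost roughly the same per unit of compute at every level, the
# share of simulated observers at each depth shrinks geometrically.

def share_of_simulated_observers(reinvest_fraction=0.01, max_depth=5):
    """Relative share of simulated observers at nesting depths 1..max_depth."""
    capacities = [reinvest_fraction ** depth for depth in range(1, max_depth + 1)]
    total = sum(capacities)
    return {depth: cap / total for depth, cap in enumerate(capacities, start=1)}

if __name__ == "__main__":
    for depth, share in share_of_simulated_observers().items():
        print(f"nesting depth {depth}: {share:.8f}")
```

With a 1% reinvestment fraction, essentially all simulated observers sit one level deep; being two or more levels down is orders of magnitude less likely, which is the point made above.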

He also argues that if we’re in a simulation we don’t know the real laws of physics. This is correct, but if you start from the position that we are in a simulation, you have already conceded the point. As I see it there are two options:

1. We could be in a real universe that is capable of simulating trillions of universes.

2. We could be in a simulated universe. In this case Tegmark is correct that we don’t know the real laws of physics. The “real universe” could be, for example, tiny and only capable of simulating one other universe. However, you cannot argue that this is unlikely to be a simulation on the grounds that the odds might be more even in the real universe without already conceding that we are in a simulation.

As I see it the logic goes: under option 1 we are likely simulated; under option 2 we are simulated by definition. The only way we could not know the laws of physics is if this is a simulation, and it’s bizarre to use that premise to somehow refute the simulation hypothesis.

Richard Stallman always makes me laugh for some reason

Don’t get me wrong. I think he’s done great things for the world, especially with the great success of GNU/Linux. I also think the four freedoms he always talks about are a laudable aspiration. But for some reason listening to him always makes me laugh out loud. For example, in the clip below:

Bitcoin is Software Enforced Inter-Subjectivity

In his excellent book, Sapiens: A Brief History of Humankind (“Sapiens”), Yuval Noah Harari lays out his theory of the human species. This is no small topic. His focus is on fictional ideas: he attributes human civilization chiefly to legal and religious fictions, as described in this excerpt, which summarizes the main thesis of the book well:

How did Homo sapiens manage to cross this critical threshold, eventually founding cities comprising tens of thousands of inhabitants and empires ruling hundreds of millions? The secret was probably the appearance of fiction. Large numbers of strangers can cooperate successfully by believing in common myths.

Any large-scale human cooperation – whether a modern state, a medieval church, an ancient city or an archaic tribe – is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe that God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights – and the money paid out in fees.

Yet none of these things exists outside the stories that people invent and tell one another. There are no gods in the universe, no nations, no money, no human rights, no laws, and no justice outside the common imagination of human beings.

People easily understand that ‘primitives’ cement their social order by believing in ghosts and spirits, and gathering each full moon to dance together around the campfire. What we fail to appreciate is that our modern institutions function on exactly the same basis. Take for example the world of business corporations. Modern business-people and lawyers are, in fact, powerful sorcerers. The principal difference between them and tribal shamans is that modern lawyers tell far stranger tales.

Harari, Yuval Noah (2015-02-10). Sapiens: A Brief History of Humankind (pp. 27-28). HarperCollins. Kindle Edition.

Further, Harari discusses the concept of “inter-subjectivity” as described in this excerpt:

In order to understand this, we need to understand the difference between ‘objective’, ‘subjective’, and ‘inter-subjective’.

An objective phenomenon exists independently of human consciousness and human beliefs. Radioactivity, for example, is not a myth. Radioactive emissions occurred long before people discovered them, and they are dangerous even when people do not believe in them. Marie Curie, one of the discoverers of radioactivity, did not know, during her long years of studying radioactive materials, that they could harm her body. While she did not believe that radioactivity could kill her, she nevertheless died of aplastic anaemia, a disease caused by overexposure to radioactive materials.

The subjective is something that exists depending on the consciousness and beliefs of a single individual. It disappears or changes if that particular individual changes his or her beliefs. Many a child believes in the existence of an imaginary friend who is invisible and inaudible to the rest of the world. The imaginary friend exists solely in the child’s subjective consciousness, and when the child grows up and ceases to believe in it, the imaginary friend fades away.

The inter-subjective is something that exists within the communication network linking the subjective consciousness of many individuals. If a single individual changes his or her beliefs, or even dies, it is of little importance.

***

Similarly, the dollar, human rights and the United States of America exist in the shared imagination of billions, and no single individual can threaten their existence. If I alone were to stop believing in the dollar, in human rights, or in the United States, it wouldn’t much matter.

Harari, Yuval Noah (2015-02-10). Sapiens: A Brief History of Humankind (p. 118). HarperCollins. Kindle Edition.

In Sapiens, money is discussed as a fiction. A dollar isn’t really worth anything, at least not since dollars stopped being backed by gold. However, because everyone behaves as if they have value, they do. If everyone were to stop believing in them, the value would disappear, and many fiat currencies have indeed lost all value, usually through hyperinflation. This is less true of gold, which will, for the near-term foreseeable future, retain its value as jewelry.

What is interesting about Bitcoin is that, on the one hand, it only has value to the extent that people believe it does. It could become wildly popular, or it could break due to a software flaw, be replaced by a competitor, or simply be forgotten. On the other hand, the software itself also solves the Byzantine generals problem with a sort of software-enforced inter-subjectivity: the shared record of who owns what is whatever a majority of hashing power says it is. This could be accomplished in a more decentralized manner without mining, but the proof of work is essentially what protects the network from a Sybil attack.

Intersubjective ideas have changed in the past. Do we believe in many gods or one? Are the southern states their own country or still part of the United States? Which money will we use? In the past, bloody wars were fought over these fictions. In an increasingly dematerialized world, and as Bitcoin 2.0 concepts such as smart contracts greatly expand the possibilities, one could imagine a world in which a state bloodlessly forms due to a simple change in hashing power.
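As a rough illustration of why proof of work blunts Sybil attacks, here is a minimal sketch of hash-based mining in Python (my own toy example, not Bitcoin’s actual implementation, and the difficulty value is arbitrary): creating an identity is free, but producing a block the rest of the network will accept requires finding a hash below a target, which costs real computation no matter how many identities an attacker invents.

```python
import hashlib
import json
import time

DIFFICULTY = 4  # number of leading zero hex digits required (toy value)

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, transactions: list) -> dict:
    """Brute-force a nonce until the block hash meets the difficulty target."""
    block = {"prev": prev_hash, "txs": transactions, "time": time.time(), "nonce": 0}
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block

if __name__ == "__main__":
    genesis = {"prev": None, "txs": [], "time": 0, "nonce": 0}
    block = mine(block_hash(genesis), ["alice pays bob 1 coin"])
    print("mined block with nonce", block["nonce"], "and hash", block_hash(block))
```

Because acceptance depends on accumulated work rather than a head count of nodes, spinning up a million fake identities buys an attacker nothing; what matters is the share of hashing power, which is the sense in which the consensus is “software enforced.”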

In Sapiens, Harari also discusses the history of writing, which began with accounting: empires needed to track more data than any human brain could remember in an increasingly complex society. Harari describes the evolution from partial script to full script and, importantly, to formal mathematics. He describes modern mathematics with Arabic numerals as “… the world’s dominant language.” Harari, Yuval Noah (2015-02-10). Sapiens: A Brief History of Humankind (p. 130). HarperCollins. Kindle Edition. He describes the eventual development of binary and computer science as a still further development of language. Bitcoin represents both a new, higher-level computer-science language and a new way of digitally enforcing intersubjective fictions. It truly stands to be transformational.

John Smart, 2013: exponential increase in search query length

Fascinating theories. Is the length of search queries still showing signs of exponential growth, empirically speaking?

Here is some raw data that could be analyzed to see whether it’s truly exponential or just increasing linearly.

Disclaimer: the analysis below is extremely rough, and I recommend looking at the raw data if you need this for anything important. Also, I can’t vouch for the raw data’s accuracy; it’s just information I found randomly on the web.

This is a back-of-the-envelope analysis using only the USA data for January of each year, except where there was no January data, in which case I used February. It seems to conflict with other data; for example, this site lists the average query length to Google as 4.29 words as of 2012. I treated 10+ word queries as 10-word queries. Note also that the Y axis is truncated. For some reason this data seems to show a drop-off in 2013, aside from which the average query length seems to be growing steadily. For many searches, of course, we can find precisely what we’re looking for with one or two words, so we should certainly expect a level-off at some point. And one of Google’s latest innovations is to allow a person to ask follow-up questions such as “Who is the current U.S. President?”, followed by “How old is he?” This could shorten query lengths slightly. In conversation with other humans we often ask one-word follow-up questions such as “Really?” or “Right?”, which we would not currently ask of Google.
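For anyone who wants to redo this, here is a rough sketch of the calculation I mean (the percentages below are made-up placeholders, not the actual raw data): compute a weighted average query length per year from the bucketed distribution, treating the 10+ bucket as 10 words, then compare a linear fit against an exponential (log-linear) fit of the yearly averages.

```python
import math

# Hypothetical bucketed data: {year: {words_in_query: percent_of_queries}}.
# Replace with the real distributions; the key 10 stands in for the 10+ bucket.
DATA = {
    2008: {1: 25, 2: 30, 3: 20, 4: 12, 5: 7, 6: 3, 7: 1.5, 8: 0.8, 9: 0.4, 10: 0.3},
    2010: {1: 22, 2: 29, 3: 21, 4: 13, 5: 8, 6: 3.5, 7: 1.8, 8: 0.9, 9: 0.5, 10: 0.3},
    2012: {1: 20, 2: 28, 3: 22, 4: 14, 5: 8, 6: 4, 7: 2, 8: 1, 9: 0.6, 10: 0.4},
}

def average_length(distribution):
    """Weighted average query length from a {word_count: percent} distribution."""
    total = sum(distribution.values())
    return sum(words * pct for words, pct in distribution.items()) / total

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years = sorted(DATA)
averages = [average_length(DATA[y]) for y in years]
lin_slope, _ = fit_line(years, averages)                         # words added per year
exp_slope, _ = fit_line(years, [math.log(a) for a in averages])  # log-linear growth rate

print(dict(zip(years, [round(a, 2) for a in averages])))
print(f"linear: +{lin_slope:.3f} words/year; exponential: {100 * exp_slope:.2f}%/year")
```

With real year-by-year distributions, whichever fit leaves the smaller residuals would suggest whether the growth looks closer to linear or exponential.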

On a related note, a recent study found that the “optimum length for an email is 50 to 125 words.” If we had true AI, we might make a 50-to-125-word request, or series of requests, of Google, fully explaining what we are looking for as we might do with a friend or colleague.

I would be curious as to anyone’s opinion on whether search length really dropped off in 2013 and, if so, why. There are a lot of intersecting factors. Google has no true competitors, is constantly being “gamed,” and is constantly adjusting its algorithm. The data or my analysis could be wrong or affected by random variance. The rise of interconnectivity and the greater ease of reaching another human could also be a factor. For example, we might now send an email knowing it will get a quick response, where in the past we would have spent time constructing a complex search. We might also put more complicated questions to other humans on interest-based social platforms such as Reddit or Twitter; if you have a 10+ word question on programming, you’re probably better off emailing a friend or posting on Stack Overflow. However, if you look at the raw data, even 10+ word searches are growing, and currently most searches cluster at 4 words or fewer. We have a long way to go before we are talking conversationally with our computers.

At the end of the above clip, John Smart noted that the average length of a human-to-human question is 11 to 14 words, and he stated in the video that he thought search queries would reach this point by 2019. He also noted that by then every child would have a cell phone, because they would be dirt cheap. That latter prediction definitely looks like it will be correct.

[Chart: average words per search, by year]

EDIT:
The quote below from his essay seems to disagree with my analysis. Emphasis added.

Predicting the CI Emergence

When can we expect the CI’s emergence? In March 2005 Google’s director of search Peter Norvig noted that their average query is now about 2.5 words per query, by comparison to 1.3 on Alta Vista in its heyday, circa 1998. In subsequent email conversation with him he has told me that the actual number is “closer to 2.6 or 2.7.” This is an initial doubling time of only seven years, if this is a quasiexponential function.

It appears that the growth of the CI as a complex adaptive technological system is in the early phase of an S-curve, well before the inflection point, and thus its growth will continue to look exponential for some time to come.

[2008 Note: Average query length to Google now exceeds 4 words, apparently just this month. This is more early evidence that this phase of search query length growth will remain exponential up to the inflection point.]

In my opinion this average search query length, averaged across all the leading search engines of the day (Google, Yahoo!, Bing, etc.) will be one of the key numbers to watch to gauge the growing effectiveness of statistical natural language processing (statistical NLP) in creating a conversational front end for the internet and all our other complex technologies in the 21st century.
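Just to sanity-check the doubling-time figure in the quote above, here is the arithmetic as a quick sketch, using the numbers Smart cites (roughly 1.3 words per query on AltaVista circa 1998 and about 2.6 on Google in 2005): the implied doubling time is elapsed years × ln 2 / ln(end ÷ start).

```python
import math

def doubling_time(years_elapsed, start_value, end_value):
    """Doubling time implied by exponential growth from start_value to end_value."""
    return years_elapsed * math.log(2) / math.log(end_value / start_value)

# ~1.3 words/query in 1998 vs ~2.6 words/query in 2005, per the quote above
print(doubling_time(2005 - 1998, 1.3, 2.6))  # ≈ 7 years
```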

Is Microsoft’s new Hotmail trying to hide email addresses, or is it just horrible?

The new Hotmail makes it nearly impossible to cut and paste an email address from an email received at a Hotmail address. Perhaps it’s just horribly designed, or perhaps it’s an attempt to keep people on the platform. To anyone considering migrating off that site: it is worth the effort. It strikes me as anticompetitive, or at the least not in the decentralized spirit of email, to hide addresses as the new Hotmail does so well.

Question for media members obsessed with Theranos

What concerns you most?

1. VCs losing money that they can afford to lose? Accredited investors who are certainly big boys and can take care of themselves?

2. Elizabeth Holmes getting attention she doesn’t deserve? (who cares)

3. People getting ineffective blood tests for a short while? In no way could their ineffectiveness be kept a secret for long.

4. You think medicine is too lightly regulated?

Take a look at the clip below and realize that biology is hard. If we make a gigantic deal out of every failure, we’ll get more heavy-handed regulation and drive away the people who actually can solve health problems.

Notes on: Learning Machines 101, Episode 10

This podcast series is an excellent overview of machine learning.

http://www.learningmachines101.com/2014/

  • Nature vs. nurture
  • Deterministic vs. probabilistic views of reality
  • Environments are not intrinsically probabilistic or non-probabilistic
  • The goal is to determine probability weights of events
  • Bayesian priors (the dog’s genetic wiring example)
  • Maximum likelihood estimation: the probabilistic law that makes the observed data most likely (see the sketch after this list)
  • Math estimation problem
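As a concrete illustration of the maximum-likelihood idea from the notes above, here is a minimal sketch (my own toy example, not from the podcast): estimating the bias of a coin. The maximum-likelihood estimate is just the observed frequency of heads, while adding a simple pseudo-count prior (an assumption chosen for illustration) pulls the estimate back toward a prior belief when data is scarce.

```python
def mle_heads_probability(flips):
    """Maximum likelihood estimate of P(heads): the parameter value that
    makes the observed sequence of flips most likely."""
    return sum(flips) / len(flips)

def smoothed_heads_probability(flips, prior_heads=1.0, prior_tails=1.0):
    """Bayesian-flavored estimate using pseudo-counts (Laplace smoothing):
    prior beliefs act as imaginary flips added to the observed data."""
    return (sum(flips) + prior_heads) / (len(flips) + prior_heads + prior_tails)

if __name__ == "__main__":
    flips = [1, 1, 1, 0, 1]  # 1 = heads, 0 = tails; only five observations
    print("MLE:", mle_heads_probability(flips))              # 0.8, trusts the data fully
    print("With prior:", smoothed_heads_probability(flips))  # ~0.714, shrunk toward 0.5
```

With only five flips the two estimates differ noticeably; as the number of observations grows, the prior’s influence fades and the two converge.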