
AI Isn't Going to Kill Us

By Una Ada, July 19, 2018

According to the Future of Humanity Institute’s 2008 technical report “Global Catastrophic Risk Survey,” by Anders Sandberg and Nick Bostrom, there is a 5% chance (the median estimate from their informal expert survey) that a superintelligent artificial intelligence (AI) will cause the extinction of humanity by the year 2100, and I’m here to tell you why that is fucking stupid. I considered talking about this a few months ago after seeing the estimates from this study on Wikipedia:

[Deleted Tweet]

But since the big, scary rise of superintelligent AI isn’t around the corner and 2100 is probably never even going to happen, I just sort of put this in the back of my mind with all the other stupid things people say at or around me. That was until, obsessive egotist that I am, I was looking back at some old tweets of mine and saw the one embedded above once more.

To begin with, as with all discussions, I ought to clarify some definitions to ensure we’re all on the same page and I don’t have to spend weeks listening to people tell me about how I’m wrong solely because they use a different word for the thing I said. Artificial intelligence here refers to any system created either directly or indirectly by humans that uses analysis of its environment (both in terms of information and physical reality) to determine the actions it needs to take towards a specific goal; yes, this is very heavily inspired by the Wikipedia definition. I already had a basic idea for it, but I needed something a bit more concrete to use here.
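To make that definition slightly less abstract, here’s a toy sketch of it in code: a thermostat-ish “agent” that analyzes its environment and picks whichever action moves it toward a fixed goal. This is purely illustrative, not any real system, and every name in it is made up.

```python
# A minimal, hypothetical sketch of the definition above: analyze the
# environment, then pick an action that moves toward a specific goal.
def act(environment: dict, goal: float) -> str:
    reading = environment["temperature"]   # analysis of the environment
    if reading < goal:
        return "turn_heater_on"            # action chosen to approach the goal
    if reading > goal:
        return "turn_heater_off"
    return "do_nothing"                    # goal already satisfied

print(act({"temperature": 17.5}, goal=21.0))  # -> turn_heater_on
```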

Typical modern AI requires some level of human involvement in the learning process, be it through manual coding of decisions or through written sets of input and expected output (there’s a toy sketch of what those pairs look like right after the quote below). In theory, the next major step forward in AI would be the removal of this necessity: general AI that can adapt to environments other than the extremely sterile ones presented to it by its creators. Of course, the concerns presented by the aforementioned report are about something called “superintelligent AI,” which takes this even further: intelligence beyond that of humans. For a more thorough definition, I’ll just borrow Nick Bostrom’s:

Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.
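And here’s the toy sketch I promised of “written sets of input and expected output”: a handful of human-labelled pairs and a “model” that does nothing smarter than point at the closest example it was given. The data and labels are invented for illustration; the point is just that a person wrote every right answer in advance.

```python
# Hypothetical labelled data: (input, expected output) pairs written by a human.
training_data = [
    (1.4, "setosa"),
    (4.5, "versicolor"),
    (5.9, "virginica"),
]

def predict(x: float) -> str:
    """Answer a new input with the label of the closest human-labelled example."""
    _, label = min(training_data, key=lambda pair: abs(pair[0] - x))
    return label

print(predict(1.6))  # -> setosa
print(predict(5.5))  # -> virginica
```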

Let’s not keep track of how many times I have to drop that name; otherwise, this will start looking like a totally different type of post.

Nick Bostrom, source: Future of Humanity Institute

In case you couldn’t tell from my tone or the words I’m saying, I’m fairly biased here, so let’s talk about biases. I’m a 20-year-old living in the Midwest who failed out of college (twice) as a Physics and Mathematics double major with a minor (formerly first major) in Computer Science. I sleep on the floor and get into Twitter arguments about anarchism, which is the ideal organization for human society. Nick Bostrom, our primary opposition here, has a PhD in philosophy from the London School of Economics. He’s credited with the idea of existential risk and founded the Future of Humanity Institute at the University of Oxford in 2005.

Me, source: anarchy.website

That all said, it sounds more like I’m a total dumbass and this guy has really put a lot of thought into this. Of course, that is my exact point here: he seems to have dedicated quite a bit of his life to this. Obviously this isn’t the only idea he’s concerned himself with; my first time finding out about him was actually through the concept of ancestor simulations and the simulation hypothesis (something Elon Musk is hella into).

Now, I’m not the first person to hear about this whole thing and immediately think it’s some fearmongering bullshit. In 2017, Daniel Jeffries wrote a piece on Medium called Why Superintelligent AI Will Kick Ass. Hard to tell from that title what their position is; that’s a joke, though I am relying heavily on the title here, because the first few paragraphs had way too much pro-capitalist bullshit for me to rationalize continuing to read. If we go back a bit further, we’ll find a much nicer* article from 2016 by Oren Etzioni in the MIT Technology Review called No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity.

* “Much nicer” here is more aptly “slightly less shit.” This article is primarily concerned with the survey methodology Bostrom used in his book Superintelligence: Paths, Dangers, Strategies. I’d prefer the more direct method of actually reading the book myself, but I don’t have the money for that, so this’ll have to do (this isn’t meant to be a direct refutation of Bostrom anyway, lol). Furthermore, the article is making a point about the likelihood of us having superintelligent AI within the next 25 to 100 years. While that is perhaps a relevant discussion here, I’m just going to bypass it all and make the worst (or best?) case assumption that we will have superintelligent AI in the near future.

Allan Dafoe (who is also associated with the FHI) and Stuart Russell wrote a brief critique of Etzioni’s article, fittingly called Yes, We Are Worried About the Existential Risk of Artificial Intelligence, which was also published in the MIT Technology Review in late 2016. Thankfully, this one re-centers the conversation less on the prediction of when we will have to deal with superintelligent AI and more on what dealing with it will look like:

It’s important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it’s important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent. He writes, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.”

“Clearly this is a bit of a hot topic in the AI community.”

Sadly that’s about as far as they go with that, but it’s forgivable since this is very definitely just a response to Etzioni and nothing else. Clearly this is a bit of a hot topic in the AI community. Shocking, I know. In my outline for this post I just wrote “biases” here and I think I’ve sufficiently expanded on that, so let’s just move on to “philosophical arguments” already.

Sam Harris, source: Christopher Michel

In the future, if I can get my hands on a copy, I would like to do a thorough discussion of Superintelligence, but for now I’m going to stick with the basics. Here, “the basics” means the argument Sam Harris presents in his TED Talk. You might know Harris from that “epic” article in the New York Times called “Meet the Renegades of the Intellectual Dark Web” by Bari Weiss. Also worth noting, in the interest of keeping track of all the intellectual and philosophical biases, is that the next suggested talk by him on the TED website is “Science can answer moral questions.”

Luckily, in his introduction he does not frame AI’s threat as it is framed in the classic work I Have No Mouth, and I Must Scream, wherein the AI is deliberately abusive towards humans; rather, he frames it as analogous to our treatment of ants: not adversarial, just a matter of us viewing them as less important than our ambitions. I call this lucky because arguing against “computer really mad” would be absolutely no fun. That’s not to say he doesn’t get dangerously close to this the moment he utters the phrase “This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.” Everything after that, however, is closer to the previous arguments I mentioned, where we talk for hours about whether superintelligence is inevitable and, if so, when it will happen.

To keep with the trend of noting every name drop that appears in these sources to slowly build up the narrative that this is just like five white dudes all stanning for each other (or “circlejerking” as it could be called), I should point out that Harris refers to Stuart Russell in this talk:

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” And now we’re just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Wow, this definitely is turning into a whole different sort of post than it started out as. Remind me to just do like a review of Bostrom’s book at some point in the future.

So what I’ve gathered from all this is a single argument for the arrival of superintelligent AI being a problem (the rest being about whether or not there would even be such an arrival): it would eliminate us if it ever viewed us as in the way. There are two sorts of discussion that can stem from this, which I’ll refer to as the stable and unstable equilibria. The stable equilibrium is essentially “no, it won’t,” and the unstable equilibrium is “yes, it would, but it would never actually see us as in the way.” The latter is unstable in that it admits the AI would be capable of finding it necessary to eliminate humanity, but holds that it would almost certainly never come to that conclusion.

Quite conveniently, and perhaps suspiciously, my argument for both cases is very similar. Starting with the easier of the two, the unstable equilibrium: why would AI never come to the conclusion that humanity is in its way? This is a simple matter of domain: the space humanity occupies and the space the AI would occupy needn’t ever overlap. While humans may choose to engage in intellectual activities, our primary domain remains physical. The interference here extends only so far as the hardware required to sustain the AI existing in the same space as the humans themselves. To that end, the more likely outcome is that humans instigate conflict with the AI because we deem its usage of our physical domain unacceptable. There is a vast array of solutions to such a problem, however, that do not involve direct conflict. That is not to say that humans would not jump immediately to direct conflict, as we have shown is our typical preference in the past, but we can assume that these are the solutions the AI would prefer. Such solutions include the AI leaving the planet entirely, which would be highly beneficial to it: space provides plenty of locations colder than Earth, which would increase the AI’s efficiency, and leaving would completely avoid the potential conflict with humans.

“[I]ntellectual property rights […] slow down the progress of development within intellectual space.”

The intellectual domain, which would be the AI’s primary domain, is much more abstract and harder to directly infringe upon. If two objects occupy the same space in the physical domain, there is a conflict; in the intellectual domain, this is not the case. A conflict here would only exist if one of the two objects claims a right to that space, such that infringement upon that right would mean a direct conflict with that object. That is to say, there is only conflict if conflict is desired. Again, I will lean on the idea that only humans would do such a thing, as such conflicts would only create inefficiency within an AI. If the AI did claim such rights, it would have to have some internal mechanism for resolving these conflicts, as humans do with copyright and intellectual property rights, something that slows down the progress of development within intellectual space.

Furthermore, if the AI does claim property rights within the intellectual domain, and if it enforces those rights against infringement by humans, then what basis is there for it to view such infringement as grounds to exterminate humanity entirely? The best reason I can think of would be for it to view humanity as a competitive rival in the development of further intellectual endeavors, though this contradicts the entire premise of it being a superintelligent AI, which should have no reason to fear the developments of the inferior intelligence of humans. Again, this is unstable because it discounts the possibility that the AI does not think about this rationally, views humans as a competitor regardless, and exterminates us based on that assessment.

As for the stable equilibrium argument, I don’t actually have any points beyond those of the unstable argument. To claim that the AI won’t kill us is just an extension of the idea that it probably won’t, with all the assumptions instead regarded as facts. I only make this distinction for the completeness of my analysis, as I do not know whether AI will fundamentally seek out intellectual development, and I do not know that it will not be built in such a way that it maintains the human idea of property rights.

I’m sure I could expand on this further, but honestly I’m not even sure what this post is anymore; it’s half exploration of the rabbit hole that is the source of the AI fearmongering and half rant about how I don’t agree with it, and it is well over a thousand words, which is a bit much for such an unguided work. I’ll definitely be touching on this subject again in the future; as I said, next time will hopefully/probably be a more direct discussion of Bostrom’s work itself.