Reasoning
Today, I will reason about reasoning. Lol. Let’s try that again… Today, I will try to reason you through the argument of why some of us have the tendency to reason wrongly about reasoning. Ugh. Let’s try this again… Today, I’m writing about reasoning. Simple.
First, a short excerpt from a conversation between Lex Fridman and Andrej Karpathy (“(…) computer scientist who served as the director of artificial intelligence and Autopilot Vision at Tesla. He specializes in deep learning and computer vision.”):
“Lex: Do you think neural networks can be made to reason?
Andrej: Yes.
Lex: Do you think they are already reasoning?
Andrej: Yes.
Lex: What’s the definition of reasoning?
Andrej: Information processing.
Lex: So in a way that humans think through a problem and come up with novel ideas… it feels like reasoning. So the novelty, out-of-distribution ideas, you think is possible?
Andrej: Yes and I think we are seeing this already in the current neural nets. You are able to remix the training set information into true generalization, in some sense. It doesn’t appear in a fundamental way in the training set. Like, you are doing something interesting algorithmically. You are manipulating you know some symbols and you are coming up with some correct unique answer in a new setting.
Lex: What would illustrate to you holy shit this thing is definitely thinking?
Andrej: To me thinking or reasoning is just information processing and generalization and I think the neural nets already do that today.
Lex: So be able to perceive the world, or whatever the inputs are and to make predictions or actions based on that, that’s reasoning.
Andrej: Yes you are giving correct answers in novel settings by manipulating information. You have learned the correct algorithm. (…)”
So let’s translate that into normal people language: This conversation is about the current state of artificial intelligence (AI). Here, they are talking about one specific area within AI, namely neural nets. Neural nets are what allow AI to think and reason. Andrej understands reasoning to be what he calls “information processing and generalization”, which essentially boils down to:
taking in information
processing and learning from that information
taking action in a novel setting based on the learning of that information
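The three steps above can be sketched in a few lines of code. This is a toy stand-in, not a real neural net: a model takes in example pairs, learns the hidden rule behind them by gradient descent, and then acts on an input it has never seen. The specific rule (y = 2x + 1) and the learning rate are illustrative choices of mine, not anything from the conversation.

```python
# A minimal sketch of "reasoning as information processing and
# generalization". Real neural nets are vastly larger, but the
# loop is the same shape.

# 1. Take in information: example pairs following a hidden rule (y = 2x + 1)
training_data = [(0, 1), (1, 3), (2, 5), (3, 7)]

# 2. Process and learn: fit a weight and bias by gradient descent
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in training_data:
        error = (w * x + b) - y
        w -= 0.01 * error * x   # nudge parameters to reduce the error
        b -= 0.01 * error

# 3. Act in a novel setting: predict for an x that never appeared in training
novel_x = 10
print(round(w * novel_x + b))  # close to 21, even though 10 was never seen
```

The point is the last line: the model gives a correct answer in a setting that, as Andrej puts it, “doesn’t appear in a fundamental way in the training set”.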
I believe that Andrej is 100% right. I can’t really comment on whether AI is already doing that, but I’m totally convinced that the skill of reasoning and thinking is exactly what he describes, and nothing more. And that’s where I see the flaw in how some of us humans think about reasoning.
We tend to believe that thinking/reasoning is a skill that makes us humans unique. We tend to believe that that’s what makes us humans, humans. We tend to believe that that’s our competitive advantage over other species and organisms in this world.
It is only a matter of time before AI can reason and think better than the most capable of us. By then, we will hopefully have realised that what we should value in our fellow human beings and in society as a whole is not brain capacity and the traditional definition of “intelligence”. Otherwise, some of us are headed for a rough awakening and some kind of weird “human identity crisis”.
Let me illustrate how we think, how AI thinks, and why AI is certainly gonna reason/think “better” than us.
Human reasoning: We have senses that allow us to take in all kinds of information from the outside world. The outside world that feeds into our senses is what we describe as experiences. Experiences are what we read, hear, see, feel, taste etc. The entirety of our experiences makes up our whole world. In other terms, it’s all the data making up the environment we exist in. This data is what we use to think, reason, reflect, judge, make decisions, form opinions and come up with ideas. But how do we do that? We do it by processing the data. We humans continuously have new experiences, take them in, process them and add pieces of them to what we call our memory. Once stored in memory, we can take pieces of it to make up our minds about whether we should vote for Biden or Trump, or whether we should have cornflakes for breakfast or a beef steak.
All of this inflow, processing and generalization of experiences to new settings and situations happens continuously and is the process of what we call reasoning or thinking. When boiling it down to first principles like I just did, it’s really not that complicated and sophisticated. So let’s see how an AI could be able to reason and think better than us humans.
AI reasoning: An AI doesn’t have senses in the same way we humans do, but as we saw previously, senses just have the task of allowing us to experience the world and collect data. An AI could do that in two ways:
Collect data from a large database, the internet for example. On the internet there is a near-infinite amount of text, images and videos. Since this data all comes from us humans in the physical world, based on ideas generated from our experiences, we could imagine it being enough data to teach an AI how we humans experience the world.
Collect data from the physical world by experiencing it in the same way as we humans do. This would consist of collecting data using computer vision (cameras), audio, touch, maybe even taste or smell. Basically, collecting data in a similar fashion to how we humans would.
Once it has collected the data, it is time to store it, process it and generalize from it. Memory is a big challenge for AI systems right now, especially long-term memory, but once this is solved the AI would be able to access all the data it has collected whenever it wants to. As we saw previously, neural nets do the task of processing and generalization. As you can see, reasoning is not hard to imagine for an AI and is actually quite a straightforward process.
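The collect → store → process loop described above can be sketched as a toy, too. The “observations” and the dictionary “memory” here are stand-ins I invented for illustration; real systems would use neural nets and far richer storage, but the shape of the pipeline is the same.

```python
# A toy sketch of an AI's collect -> store -> process loop.
# (Hard-coded observations stand in for the internet or sensors.)

# 1. Collect: gather observations from some data source
observations = ["the sky is blue", "the grass is green", "the sky is vast"]

# 2. Store: keep processed pieces of each observation in a memory structure
memory = {}
for obs in observations:
    subject = obs.split()[1]          # crude parsing: "sky", "grass"
    memory.setdefault(subject, []).append(obs)

# 3. Process: answer questions about a subject using stored experience
def recall(subject):
    return memory.get(subject, ["no experience of this yet"])

print(recall("sky"))    # both stored "sky" observations
print(recall("ocean"))  # a subject it has never encountered
```

Note the last line: with nothing stored about “ocean”, the system can only report that it has no experience of it, which mirrors the point that what an AI can conclude depends entirely on what data it was given.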
So before leaving you here all freaked out about AI, let me tell you why we won’t die even if AI can reason better than we humans do.
The actions of us humans do not depend only on our capacity to think and reason. In fact, there are other factors that come into play:
The experiences we have: As humans, what we experience defines our perception of the world we live in and therefore our ideas, thoughts, opinions, judgements etc. Different experiences mean a completely different input for our thinking and reasoning process, which means different generalizations and different conclusions. For AI, it’s the same limitation. Right now, we humans decide what data gets fed to the AI. As long as that’s the case, the AI cannot do whatever it wants and can’t reach undesired conclusions in its reasoning. This also means that as long as we are aware of that and act responsibly, there is nothing to worry about. It’s in our hands, not in the AI’s.
The ability to act on our conclusions: As humans, even if we come up with ideas and conclusions, we can’t automatically act on them. For instance, say I want to marry Dakota Johnson tomorrow. Well, I could try, but there are many obstacles in the way, one being that she might not be interested in me because she likes women, or simply the physical distance between where she lives and where I live and the time it would take me to go there and convince her of her luck in marrying me. A more straightforward example is my desire to go live in another galaxy. It’s cool to have the idea, and even if on paper I had figured out how to get there, the entire rocket still needs to be built. Good luck to me without any practical skills for building that freaking thing. The same logic goes for AI: even if it can reason 1000X better than any human, that doesn’t mean it has the physical capabilities to execute on any of its conclusions. An AI system that only lives on a computer will never be able to suddenly grab a gun and shoot you because you said something mean to it. It doesn’t have a body, it doesn’t have a hand, so it’s not able to grab a gun and shoot. As long as we don’t put a super powerful AI into a super powerful giant robot with superhuman physical capabilities, or into a software system that controls every single machine in the world, we will be just fine. Again, it’s in our hands, not in the AI’s.