Very powerful computer software is on its way. Some of the most respected AI researchers, who have made accurate predictions in the past, predict that by 2045 AI will make us a million times more intelligent. Meaning, our ability to solve problems with limited resources will increase dramatically: higher intelligence would let us solve more complex problems, and solve problems faster.
It’s hard to wrap our heads around the possibility of such a future. One reason is that we misunderstand what a computer really is. It’s not simply a MacBook or a PC, or software and hardware. Computers are about intelligence. That is the role they have always played: helping us humans solve more complex problems, and solve them more quickly. Computer software encodes rules and knowledge. It’s encoded understanding.
While in the past computers needed humans to encode understanding into their software, this is no longer the case. Now we grow software through training, which allows computers to build their own understanding. This will let us solve new and more complex problems while continuing to speed up the solving of existing ones.
The more intelligent the software becomes, the closer and more involved we will want to be with it, because it becomes more useful in meeting our wants and needs. This trend started a long time ago: we spend more and more time looking at screens, and there is hardly a job left that doesn’t involve a computer at some stage of the process. The development of powerful software, and the merge between humans and software, is a continuous process that will go on until we are really just one.
But as with everything else, humans like to differentiate, to see things as different and separate from one another. That’s also the problem with how we think about AI, and it’s why, instead of imagining a merge, we believe that AI will take over.
That’s right: I believe an AI takeover is not a problem but a distraction right now, because humans will, for different reasons (power, FOMO, curiosity, necessity), decide to fully merge with it. Most people who warn of a takeover do it for profit-driven reasons, for example Elon Musk and Sam Altman, or out of genuine fear and concern, like Eliezer Yudkowsky, the very interesting AI 2027 project, or Geoffrey Hinton. So what is it that we should actually worry about? To understand that, we must first think about the consequences of a complete merge.
What does it mean for humanity to have people among us who are a million times more intelligent than other people? The answer: fruit flies to humans. A Drosophila melanogaster is roughly a million times less intelligent than the average human. I know this is not exactly right, and that intelligence is difficult to compare between species, but I couldn’t find a better example, the picture looks cool, and you get the point.
The people who don’t merge with AI will go extinct unless they find a way to “not be in the way” of superintelligent humans. Just like fruit flies: if a fruit fly sits on your watermelon and you want to eat that watermelon, you will get rid of the fruit fly, swat it, maybe kill it. If a more intelligent being has wants and needs and you are in their way, they will get rid of you, and you won’t be able to anticipate it because you can’t understand their wants and needs. This is already happening: there are very intelligent people who manipulate the world to their own advantage without caring for those who suffer and who can’t figure out a solution to their suffering. In a few years, no one who doesn’t use computer software to solve problems and make decisions will exist anymore, unless they are useful to the superintelligent humans in one form or another.
So what we should really worry about is who is ahead in software development, and who develops and merges with AI first. Will it be an authoritarian system or a democracy, individuals who are selfish and want power at all costs or people who want the common good? In that respect, powerful software is much more like the atomic bomb than some alien superintelligence taking over the world and enslaving or killing everyone. Those with the most powerful AI will have the most power and capability to control others, and they might be able to kill everyone else if they want to.
That is why staying ahead in the development of powerful software should be the number one goal for democracies and for everyone who believes in a future ruled by the people. There should be a push for developing and adopting AI. In other words, for developing and continuously merging with it.
Banning phones in schools is the wrong approach. Instead we should focus on better usage of these devices starting now, and if necessary ban some useless apps from the app store, or something similar. Phones are useful, for example for AI-native learning. We should also adopt AI in the workplace as much as possible, not only for productivity but to get as many people as possible involved in the merge “early on”.
So, in summary: some people will completely merge with AI in the near future. These merged people will have incredible power over those who decide to stay normal, because the former will be vastly more intelligent. If it turns out that merged people have bad intentions, there is almost no chance for normal people to survive. They will go extinct. To avoid that, we, the democracies, must embrace this future merge and race to develop powerful software, AI, first.