The AI+Human Apocalypse

In his recent TED talk, Sam Harris discusses the looming rise of superintelligent machines, and our lackluster emotional response. He points out that, unless something crazy happens, these machines will arrive. The fact we are seemingly in a race to develop them means it’s likely to happen sooner rather than later.

Most dystopian visions of malevolent AI feature robots assuming violent control of the planet. But is this the way things are likely to play out? Harris mentions the belief some have that the safest way to welcome artificial intelligence of this magnitude is by incorporating it into ourselves. In this way we would become the machines.

Harris however argues against this notion, saying:

“…building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.”

He may very well be right—intelligent machines may arrive first. But merging these machines with our brain is not that far behind. In fact, such fusions are already taking place.

In the previous article I wrote about modern technology connecting to our nervous system, an area that has already seen some amazing accomplishments.

While much of this is aimed at restoring functions people have lost, the technology also stands to add to our current capacities—need to increase your strength 100x? Install a thought-controlled mechanical arm. Never want to forget anything? Install a memory chip.

While the race toward intelligent machines heats up, a similar race exists in upgrading the human body, and the brain is just another territory to be conquered. And we are beginning to conquer it, thanks in part to the relative uniformity of the cortex.


The cortex is that thin, wrinkly layer that surrounds the upper portion of the brain. It’s where all the intelligent, complex stuff happens. The structure of the cortex, despite its many roles, is also fairly consistent.

Vernon Mountcastle discovered in 1957 that the cortex has a distinctive arrangement in which neurons are grouped into columns of roughly 80-120 cells. There are also distinct layers in the cortex that serve specific functions.

Humor, math, language, empathy—the structures that give rise to these higher order functions are very alike. What differentiates them is how they’re connected, and this is something we achieve predominantly through experience.

As we currently understand it, the cortex works by passing input through several levels of processing, with each level tasked with finding patterns and regularities in the output of the one before it.

For instance, when your eyes are presented with the image of a bird, that image is first translated into electro-chemical impulses that can travel along nerve fibers to your brain, specifically the visual cortex at the back of the head.

There, early levels of your cortex will identify colors, basic lines, and corners, while the higher levels of processing start recognizing depth, objects and eventually how all the parts interact, such as where the bird’s attention is directed, or what it might be thinking.

Likewise, when reading, we first identify basic shapes and lines; only in the higher levels do we begin to recognize words, sentences, and the meaning they intend to convey.

Funnily enough, this layered processing style has become the model by which AI is being developed.


The pride and joy of AI research is machine learning: the goal is to design programs that can learn the best method for interpreting a given type of data.

For instance, one that could look at images on the web and identify what’s in there; or one that can recognize what we’re saying, figure out what it means, and conjure an appropriate response.

This is currently accomplished using neural nets. Put simply, a neural net takes some data set, processes it through several layers, and outputs the result. For instance, the program is given a picture of a bird, it processes all the pixels through several layered processes, until it recognizes a bird in the image.

How does it learn to identify the bird? It may be given a number of examples from which it will find consistent patterns, while someone provides feedback in the form of right/wrong identifications.
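The loop just described—process the input through layers, compare the output against right/wrong feedback, and adjust—can be sketched in a few lines of Python. This is only an illustrative toy: a tiny two-layer network learning the XOR pattern, with made-up sizes and data rather than a real image classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a pattern no single processing layer can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: input -> hidden -> output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

mse = lambda p: float(np.mean((p - y) ** 2))
loss_before = mse(sigmoid(sigmoid(X @ W1) @ W2))

for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1)    # hidden layer finds intermediate patterns
    out = sigmoid(h @ W2)  # output layer combines them into an answer

    # "Right/wrong" feedback: the gap between prediction and label.
    err = out - y

    # Backward pass: nudge every weight to shrink that gap.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_out
    W1 -= X.T @ grad_h

loss_after = mse(out)
print(loss_before, "->", loss_after)
```

Each pass through the loop is one round of feedback; over thousands of rounds the weights settle into a configuration that reproduces the right answers.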

The more processing layers there are, the better the machine becomes at finding abstract patterns within the information. Neuroscientist and computer engineer Ankit Patel speaks of these layers:

“The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power. The deeper a network is, the more stuff it’s able to disentangle.”
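A quick way to see why the nonlinear transformation Patel mentions matters: without it, depth buys nothing. Stacking any number of purely linear layers collapses into a single linear layer, as this small numpy sketch (illustrative, not tied to any particular framework) shows:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2, W3 = (rng.normal(size=(5, 5)) for _ in range(3))
x = rng.normal(size=5)

# Three linear "layers" applied in sequence...
deep_linear = x @ W1 @ W2 @ W3
# ...equal one layer whose weights are the product of all three.
shallow = x @ (W1 @ W2 @ W3)
print(np.allclose(deep_linear, shallow))  # True

# Insert a nonlinearity (here ReLU) between the layers and the
# collapse no longer holds; each layer now adds genuine depth.
relu = lambda z: np.maximum(z, 0.0)
deep_nonlinear = relu(relu(x @ W1) @ W2) @ W3
print(np.allclose(deep_nonlinear, shallow))  # almost always False
```

This is why "depth governs power": it is the nonlinear step at each layer, not the stacking alone, that lets deeper networks disentangle more.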

That all being said, there are still marked differences between the way we learn and the way these machines learn. Patel continues:

“…in order for a machine to understand what it’s seeing in a video, it has to understand what objects are, the concept of three-dimensional space and a whole bunch of other really complicated stuff. We humans learn those things on our own and take them for granted, but they are totally missing in today’s artificial neural networks.”

“There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly. … What the brain is doing may be related, but it’s still very different. And the key thing we know about the brain is that it mostly learns unsupervised.”

There was also a recent study suggesting that intelligence—at least some aspect of it—is due to the connectivity between areas of the brain, specifically between the frontal and parietal lobes. If connectivity rather than levels of complexity is necessary to improve intelligence, this would throw another spanner in the works.

Understanding our own intelligence seems a necessary step in knowing how to build artificial intelligence. That could mean we need to understand the brain first—making our own augmentation a short step further—or that we must find a novel way of building intelligence.

“Many people today believe that AI is alive and well and just waiting for enough computing power to deliver on its many promises. … I disagree. AI suffers from a fundamental flaw in that it fails to adequately address what intelligence is or what it means to understand something.”

—Jeff Hawkins, On Intelligence

Nobody said there weren’t obstacles to overcome. We are still a way off superintelligence in either human or machine, and I suspect their arrivals won’t be far apart whichever comes first. But the question of which route is safer now seems very pertinent.


If there is a device that takes us from error-riddled human intelligence to some elevated superhuman level, what are the chances it also makes us a better species?

Intelligence is great, but does increasing it make us more compassionate, loving people? Or does it simply make us more able to get what we want?

I do not believe people are inherently evil or corrupt. They are made this way through experience. I also believe that many evil and corrupt people probably wouldn’t want to change a lot about who they are, especially if they have made it to the “top” of some corporate ladder.

Will the rich and powerful accept a device that makes them more compassionate, giving, and less greedy? Would they install a device that caused them to enjoy money to a lesser extent—that is, to suddenly find the thing they’ve been chasing all these years is not actually that important to them?

Changing our likes and dislikes would seem to be changing who we are. We would be replacing our “successful” selves with some goody-two-shoes. It’s a nice idea from the perspective of a goody-two-shoes, but I can’t see everyone going for it.

Most powerful humans would want to increase their intelligence to either protect what they have or accumulate even more. It’s difficult to see contentment as part of human nature; rather, we’re always looking to get more, go further, become better—a drive that can be put to good use, for sure, but can also be led astray, to the detriment of many.

If we’re not careful, it could leave us with an inequality gap like no other. Those at the top suddenly have the mental tools to take them so much further, while all the regular folk have even more trouble keeping up.

While we’re worrying about the dangers of intelligent machines and the human errors that might allow for them, perhaps the real threat is a super-intelligent human motivated by greed and power. A super-villain if there ever was one.

There are of course dangers in the intelligent machine route if we do not take care in establishing their wants and goals—if we tell them to end human suffering, they could kill us all, ending the human race and with it, all human suffering. Yet it seems, to me at least, that we are in greater control of the machine’s motivations.

Humans are the product of evolutionary trial and error; the machines, on the other hand, could be our best chance at welcoming a being of true intelligent design. The only way for us to become the intelligently designed being, in a positive way that benefits everyone, might be to let go of what we call human nature.

Check out the rest of the Digital Brain Series
