Humans think — AI, not so much. Science explains why our brains aren't just fancy computers

The world, and countless generations of interaction with it, coaxed our brains to evolve the unique way in which humans perceive reality. And yet, thanks to the past century's developments in cognitive science and now artificial intelligence, we have entrenched a view of the brain that pays little attention to this dynamic. Instead, most of us tend to see our brains as a "network" made of undifferentiated brain cells. These neurons produce cognition through the patterns in which groups of them fire at once, a model that has inspired advanced computers and AI.
But accumulating discoveries of different specialized brain cells pose a challenge to models of human or artificial intelligence in which thoughts and concepts arise purely from the distributed firing of many essentially identical brain cells in a network. Perhaps it's time to consider that if we want to replicate human intelligence, we ought to take a closer look at some of the amazing adaptations that have evolved in mammalian neurons, and specifically in neurons of the human brain. Rather than confirming the popular picture of a network of undifferentiated cells, research has increasingly found that different neurons, even of the same basic type, have their own specific functions and abilities.
In fact, in the modern, popular understanding of the brain, we really tend to think of this organ as a sophisticated version of the technology it inspired. Merriam-Webster defines neural network as "a computer architecture in which a number of processors are interconnected in a manner suggestive of the connections between neurons in a human brain and which is able to learn by a process of trial and error." This is a typical definition, in which the computer-brain analogy focuses on the distributed connections between neurons (or, in a computer, nodes) with no attention to what exactly those neurons are for.
It's a definition that has been good enough since the 1980s, when future Nobel Prize winner Geoffrey Hinton and others picked up on an older idea called backpropagation, applying it as an algorithm that mimics human brains by systematically reducing errors through repeated iterations, thus allowing for more efficient training of multilayer neural networks. This reinvigorated the earlier idea that a system of nodes and connections mimicking the human brain might create an artificial form of intelligence, leading to the deep learning and machine learning models we have today. Since the discipline of artificial intelligence latched onto the neural network, though, it has largely focused on developing different forms of artificial (or simulated) neural networks, and has mostly moved away from studying the human or animal brain as an artifact of evolution with specifics worth mimicking.
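To make the mechanism concrete, here is a minimal sketch of backpropagation in Python with NumPy. Everything in it, from the network size to the learning rate and the toy data, is an illustrative assumption rather than anything specific to Hinton's work: the network makes a prediction, measures its error, and propagates that error backward to adjust each connection.

```python
import numpy as np

# A tiny two-layer network trained by backpropagation: make a prediction,
# measure the error, and propagate that error backward to nudge every
# connection weight in the direction that reduces it, over many iterations.

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))              # 100 toy inputs, 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # a simple target to learn

W1 = rng.standard_normal((3, 8)) * 0.1         # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.1         # hidden -> output weights
lr = 0.5                                       # learning rate (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: activity flows through the layers of "nodes."
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)

    # Backward pass: work out how much each weight contributed to the error.
    delta_out = (pred - y) * pred * (1 - pred)
    delta_hidden = (delta_out @ W2.T) * h * (1 - h)

    # Update step: systematically reduce the error, iteration by iteration.
    W2 -= lr * (h.T @ delta_out) / len(X)
    W1 -= lr * (X.T @ delta_hidden) / len(X)
```

Note what the sketch leaves out: nothing in it cares what any individual node is for. Each one is an interchangeable unit, which is exactly the assumption the research below calls into question.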
It's true that most neurons matter only for their firing or non-firing, not for any specific role. But even as computer scientists have been expanding what artificial neural networks can do, research on the brain itself has continued. Over decades, and especially in the past few years, individual pieces of research have gradually identified a host of different brain cell types, upending our simple image of the brain as merely a very powerful computer.
Instead, they reveal mammalian brains to be the product of millions of years of evolution and adaptation to environments. Over all those years, countless tiny changes have shaped a nervous system in which the key component, the neuron, can represent our experiences, thoughts and surroundings in specific and wondrously clever ways unavailable to animals that lack our most recent adaptations. Our particular form of intelligence, it seems, depends on this minority of specialized neuron types.
The brain as a computer

Back in 2001, Yuri Arshavsky wrote, "I argue that individual neurons may play an important, if not decisive, role in performing cognitive functions of the brain." At that time the research was already accumulating, but the idea ran counter to the prevailing view in neuroscience. By now, though, it's becoming hard to argue against Arshavsky's claim.
There are brain cells that represent entire concepts with the firing of a single cell, some with an affinity for visual information and others for olfactory input. Scientists have also found neurons devoted to specific aspects of cognition and of how we represent the world, which fire when their particular function is needed: warm-sensitive neurons, place cells and related time cells, olfactory concept cells, visual concept cells, Lepr neurons that control metabolism... The list of discoveries is long and still growing.
New research building on the already-developing notion of time- and space-encoding cells demonstrates how different cell types work together to give us both "what" and "where" information, allowing our brains to represent our experience of time. Researchers still haven't decided how best to classify all the different types, but they are increasingly trying to map the specific kinds of input each type encodes through the patterns of which neurons fire when, and the relationships between the different representations this creates.
"I do agree that today’s AI models have important deficiencies — and among them might be that they lack some of the predispositions various parts of our brains may have," Jay McClelland, a noted cognitive scientist at Stanford University, told Salon in an email.
AI is doing incredible (and destructive) things these days, solving seemingly impossible medical problems and generating imagery that manages the trick of being simultaneously trite and bizarre. The computing power this requires sucks water from a parched earth and puts entire creative industries out of work. AI models that act as "artificial brains" are able to do therapy, provide health care or write (in a manner of speaking). But there are ways in which large language models and similar generative AI are missing not simply the feeling of being human, but the actual function.
How do we know we think differently from computers?

Most of our understanding of how the brain works at the level of the single neuron, the equivalent of a node in an artificial neural network, comes from studies of murine (mouse or rat) or primate models, because it isn't considered ethical to do brain surgery on humans just to find out what interesting things are going on in our brains. Only recently has a technique emerged that allows single-neuron recordings to be taken during unavoidably necessary brain surgery performed on epilepsy patients for diagnostic purposes. That gave researchers subjects who would be available for perhaps a week at a time to look at things and talk about them while scientists recorded which neurons fired, how intensely, and for how long.
This is a very particular situation, but luckily there are many people with epilepsy of different kinds, and a subset of them need electrodes implanted in their brains to record their spontaneous seizures over the course of a week or two, to determine whether they are candidates for surgery to cure them. These implants are placed in two parts of the brain that often produce seizures: the medial temporal lobe and the medial frontal lobe.
The majority of brain cells are neurons, while other cells, such as glia, have different functions. But the exact number of neuron types is unknown, although recent research on human brains identified at least two million neurons and categorized them into different types: "31 superclusters, 461 clusters, and 3313 subclusters," a massive number of individual types. It's remarkably different from the simple three-type classification (motor neurons, sensory neurons and interneurons) one might have learned in a cursory overview of brain science.
Itzhak Fried, lead author of the newly published research on time and space cells, is a neurosurgeon at UCLA whose lab, and the postdoctoral students trained there, produced many of the major discoveries of these specialized neuron types. Fried told Salon about the two decades or more of research that led to the profusion of concept cells and other neuron types we now understand to play critical roles in encoding and representing our experience of the world.
Not just with the world, but with our imaginations, and with experiences that now live only in memory rather than being triggered by external stimuli. Fried cited the work of Hagar Gelbard-Sagiv, a postdoc in his lab who, as described in a 2008 paper, showed subjects a variety of film clips while researchers recorded the activity of single neurons in the hippocampus and surrounding areas. A subset of those neurons fired in response to a particular concept: there was one neuron, for example, that began firing at the start of a clip from "The Simpsons" and continued firing despite the changing images on the screen. That is, it responded not to a specific image but to the general Simpsons concept, and not to any other videos that weren't Simpsons-related.
Even more remarkable was that when the movie-watchers were asked to tell the researchers what they'd seen, they would begin to describe the assortment of 20-odd movie clips they'd been shown, and that particular neuron would fire during the actual act of remembering the Simpsons video.
"After we presented, let's say, 20 videos ... we said to the patient, 'Just tell us what you say, okay?' She says, 'well, you know, I remember Martin Luther King's speech, and I saw a landing on the moon'. And suddenly the Simpsons neuron started firing and then a second later, [the patient] says 'The Simpsons'," Fried recalled for Salon. "It's as if there was some process going on [that] she didn't even realize yet, as there was already a signature of that memory. Obviously there was no sensory input. She was completely locked in her mind. And that concept neuron started firing, and the memory came out, essentially emerged at the conscious level."
In some ways, we do work like computers, using distributed networks of firing neurons. In fact, most parts of the brain work like that, Dr. Florian Mormann, a cognitive and clinical neurophysiologist at the University of Bonn who conducts single-neuron recordings on epilepsy patients (and who was a postdoc in Fried's lab), told Salon in a video interview. "One control region we have in the visual pathway is the parahippocampal cortex, which indeed features a distributed network code, which is what most of the brain regions do."
And in the Simpsons neuron case, for example, it was just a subset of neurons in the medial temporal lobe that behaved with extreme specificity to enable patients to quickly grasp the relevant concept. Just a single neuron could determine the patient's memory that a video of, say, Itchy and Scratchy, or of Moe's bar, or of a three-eyed fish at the Springfield nuclear power plant, was a video about the Simpsons.
AI just doesn't work like that. Instead, it analyzes large amounts of data to detect patterns, and its algorithms rely on the statistical probability of a particular decision being the right one. Incorrectly chosen, biased or inadequately large data sets can result in the famous "hallucinations" to which AI models are prone.
"It comes to a fundamental issue about what sort of a system do we need to model intelligence," McClelland explained in a keynote talk, Fundamental Challenges for AI, that he delivered last April at the Computer History Museum in Palo Alto, CA.
Writing to Salon, he offered the example of place cells, the specialized neuron he's most familiar with.
"There are different views, but the role and nature of so-called place cells is extremely nuanced. These cells can code for all kinds of things in tasks that aren’t simply requiring animals to move around in space," McClelland said.
McClelland pointed out that the differences between human brains and artificial intelligence systems include how we learn. Indeed, learning and the necessary process of memory formation and retrieval are key to the specialized roles played by concept cells and some of our other specialized neurons.
"I also think that our brains use far different learning algorithms than our current deep learning systems," McClelland said. "I’m taking inspiration from a fairly-recently discovered new form of learning called Behavioral Time Scale Synaptic plasticity [BTSP] to think about our brains might have come to be able to learn with far less training data than we currently need to train contemporary AI systems."
The pattern recognition that allows AI to learn is based on something called Hebbian-style synaptic plasticity, after Donald Hebb's idea that learning arises through repeated use of the same connections between neurons in the brain: repeated activation strengthens the efficiency of cells firing together. The term "synaptic plasticity" simply means the ability of these connections to be strengthened or otherwise changed.
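In its simplest form, Hebb's rule reduces to a one-line weight update. Here is a minimal sketch in Python with NumPy; the number of neurons, the firing probabilities and the learning rate are illustrative assumptions, not values from any study:

```python
import numpy as np

# Hebbian-style plasticity in its simplest form: the connection between
# two neurons strengthens in proportion to how often they are active at
# the same time ("cells that fire together wire together").

rng = np.random.default_rng(1)
n_neurons = 10
weights = np.zeros((n_neurons, n_neurons))  # synaptic strengths
lr = 0.01                                   # learning rate (illustrative)

for trial in range(500):
    # A random binary firing pattern for this trial.
    activity = (rng.random(n_neurons) < 0.2).astype(float)
    if rng.random() < 0.5:
        activity[:3] = 1.0  # this trio repeatedly fires together

    # The Hebbian update: strengthen every synapse whose pre- and
    # postsynaptic neurons were active on the same trial.
    weights += lr * np.outer(activity, activity)

# After many repetitions, connections among the trio are the strongest.
print(weights[:3, :3].mean(), weights[3:, 3:].mean())
```

The point to notice is the word "repetitions": learning here accumulates gradually across many correlated firings, which is the property the Cell study discussed below pushes against.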
"The prevailing theories of the 20th century and later all proposed that the primary mechanism of CA3 ensemble or attractor formation was Hebbian style synaptic plasticity, based on correlated AP [action potential, or neurons firing] activity," write the authors of a study published in Cell in November that explored the dynamics of neurons contributing to memory formation. Hebbian-style synaptic plasticity allows for creation of memories and learning from experience within a network of neurons and synapses.
This is the basic understanding that underlies the deep learning models used in AI. But the authors of the Cell study propose that in human brains, what's actually going on is a different form of synaptic plasticity, BTSP, which requires far fewer neuron firings to create a memory; in fact, a single "event" may be enough to result in learning. Like another hypothesis for how neurons do their thing, the sparse coding hypothesis, BTSP works well because it doesn't need the kind of overlap that Hebbian-style plasticity requires.
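As a rough illustration of the contrast, here is a toy sketch in Python with NumPy. It is not the Cell study's model; the eligibility-trace decay, the timing of the gating "event" and the input statistics are all illustrative assumptions. The key idea is that a single gating event converts a short window of recent activity into lasting weight change, where the Hebbian sketch above needed hundreds of repetitions:

```python
import numpy as np

# Toy BTSP-style learning. A slowly decaying "eligibility trace" of
# recent presynaptic activity is converted into lasting weight change
# by one gating event (in the hippocampus, a dendritic plateau
# potential), so a single experience can be enough to store a memory.

rng = np.random.default_rng(2)
n_inputs = 50
inputs_over_time = (rng.random((200, n_inputs)) < 0.1).astype(float)

decay = 0.95                  # trace decay per timestep (illustrative)
trace = np.zeros(n_inputs)
weights = np.zeros(n_inputs)

for t, x in enumerate(inputs_over_time):
    # The trace remembers which inputs fired over the last few timesteps.
    trace = decay * trace + x

    # One plateau-like "event" at t == 100: everything still present in
    # the trace is written into the weights at once. One-shot learning.
    if t == 100:
        weights += trace

# Inputs active shortly before the event now drive this neuron strongly,
# after a single learning event rather than hundreds of repetitions.
print(weights.max(), weights.mean())
```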
Concepts in the human brain, as we've seen, can be encoded with just a small number of neurons firing, or even just one, Mormann explained: "So when I say sparse versus network, or sparse versus distributed, that means that [most] neurons are silent, and then just a few neurons suddenly say 'Look, this is my favorite stimulus.' It indicates that that stimulus is there."
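To make that distinction concrete, here is a toy sketch in Python with NumPy, with population sizes and thresholds as illustrative assumptions: in a distributed code the information lives in the overall pattern of graded activity across many neurons, while in a sparse code almost everything stays silent and a couple of dedicated units signal each stimulus on their own.

```python
import numpy as np

# Toy contrast between a distributed code and a sparse code for the
# same set of stimuli. Sizes and thresholds are illustrative.

rng = np.random.default_rng(3)
n_stimuli, n_neurons = 5, 100

# Distributed code: every stimulus evokes graded activity across most
# neurons; no single neuron tells you which stimulus is present.
distributed = rng.random((n_stimuli, n_neurons))

# Sparse code: for each stimulus, almost all neurons stay silent and a
# few "concept" neurons fire hard: "this is my favorite stimulus."
sparse = np.zeros((n_stimuli, n_neurons))
for s in range(n_stimuli):
    favorites = rng.choice(n_neurons, size=2, replace=False)
    sparse[s, favorites] = 1.0

# Fraction of neurons active per stimulus under each coding scheme.
print("distributed:", (distributed > 0.1).mean())  # most neurons active
print("sparse:     ", (sparse > 0.1).mean())       # ~2% of neurons active
```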
A reason evolution might allow itself "the luxury of having these sparse representations" when network codes would be more efficient, Mormann suggested, is that "they actually provide the semantic building blocks that are being pieced together to form mnemonic episodes." That is, our episodic memories are pieced together from a small number of concepts embellished by the brain's tendency to make up less important details, or to remain fuzzy about them.
"The only things that are really reliable and can be reliably tested are a few core semantic facts, and those are the ones that we believe are represented or provided by concept neurons, and they are being pieced together to form episodic memories," Mormann said.
Although we have not yet created a complete picture of how humans represent experience, including through the apparently vital roles played by concept cells, place or grid cells, time cells and other specialized cell types, it's becoming clearer that neurons in animals have evolved unique adaptations. Researchers have, for example, identified thousands of specialized neuron types in mice, but those help mice do mouse things. In humans, culture, language, care, tools and other still-to-be-demonstrated ways in which we interact with the world around us have produced specializations that let us encode entire concepts and think in an abstract way, internally representing our experiences.
So AI might do well to look back at how the world has shaped us, letting us do human things by the way our brains now make the world.