Key Points
Is the brain a general purpose machine, or a collection of specialized modules? This is a fundamental question in the field of neuroscience, and has been the subject of intense back-and-forth debate for centuries.
Improvements in technology, statistical methods, and clinical techniques in the mid-20th century, when the field of behavioral neurology emerged, ultimately led to the widespread acceptance of the modular model of brain organization.
And yet, in recent years, thanks in part to the missteps in the field of fMRI, some have been advancing the generalist model once again.
Perhaps the reason for this seemingly never-ending battle between these two models of brain organization is that it's a false dichotomy. The brain needn't be one way or the other – generalist or specialist, modular or distributed – and presenting it as an either/or decision has created a false debate.
The model that best accounts for all the evidence is that the brain is a vast network of functionally specific but anatomically distributed modules (with clear areas of anatomical clustering).
Links, etc:
The “Learn to Uke” Brain Fitness Challenge
Carl Wernicke
Paul Broca
“Auditory Affective Agnosia” by Heilman et al.
Franz Gall and Phrenology
The Infamous “Dead Salmon” Study
20 Years’ Worth of fMRI Studies May Be Bunk
The Brainjo Collective, a community of lifelong learners
Episode Transcript
In 1873 Carl Wernicke, a German neuropsychiatrist and major figure in the field of neurology, encountered a patient who, despite being able to speak and having intact hearing, was unable to understand what was said to him. This inability to understand speech in his native tongue had started abruptly after the onset of a stroke.
It was discovered at autopsy that his stroke had been in the left hemisphere, in the back part of the temporal and parietal lobes. This was only a few years after Paul Broca had published his landmark findings on a series of patients with a different sort of language disturbance. In these patients, damage to the back part of the left frontal lobe had rendered them unable to produce speech. Yet, their ability to comprehend it had remained intact.
Wernicke would go on to publish his work, The Aphasic Symptom Complex, in 1874, laying out a model for the neural organization of language function, much of which remains intact today. He would coin the term sensory aphasia to describe this particular deficit where speech production is intact, but understanding of speech is impaired – a deficit in the sensing or decoding of spoken words. It is now also commonly referred to as Wernicke's Aphasia. And the area of the brain that was damaged in his original patient is now commonly referred to as Wernicke's area.
In 1975, almost 100 years after Wernicke’s case of sensory aphasia from left hemispheric damage, neurologist Ken Heilman, a name you may remember from prior episodes, published a study performed on 12 subjects. Six of those subjects had lesions in their left hemisphere, in the aforementioned Wernicke’s area, and as a result, now had a Wernicke’s Aphasia. The other six had lesions in the corresponding region of the right or non-dominant hemisphere.
The subjects were presented with 32 tape recorded sentences. In half of those they were asked to analyze the content of the speech. And in the other half they were asked to judge the emotional mood of the speaker. In comparison to those with left hemispheric damage, those with damage to the right hemisphere were significantly impaired in their ability to identify the mood of the speaker based on the tone of voice.
The proposed explanation of this finding was the existence of circuitry in the non-dominant hemisphere whose role was to interpret emotional content of speech through the analysis of tone of voice. And the deficit was given the name Auditory Affective Agnosia, which translates to the inability to perceive emotional content of speech. Research in the ensuing years would provide further support for this separate location for tone of voice analysis. And it is now widely accepted that the decoding of different elements of speech is distributed broadly throughout the brain.
Hi, I’m Doctor Josh Turknett, founder of Brainjo and the Brainjo Center for Neurology and Cognitive Enhancement. And this is the Intelligence Unshackled podcast. Join me as we take a tour through the human brain to explore and understand the true nature and scope of human intelligence. And to unlock the secrets of optimizing brain health and function.
So, before we get into the meat of this episode, there are just a few things that I wanted to share with you. As we come upon the end of another year and the beginning of a new one, I wanted to first of all say thanks so much for your support of this podcast. It’s a privilege to be able to work on the project, and I’m really excited about what’s to come in the next year.
And I couldn’t do any of it without you and without you listening and supporting this podcast. So, first of all, thank you for being a listener. Thanks to those of you who’ve left a rating in iTunes, and a huge thanks to those of you have become part of the Brainjo Collective. We’ve already begun to amass a wonderfully diverse and enthusiastic group of people who are all interested in this mission of how we can protect, support and release the full potential of this magical organ that sits in our skulls. And I can’t tell you how much I appreciate your support.
As I mentioned recently, in January we'll be launching the first of the Brainjo fitness challenges for members of the Brainjo Collective. These challenges are intended to be a way of boosting brain health and cognitive function by engaging in rewarding, but cognitively demanding, activities. And, as I've said previously, I think there's no better way to do so than to learn a musical instrument. So first up we'll be having our Learn to Uke challenge, where members will learn to play the ukulele through a series of videos and guidance inside the member forum. And you can learn more about that and become a member of the Brainjo Collective by visiting EliteCognition.com/fitness. So this year, instead of making a new year's resolution to exercise more or grow really big muscles, maybe this time you can make one to grow a really big brain.
For centuries people who studied the brain wrestled with one fundamental question, a question that had to be answered in order for us to make any meaningful headway in the field of neuroscience. And that question was about how the brain handled the information that it perceived and analyzed. Was the brain a general purpose machine, with all of its workings brought to bear on whatever problem it was solving at any given moment? Or did it divide and conquer? Did it consist of functionally specific areas, each performing their own distinct kind of analysis on information acquired through the senses? In this model, a particular region would be called into action depending on whatever problem the brain was solving at any given moment.
In other words, was the brain one big generalist, or a collection of specialists? Do the same brain networks and same anatomical regions operate on all the problems, or is it organized into distinct subunits? Intuitively, it feels like our brain is a generalist. The experience we have of ourselves isn't of a fragmented set of isolated functions, but rather of one single, unified whole, taking the world as it comes. However, those of you who've listened to previous episodes know that our intuitions can sometimes lead us astray, especially when it comes to understanding ourselves. And you may already have an inclination about which of these models is closer to the truth.
Yet, while it is true that we've come a long way in answering this question, the debate still continues in some ways. Though I think that debate is largely a false one, based on the misapplication and conflation of some key concepts. And so part of the goal of this episode is to help explain why. The resolution of this age-old quandary, and the model of brain organization we end up with, has significant implications for how we understand the nature of our intelligence, and the ways in which it may be suppressed or released. This debate about whether the brain is a generalist or a collection of specialists has been around ever since people began contemplating the brain, with different sides prevailing at different points in history.
One of the most notable proponents of the specialist model was Franz Gall. Born in 1758, Gall was a German neuroanatomist and physiologist who believed not only that the brain was organized into functionally distinct areas, but that you could predict a person's personality and cognitive strengths and weaknesses by the bumps on their skull.
The idea behind this was that there were certain anatomical areas that mediated particular cognitive functions. And if someone was especially strong in one area, that part of the brain would grow, and the skull around it would swell, causing a bump. Gall contended that there were 27 different mental faculties that you could predict, including things like memory function, talent for poetry and even how much you loved your children. This became the practice of phrenology, in which practitioners would feel the bumps on someone's head as a means of assessing these various mental functions.
The idea of specific functions being localized to certain parts of the brain, what we can refer to as the localizationist model, was further supported by Paul Broca's publication in 1865 of eight cases of patients who had experienced the inability to speak, all of whom were found at autopsy to have lesions in their left hemisphere, in an area located in the posterior, or back, part of the frontal lobe – what is now referred to as Broca's area.
Carl Wernicke's subsequent publications a decade later of patients with a different type of speech dysfunction from lesions in the posterior temporal lobe would add more support to the localization-of-function theory. Though, by this time, proponents of this model were no longer in the majority.
And in fact, despite the mounting evidence, the localization theory would fall further into disfavor in the early 20th century. By then, phrenology, the field based on Gall's theory that skull bumps could predict mental function, had been roundly debunked by the scientific community, and the era of phrenology is now viewed as another embarrassment in the history of medical science, lumped into the same category as other pseudoscientific endeavors like palm reading.
And because phrenology was tightly associated with the idea of the specialist model of brain organization, the reputation of that model was similarly spoiled by association for many years after. By this time the generalist, or holistic, model was widely held. According to this theory, programs for speech production and comprehension were not localized to a specific area, but represented diffusely. The specific deficits that resulted from damage to the brain, including those described by Broca and Wernicke, were only a function of the amount of brain involved, rather than being linked in any way to the location of that damage.
This view wouldn’t be challenged again until the mid-20th century when the field of behavioral neurology emerged. Ultimately, new technologies for assessing the brain, new statistical methods for analyzing data and new clinical techniques would all continue to build a strong body of evidence in support of the localizationist theory, which has been further strengthened in the era of functional imaging.
The idea that not only our elemental sensory and motor functions, but also the higher-order cognitive functions unique to humans, are localized to specific regions is now central to the field of neurology and informs much of its day-to-day practice. In many ways, then, Gall's original ideas have been vindicated. And it's unfortunate that his reputation has been tainted by the practical misapplication of that knowledge in the field of phrenology. And yet, this debate about the fundamental organization of the brain, whether it is localized or distributed, has resurfaced in recent years. In part, thanks to the advent of another controversial topic: fMRI.
I imagine that most of you listening are familiar with functional MRI, or fMRI. I'm almost certain you've seen it. fMRI is a technology used for studying the activity of the brain in real time. While there are other technologies for doing so, fMRI has become the most popular in cognitive neuroscience research for a number of reasons. The basic principle behind it is that when a particular part of the brain increases its activity, it uses up more energy, which results in more blood being diverted to that area, which causes a change in the oxygenation of the hemoglobin in the blood. And that change in the oxygenation of hemoglobin changes its magnetic properties, which can be detected by the MRI scanner, which is a big magnet. So, in essence, changes in regional blood flow in the brain are used as a surrogate marker for neuronal firing, which is what we're really interested in.
So, in a typical fMRI study a subject carries out a cognitive task and researchers look to see what parts of the brain light up as they do so. It may occur to you that this basic experimental design paradigm already assumes a localizationist model of the brain. You wouldn't conduct a study designed to determine the part of the brain that mediates a particular function unless you already thought the brain was a collection of specialized functional networks.
Journalists love the pictures that fMRI machines generate, as they lend an automatic air of credibility to any article that has them in it. For better or worse, pretty brain pictures sell, which in the early days of fMRI research led to a growing market for more and more studies like this, and plenty of funding for research that used it.
Without a doubt, these pictures that fMRIs generate certainly give the impression that something profound has been discovered. That at last we have a technology that lets us see inside the brain in real time, as it's doing its thing. It seems like this is the pinnacle of neuroscience research.
Well, not so fast. It turns out fMRI has its limitations, and they aren't insignificant. First of all, its temporal and spatial resolution isn't so hot, at least when you're talking about the speed at which neurons communicate and the tiny scales at which they operate. So for fast cognitive processes, and there are a lot of those, the limited temporal resolution means that it can't assess cognitive operations that wrap up on the order of milliseconds. And the smallest piece of an fMRI image, referred to as a voxel, which you can think of as a 3D pixel, contains hundreds of thousands of neurons.
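To give you a rough sense of that scale, here's a quick back-of-the-envelope calculation in Python. The voxel size and the cortical neuron density are just ballpark assumptions for illustration, not figures from any particular study.

```python
# Rough estimate of how many neurons sit inside a single fMRI voxel.
# Both inputs are assumptions: a common ~3 mm isotropic voxel, and a cortical
# neuron density on the order of tens of thousands of neurons per cubic mm.

voxel_edge_mm = 3.0                       # assumed voxel edge length
neurons_per_mm3 = 25_000                  # assumed rough cortical density

voxel_volume_mm3 = voxel_edge_mm ** 3     # 27 cubic mm
neurons_per_voxel = voxel_volume_mm3 * neurons_per_mm3

print(f"~{neurons_per_voxel:,.0f} neurons per voxel")   # ~675,000
```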
But more relevant to this discussion is the way in which fMRI data has been analyzed and interpreted. No data set is perfect, and one of the challenges with any data is deciphering the signal from the noise. In other words, what part of the data you've gathered is associated with the thing you're interested in, which is the signal, and what part is coming from elsewhere, which is the noise?
The raw data that an fMRI generates is just an image of differing signal intensities. And so, what we're really looking at are relative differences in the intensity from different areas of the brain, with areas of brighter intensity presumably reflecting relatively higher activity compared to areas that are less bright. But again, that assumes that all the data we're looking at is associated with the thing we're interested in, which is brain activity, and not an artifact of something else, or noise. And there's lots of noise of multiple kinds in the raw fMRI data.
However, the procedure or algorithm that’s used to remove the noise from any signal can profoundly shape the picture that you end up with. With the wrong algorithm you run the risk of falsely claiming that there’s brain activity in an area where there isn’t, or lots of false positives. Or, filtering out real brain activity under the impression that it’s noise, or lots of false negatives.
To build an algorithm that does this right, you have to have some idea of how you might separate the signal from the noise. Just think for a moment about how you might design a spam filter for email. The goal is to keep all the emails that you want and filter out the ones that you don't. And there are multiple variables that might be relevant to this filtering process, such as the subject line, who it's from, the text that's in the email and so on. But as we all know, this is no easy feat. Too aggressive a spam filter filters out stuff that we want, and one that's not aggressive enough leaves our inbox hopelessly cluttered. So the filter has to separate the signal from the noise in a way that minimizes the odds of both kinds of mistake.
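To make that tradeoff concrete, here's a minimal sketch in Python: a toy set of "voxels", a few of which carry real signal, run through a simple threshold filter at two different settings. All of the numbers are invented for illustration; real fMRI pipelines are far more sophisticated than this.

```python
import numpy as np

# A toy illustration of the signal-vs-noise tradeoff: raise the threshold and you
# miss real activity (false negatives); lower it and noise sneaks through (false
# positives). All values here are invented for illustration.

rng = np.random.default_rng(0)

n_voxels = 10_000
truly_active = np.zeros(n_voxels, dtype=bool)
truly_active[:200] = True                      # 200 voxels carry real signal

noise = rng.normal(0.0, 1.0, n_voxels)         # background noise everywhere
signal = np.where(truly_active, 2.0, 0.0)      # modest real effect on top of the noise
measured = noise + signal

for threshold in (1.0, 3.0):                   # a lenient and a strict filter
    detected = measured > threshold
    false_pos = np.sum(detected & ~truly_active)
    false_neg = np.sum(~detected & truly_active)
    print(f"threshold={threshold}: {false_pos} false positives, {false_neg} false negatives")
```

The lenient threshold lets through plenty of noise, while the strict one throws away much of the real signal, which is exactly the spam filter dilemma.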
There are multiple methods that have been used to do so, and suffice it to say that solving this problem has been one of the central challenges in this field since its inception. A full discussion of the nuances of the various approaches is well beyond the scope of this discussion, not to mention not being my primary area of expertise. But the story of the signal to noise problem has impacted the conversation around the central topic of this episode, the question of whether the brain is a generalist or a specialist. As I mentioned, the localizationist model is already embedded in the primary experimental paradigm of fMRI research right from the start. And it has similarly been embedded in some of the approaches to solving the signal to noise problem.
For example, one such way to separate the signal has been to perform a smoothing function on the data in the processing, or even pre-processing, phase, before the final activation maps are generated. So if I select a little piece of the fMRI image, and I see a blip of signal in that piece, and then I see that it's surrounded by other blips of signal on all sides, it stands to reason that it's part of a module, or functionally specific network, in that area.
On the other hand, if I select another little piece of the fMRI image and see a blip of signal, but the surrounding area is dark, then based on what I assume about the localization of brain function, I'm going to conclude that this blip is less likely to be real brain activity than my first blip, even if they're of equal intensity. So, a smoothing function averages the activity over multiple voxels, which will tend to increase the signal from areas where there's a cluster of intensity and reduce it from areas that are isolated.
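Here's a minimal sketch of what that looks like in practice, using a standard Gaussian smoothing filter on a toy image containing one clustered patch of "activity" and one isolated blip of equal intensity. The image size, smoothing width and signal values are all made up for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy 2D "image": one clustered patch of signal and one isolated single-pixel blip,
# both with the same raw intensity. Smoothing averages over neighbors, so the
# clustered patch survives while the isolated blip is diluted toward zero.
# Sizes and values are invented for illustration.

image = np.zeros((50, 50))
image[10:14, 10:14] = 1.0     # clustered activity (a 4x4 patch)
image[40, 40] = 1.0           # isolated activity (a single pixel)

smoothed = gaussian_filter(image, sigma=2.0)

print(f"clustered peak after smoothing: {smoothed[12, 12]:.3f}")
print(f"isolated peak after smoothing:  {smoothed[40, 40]:.3f}")
# The clustered peak stays far above the isolated one, which is the assumption
# about anatomical clustering being baked into the analysis.
```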
The drawback here is that this kind of function increases the risk of what's known as Type II error, or false negatives, meaning that it makes you more likely to say that an area wasn't real brain activity when in fact it was. With the smoothing function, the chances of a false negative are indeed low if different functions are anatomically segregated in the brain. But false negatives become highly likely if that's not the case, if the brain activity that supports a particular cognitive task is not anatomically clustered but broadly distributed.
Regardless, how you design your smoothing function, just like how you design your spam filter, requires making assumptions – in this case, about how localized or distributed brain functions are. And critics of fMRI have argued that the bias introduced by these assumptions has tainted the quality of the data. Our expectations of what we think we'll see are making it more likely that we'll see those things.
Other missteps in solving the signal to noise problem have been even more significant. The field took, perhaps, its biggest hit with the publication of the dead salmon study in 2009 by Craig Bennett and colleagues in the Journal of Serendipitous and Unexpected Results. Seriously. In this study, researchers placed a dead salmon in the fMRI scanner. They then showed it some pictures of human faces showing different emotions. Lo and behold, the filtered fMRI result showed activity in a particular part of the brain. The usual, but in this case obviously absurd, conclusion here would have been that dead salmon can perceive human emotions, and that we even found the brain region where they do that. However, what this was really saying was that there was a big problem with false positives when it came to fMRI.
In this case, the problem was the failure to correct for multiple comparisons in the analysis of the raw data, which many fMRI studies to that point had also failed to do, casting into doubt a huge body of research. Since then, more statistical mistakes have been uncovered, again casting doubt on all the research that made them.
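If you'd like to see the multiple comparisons problem in action, here's a minimal sketch: thousands of simulated "voxels" containing nothing but noise, tested one at a time. At an uncorrected threshold of p < 0.05, a few hundred of them "activate" anyway, because 0.05 means a 5% false positive rate per test; a simple Bonferroni correction (just one of several possible fixes) makes them vanish. The voxel count, sample size and thresholds are assumptions for the example.

```python
import numpy as np
from scipy import stats

# Pure-noise data: 5,000 "voxels", each with 20 measurements drawn from a
# zero-mean distribution, so there is no real activation anywhere.
# Numbers are invented for illustration.

rng = np.random.default_rng(1)
n_voxels, n_samples = 5_000, 20
data = rng.normal(0.0, 1.0, size=(n_voxels, n_samples))

# One-sample t-test per voxel against a true mean of zero.
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=1)

alpha = 0.05
uncorrected_hits = np.sum(p_vals < alpha)               # ~5% of voxels "activate"
bonferroni_hits = np.sum(p_vals < alpha / n_voxels)     # correction wipes them out

print(f"uncorrected 'activations': {uncorrected_hits}")  # roughly 250 false positives
print(f"Bonferroni-corrected:      {bonferroni_hits}")   # almost certainly 0
```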
Suffice it to say, then, that a significant portion of the fMRI research has been called into question, and the many critiques and false steps in fMRI since its inception have led some neuroscientists to claim that most of the published findings in this field are noise. And this is quite unfortunate given the amount of published research in this area in a relatively short period of time, and how many bold claims have been made over the years based on these findings.
Ironically, the field of fMRI research is now faced with its own signal to noise problem in trying to find the published research that's true. Worse yet, thanks to the media's fascination with this area of research, it's also been the public face of cognitive neuroscience for the past decade or so.
In many ways, the backlash against fMRI mirrors what happened with Franz Gall and the backlash against phrenology in the early 20th century. His idea that bumps in the skull could be used to predict mental functions is no longer scientifically tenable. And with the benefit of hindsight, it seems like a silly notion to have ever entertained. And yet, it's grounded in the idea of functional specificity – that different parts of the brain handle different sorts of information processing.
As I mentioned earlier, in the backlash against phrenology, the localization model took a hit as well – throwing the baby out with the bathwater. And a similar thing seems to have happened in the early 21st century with the backlash against fMRI. fMRI has heavily reinforced the localizationist model, and the biases in signal analysis that were a product of a belief in the localizationist model are partly to blame for the aforementioned missteps.
And so recently some folks have been championing the generalist model again, despite the large body of clinical and laboratory evidence that supports the localizationist perspective, holding up the missteps in the world of fMRI as evidence against the localizationist model. And for sure, there are merits to this point of view, but once again, we must make sure not to throw the baby out with the bathwater. So how can we do that? Well, I think there's a way of reconciling all of these points of view to produce a coherent framework that answers our fundamental question and saves our baby.
This podcast is brought to you by the Brainjo Collective. The Brainjo Collective is a community of like-minded people interested in furthering our understanding of the brain, and translating that knowledge into ways we can release potential, protect the integrity of our brain over the course of our lifespan and create lives of lasting fulfillment and wellbeing. Members of the collective receive access to a private forum moderated by a team of advisors, including myself. And by becoming a member of the collective, you’ll also be supporting the research and production costs of this podcast so that it can always remain free from advertisements. So, if you like to geek out on cognitive neuroscience and the optimization of brain health and function, I’d love to have you as part of the collective. To learn more about it, and to join, just head over to EliteCognition.com/collective.
So now let's return to the opening story of Carl Wernicke's description of patients with difficulty extracting the concepts from linguistic sounds, or comprehending speech, a deficit that was consistently associated with damage to the dominant hemisphere of the brain. And then Ken Heilman's publication about a hundred years later describing patients who were unable to decode the information contained in tone of voice, in particular how it reflected the emotional state of the speaker, due to damage on the opposite side of the brain.
This kind of finding is referred to as a double dissociation. Double dissociation is a term first coined by Hans-Lukas Teuber in 1955 and refers to instances where two experimental manipulations produce differing effects on two dependent variables. In the realm of cognitive neuroscience, this refers to an instance where a lesion, or damage, in one area of the brain impairs function A but not function B, while a lesion in a different area impairs function B but not function A. So, in this example, we have a lesion in one area of the brain affecting the ability to connect word sounds to their associated concepts, but not impacting the ability to decode tone of voice. While a lesion in another part of the brain does not impact the ability to connect sounds to concepts, but does impair the ability to decode tone of voice.
For behavioral neurologists, double dissociations are the gold standard for demonstrating functional specificity, and there are a great many examples of them. And the very existence of double dissociations provides evidence for the segregation of function in the brain. Furthermore, we also know that even in the presence of large insults to parts of the brain, other very sophisticated cognitive functions like math, reasoning or spatial navigation remain intact. Which is not what would be predicted by the model of the brain as a generalist, where losing large chunks of processing power should degrade all functions similarly.
Without a doubt, the brain is organized into scores of functionally specific networks. Our brain is a vast collection of different modules and as discussed in prior episodes, our intuition that we’re a unified whole is one of the many constructions our brain creates for us that, while useful, doesn’t reflect the underlying reality.
There is clearly a division of cognitive labor in the brain, performed by a vast network of specialist circuitry. But there are a couple of really important things to remember here. The first is that none of these modules, or functionally specific networks, exist in isolation. Even though a software program is a self-contained piece of code, it still requires computer hardware to run on, a user to provide inputs and so on. And the same is true of our brain networks. None of them exist in isolation. They are constantly receiving inputs from, and are modified by, other parts of the brain. And so, it would be misleading to say that any given task is performed entirely by an isolated network, just as it would be misleading to ignore the role of the computer hardware and the user in the output of a piece of software.
And the second important thing to remember is that while certain functionally specific modules tend to cluster in certain areas of the brain, there are often elements of a functional network that are distributed in different areas of the brain. So, a single functional network may be anatomically distributed across various parts of the brain, especially if that network assimilates disparate kinds of sensory information.
Case in point is the organization of circuits for analyzing the information contained in speech. There's a circuit on one side of the brain for decoding the meaning of words, and other circuits on the other side of the brain for detecting the modifications of meaning that come from the tone, or frequency spectrum, of those words. So, if we broadly consider the brain networks for analyzing speech, we find that anatomically they are broadly distributed throughout the brain, even though they're functionally segregated.
One of the more headline-grabbing findings in recent years was the discovery of the Jennifer Aniston neuron: a single neuron that would only fire in the presence of Jennifer Aniston's face. The paper that reported it also showed this to be the case for any face that a subject was familiar with. At first glance, this would seem to suggest the most extreme version of the localizationist model. However, that particular neuron's activation is still dependent on the activity of networks involved in the acquisition and filtering of visual information.
Furthermore, there are certain circuits that are likely relevant and involved in all cognitive functions. For example, it's necessary to be awake to perform a cognitive task, so we're going to find activity in those wakefulness networks in every cognitive function that we analyze. And there is evidence that there are other circuits that are involved any time we engage in a challenging task. So, in this respect, these sorts of circuits are generalists and don't restrict their efforts to specific tasks.
And so the answer to the fundamental question of whether the brain is a generalist or a specialist is that it’s neither one nor the other. There are elements of both and there isn’t any reason that it must be one or the other. And by framing it in this way we’ve created a false dichotomy and a false debate. A debate that vanishes once we clarify our terms a little bit. The brain is clearly segregated into functionally specific networks. A principle that is not only central to the day to day work of neurologists, but one that is also critical for understanding the nature of human intelligence and as I will argue in future episodes, critical to understanding the hidden potentials inside every brain.
And yet, there are also ways in which the brain's organizational structure is distributed, as discussed. And it appears the recent debates on this issue have arisen largely from the conflation of anatomical segregation and functional segregation. In other words, while a circuit involved in a particular task may tend to cluster in a certain area, other parts may be found elsewhere. In this case, the circuit is still functionally segregated, but it's anatomically distributed, even though it may still exhibit anatomical clustering. Much in the same way that the shipping department of Amazon is a functionally distinct unit within the company, but its operation is meaningless outside the context of the rest of the company, and there are multiple shipping warehouse locations that are part of this functional network.
So while there are still plenty of details left to be worked out, our fundamental question has been answered. Is brain activity distributed or modular? No, that’s the wrong question. It’s both.
All right, so that’s it for this episode of the podcast. As always, you can find show notes, links and transcripts from this episode, including a link to the infamous dead salmon study that I mentioned earlier by going to EliteCognition.com/podcast. And if you like this podcast and you want to help others find it, it’d be awesome if you left a rating or review in iTunes. It really does help. So I will talk to you again in the new year.