

The Neuroscience of... Language



In this article, the latest in our “The Neuroscience of…” series, we investigate recent, leading theories about the emergence of language, the surprising phenomena that correlate with or might have preceded language, and what it means for our place in the animal kingdom.

The uniqueness of human language

Over 7,000 different human languages are spoken around the world today. And yet, human language is a unique phenomenon - no other animal can do it quite like we can.

Humans can create arbitrary, abstract symbols for words, and juggle them in our heads to create a meaningful story or convey an emotion to others. Apes and other primates cannot. We can create recursive sentences, continually embellishing short sentences into more complex and descriptive ones. Other animals cannot do this, even when researchers attempt to teach them.

It is clear that if we are to examine the age-old philosophical question, "What makes humans special?", analysing the neuroscience of language seems a wise place to begin the inquiry.

What sets human language apart? 

Most animals have the innate ability to convey emotions through communication. However, there is a robust consensus among researchers and linguists that none of these non-human animal systems of communication come close to human language.

What sets us apart from our animal cousins? There are five characteristics of human language that make it a unique phenomenon in the animal kingdom:

  1. Our lexicon (vocabulary) is enormous - and is constantly growing and changing

  2. We use abstract words that only have meaning in relation to other words (if, that, but, and, or…)

  3. We can refer to the past and future

  4. We use metaphors, analogies, and other tools of figurative language

  5. Our language makes use of flexible, recursive syntax (i.e. we can keep adding adjectives or clauses into a sentence and it will still make sense. For example: "The big dog jumped over the fence" → "The big yellow dog with floppy ears jumped over the tall fence" → "The big yellow dog with floppy ears that hung below his snout jumped powerfully over the tall white picket fence", etc…)

Contrary to a common misconception, there is no single "language region" of the brain. Human language is a symphony of different brain regions, each playing its part perfectly. In fact, much of the neurology that underpins the different elements of language is thought to have evolved independently, first serving other, more primal functions. The neuroscience of semantics versus syntax is a prime example.

The building blocks of language: lexicon, semantics, syntax

Expressing ourselves verbally comes so naturally to most of us, we don't even think about what goes into it. No matter what language you examine, you will find there are three key building blocks, each with their own important role to play in conveying ideas and understanding:

  1. Lexicon (scattered around the cortex): the actual words we use, vocabulary (e.g. "switch")

  2. Semantics (Wernicke's area): the meaning of the words we use (e.g. whether we mean the lever that turns light off and on, or the action of swapping places/objects)

  3. Syntax (Broca's area): the grammatical rules holding the structure of language together into something understandable

In order for language to make sense, we have to make sure the words we are using are in fact conveying the meaning we intend them to, while also structuring them in a way that makes sense for the recipient. Wernicke's and Broca's areas work together to orchestrate this complex executive function. However, their activity remains segregated: damage to one area does not impair the function of the other.

Wernicke's area gives meaning to words

Located in the left temporal lobe, Wernicke's area takes care of the semantics of language. Patients with lesions in this brain region can construct grammatically sound, fluent-sounding sentences (since their Broca's area is functional) that nonetheless hold no meaning whatsoever - a phenomenon known as Wernicke's aphasia. An excerpt of one patient's speech has been transcribed below:

"Well, it is quite viable in the jealousy. You don't understand it, but if the buzz and they're wrong, three of each others from there and it comes source in the country house, you see. Paul, you hundred, see it comes up and finally, comes out here and goes out."

Broca's area builds grammatically correct sentences

Broca's area, on the other hand, is concerned with the syntax of language. Patients with damage to this region are unable to produce grammatically correct speech, but can maintain some semblance of meaning in their ideas (as their Wernicke's area is unaffected). Loss of prepositions, articles, and linking words is very common, such that "I took the dog for a walk" may become "I walk dog". Some sufferers of Broca's aphasia do not speak at all. The condition can be incredibly frustrating: despite their difficulties producing speech, patients' understanding of other people remains intact.

Where did human language come from?

Researchers aren't exactly sure where language came from, or how it evolved, but there are a few theories that have been touted - some more plausible than others. One of the most convincing is the theory of "synesthetic bootstrapping".

Neurologists V.S. Ramachandran and E.M. Hubbard postulated in an influential 2001 paper that language emerged as a byproduct of cross-mapping in the brain, not unlike the cross-mapping seen in people with synesthesia. Synesthetes experience one sense through another - for example, seeing shapes when hearing certain sounds, or tasting textures as they eat. The most common type is grapheme-colour synesthesia, in which subjects see numbers as colours - and always in the same combination (e.g. four is always red, five is always green, etc.).

Is it a coincidence that the area of the brain that processes numbers and the part that processes colours are located next to each other? Ramachandran and Hubbard argue that the simplest explanation for grapheme-colour synesthesia is some kind of cross-wiring between these two adjacent areas, such that the experience of a number cannot be decoupled from the experience of a colour.

You might be wondering what this talk of synesthesia has to do with the development of language. There is evidence to suggest the kind of cross-wiring seen in synesthesia might have given rise to language as well, and that the verbal sounds we associate with objects may not be as arbitrary as they initially seem.

Understanding metaphors

[Image: two abstract shapes, one rounded and one spiky, used in the Bouba-Kiki experiment. Credit: Andrew Dunn, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=19653163]

Consider a simple psychology experiment first devised by Wolfgang Köhler and later explored by Werner - the Bouba-Kiki Effect. Participants are told that, in a Martian language, one of the above shapes is called "bouba" and the other "kiki", and are asked which is which. When surveyed, 95% of both Tamil and American speakers said the shape on the right is "bouba", and the shape on the left is "kiki".

This suggests that the sounds we associate with shapes are not arbitrary, but come from something innate. Ramachandran & Hubbard argue that this innateness is a cross-wiring between visual, audio and motor regions in the brain.

People generally associate "bouba" with the shape on the right because the shape our mouth makes when we say the word mimics the roundness of the shape. "Bouba" also sounds soft, just as the motion of our tongue is soft and smooth inside our mouth as we say the word associated with the shape. By contrast, the sound "kiki" has a sharpness that is mimicked by both the spikes in the image, and the inflection of the tongue on pronunciation.

This suggests that our brains can recognize abstract qualities - sharpness, softness, roundness - across various senses, including sight, sound and muscle movement. This kind of cross-modal abstraction takes place in the inferior parietal lobule (IPL), which sits at the intersection of the touch, hearing and visual parts of the brain. It isn't a stretch to see how it could give rise to metaphors in language - one of the five key characteristics of human language outlined earlier. Fun fact: synesthesia, which also hinges on this kind of cross-wiring of different brain regions, is much more prevalent among writers, poets and other creatives than in the general population.

Finally, it is interesting to note that while 95% of people in the initial study agreed the round shape corresponds to "bouba", this figure drops to 56% among autistic participants, perhaps reflecting a lesser capacity for cross-modal abstraction. Is it a coincidence that people with autism also experience difficulties understanding metaphors and other figurative language?

Tying this all together, it seems language emerged in a non-arbitrary way. The verbal cues we associate with objects and ideas seem to mimic some abstract quality of the object or idea, and this cross-wiring in the brain could also be what allows us to identify these patterns, as well as create and understand metaphors.


Wrapping up, language is one of the key identifiers that separates humans from the rest of the animal kingdom. Different brain regions evolved to manage different functions associated with language, such as Wernicke's area for semantics, and Broca's area for syntax.

Moreover, the actual lexicon we developed does not seem to be arbitrary. The IPL - the intersection of the touch, vision, and hearing parts of the brain - has been identified as playing a key role in identifying multi-sensory abstract qualities of objects or ideas, and attaching words to them that align with those abstract qualities, as shown by the Bouba-Kiki Effect.

This cross-modal abstraction is thought to be the same mechanism that underpins synesthesia and our ability to grasp metaphors. Notably, people on the autism spectrum struggle with figurative language and do not ascribe the same abstractions to bouba and kiki as the rest of the population, suggesting the IPL may also be involved in some features of autism.

Language remains one of the biggest question marks in modern neuroscience. Much about it is still unknown, but any insights we gain promise to unlock a deeper understanding of other brain regions, their functions, and our human potential.