Interlude C: The Singing Cerebrum: Music and the Brain

 

Summary:

Interlude C, the third “extra-musical” discussion, explores the remarkable ways in which the human brain processes music and enables our musical taste and experience, via an intersecting review of music neuroscience and musicology. The interlude begins with an introductory account of the brain’s overall form and function, as well as the potent role of the ear in musical processing. It then defines the brain’s multi-tiered process of music perception, based on the modular model of Stefan Koelsch and Walter Siebel, beginning with the lower-level modules of auditory feature extraction, Gestalt formation, and interval analysis. Using an anecdote from the author’s youth as a spur, the interlude then proceeds step by step through the higher-level modules of music processing: syntax, semantics, emotion, and memory. In each of these, as noted, the discussion intersects the recent findings of cognitive neuroscience with well-established concepts from musicology.

 

Supplements:

·       Page 255

( Clarification on image and function map of the brain ):

 

Viewing a map of the brain and its functioning regions naturally poses technical and illustrative challenges. In recent years, advanced interactive versions have become accessible to the lay reader, such as the scrollable 3-D version found at the Inner Body website: http://www.innerbody.com/image/nerv02.html.

 

·       Page 258

( More on the hair cells in the ear ):

 

To be specific, there are 3,500 “inner hair cells,” which capture frequency; there is also another set of 12,000 “outer hair cells,” which largely amplify the sound entering the inner ear for proper interpretation.

 

·       Page 262

( More on the measurement of “intensity” ):

 

Specifically, intensity level is measured by what is called the “auditory nerve spike count”—the sum of neuronal activity in the nerve; but see also Relkin, Evan M., and John R. Doucet, "Is loudness simply proportional to the auditory nerve spike count?" The Journal of the Acoustical Society of America 101, no. 5 (1997): 2735–2740.

 

·       Page 263

( Details on brain regions involved in processing timbre ):

 

The regions involved in extracting timbral information include the superior and middle temporal gyri, Heschl’s gyrus (temporal lobe), the precuneus (parietal lobe), and various regions of the cerebellum.

 

·       Page 265

( Details on brain regions involved in processing Gestalt formation / echoic memory ):

 

The regions involved in Gestalt formation include the premotor cortex and prefrontal cortex, as well as a region in the left hemisphere encompassing the inferior frontal sulcus and inferior frontal gyrus (BAs 44 and 45), known collectively as Broca’s area.

 

·       Page 266

( Details on brain regions involved in interval analysis ):

 

Within the temporal lobe, these regions especially include the right hemisphere’s superior temporal gyrus, as well as the supratemporal auditory cortex in both hemispheres. See Peretz, Isabelle, and Robert J. Zatorre, "Brain Organization for Music Processing," Annual Review of Psychology 56 (2005): 92ff.

 

·       Page 267

( More on Descartes’ evolving perspective on consonance / dissonance ):

 

Contradicting his earlier (1618) belief in the universal agreeableness of consonances over dissonances, in a 1630 letter to Mersenne, Descartes wrote: “I do not recognize any qualities in the consonances that correspond to the ‘passions’.”

 

·       Page 267

( Details on Chomsky’s ideas on syntax ):

 

Thus, when we read Chomsky’s famed nonsense phrases, “Colorless green ideas sleep furiously” and “Furiously sleep ideas green colorless”, we are able to recognize the first as syntactically viable despite being meaningless, and the second as neither syntactically nor semantically viable.

 

·       Page 272

( Details on brain regions involved in processing musical syntax ):

 

Most notably, three areas of the frontal cortex otherwise recruited in the processing of language syntax and comprehension are activated in music syntax: the pars opercularis, an area partially comprising Broca’s area (also active in Gestalt formation); the pars orbitalis (Brodmann Area, or BA, 47); and the ventrolateral prefrontal cortex, located on the inferior frontal gyrus (IFG). In addition, some studies have examined syntactic issues at the level of formal structure and sequential patterns, processing that especially recruits the IFG.

 

·       Page 272

( More on the neurological link of processing music and language ):

 

The overlap of musical and linguistic processing is the source of Ani Patel’s “shared syntactic integration resource hypothesis,” or SSIRH; see, for example, Aniruddh Patel, et al., "Structural integration in language and music: Evidence for a shared system," Memory & Cognition 37, no. 1 (2009): 1-9. Moreover, there is some evidence that early in life the two realms are undifferentiated in terms of neural circuitry: young infants process music and language as much the same stimulus; as the child matures, these circuits are then “pruned,” in Levitin’s words, to specialize in one or the other—with music activation stronger in the right hemisphere, language in the left; see Levitin, This is Your Brain: 130. Whether language or music takes precedence in this maturation process is, of course, open to debate—Patel arguing for the former; Koelsch for the latter: “the human brain, at least at an early age, does not treat language and music as strictly separate domains, but rather treats language as a special case of music.” See Koelsch, “Toward a neural basis (2011)”: 16.

 

·       Page 279

( More on Zajonc’s theories of liking and exposure ):

 

More technically, this is referred to as the “affective primacy hypothesis,” developed by Zajonc between 1964 and 1980. Much more on this in Interlude G.

 

·       Page 279

( Details on brain regions involved in processing musical semantics ):

 

For those interested: the timing of semantic relative to syntactic processing is determined by means of a noninvasive technique of evaluating brain function known as “event-related potential” (ERP), which measures the neuro-processing response triggered by a specific mental stimulus (sensory, cognitive, motor, etc.) using an electroencephalography (EEG) monitoring setup (electrodes on the scalp).

 

For intra-musical processing specifically: on the temporal lobe, BA 21 (in proximity to the middle temporal gyrus), BA 37, and Wernicke’s area (BA 22); on the frontal lobe, the back part of the inferior frontal gyrus. Similarly, the processing of extra-musical meaning has been localized to BA 21 and 37 of the temporal lobe, though more in proximity to the superior temporal sulcus, an area specifically tied to processing language semantics.

 

Regarding extra-musical meaning: indeed, the same ERP technique used to determine response timing has been shown by Koelsch to demonstrate differing neural processing for intra- and extra-musical meaning. Specifically, he found that the distinctive ERP response (known as N400) that is triggered whenever an unrelated word is inserted at the end of a sentence (e.g., “Yesterday I went to the hamburger”) is also triggered when a musical passage is followed by a word that is semantically unrelated—such as when a calm passage by Debussy is immediately followed by the word “typhoon”. This, moreover, provides empirical evidence that, as Koelsch states, “musical information can have a systematic influence on the semantic processing of words… and can activate representations of meaningful concepts.” By contrast, intra-musical semantics produces a different ERP response (N5), which is specifically triggered by violations of syntactic expectations—and thus linked with processing of syntactic meaning.

 

·       Page 280

( Details on brain regions involved in processing musical emotion ):

 

Processing of a sustained mood (“indexical moments”) includes BA 7 of the parietal lobe, otherwise associated with visuo-motor coordination, as well as Broca’s area (BA 44 and 45) of the frontal cortex.

 

·       Page 282

( More on historical notions of memory ):

 

Specifically, memory in Ancient Greece was deemed a gift of Memory, the mother of the Muses, into which the “perceptions and thoughts” of each experience make an imprint like “impressions from seal rings”.

 

·       Page 283

( More on contributions of Guido d’Arezzo ):

 

In his Micrologus (1026), Guido is also credited with originating the solfège (solmization) system, as well as an early version of the so-called “Guidonian hand”—a mnemonic system whereby the note names (ut-re-mi-fa-sol-la) were mapped to distinct parts of the hand.

 

·       Page 283

( More on the role of memory in medieval thinking and beyond ):

 

The memorization regimen for practicing medieval musicians was intensive—ranging from the full repertoire of chants and modal formulae to the complex melodic and rhythmic patterns used in polyphonic music, borrowing mnemonic techniques from grammatical and mathematical treatises. Such an environment suggests that the best music making of the era may have been improvised—not unlike jazz today—where gifted performers artfully embellished patterns previously memorized. By contrast, written notation was largely a means of preserving and standardizing canonical works—as well as an effective mnemonic tool, where the staff could be “visualized” in the performer’s mind as an aid to memorizing. From the Renaissance onward, one finds a gradual shift from collective to individual musical genesis. Of course, the imperative of memory and paraphrase maintained currency well beyond the Middle Ages—whether in the cantus firmus or “paraphrase” Masses of Renaissance composers, or the cantatas by J.S. Bach built on Lutheran hymns. This reverence may even be seen in the 14-year-old Mozart’s alleged transcription from memory of Gregorio Allegri’s “secret” motet Miserere (c. 1630)—after just two hearings!

 

·       Page 285

( More on the functioning of the short-term memory “store” ):

 

Specifically, the chunks are said to remain in STM for some 15–30 seconds, after which they too decay and disappear if not further rehearsed, verbally or mentally. The exact number of chunks held at one time (from 4 to 9) depends not only on the capacity of an individual’s “memory span,” but also on how the chunks are constructed. The classic example is with phone numbers: if each of 10 digits is held as a separate chunk, a person may only be able to hold 4 or 5 digits in STM at a time; but if they are grouped as a phone number (i.e., with the first 3 digits as an area code), the capacity increases.
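This grouping effect can be sketched in code (a purely illustrative example, not from the book; the function name and digit string are hypothetical): ten individual digits strain the 4–9 chunk limit, but grouped phone-number style they occupy only three chunks.

```python
def chunk_phone_number(digits: str) -> list[str]:
    """Group a 10-digit string into area code, exchange, and line number."""
    assert len(digits) == 10 and digits.isdigit()
    return [digits[:3], digits[3:6], digits[6:]]

# Ten separate digits (ten chunks) vs. three grouped chunks:
ungrouped = list("8005551234")               # 10 items to hold in STM
grouped = chunk_phone_number("8005551234")   # 3 items: ['800', '555', '1234']
print(len(ungrouped), len(grouped))
```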

 

·       Page 285

( More on the theory of working memory ):

 

The notion of Working Memory was first introduced by Alan Baddeley and Graham Hitch in 1974. In the Baddeley-Hitch model, WM operates via two primary “modules”: a “phonological loop,” whereby verbal and acoustic chunks are held and manipulated; and a “visuo-spatial sketchpad,” which does likewise for visual content.

 

·       Page 287

( More on recall ):

 

At the same time, “identification”, the ability to actually name something or someone, is neither necessary, nor at times possible, in recall—as we’ve all experienced.

 

·       Page 288

( More on the neurological function of “chunking” in audio processing ):

 

Koelsch notes how the proper functioning of WM relies on both a broad “mental lexicon” (semantic / general knowledge already present in LTM) and a specific “musical lexicon” based on knowledge of musical syntax; he also notes how music entails its own version of the “phonological loop” mechanism, a “tonal loop” in which pitch-related elements can be held and manipulated. See Koelsch, “Toward a neural basis (2011)”: 14.

 

·       Page 288

( Details on brain regions involved in processing musical memory ):

 

The hippocampus, located under the cerebral cortex (in the medial temporal lobe), serves as the brain’s main “consolidator” between STM/WM and LTM. According to MTT, spatial and temporal context is stored in the hippocampus, while semantic (general knowledge) memory traces are stored only extrahippocampally (in the cortical areas). See Nadel & Moscovitch, “Multiple trace theory.”

 

For the processing of echoic memory, within the frontal lobe: particularly Broca’s area (BAs 44 and 45) and the dorsolateral region of the prefrontal cortex, within the inferior frontal sulcus.

 

Regarding processing of working memory: one interesting finding in this regard is that musicians seem to have differentiated systems / structures for the “tonal loop” (pitch) and “phonological loop” (words) mechanisms, whereas such differentiation is lacking in non-musicians. See Koelsch, “Toward a neural basis (2011)”: 14.

 

Primary Bibliography:

Richard Passingham, Cognitive Neuroscience: A Very Short Introduction (Oxford University Press, 2016)

Daniel J. Levitin, This Is Your Brain on Music: The Science of a Human Obsession (New York: Dutton, 2006)

Oliver Sacks, Musicophilia: Tales of Music and the Brain (London: Picador, 2012)

Aniruddh D. Patel, Music, Language, and the Brain (Oxford University Press, 2008)

Seth S. Horowitz, The Universal Sense: How Hearing Shapes the Mind (New York: Bloomsbury Publishing USA, 2012)

Stefan Koelsch and Walter A. Siebel, “Towards a Neural Basis of Music Perception,” Trends in Cognitive Sciences 9, no. 12 (2005): 578–84; later revised in Stefan Koelsch, “Toward a Neural Basis of Music Perception—A Review and Updated Model,” Frontiers in Psychology 2 (2011): 110–29

Albert S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound (Cambridge, MA: MIT Press, 1990)

Susanne K. Langer, Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art (Cambridge, MA: Harvard University Press, 1957)

Leonard B. Meyer, Emotion and Meaning in Music (Chicago: University of Chicago Press, 1956)

Isabelle Peretz and Robert J. Zatorre, “Brain Organization for Music Processing,” Annual Review of Psychology 56 (2005)

 

External links:

"Your Brain on Music" (University of Central Florida)

"The Surprising Science..." (Fast Company)  

"Music in the Brain" (MIT)

 
