While there has been experiential and anecdotal evidence of the benefits of music therapy, those benefits were hard to prove scientifically. Cognitive brain research faced clear limitations before the development of noninvasive tools for studying the living human brain. Although advanced brain imaging techniques were developed in the mid- to late 1980s, they were slow to become available to research on music and the brain (Thaut).
Recently, brain imaging studies have shown that neural activity associated with music listening extends well beyond the auditory cortex. It involves a widespread bilateral network of frontal, temporal, parietal, and subcortical areas related to attention, semantic and music-syntactic processing, memory, and motor functions, as well as limbic and paralimbic regions related to emotion processing. These findings suggest that the mechanisms driving cognitive processes in music, such as attention and memory, are shared with equivalent processes in nonmusical cognition (Thaut).
Musical experiences are multimodal, involving at least the auditory, visual, cognitive, affective, memory, and motor systems. A number of studies have indicated that music processing involves functionally independent modules. Music reading activates an area on the right side of the brain parallel to an area on the left side that is activated during language reading (Hodges). EEG recordings show that music induces significantly higher oscillatory synchrony than spoken word in the lower alpha band rhythms of the bilateral prefrontal neural networks underlying memory (Thaut). The Shared Affective Motion Experience (SAME) model, based on brain imaging research, suggests that musical sound is perceived not only in terms of the auditory signal but also in terms of the intentional, hierarchically organized sequences of expressive motor acts behind the signal. Within a neural network involving the temporal cortex, the frontoparietal Mirror Neuron System (MNS), and the limbic system, auditory features of the musical signal are processed primarily in the superior temporal gyrus and are combined with structural features of the expressive motion information within the MNS (Overy). A key aspect of the SAME model is the proposed role of the anterior insula as a neural conduit between the limbic system and the MNS (Overy).
This research has helped music therapy gain prominence as a means of helping people with cognitive deficits, traumatic brain injury, and stroke regain function.
One of the areas in stroke rehabilitation where music therapy is showing the greatest promise involves speech and communication. Language has three components: cognition, linguistics, and pragmatics. Music therapy impacts all three by supporting memory and retrieval of information (King).
For more than 100 years, clinicians have noted that patients are able to sing words that they cannot speak.
Aphasia is a disorder of language that results in garbled syntax and meaning. A person suffering from nonfluent (expressive) aphasia has difficulty producing meaningful words, phrases, and sentences (M. Kim). All forms of aphasia share the characteristic of anomia (difficulty in naming or in finding and producing words), although the severity of anomia varies with the type of aphasia. Other common symptoms of aphasia include paraphasias: errors in which one word is unintentionally substituted for another (verbal paraphasia) or one sound for another (literal paraphasia) (King).
One explanation for the phenomenon of preserved singing (with lyrics) in aphasia is that song is a form of nonpropositional speech, defined as conventionalized, context-dependent speech that does not involve syntactic parsing or the conscious formulation of new utterances to express a semantic message (Wilson).
Modern imaging suggests that the brain's language and musical expression centers may not be as separate as previously thought and that they share some important aspects of neurological processing (Wilson). Using magnetoencephalography during a language and music analysis task, Maess et al. found that Broca's area, which is linked to speech production, is not only active during syntactic analysis for auditory language comprehension but is also involved in the analysis of incoming harmonic sequences (Maess).