Music as an emergent property of sound
Randall, R. and Greenberg, A.

Percepts are formed from sensory data to create mental representations of the world around us. While psychological studies of sensory representations have focused largely on vision, investigations of auditory representations have begun to gain considerable ground. This project investigates how low-level mental representations of music are formed and uses neuroimaging to examine the neurobiological basis of such representations. We suggest that music is an emergent property of pitched sound that requires auditory scene analysis (ASA) cues to create the organized structure we call music. Given a sequence of pitches, these cues allow the pitches to be integrated into what has been called an auditory perceptual object. A sequence of pitches is not considered musical until it is first recognized as a perceptual object; the difference between a sequence of pitched sounds and an auditory perceptual object therefore marks the boundary of musicality. We hypothesize that the degree to which a sequential-integration parameter is present will positively correlate with the perception of that sequence as “musical”: the more strongly an auditory perceptual object is formed, the more musical the sequence will be judged. By asking subjects to evaluate the musicality of pitched sequences in behavioral and fMRI contexts where specific integration parameters are manipulated, we hope to identify the boundary conditions of musicality and the patterns of neural activation associated with music perception.
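
As a rough illustration only: the hypothesized positive relationship between the strength of an integration cue and musicality ratings could be tested with a rank correlation. The Python sketch below uses hypothetical variable names (cue_level, ratings) and synthetic placeholder data; it is not the authors' actual analysis pipeline.

import numpy as np
from scipy.stats import spearmanr

# Synthetic placeholder data: six cue-strength levels, twenty simulated
# ratings each. Real ratings would come from the behavioral experiment.
rng = np.random.default_rng(0)
cue_level = np.repeat(np.linspace(0.0, 1.0, 6), 20)
ratings = rng.normal(loc=1.0 + 5.0 * cue_level, scale=1.0)

# The hypothesis predicts a positive monotonic relationship, so Spearman's
# rank correlation is a natural first test of "more integration, more musical".
rho, p_value = spearmanr(cue_level, ratings)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")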


 

Using MEG to Investigate Habituation in Musical Contexts
Randall, R., Sudre, G., Xu, Y., and Bagic, A. B.

Long-term exposure to music allows us to develop an implicit knowledge of musical syntax, and this knowledge serves as the foundation of musical expectation. Expectation violations are an important part of musical experience and give music an "ebb and flow" that keeps listeners interested. Regardless of how many times we listen to a song, when we are confronted with a syntax violation in that song we can still pinpoint the irregularity. The early right anterior negativity (ERAN) reflects a listener's response to harmonic-syntax violations and peaks between 150 ms and 250 ms after stimulus onset. While numerous studies have investigated the ERAN, very few have addressed how it is affected by habituation. These studies rely on complex harmonic stimuli and focus on implicit responses. The present study investigates the MEG equivalent of the ERAN (mERAN) and how habituation modulates the strength of this response to simple melodies that are either syntactically well formed, conforming to common-practice tonality (M1), or end with an out-of-key pitch (M2). Both musicians and non-musicians listened explicitly to M1/M2 numerous times while neural responses were recorded with MEG. Even with simplified stimuli, our results reliably replicate earlier findings based on more complex stimuli. Whereas previous studies of short-term habituation of the mERAN examined only changes in the violation condition, we comparatively analyze how responses to both M1 and M2 change over time by averaging responses over sequential sets of trials across the duration of the experiment. This method also allows us to track how the relative relationship between M1 and M2 fluctuates, which effectively controls for fatigue and lets us show how the mERAN changes both independently of and in conjunction with responses to well-formed melodies. Preliminary results show that the difference between the M1 and M2 conditions is sustained, contrasting with previous claims that the ERAN response diminishes over time.
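
The block-averaging comparison described above can be sketched roughly as follows. Array names, block size, and the reduction of each trial to a single amplitude (e.g., mean field strength in the 150-250 ms window) are illustrative assumptions, not the study's actual MEG pipeline.

import numpy as np

def block_averages(trial_amplitudes, block_size):
    """Average per-trial response amplitudes over sequential,
    non-overlapping blocks of block_size trials."""
    n_blocks = len(trial_amplitudes) // block_size
    trimmed = np.asarray(trial_amplitudes)[: n_blocks * block_size]
    return trimmed.reshape(n_blocks, block_size).mean(axis=1)

# Placeholder data: one amplitude per presentation, in presentation order.
rng = np.random.default_rng(0)
m1 = rng.normal(1.0, 0.3, size=120)   # well-formed melody (M1)
m2 = rng.normal(1.6, 0.3, size=120)   # out-of-key ending (M2)

m1_blocks = block_averages(m1, block_size=20)
m2_blocks = block_averages(m2, block_size=20)

# Tracking the M2 - M1 difference per block controls for fatigue: drift
# shared by both conditions cancels, leaving changes specific to the
# violation-related (mERAN-like) response.
difference_over_time = m2_blocks - m1_blocks
print(difference_over_time)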

 

Listening Spaces
Purcell, R. and Randall, R.

The proliferation of portable and computerized audio technologies has radically changed the way human beings listen to, consume, and produce music and sound. We can endlessly personalize the sounds emanating from our cell phones or use tablet computers as virtual mixing boards and turntables. With the emergence of “cloud” storage services like Dropbox, Amazon, and Google, we can effortlessly store and share music files anonymously or with friends. Services like Facebook, Pandora, Last.fm, Amazon, and iTunes use finely tuned algorithms to make musical recommendations and, in the process, further personalize our experience as consumers of music. All of these services, many of which are virtual, have come to mediate our intensely personal and communal experiences with music. We have gone from the labor-intensive, analog, tactile, and at times intensely emotional experience of making a mix-tape to dragging and dropping files onto playlists. File-sharing has replaced handing over a piece of vinyl or even burning a CD. Impersonal machines and equations are doing what friends, acquaintances, DJs, and record-store owners once did: recommending music for us to purchase, listen to, and enjoy. This project seeks to understand the overwhelming impact these mediating technologies have had on our social, political, and personal interactions with music. The project will support a series of events organized around four fundamental questions: What do we do with music? Where do we get music? How and why do we share music? How and why do we recommend music? It will culminate in a co-authored book that reports our findings.