Exploring how musical accompaniment enhances sequential learning in visual statistical learning tasks
Dr. Thackery Brown (Advisor), Georgia Institute of Technology
Dr. Paul Verhaeghen, Georgia Institute of Technology
Dr. Sashank Varma, Georgia Institute of Technology
Dr. Lila Davachi, Columbia University
Dr. Elizabeth Race, Tufts University
This doctoral research investigates how musical context shapes our ability to extract patterns from noisy environmental input through statistical learning. Building on my prior work on deterministic sequence learning, this project extends into the more challenging domain of probabilistic statistical learning—a process that better mirrors how we naturally acquire temporal knowledge in the chaotic real world, where perfect repetition is rare and patterns must be discovered amid variable inputs.
The study asks a fundamental question: Can familiar music, playing in the background, enhance our ability to detect patterns and segment continuous experiences into meaningful chunks?
While prior research has explored how context changes (visual, spatial, reward-based) affect temporal learning, my work uniquely examines how musical context—with its inherent temporal structure—might provide a "scaffolding" for statistical learning of visual sequences.
This research represents a significant advancement beyond my master's work in two key ways:
It employs statistical learning paradigms rather than explicit sequence memorization, requiring participants to extract patterns from probabilistic input
It presents music as background context rather than an explicit learning tool, mirroring everyday scenarios such as learning a route while listening to the radio
Using a combination of behavioral measures and functional MRI, I designed an innovative paradigm where participants viewed continuous streams of images with embedded probabilistic sequences. Some sequences were accompanied by familiar music, while others appeared in silence. During encoding, participants indicated where they perceived "event boundaries," and later completed tests measuring their memory for the sequential relationships.
This design allowed me to examine how music modulates:
Event boundary detection
Learning of statistical regularities
Neural representations of sequence structure
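To make the paradigm concrete, the sketch below shows one common way such a stimulus stream can be built: fixed image triplets embedded in a continuous sequence, with transitions made probabilistic by occasionally violating a triplet's final item. All parameter values (number of triplets, repetitions, transition probability) are illustrative assumptions, not the study's actual design values.

```python
import random

def make_stream(n_triplets=4, n_reps=24, p_intact=0.9, seed=0):
    """Build a continuous image-index stream with embedded probabilistic
    triplets. Hypothetical parameters for illustration only.

    Each triplet is a fixed trio of image indices. With probability
    p_intact a triplet appears intact; otherwise its final item is
    swapped with another triplet's final item, so transitions are
    probabilistic rather than deterministic.
    """
    rng = random.Random(seed)
    triplets = [[3 * t + i for i in range(3)] for t in range(n_triplets)]
    order = []
    for _ in range(n_reps):
        # Shuffle triplet order within each repetition block
        order.extend(rng.sample(range(n_triplets), n_triplets))
    stream = []
    for t in order:
        a, b, c = triplets[t]
        if rng.random() >= p_intact:  # occasionally violate the pattern
            other = rng.choice([x for x in range(n_triplets) if x != t])
            c = triplets[other][2]
        stream.extend([a, b, c])
    return stream, triplets

stream, triplets = make_stream()
print(len(stream))  # 4 triplets x 24 repetitions x 3 items = 288
```

In a design like this, learning is measured by whether participants come to treat triplet boundaries as event boundaries even though the stream itself is unbroken.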
The results reveal that musical context significantly enhances statistical learning and event segmentation:
Behavioral outcomes:
Improved boundary detection for sequences paired with music
Higher accuracy in subsequent sequence memory tests
Faster learning rates throughout encoding
Neural mechanisms:
Reduced computational demands in prefrontal and visual regions during music-accompanied learning
Enhanced connectivity between the medial temporal lobe and visual processing regions
Stronger pattern similarity for within-sequence items in key hippocampal subfields (CA1, dentate gyrus)
More distinct positional coding in the hippocampus for items learned with musical context
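The pattern-similarity logic behind these neural measures can be sketched as follows: compare the correlation of activity patterns for item pairs drawn from the same sequence against pairs drawn from different sequences. The voxel patterns, labels, and values below are toy assumptions for illustration, not the study's data.

```python
from itertools import combinations
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length activity patterns."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def similarity_contrast(patterns, labels):
    """Mean similarity for within-sequence minus across-sequence item
    pairs; a positive value means items from the same sequence evoke
    more similar patterns."""
    within, across = [], []
    for i, j in combinations(range(len(patterns)), 2):
        r = pearson(patterns[i], patterns[j])
        (within if labels[i] == labels[j] else across).append(r)
    return mean(within) - mean(across)

# Toy "voxel patterns": two sequences (A, B) whose items share a
# sequence-level component plus item-specific variation.
patterns = [
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.9],
    [0.2, 0.1, 0.9, 1.0],
]
labels = ["A", "A", "B", "B"]
print(similarity_contrast(patterns, labels) > 0)  # → True
```

A stronger within-minus-across contrast for music-accompanied sequences is the kind of effect the hippocampal subfield results describe.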
This research has significant implications for understanding cross-modal influences on learning and memory. The findings suggest that music can serve as an effective organizational framework that strengthens event boundaries while facilitating sequential learning. This has potential applications in:
Educational settings, where musical context could enhance learning of complex sequential information
Memory rehabilitation, particularly for populations with temporal processing deficits
Understanding how the brain integrates information across modalities in naturalistic settings
Two manuscripts from this research are currently under review at academic journals, with additional publications in preparation.