Big data in language brain science (taken)

Auditory communication plays a central role in our society, yet it remains unclear how our brains allow us to understand complex sounds such as speech and music. The use of machine learning methods to study neural data has recently led to a major paradigm shift, greatly advancing our understanding of auditory perception. The Di Liberto lab has contributed to this transition by proposing a new research framework that is now widely employed in the field (e.g., Di Liberto et al., Current Biology, 2015; Di Liberto et al., eLife, 2020; Di Liberto et al., Nature Communications, 2023). Unlike more rigid traditional methodologies, this approach enables the study of brain activation in realistic scenarios involving natural speech and music listening, which are particularly well suited for re-analysis from different angles. The research team has gathered an initial set of neural data; however, this is still far from enough for training more complex models, such as transformers. Some progress has been made in that direction (https://www.nature.com/articles/s42256-023-00714-5), which holds great promise for the consistent use of transformers in brain decoding in the future. In this project, the student will have a choice of how to contribute to this endeavor. For example, one direction involves improving current brain-to-speech decoding models, either by augmenting the available data through pooling additional datasets from the recent literature, or by further developing the analysis resources for the visualisation of brain data (a minimal illustrative sketch of such a decoding model is given below).
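To give a concrete sense of what a "brain-to-speech" decoding model can look like in this kind of framework, the sketch below reconstructs a speech amplitude envelope from multichannel EEG with a time-lagged linear (ridge) model. This is only a minimal, illustrative baseline on simulated data, not the lab's actual pipeline: the sampling rate, channel count, lag window, and regularisation strength are all assumed values chosen for the example.

```python
# Minimal sketch of backward ("brain-to-speech") decoding: reconstruct the
# speech envelope from time-lagged EEG with ridge regression. All data and
# parameter values here are simulated/assumed for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

fs = 64             # assumed sampling rate (Hz) after downsampling
n_sec = 120         # two minutes of simulated listening data
n_samples = fs * n_sec
n_channels = 32     # assumed EEG montage size

# Simulated speech envelope (smoothed rectified noise) and EEG that weakly
# tracks it, buried in channel noise.
envelope = np.convolve(np.abs(rng.standard_normal(n_samples)),
                       np.ones(8) / 8, mode="same")
envelope = (envelope - envelope.mean()) / envelope.std()
mixing = rng.standard_normal(n_channels) * 0.1
eeg = np.outer(envelope, mixing) + rng.standard_normal((n_samples, n_channels))


def lagged_design(x, lags):
    """Stack time-lagged copies of the EEG (samples x channels) into a design
    matrix. A positive lag means the decoder reads EEG `lag` samples *after*
    the stimulus sample it is reconstructing."""
    n, c = x.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(x, -lag, axis=0)
        if lag > 0:
            shifted[-lag:] = 0        # zero out samples that wrapped around
        X[:, i * c:(i + 1) * c] = shifted
    return X


# Decoder integrates EEG from 0 to ~250 ms after each stimulus sample.
lags = range(0, int(0.25 * fs))
X = lagged_design(eeg, lags)

# Simple split: train on the first 80% of the recording, test on the rest.
split = int(0.8 * n_samples)
model = Ridge(alpha=1.0).fit(X[:split], envelope[:split])
pred = model.predict(X[split:])
r = np.corrcoef(pred, envelope[split:])[0, 1]
print(f"Reconstruction accuracy (Pearson r): {r:.3f}")
```

In practice, such linear decoders serve as baselines; improving on them, for instance by pooling larger datasets or moving to transformer-based models, is the kind of direction the project described above could take.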

The student will join an interdisciplinary lab with core expertise in computer science, neurolinguistics, music perception, and electrophysiology. As such, the student will also have the opportunity to learn the skills that best match their career goals (e.g., machine learning/data science, neurolinguistics, or even electroencephalography, if that is of interest to the student).

Lab website: https://diliberg.net