Keynote Speakers

11th International Conference on New Music Concepts

ICNMC aims to bring together researchers, scientists, engineers, and students to exchange and share their experiences, new ideas, and research results on all aspects of Music Studies, and to discuss the practical challenges encountered and the solutions adopted.

March 23-24, 2024     Treviso - Italy

Deadline: February 4, 2024



Gus Xia
Music X Lab, Abu Dhabi

Gus is an Assistant Professor in Machine Learning at MBZUAI, and he also holds affiliations at NYU Shanghai and NYU.
Gus has been leading the Music X Lab in developing intelligent systems that help people better compose, perform, and learn music.
Gus received his Ph.D. in the Machine Learning Department at Carnegie Mellon University in 2016, and he was a Neukom Fellow at Dartmouth from 2016 to 2017. Gus is also a professional Di and Xiao (Chinese flute and vertical flute) player. He plays as a soloist in the NYU Shanghai Jazz Ensemble, Pitt Carpathian Ensemble, and Chinese Music Institute of Peking University, where he also served as the president and assistant conductor.
He gave a solo Di & Xiao concert at Peking University in 2010, and his team presented a Music AI concert in Dubai in 2022.


Matteo Farnè
University of Bologna, Department of Statistical Sciences, Italy

Matteo Farnè (1988) began his piano studies with Olaf John Laneri and received his diploma in piano (2010) and a second-level academic diploma in composition and interpretation (2012) at the "G. Verdi" Institute in Ravenna. Committed to music popularization, he is also a fortepiano graduate of the Imola Piano Academy, where he studied under the guidance of Stefano Fiuzzi. Admitted to the Collegio Superiore of the University of Bologna, he graduated in 2012 with a master's degree in Statistics, Economics and Business with highest honors, and received his doctorate in Statistics in 2016 from the University of Bologna, where he is currently a tenure-track researcher in statistics.
During his career, he has held study, teaching, and research positions in England (University College London), Germany (European Central Bank), and the United States (UC Davis). His research interests mainly lie in the field of high-dimensional statistics and include signal processing methods for musical data analysis.
Recently, he completed the master's program in Music Analysis and Theory sponsored by GATM (Gruppo di Analisi e Teoria Musicale), graduating cum laude at the end of 2021 under the guidance of Mario Baroni, in collaboration with the Bologna Liszt Foundation. His thesis, a data-driven analysis of Liszt's Transcendental Étude no. 1, was presented at the IFCS 2022 Conference "Classification and Data Science in the Digital Age", where Matteo received the "Chikio Hayashi Award" for the best statistician aged 30-35.

Liszt's Etude S.136 no.1: audio data analysis of two different recordings
In this presentation, we review the main signal processing tools of Music Information Retrieval (MIR) from audio data and apply them to two recordings (by Leslie Howard and Thomas Rajna) of Franz Liszt's Etude S.136 no.1, with the aim of uncovering the macro-formal structure and comparing the interpretation styles of the two performers. In particular, after a thorough spectrogram analysis, we perform a segmentation based on the degree of novelty, understood as spectral dissimilarity, calculated frame by frame via the cosine distance. We then compare the metrical, temporal, and timbral features of the two performances using MIR tools. With this method, we are able to identify in a data-driven way the different moments of the piece according to their melodic and harmonic content, and to find that Rajna's performance is faster and less varied, in terms of intensity and timbre, than Howard's. This inquiry is a case study showing the potential of MIR from audio data to support traditional score analyses and to provide objective information for statistically grounded analyses of musical performance.
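The novelty-based segmentation described above can be sketched in a few lines: compute the cosine distance between consecutive spectral frames, then mark frames where the dissimilarity spikes as candidate section boundaries. This is a minimal illustration with toy spectra, not the speaker's actual pipeline; the function names and the threshold value are hypothetical.

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def novelty_curve(frames):
    """Frame-by-frame spectral dissimilarity between consecutive frames."""
    return [cosine_distance(frames[i - 1], frames[i])
            for i in range(1, len(frames))]

def segment_boundaries(novelty, threshold):
    """Frame indices where novelty exceeds the threshold (candidate section starts)."""
    return [i + 1 for i, n in enumerate(novelty) if n > threshold]

# Toy spectra: two steady 'sections' with an abrupt timbral change at frame 3.
frames = [
    [1.0, 0.1, 0.0],
    [0.9, 0.2, 0.0],
    [1.0, 0.1, 0.1],
    [0.0, 0.1, 1.0],   # energy shifts to a different band -> high novelty
    [0.1, 0.2, 0.9],
]
nov = novelty_curve(frames)
print(segment_boundaries(nov, threshold=0.5))  # [3]
```

In practice the frames would be columns of a magnitude spectrogram (or chroma vectors, for harmonic content), and peak-picking would replace the fixed threshold.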


Zhengshan (Kitty) Shi
Stanford University, USA

Dr. Kitty Shi is an accordionist, pianist, bagpipe player, and music technologist. She received her Ph.D. in Computer-Based Music Theory and Acoustics at Stanford's Center for Computer Research in Music and Acoustics (CCRMA). Her research interests lie at the intersection of music information retrieval and human-computer interaction. She is interested in user-curated, computer-assisted expressive musical performances.
She is currently a researcher at Stanford University.

Computational analysis and modeling of expressive timing in music performance
Performers' distortion of notated rhythms in a musical score is a significant factor in the production of convincingly expressive music interpretations. Sometimes exaggerated, and sometimes subtle, these distortions are driven by a variety of factors, including schematic features (both structural such as phrase boundaries and surface events such as recurrent rhythmic patterns), as well as relatively rare veridical events that characterize the individuality and uniqueness of a particular piece. Performers tend to adopt similar pervasive approaches to interpreting schemas, resulting in common performance practices, while often formulating less common approaches to the interpretation of veridical events. Furthermore, some performers choose anomalous interpretations of schemas.
This talk presents statistical analyses of the timings of recorded human performances of selected Mazurkas by Frédéric Chopin. These include a dataset of 456 expressive piano performances from historical piano rolls that were automatically translated to MIDI format, as well as timing data from acoustic recordings in an available collection. I compare these analyses to performances of the same works generated by a neural network trained on recorded human performances of the entire corpus. This talk demonstrates that while machine learning succeeds, to some degree, in the expressive interpretation of schemata, convincingly capturing performance characteristics remains very much a work in progress.
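One basic quantity behind such timing analyses is the deviation of each performed inter-onset interval from its notated duration. The sketch below, with hypothetical onset times and function names (this is not the speaker's method or dataset), shows how per-beat timing deviations could be computed from MIDI-derived note onsets.

```python
def inter_onset_intervals(onsets):
    """Durations between successive note onsets (seconds)."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def timing_deviations(onsets, nominal_ioi):
    """Relative deviation of each inter-onset interval from the notated duration.
    Positive values mean the performer stretched the beat (slowed down)."""
    return [(ioi - nominal_ioi) / nominal_ioi
            for ioi in inter_onset_intervals(onsets)]

# Hypothetical onsets (seconds) for four beats at a nominal 0.5 s per beat,
# with a ritardando on the final beat.
onsets = [0.00, 0.50, 1.02, 1.75]
devs = timing_deviations(onsets, nominal_ioi=0.5)
print([round(d, 2) for d in devs])  # [0.0, 0.04, 0.46]
```

Aggregating such deviation curves across many performances of the same score is one way to separate common performance practice (shared patterns) from the idiosyncrasies of individual performers.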


Luca Turchet
University of Trento, Italy

Luca Turchet is an associate professor at the Department of Information Engineering and Computer Science of the University of Trento, where he leads the Creative, Intelligent and Multisensory Interactions Laboratory. He holds master's degrees in computer science from the University of Verona, in classical guitar and composition from the Music Conservatory of Verona, and in electronic music from the Royal College of Music of Stockholm. He received a PhD in media technology from Aalborg University Copenhagen. He is co-founder and strategic advisor of Elk. He is the founding president of the Internet of Sounds Research Network and the Chair of the IEEE Emerging Technology Initiative on the Internet of Sounds. His scientific, artistic, and entrepreneurial research has been funded by several research grants, including a Marie Curie individual fellowship, a grant from the Danish Council for Independent Research, and grants from the European Space Agency, the European Institute of Innovation and Technology, and the Italian Ministry for University and Research. His research interests include new interfaces for musical expression, the musical metaverse, networked music performance systems, extended reality, haptic technology, and multisensory interaction.

The emerging field of the Internet of Musical Things: enabling technologies and open challenges
This talk will present the emerging field of the Internet of Musical Things, a novel framework in music technology research that extends the Internet of Things paradigm to the musical domain. Several examples of connected and intelligent devices for musicians and audiences will be discussed, along with networking technologies supporting musical interactions. The talk will conclude with a critical discussion about the technological and non-technological challenges ahead of us.