Keynote Speakers

6th International Conference on New Music Concepts

ICNMC aims to bring together researchers, scientists, engineers, scholars, and students to exchange and share their experiences, new ideas, and research results on all aspects of Music Studies, and to discuss the practical challenges encountered and the solutions adopted.

April 13-14, 2019     Treviso, Italy

Eduardo Miranda
Plymouth University, UK

Eduardo R. Miranda is a composer and Artificial Intelligence (AI) scientist. He studied Music Technology at the University of York and received a PhD on the topic of music with AI from the University of Edinburgh. Currently, he is Professor in Computer Music at Plymouth University, where he heads the Interdisciplinary Centre for Computer Music Research (ICCMR), which is pioneering the development of music neurotechnology and biological computing for music. He is emerging as an influential composer for his work at the crossroads of music and science. His music, which includes pieces for symphony orchestras, chamber groups, and solo instruments, with and without live electronics, has been performed by ensembles such as the Bergersen String Quartet, the Leo String Quartet (from the City of Birmingham Symphony Orchestra), Sond'Ar-te Electric Ensemble, the Scottish Chamber Orchestra, the BBC Concert Orchestra, and the Ten Tors Orchestra, to cite but a few. In addition to concert music, he has composed for theatre and contemporary dance. The inside story of his acclaimed computer-aided symphony, Mind Pieces, is told in the e-book of the same name, published by Intelligent Arts (http://intelligentarts.net/2016/08/miranda).

Title
Cellular Automata Music
Abstract
This talk focuses on musical composition practices in which the emergent behaviour of cellular automata is used to drive generative processes for synthesised sound and musical forms. The speaker introduces two cellular automata-based systems of his own design, Chaosynth and CAMUS, which he has used to compose a number of pieces of music. Chaosynth is a granular synthesis system whose parameters are controlled by a variant of a cellular automaton that has been used to model Belousov-Zhabotinsky chemical reactions. CAMUS is a composition system that exploits the pattern-propagation properties of cellular automata in order to generate musical forms.
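
To make the general idea concrete, below is a loose sketch inspired by (but not taken from) CAMUS: a Game of Life grid evolves, and the coordinates of each live cell are read as a pair of intervals above a base pitch, yielding a three-note chord. The grid size, the choice of Conway's rule, and the MIDI mapping are all illustrative assumptions, not details of the actual system.

```python
# Illustrative sketch: reading cellular automaton patterns as music.
# NOT the actual CAMUS implementation; rule, grid size and pitch
# mapping are assumptions made for the sake of the example.

import random

SIZE = 8          # small toroidal grid, chosen arbitrarily
BASE_PITCH = 48   # MIDI note C3, an assumed reference pitch

def step(grid):
    """One Game of Life generation on a toroidal grid."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            n = sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            new[y][x] = 1 if (n == 3 or (grid[y][x] and n == 2)) else 0
    return new

def cells_to_triads(grid):
    """Map each live cell (x, y) to a three-note chord: x gives the
    interval base -> middle note, y the interval middle -> top note."""
    return [(BASE_PITCH, BASE_PITCH + x, BASE_PITCH + x + y)
            for y in range(SIZE) for x in range(SIZE) if grid[y][x]]

random.seed(1)
grid = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]
for generation in range(4):
    grid = step(grid)
    print(f"generation {generation}:", cells_to_triads(grid))
```

A full system such as CAMUS must additionally decide when each chord sounds and for how long; the sketch deliberately reduces the mapping to its core, coordinates-to-intervals.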


Stavros Ntalampiras
University of Milan, Italy

Stavros Ntalampiras is an Assistant Professor at the Department of Computer Science of the University of Milan and an Associate Researcher of the National Research Council of Italy. He received his engineering and Ph.D. degrees from the Department of Electrical and Computer Engineering, University of Patras, Greece, in 2006 and 2010, respectively. He has carried out research and/or didactic activities at Politecnico di Milano, the Joint Research Centre of the European Commission, the National Research Council of Italy, and Bocconi University. Currently, he is an Associate Editor of IEEE Access and a member of the IEEE Computational Intelligence Society Task Force on Computational Audio Processing. His research interests include content-based signal processing, audio pattern recognition, machine learning, and cyber-physical systems.

Title
Transfer Learning: Introduction and Audio Signal Processing Opportunities
Abstract
The talk will introduce the main ideas and concepts of transfer learning, an exciting new paradigm for the area of audio signal processing. The approach is motivated by the clear trend of current systems moving away from the standard audio pattern recognition pipeline, in which handcrafted features and conventional classification methods were typically used. Focus will be placed on the differences between traditional machine learning and transfer learning, as well as the reasoning behind its ever-increasing adoption. The final part of the talk will concentrate on how transfer learning has been employed to meet the needs of relevant audio signal processing applications, such as biodiversity monitoring and bird species classification, sound and music emotion recognition, and audio-based human activity recognition.
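
As a concrete illustration of the recipe, the sketch below shows one common form of transfer learning for audio: a network pretrained on images is reused as a frozen feature extractor for (mel-)spectrograms, and only a small classification head is trained for the new task. The choice of ResNet-18, the ten-class label set, and the random tensors standing in for spectrogram batches are all assumptions made for illustration; they are not tied to the speaker's own systems.

```python
# Minimal transfer-learning sketch for audio classification, assuming
# PyTorch/torchvision. An ImageNet-pretrained backbone is frozen and a
# new head is trained; real inputs would be log-mel spectrograms
# replicated to 3 channels, random tensors stand in for them here.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumed label set, e.g. 10 bird species

# 1. Load a network pretrained on ImageNet and freeze its weights.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False

# 2. Replace the final layer with a head for the new audio task;
#    only this layer will be trained.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 3. Train the head on (fake) spectrogram batches.
spectrograms = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (16,))

backbone.train()
for epoch in range(3):
    optimizer.zero_grad()
    logits = backbone(spectrograms)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```

Freezing the backbone keeps the number of trainable parameters small, which is what makes this style of transfer learning attractive when labelled audio data are scarce, as in bioacoustic monitoring.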