Vienna Talk 2015 on Music Acoustics
“Bridging the Gaps”     16–19 September 2015


Towards realistic and natural synthesis of musical performances: Performer, instrument and sound modeling

Perez-Carrillo, Alfonso 

Proceedings of the Third Vienna Talk on Music Acoustics (2015), pp. 289–294


Imitation of musical performances by a machine is an ambitious challenge involving several disciplines, such as signal processing, musical acoustics and machine learning. Most existing techniques focus on modeling either the instrument (physical models) or the sound (signal models), but they lack an explicit representation of the performer. Recently available technology and methods for accurately measuring the performer's instrumental controls can be exploited to improve current sound-synthesis models. In this work we present an approach that combines modeling of the sound, of the instrument and of the performer in order to generate natural performances with realistic sound automatically from a musical score. The method uses the violin as a case study and is composed of three layers: the first corresponds to expressivity models, the second is a signal model driven by performer actions, and the third is an acoustic model of the sound-radiation properties of the violin body.
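The three-layer architecture described above can be read as a pipeline from score to radiated sound. The following is a minimal sketch of that data flow; all function names, control parameters and numeric values are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of the paper's three-layer pipeline:
# score -> performer controls -> source signal -> radiated sound.
# All names and values are placeholders for illustration only.

def expressivity_model(score):
    """Layer 1: map score notes to performer controls
    (e.g. bow velocity, bow force, bow-bridge distance)."""
    return [{"pitch": note,
             "bow_velocity": 0.3,        # m/s, placeholder
             "bow_force": 0.8,           # N, placeholder
             "bow_bridge_distance": 0.02  # m, placeholder
             } for note in score]

def signal_model(controls):
    """Layer 2: map performer controls to a source signal.
    Stand-in: one scalar amplitude per note."""
    return [c["bow_velocity"] * c["bow_force"] for c in controls]

def body_radiation_model(source):
    """Layer 3: apply the violin body's radiation response.
    Stand-in: a constant gain instead of a measured filter."""
    BODY_GAIN = 1.5
    return [s * BODY_GAIN for s in source]

def synthesize(score):
    """Full pipeline: score -> controls -> signal -> radiated sound."""
    controls = expressivity_model(score)
    source = signal_model(controls)
    return body_radiation_model(source)
```

In the actual system each stage would be a learned or measured model (expressivity rules, a spectral signal model, and a body-radiation filter); the sketch only shows how their outputs chain together.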



  • sound synthesis
  • expressivity
  • spectral models
  • violin radiation patterns

  • Status
    Invited Paper
    not reviewed
