

Ralph Project

This page holds information about the Ralph Project.
CMS, Faculty of Music, University of Cambridge


MEMO: The information here is currently based on my own understanding. Formal information will be added as the project develops.

+ Contents

§ Recently updated/uploaded documents
§ Project title
§ Site structure
§ Project summary
§ research questions or problems
§ aims and objectives
§ context for the research
§ outputs of the project

+ Recently updated/uploaded documents

Because this site is served statically, this list is currently unavailable.

+ Project title

A software interface between human and computer virtual players for music performance in concert

MEMO: For convenience, we call it the Ralph project :-)

+ Site structure

This site has the following sections.

> CMS:Ralph2004: People : Biographies of project members.
> CMS:Ralph2004: Proposed Ideas : Various notes about proposed ideas and plans.
> CMS:Ralph2004: Questions and Answers : Answers to the proposals from the practice of programming.
> CMS:Ralph2004: Binaries : Max/MSP patches, saved databases, et cetera.
> CMS:Ralph2004: Terminology : Terms used in the end products or in discussion.

+ Project summary

The project seeks to address the problem of the interface between the human music performer and the electroacoustic sound world of our time. The output of the research would consist of a software interface designed for performance which would enable human players -including large ensembles requiring a conductor- to play music with computers generating sound and controlling other digital devices such as synthesisers in real time.

The intended interface should be general and easy enough to use that musicians without programming skills are able to work with it. At the same time, the interface should be flexible enough to allow a composer or performer with some skills in the programming language Max to connect the pre-programmed modules in a complex, network-like fashion to address very demanding forms of musical interaction.
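
As a rough sketch of what "connecting pre-programmed modules in a network-like fashion" could look like, the Python fragment below wires hypothetical modules into a small graph. All class and method names here are invented for illustration; the actual interface is a set of Max modules, not Python code.

    # A minimal sketch of "pre-programmed modules connected in a
    # network-like fashion". All names are hypothetical; the real
    # interface described in the text is a set of Max modules.

    class Module:
        """A processing node that forwards events to downstream modules."""
        def __init__(self, name):
            self.name = name
            self.outputs = []          # downstream modules

        def connect(self, other):
            self.outputs.append(other)
            return other               # allow chaining: a.connect(b).connect(c)

        def send(self, event):
            for target in self.outputs:
                target.receive(event)

        def receive(self, event):      # default: pass events straight through
            self.send(event)

    class BeatTrigger(Module):
        """Emits one event per beat tapped by the conductor or player."""
        def tap(self, bar, beat):
            self.send({"bar": bar, "beat": beat})

    class SynthDriver(Module):
        """Stands in for a module driving a synthesiser or playback in real time."""
        def receive(self, event):
            print(f"{self.name}: play material for bar {event['bar']}, beat {event['beat']}")

    # A simple network: one human trigger feeding two sound modules.
    trigger = BeatTrigger("conductor")
    trigger.connect(SynthDriver("sampler"))
    trigger.connect(SynthDriver("synth"))
    trigger.tap(bar=1, beat=1)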

+ research questions or problems

The second half of the 20th century represents a departure from the tradition of music making in Western serious ‘classical’ music. From 1950 on, composers -perhaps for the first time in history- wrote music primarily for the instruments of previous centuries and not of their own. In the past a composer such as Berlioz not only composed for the orchestra of his time but was indeed responsible for shaping its design. Yet today most composers of contemporary serious music are still writing for the orchestra of the 19th century. By contrast, in the so-called popular musics -from jazz to rock- composing and performing practices have evolved seamlessly with the unfolding of 20th-century technology, often exploring and pushing its boundaries in the process.

The reasons for the dissociation of contemporary music practice from the technology of our time are many and complex in nature. However, we identify one significant practical problem which this project seeks to address: the problem of the interface between the human player and the electroacoustic sound world of our time. Although the variety and pace at which new computer and electroacoustic instruments have appeared is nothing short of staggering, the means to integrate them with our existing instrumental tradition remain elemental, particularly in contexts where computers are used as virtual players.

Whereas composers and performers in the past would have experienced a gradual progression from one technology to the next, the European composer since 1950 has been presented with the microphone, the electric instrument and then the computer-controlled synthesiser in a very short period of time.

In the absence of a gradual transition and interface between the technology of -say- the piano or the clarinet and the computer at the other end of the spectrum, composers and performers were forced to choose between the unknown and the successful instrumental tradition of the past. The few interfaces that emerged were generally directed not at integrating the classically trained player with the emerging electroacoustic and computer instruments but rather at replacing the performer altogether (by such things as sequencers) or at making him a more or less passive follower of an electroacoustically produced sound world.

In looking at possible solutions to the problem of the interface between the human player and the computer virtual player in concert several questions emerge.

What is the desirable hierarchical relationship between a human and a computer player? For example: who follows whom, and when?

How do we articulate different and time-varying hierarchical forms of interaction in a seamless performance environment with the robustness required in concert?

What level of computer expertise should be required of the performer and composer who may wish to work with these new tools? (simplicity vs. flexibility)
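
To make the "who follows whom, and when" question concrete, one could picture the hierarchy as a role that is a function of the bar number, so that the relationship varies over the course of a piece. The following Python sketch is purely illustrative; the bar ranges and role names are assumptions, not part of the project.

    # Hypothetical sketch: the leader/follower hierarchy as a function
    # of the bar number, so the relationship can change over the piece.

    ROLE_BY_BAR = {
        # bars 1-16: the human leads, the computer follows the triggers
        range(1, 17): "human_leads",
        # bars 17-24: fixed-tempo computer passage, players follow a cue
        range(17, 25): "computer_leads",
        # bars 25 onward: both proceed independently (irrational rhythms)
        range(25, 1000): "independent",
    }

    def role_at(bar):
        for bars, role in ROLE_BY_BAR.items():
            if bar in bars:
                return role
        return "human_leads"

    assert role_at(3) == "human_leads"
    assert role_at(20) == "computer_leads"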

+ aims and objectives

This project seeks to develop a dynamic way of interfacing the vocal and instrumental traditions of serious Western music with the computer and electroacoustic technology of today for the purpose of performance in concert.

We wish to arrive at a performance environment where we can have the best of both worlds. We identify multiple aspects of acoustic instrumental technique, such as articulation and phrasing, which we wish to maintain and indeed enhance in the process of integrating them with the rhythmic and timbral possibilities of the computer environment. We believe the proposed software interface would provide an easy-to-use tool for this purpose that would be of equal interest to performers and composers alike.

+ context for the research

There is an existing body of mixed electroacoustic works which have until now relied on primitive and, more importantly, unmusical synchronisation methods for public performance. Until now it has been common practice to use a sound or visual prompting impulse, such as a click-track or a light blip, to sync the human player to the electroacoustic material. In these contexts the human player is confined to rigidly following the computer player from the beginning to the end of a piece.
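
For concreteness, the rigid click-track synchronisation described above amounts to something like the following Python sketch, where the computer dictates every beat and the human can only follow. The tempo and beat count are arbitrary example values.

    # Minimal sketch of click-track synchronisation: the computer
    # emits a fixed pulse and the human player simply follows it.
    import time

    def click_track(tempo_bpm=60, beats=4):
        interval = 60.0 / tempo_bpm        # seconds per beat
        for beat in range(1, beats + 1):
            print(f"click (beat {beat})")  # stands in for the audio or light blip
            time.sleep(interval)

    click_track(tempo_bpm=120, beats=8)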

A truly flexible musical interface would not only encourage more composers to write music using the instruments and technology of their time but would also allow for the existing body of mixed electroacoustic works to be performed in a new flexible way.

A number of other research projects have been conducted in this area in recent years, notably at IRCAM (Institut de Recherche et Coordination Acoustique/Musique), the music and science institute in Paris founded by Pierre Boulez in the late 1970s. There, a number of Max-based interfaces have been programmed for specific pieces commissioned during the last 10 to 15 years. However, no attempt has been made to arrive at a general interface that may be used by the non-expert composer/performer. The assumption has always been that a version of some general or generic Max interface may be adapted by the in-house programmers to the individual requirements of each new piece on a concert-to-concert basis.

The result has been a large number of Max-based software interfaces which work well with the individual compositions they have been tailored for but are of no great use to the general population of players and composers who do not work at IRCAM and are not supported by expert programmers. Also, for the sake of robustness, and in the knowledge that a particular interface would not be used by the general population of musicians, little effort has been made at IRCAM to present the software in a user-friendly manner. Furthermore, over the years the features of one interface were not carried over to the next unless it was absolutely necessary, because until recently personal computers were not fast enough to handle reliably an interface supporting many simultaneous real-time features. In short, these interfaces work well for what they were designed for, but they are not general or comprehensive enough to be used outside IRCAM in a variety of musical situations.

At the other end of the research spectrum, a project has been running for many years at the MIT Media Lab which seeks to develop a virtual performer that could interact with human players by simulating the behaviour of actual musicians. This is an enormous undertaking which involves research in the area of artificial intelligence, and it will take many more years, if not decades, to accomplish. The desired result is nothing short of a virtual player that can listen to music as it is being played, follow a score, compare what it is hearing with the given score and react accordingly in real time, making the kind of musical decisions a human player would make.

Our research seeks to develop a tool that would immediately allow musicians to interface conventional acoustic instruments with computer instruments in concert, but it does not seek to endow the software with artificial-intelligence capabilities. The contribution made by this project would have an impact on performance practice today and would be general enough to be used outside the context of serious Western music, in other performance contexts, by musicians with minimal or no programming skills.

+ outputs of the project

The output of the research would consist of a software interface designed for performance which would enable human players -including large ensembles requiring a conductor- to play music with computers generating sound and controlling other digital devices such as synthesisers in real time. This interface will allow a player or conductor to trigger the music played by the computer or by synthesisers as the music unfolds in time (on a bar-to-bar or beat-to-beat basis). The software will also be able to switch itself on and off automatically at specified bars in order to give the conductor and/or players a sound and/or visual reference for synchronisation, as may be required in places where it is necessary to ‘follow the computer’. This may also be useful in passages where the computer and the human players are playing rhythms that are irrational with respect to each other.
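
A minimal sketch of this triggering behaviour, in Python rather than Max and with all names and bar numbers invented for illustration, might look like the following: the conductor triggers the computer bar by bar, except in bars marked as computer-led, where the software "switches itself on" and emits its own synchronisation reference.

    # Hypothetical sketch of bar-to-bar triggering with automatic
    # switchover to computer-led bars, as described in the text.

    COMPUTER_LED_BARS = {9, 10, 11}   # example bars where players follow the computer

    def on_bar(bar, human_trigger_received):
        if bar in COMPUTER_LED_BARS:
            return f"computer leads: emit click/visual cue, play material for bar {bar}"
        if human_trigger_received:
            return f"human leads: trigger computer material for bar {bar}"
        return f"waiting for the conductor's trigger in bar {bar}"

    print(on_bar(3, human_trigger_received=True))
    print(on_bar(9, human_trigger_received=False))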

The software interface will consist of a stand-alone programme developed in the Max programming language and running on a personal computer. The criterion will be that composers and performers should require minimal knowledge of Max in order to use the software, but should have knowledge (albeit intuitive) of performance dynamics. The interface should be easy to use in simple performing situations but flexible enough to allow for truly complex interactions varying over the course of a piece in more demanding performing environments.
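
One plausible reading of "minimal knowledge of Max" is that a performer would edit something like a plain cue list rather than a patch. The Python sketch below is an assumption about what such a cue list could look like; the field names and file contents are entirely hypothetical.

    # Hypothetical cue list a non-programmer could edit: each entry
    # names a bar, an action and the sound material it applies to.

    CUE_LIST = [
        {"bar": 1,  "action": "start", "sound": "intro.aif"},
        {"bar": 17, "action": "start", "sound": "texture2.aif"},
        {"bar": 25, "action": "stop",  "sound": "texture2.aif"},
    ]

    def cues_for_bar(bar):
        return [c for c in CUE_LIST if c["bar"] == bar]

    for cue in cues_for_bar(17):
        print(f"bar {cue['bar']}: {cue['action']} {cue['sound']}")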

We intend to demonstrate the end product in a performance of a recently composed piece for large choir and computer, alongside other new compositions specially written for the system. The existing choir piece has until now been performed in an inflexible set-up where the conductor and singers had to follow the pre-recorded computer sounds. Performing an existing piece will demonstrate the portability of existing musical compositions to the new system.

Copyright © 2004 Shigeto Wada. All rights reserved.