D. Giaretta, Advanced Digital Preservation, DOI 10.1007/978-3-642-16809-3_22, C Springer-Verlag Berlin Heidelberg 2011
22.1.3 The 1980s and Later: Mixed Music and Extension to Other Domains
The first interactive works combining performers and real-time electronic modulation of their parts appeared in the mid-1980s. Electronic devices, either hardware or software, have been used in various musical configurations: the instrument-computer duet, for instance in Philippe Manoury's works (Jupiter, for flute and computer, 1987–1992; En Echo, for voice and computer, 1993–1994); works for ensemble and live electronics, such as Fragment de lune (1985–1987) by Philippe Hurel; and works for soloists, ensemble and electronics, such as Répons (1981–1988) by Pierre Boulez. Various digital techniques have been developed since then: from various forms of sound synthesis (additive, granular, by physical modelling...), to the real-time analysis and recognition of inputs (audio as well as video), based on artificial intelligence (neural networks, hidden Markov models...), by way of various forms of distortion of the input sounds (reverberation, harmonization, filtering...). Meanwhile, these techniques have spread into neighbouring domains, such as opera with live sound transformations (K, music and text by Philippe Manoury, 2001), theatre with live sound transformations (Le Privilège des Chemins, by Fernando Pessoa, stage direction by Eric Génovèse, sound transformations by Romain Kronenberg, 2004), theatre with image generation (La traversée de la nuit, by Geneviève de Gaulle-Anthonioz, stage direction by Christine Zeppenfeld, real-time neural networks and multi-agent systems by Alain Bonardi, 2003), music and video performances (Sensors Sonic Sights, music/gestures/images with Atau Tanaka, Laurent Dailleau and Cécile Babiole), and installations (Elle et la voix, virtual reality installation by Catherine Ikam and Louis-François Fléri, music by Pierre Charvet, 2000).
In April 2008, Dalbavie and his musical assistant Serge Lemouton decided on another technique: they built a sampler, a kind of database of sounds produced by an instrument. The sounds were recorded from the old TX 816 at various pitches and intensities. This solution made it possible to re-perform the piece using a kind of photograph of the previous sounds. When no recording exists for a given pitch, the sampler interpolates between existing files in order to give the illusion that the missing note exists.
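The interpolation idea can be illustrated with a minimal sketch. This is not the actual sampler software: the function names and the simple playback-rate approach are assumptions (a real sampler would also correct duration and timbre), but it shows how a missing note can be simulated by resampling the nearest recorded one.

```python
import numpy as np

def nearest_sample(recorded, midi_note):
    """Return (source pitch, waveform) of the recording closest in pitch."""
    src = min(recorded, key=lambda n: abs(n - midi_note))
    return src, recorded[src]

def interpolate_note(recorded, midi_note):
    """Simulate a missing note by resampling the nearest recording.

    Shifting pitch by changing the playback rate: one semitone is a
    factor of 2**(1/12) in speed.
    """
    src, wave = nearest_sample(recorded, midi_note)
    rate = 2.0 ** ((midi_note - src) / 12.0)
    positions = np.arange(0, len(wave), rate)      # fractional read positions
    return np.interp(positions, np.arange(len(wave)), wave)
```

For example, requesting MIDI note 72 when only note 60 was recorded plays the recording at double speed, halving its length and raising it an octave.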
Modern music does not escape this law, and works produced by modern composers are very often judged too complex for our ears. On the technical side, complexity is inevitable, as shown in Fig. 22.1; one can think of the hidden parts of a modern piano for comparison.
22.3 Challenges of Preservation
or the techniques for the sake of a new performance, assuming a certain degree of authenticity... But for digital music, things are not so well organized. The dependency on industry is very high, knowledge is not maintained, and the obsolescence of systems and software is very rapid, thereby increasing the risk. Consider for example Emmanuel Nunes' Lichtung II. This work was first created in 1996 in Paris, using a NeXT workstation extended with a hardware DSP platform developed by IRCAM. The work was then recreated in 2000, this time using a Silicon Graphics workstation with a specific piece of software (jMax). The work was recreated again in Lisbon in 2008, using a Macintosh with the Max/MSP software. Each of these re-performances required a port of the original process, implemented by the engineer who had developed the first version. This is only one example, but the number of works originally created for the NeXT/ISPW workstation is huge, and for all of them the cycle of obsolescence is similar. The risk is then to lose some works completely: for lost compositions from the past, the musical score is sufficient for any new performance; for the digital part, the score gives no information.
Data objects and processes are used during the live performance, while documentation and recordings are used when preparing a new live performance. Recordings can be considered a specific kind of documentation that aims to provide a descriptive documentation of the work (what it sounds like...), while documentation in the form of images and text aims to provide a prescriptive documentation of the work (what has to be done). As explained above, recordings are part of the objects to be preserved, but they are not sufficient to preserve the ability to re-perform the work. A single work can involve many objects: several hundred different data objects (audio files, MIDI files...), several real-time processes (for instance, more than 50 in En Echo by Philippe Manoury), and several different documentation files (one for speaker installation, one for microphones, one for general setup and installation...). All these documentation files can be grouped together in a single PDF file.
22.3.1 Challenge 1: Preserving the Whole Set of Objects, with Its Logical Meaning
The first important challenge is to preserve the logical relationship between all these objects. That relationship is one of the most important parts of the challenge, since the preservation strategy applicable to each element can depend on the logical relationship the element has with the whole set. An example is audio files used to store parameters. During our analysis of the content of the repository, some problems emerged, one of the most glaring being this very case: instead of storing audio, some audio files are used to store numerical parameters that serve as input to real-time processes in order to change their behaviour. This practice seems to be very frequent, since audio files are well known to the community and are therefore easy to use. As a consequence, migration techniques that are applicable to audio files cannot be applied to these specific parameter files (the reason is evident to any member of the community: audio-oriented transformations would alter the stored values). Thus, the logical relationship to the whole set of objects has to be maintained in order to achieve preservation.
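The hazard can be made concrete with a small sketch, using Python's standard wave module as a stand-in for the real tools (the 16-bit layout is a hypothetical choice). The "samples" of the file are really control values, which is exactly why an audio migration such as resampling would silently corrupt them, while a bit-preserving copy would not.

```python
import io
import struct
import wave

def write_param_wav(params, fileobj, sr=44100):
    """Store a sequence of 16-bit integer parameters as a mono WAV.

    To an audio tool this looks like ordinary sound; to the real-time
    process the sample values ARE the parameters.
    """
    with wave.open(fileobj, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(sr)
        w.writeframes(struct.pack(f"<{len(params)}h", *params))

def read_param_wav(fileobj):
    """Recover the parameter list from the WAV's raw frames."""
    with wave.open(fileobj, "rb") as w:
        raw = w.readframes(w.getnframes())
    return list(struct.unpack(f"<{len(raw) // 2}h", raw))
```

A round trip through write and read preserves the parameters exactly; any sample-rate conversion applied "for preservation" would not.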
22.3.2 Challenge 2: Preserving the Processes, and Achieving Authenticity Throughout Migrations
As explained in detail below, the second important challenge is to preserve the real-time process (the so-called patch). The obsolescence of the environments able to execute the real-time process is so rapid that processes need to be migrated approximately every 5 years. Moreover, there is a need to achieve a certain form of authenticity throughout the successive migrations, one that cannot obviously be based on simple Provenance and Fixity Information as defined by the OAIS model.
Moreover, existing documentation is written according to a template (which can be expressed either in LaTeX or PDF). The methodology was defined as follows:
Reduce the block-diagram flow to an algebra (the existing FAUST language was chosen: concise, sufficiently expressive, and developed by Grame, Lyon, France).
Store the semantics of the elements, by extracting them from the existing documentation.
To this end, several tools have been developed, according to the architecture shown in Fig. 22.3.
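The idea of reducing a block diagram to an algebraic expression can be sketched as follows. This is a toy illustration in Python, not actual FAUST code, but it borrows FAUST's real composition operators: ':' for sequential and ',' for parallel composition of blocks.

```python
def seq(a, b):
    """Sequential composition: output of a feeds input of b (FAUST ':')."""
    return f"({a} : {b})"

def par(a, b):
    """Parallel composition: a and b run side by side (FAUST ',')."""
    return f"({a} , {b})"

# A toy patch: the input is processed by a lowpass branch and a reverb
# branch in parallel, then the two branches are mixed.
expression = seq(par(seq("adc", "lowpass"), seq("adc", "reverb")), "mix")
```

Once a patch is flattened to such an expression, it can be stored as text and re-interpreted by any future implementation of the algebra, which is the point of choosing a concise language over an environment-specific binary format.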
Fig. 22.3 Architecture of the ingest tools (recovered labels: Reference Manual (PDF), RepInfo (XML), PDI (RDF), DATA (XML), MustiCASPAR Ingest)
The role of each of these tools is defined as follows:
DOC tool: extracts the semantics of elements from the existing documentation.
FUNC tool: parses the code of each single process in order to identify elements, verifies the existence of RepInfo (if RepInfo is missing, a warning is generated; see the demo below), and provides PDI for the process according to the PATCH ontology template.
FILE tool: analyzes the global structure of all provided files and encodes the PDI of the work according to the provided WORK ontology template.
LANG tool: re-encodes the syntax of the original process according to the chosen language (FAUST).
The results of these extractions are stored in the archive, according to the OAIS methodology, during the Ingest phase.
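The warning behaviour of the FUNC tool can be sketched as follows. The data structures here are hypothetical simplifications: element names as strings and the RepInfo store as a dictionary, where the real tool works on parsed patch code and an archive.

```python
def check_repinfo(patch_elements, repinfo_registry):
    """For each element found in a patch, verify that Representation
    Information exists; emit a warning for each missing entry.

    patch_elements: element names identified by parsing the process.
    repinfo_registry: mapping from element name to its RepInfo.
    """
    warnings = []
    for element in patch_elements:
        if element not in repinfo_registry:
            warnings.append(f"WARNING: no RepInfo for element '{element}'")
    return warnings
```

A run over a patch using `adc~` and `cycle~` with RepInfo recorded only for `adc~` would produce a single warning for `cycle~`, prompting the archivist to document that element before ingest completes.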
[Figure residue: two ontology diagrams instantiating the WORK and PATCH ontology templates with CIDOC-CRM/FRBRoo classes — E21_Person, E35_Title, E52_Time_Span, E65_Creation, E73_Information_Object, E82_Actor_Appellation, F20_Self-Contained_Expression, F30_Work_Conception, F31_Expression_Creation, F46_Individual_Work — and properties such as P14F_carried_out_by (with subproperties for Composer, Writer, Videast) and R58F_is_derivative_of (with subproperty R58_1F_Migration, "Migration Derivative"), linking literals such as the work title, composer name, musical assistant name, patch label, function id and label, and time-span values.]
To this end, Representation Information checking is performed in three steps:
1. Checking completeness of information: reconstruction of the original process from the extracted Representation Information.
2. Checking usefulness: construction of an equivalent process from the extracted RepInfo, but executed in PureData (equivalent to a migration).
3. Authenticity: comparison of audio outputs, according to the defined Authenticity protocol.
22.4.4.1 Checking Completeness of Information
The purpose here is to check the completeness of the Representation Information. To this end, we apply a transformation using the Language Tool (described above) to the original object (process). We then apply the reverse transformation in order to obtain a new process, which is supposed to be identical to the original one. We can apply a bit-to-bit comparison to the two objects in order to detect any loss of information, as illustrated in Fig. 22.7.
22.4.4.2 Checking Usefulness of Information
The purpose of this check is to show that a new process, different from the original one but functional, can be reconstructed from the provided Representation Information, as illustrated in Fig. 22.7. To show this, an automatic translation tool is used (based on the Language Tool already described), replacing the original Max/MSP environment with the PureData environment. It should be noted that some manual adjustments have to be made in the current version of the tools (due to incompleteness of the Representation Information with respect to PureData).
22.4.4.3 Checking Authenticity
In order to check Authenticity, we apply an Authenticity protocol.
Here is a slightly simplified version of the AP. At Ingest, a three-step protocol:
1. Choose an input audio file (inputFile1).
2. Apply the audio effect to it.
3. Record the output audio file (outputFile1).
At Migration, a three-step protocol:
1. After migration, apply the new audio effect to inputFile1.
2. Record the output audio file (outputFile2).
3. Compare outputFile1 and outputFile2 (by ear, by an audio engineer, or by any other method of comparison, for example comparing spectrograms), as illustrated in Fig. 22.8.
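Two of these checks can be sketched in Python. This is a simplified illustration with assumed helper names, not the project's actual tools: a bit-to-bit round trip for the completeness check, and a spectral comparison of the two recorded output files for the authenticity check.

```python
import hashlib

import numpy as np

def completeness_check(process_bytes, encode, decode):
    """Round trip through the preservation language: encode the original
    process, decode it back, and compare bit-to-bit. Any difference
    signals a loss of information. encode/decode stand in for the
    Language Tool transformations."""
    restored = decode(encode(process_bytes))
    return (hashlib.sha256(process_bytes).digest()
            == hashlib.sha256(restored).digest())

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram from windowed short-time FFT frames."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def authenticity_distance(out1, out2):
    """Mean spectral difference between outputFile1 (recorded at Ingest)
    and outputFile2 (recorded after Migration); a small value suggests
    the migrated effect behaves like the original."""
    s1, s2 = spectrogram(out1), spectrogram(out2)
    n = min(len(s1), len(s2))
    return float(np.mean(np.abs(s1[:n] - s2[:n])))
```

An identical pair of outputs gives a distance of zero; how large a non-zero distance is still acceptable remains a judgment call, which is why the protocol also allows comparison by ear.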
It is important to note that, when comparing output files, some adjustments have to be made on the object itself in order to achieve authenticity.
Multimedia Performances (IMP) [221]. The section describes several different IMP systems and presents an archival system, which has been designed and implemented based on the CASPAR framework and components for preserving Interactive Multimedia Performances.
22.5.1 Introduction
IMP was chosen as one of the testbeds for its challenges: it is complex, has multiple dependencies, and typically involves several different categories of digital media data. Generally, an IMP involves one or more performers who interact with a computer-based multimedia system, making use of multimedia contents that may be prepared in advance as well as generated in real time, including music, audio, video, animation, graphics, and many others [222, 223]. The interactions between the performer(s) and the multimedia system [224–226] can take a wide range of forms, such as body motions (for example, see Music via Motion (MvM) [227, 228]), movements of traditional musical instruments or other interfaces, sounds generated by these instruments, tension of body muscles using bio-feedback [229], heart beats, sensor systems, and many others. These signals from performers are captured and processed by multimedia systems. Depending on the specific performance, the input can be mapped onto multimedia contents and/or used as control parameters to generate live contents/feedback using a mapping strategy.
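A mapping strategy can be as simple as scaling a sensor range onto a control-parameter range. The sketch below is a generic illustration — the ranges and the choice of filter cutoff as the target parameter are hypothetical, not taken from any particular IMP.

```python
def linear_map(value, in_range, out_range):
    """Map a raw sensor reading into a control-parameter range,
    clamping to the output bounds."""
    (i0, i1), (o0, o1) = in_range, out_range
    t = (value - i0) / (i1 - i0)
    t = max(0.0, min(1.0, t))
    return o0 + t * (o1 - o0)

# e.g. an accelerometer magnitude of 0..512 mapped onto a
# synthesis filter cutoff of 200..4000 Hz
cutoff_hz = linear_map(256, (0, 512), (200.0, 4000.0))
```

Real mapping strategies are often layered (gesture recognition feeding many such scalings at once), but each layer ultimately reduces to transformations of this kind.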
Traditional music notation, as an abstract representation of a performance, is not sufficient to store all the information and data required to reconstruct the performance with all its specific details. In order to keep an IMP alive through time, not only its output but also the whole production process used to create that output needs to be preserved.
Fig. 22.9 The i-Maestro 3D augmented mirror system showing the motion path visualisation
Fig. 22.10 AMIR interface showing 3D motion data, additional visualizations and analysis
Fig. 22.11 The ICSRiM conducting interface showing a conducting gesture with 3D visualisation
also provides a multimodal recording (and playback) interface to capture/measure detailed conducting gestures in 3D for the preservation of the performance. A portable motion capture system composed of multiple Nintendo Wiimotes is used to capture the conductor's gesture. The Nintendo Wiimote has several advantages: it combines both optical and sensor-based motion tracking capabilities, and it is portable, affordable and easily attainable. The captured data are analyzed and presented to the user, highlighting important factors and offering helpful, informative monitoring for raising self-awareness, which can be used during a lesson or for self-practice. Figure 22.11 shows a screenshot of the Conducting System Interface with one of the four main visualization modes.
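To give a flavour of the kind of gesture analysis such an interface performs, here is a deliberately crude sketch — not the ICSRiM algorithm; the threshold and peak-picking rule are assumptions — that marks candidate beat points as local peaks of acceleration magnitude:

```python
def beat_times(accel_mag, sample_rate, threshold=2.0):
    """Return times (in seconds) of local peaks of acceleration
    magnitude above a threshold: a crude proxy for conducting beats.

    accel_mag: sequence of acceleration magnitudes, one per sample.
    sample_rate: samples per second of the motion capture stream.
    """
    beats = []
    for i in range(1, len(accel_mag) - 1):
        if (accel_mag[i] > threshold
                and accel_mag[i] >= accel_mag[i - 1]
                and accel_mag[i] > accel_mag[i + 1]):
            beats.append(i / sample_rate)
    return beats
```

From such beat points the interface can derive tempo and regularity measures, which is the kind of "important factor" worth highlighting to a student during self-practice.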
inter-relationships and additional information considering the reconstruction issues. This is challenging because it is difficult to preserve the knowledge about the logical and temporal components, and all the objects such as the captured 3D motion data, Max/MSP patches, configuration files, etc., in such a way that they can be properly connected for the reproduction of a performance [235]. Due to these multiple dependencies, the preservation of an IMP requires robust representation and association of the digital resources. This can be performed using entities and properties defined by CIDOC-CRM and FRBRoo. The CIDOC Conceptual Reference Model (CRM) is being proposed as a standard ontology for enabling interoperability amongst digital archives [236]. CIDOC-CRM defines a core set of concepts for physical as well as temporal entities [237, 238]. It was originally designed for describing cultural heritage collections in museum archives. A harmonisation effort has also been carried out to align the Functional Requirements for Bibliographic Records (FRBR) [239] with CIDOC-CRM for describing artistic contents. The result is an object-oriented version of FRBR, called FRBRoo [240]; the concepts and relations of FRBRoo are directly mapped to CIDOC-CRM. Figure 22.12 demonstrates how the CIDOC-CRM and FRBR ontologies are used for the modelling of an IMP.
Fig. 22.12 Modelling an IMP with the use of the CIDOC-CRM and FRBR ontologies (recovered labels: the F52.Performance "IMP" is P14F carried out by F8.Person "Kia (Director)" and "Frank (Performer)"; P16F uses specific objects E73.Information Object "Music Score" and E22.Man-Made Object "Cello", "Sound Mixer", "Computer System"; P4F has time-span E52.Time-Span "2hours:5PM-12/02/07"; P7F took place at E53.Place "Leeds-UK")
testbed domains. In this case, our scenarios relate to the ingestion, retrieval and preservation of IMPs. The ICSRiM IMP Archival System has been designed and developed with the CASPAR framework, integrating a number of selected CASPAR components via web services. The system has been used to implement and validate the preservation scenarios. The archival system is a web interface, shown in Fig. 22.13, which communicates with a Repository containing the IMPs and the metadata necessary for preserving them. The first step in preserving an IMP is to create its description based on the CIDOC-CRM and FRBRoo ontologies. This information is generated in RDF/XML format with the CASPAR Cyclops tool. The Cyclops tool [241] is used to capture appropriate Representation Information to enhance virtualisation and future re-use of the IMP; this web tool is integrated into the Archival System and is used to model various IMPs. During ingestion, the IMP files and the metadata are uploaded and stored in the Repository through the web-based IMP Archival System. For the retrieval of an IMP, queries are performed on the metadata and the related objects are returned to the user. If a change occurs in the dataset of an IMP, such as the release of a new version of the software, the user can update the Representation Information and the dataset of the IMP with the new modules (e.g. the new software version). A future user will then be able to understand which is the latest version of each component, and how the components can be reassembled for the reproduction of the performance, by retrieving the Representation Information of the IMP.
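The version-update scenario amounts to keeping, per component, the most recent recorded module. A minimal sketch — the (name, version) metadata shape is an assumption, standing in for the Representation Information actually stored in the Repository:

```python
def latest_versions(modules):
    """Given (component_name, version) pairs accumulated over successive
    updates, return the newest version of each component, so a future
    user can see which modules to reassemble."""
    latest = {}
    for name, version in modules:
        if name not in latest or version > latest[name]:
            latest[name] = version
    return latest
```

For instance, after an IMP's patch environment has been updated twice, a query over the accumulated metadata immediately identifies the current module set without inspecting the files themselves.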
22.5.7 Conclusion
This section of the chapter has introduced the usages and applications of interactive multimedia for contemporary performing arts, as well as its usefulness for capturing/measuring multimedia and multimodal data that better represent the playing gestures and/or interactions. With two example IMP systems, it has discussed the key requirements and complexities of the preservation considerations and presented a digital preservation framework based on ontologies for Interactive Multimedia Performances. With the CASPAR framework, standard ontology models were adopted in order to define the relations between the individual components that are used for the re-performance. We have also described the development and implementation of a web-based archival system using the CASPAR framework and components. The ICSRiM IMP Archival System has been successfully validated by users who created their own IMP systems using their own work for ingestion, and who used works ingested by others (without any prior knowledge) to reconstruct a performance with only the instructions and information provided by the archival system.
22.6 CIANT Testbed
VRML profile: renders the 3D scene using the 3D geometry-related metadata.
Unreal profile: renders the same scene in the more advanced environment of the Unreal Engine.
Media player profile: waits for video files in order to interpret the data by playing the video.
Timeline profile: waits for the list of all processes found in the ontology and, based on this information, generates the timeline widget.
Graph profile: renders the RDF graph and highlights nodes within the graph based on their activity.
Subtitles profile: displays subtitles added by the modeller for annotation purposes.
At the very end of the loading process, a slider widget (timeline controller) is instantiated and configured. The end-user controls the whole visualisation tool from the control panel of the slider widget, which synchronises the other components. In our case, synchronisation is achieved by sending small UDP packets to the software applications representing the selected visualisation profiles. An example is the synchronisation of multiple video players showing recordings of the stage from different angles while, at the same time, the 3D scene is rendered from the recorded motion capture data (see Figs. 22.14, 22.15 and 22.16). More details can be found in an article ambitiously titled "Long-term digital preservation of a new media performance: Can we re-perform it in 100 years?" in the May issue of International Preservation News, published by the IFLA Core Activity on Preservation and Conservation (PAC), accessible at http://www.ifla.org/files/pac/IPN_47_web.pdf
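The UDP synchronisation can be sketched in a few lines. The "SEEK seconds" message format, host and ports below are assumptions for illustration, not the CIANT protocol; only the mechanism — one small datagram per visualisation profile — follows the text.

```python
import socket

def broadcast_sync(position_s, ports, host="127.0.0.1"):
    """Send a small UDP packet carrying the current timeline position
    to each application implementing a visualisation profile."""
    msg = f"SEEK {position_s:.3f}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        for port in ports:
            s.sendto(msg, (host, port))
```

UDP suits this job: the packets are tiny, an occasional lost one is corrected by the next slider update, and one sender can address many independently running players without connection state.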
Fig. 22.14 Still image from the original recording of the GOLEM performance
Fig. 22.15 Screenshot of the preview of the GOLEM performance in the performance viewer tool
Fig. 22.16 Performance viewer: from left to right, model of the GOLEM performance, timeline slider, three different video recordings of the performance, 3D model of the stage including the virtual dancer, 3D model used for the video projection, audio patch in Max/MSP and Pure Data
22.7 Summary
This chapter has shown some of the results from the Contemporary Performing Arts testbeds which apply the techniques described in this book to the multitude of digital objects used in this area. The view of preservation as involving re-performance, and the ideas of authenticity, introduce a number of new ways to test our concepts, tools and techniques.