Precession – work in the pipeline

Pyramid music

It is common for composers to consider their work through a number of dimensions or axes, examples of which can be found in fugue technique, Wishart’s lattice model (Wishart, 2002) or, more extremely, embodied throughout the methodology of 12-tone serialism (Schoenberg and Stein, 1950).

This planar thinking lends itself to the invention of various compositional techniques, from, on a simplistic level, running variables in retrograde fashion through to the nested recursion of the Art of Fugue (Bach, 1750) and Markov chain compositional techniques (Roads, 2000). This is often linear, looking at a set of pitches or rhythms and interrogating and manipulating a stream of data points. From there it is only a small step to link extra information to this data. For instance, at a given time a certain note could be played at a specified volume by a given instrument (Boulez, Cage and Nattiez, 1999). Thus, in computing terms we arrive at a two-dimensional array of data, something potentially very close to a square if you were to tabulate it.

With the addition of other data and relationships comes the opportunity for higher dimensions within this array, popping an extra axis into a cube, followed by a tesseract, up to the point at which it would be difficult to perceive within an audible work.

What if, rather than dealing with data points, the composer were able to manipulate the dimensional space of the larger work?

Precession is an exploration of these ideas. It draws on the Pitch Volume Timbre space defined by Wishart but folds in one side of the ‘cube’ to form a pyramid space. Working from the assumption that pitch is mapped to x, volume to y and timbre to z, notes can be placed anywhere within the body of the pyramid. This simple translation brings a good deal of creative perturbation to the system, as when any datum is near the apex of the pyramid (z axis) its potential placement within the two remaining axes (x, y) across the square base is severely limited.
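
This constraint can be sketched in a few lines of code. The following is a minimal illustration, assuming a unit pyramid with its square base on the x-y (pitch-volume) plane and timbre rising along z to the apex; the function names and the normalised 0 to 1 ranges are mine, not part of the piece.

```python
# Sketch: constraining (pitch, volume) placement by timbre height in a unit pyramid.
# Assumes all three parameters are normalised to 0..1, with the apex at z = 1.

def pyramid_bounds(z):
    """At timbre height z, the square cross-section shrinks linearly towards the apex."""
    half = 0.5 * (1.0 - z)           # half-width of the square slice at height z
    return (0.5 - half, 0.5 + half)  # permissible range for both x (pitch) and y (volume)

def clamp_note(pitch, volume, timbre):
    """Force a freely chosen (pitch, volume) pair inside the slice allowed by its timbre."""
    lo, hi = pyramid_bounds(timbre)
    return (min(max(pitch, lo), hi), min(max(volume, lo), hi), timbre)

if __name__ == "__main__":
    # Near the base (z = 0.1) almost any placement survives; near the apex it collapses.
    print(clamp_note(0.9, 0.1, 0.1))  # stays roughly (0.9, 0.1, 0.1)
    print(clamp_note(0.9, 0.1, 0.9))  # squeezed towards the centre, roughly (0.55, 0.45, 0.9)
```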

Where this becomes especially interesting is in being able to roll the pyramid like a die within compositional space. If the viewpoint remains fixed, with x for pitch and so on, any movement of the pyramid re-voices the notes therein, potentially in strikingly different ways. Thus, phrases are cast and re-cast within the rolling die of the shape, causing precessionary transformations of the material.
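
A similarly minimal sketch of the ‘rolling’ idea, assuming a quarter roll about the x axis in the same normalised space as above; the axis choice and the example values are illustrative only.

```python
# Sketch: "rolling" the pyramid 90 degrees about the x axis while the viewing
# convention stays fixed (x = pitch, y = volume, z = timbre). In normalised
# 0..1 coordinates a quarter roll about the centre maps (x, y, z) -> (x, 1 - z, y):
# what was timbre is now read as volume.

def roll_about_x(note):
    x, y, z = note
    return (x, 1.0 - z, y)

phrase = [(0.7, 0.6, 0.1), (0.4, 0.3, 0.8), (0.5, 0.9, 0.2)]
print([roll_about_x(n) for n in phrase])
# The same data points now imply strikingly different volumes and timbres; if needed
# they can be clamped back into the upright pyramid with the function sketched above.
```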

Instrument design realised in SuperCollider

Visual components and data transformation in Processing

Composition and array transformations in Excel

System communication via OSC

Edward Wright 3/12/19

Bibliography:
Bach, J.S. (2013). The Art of the Fugue & A Musical Offering. Composed c. 1740–50. Newburyport: Dover Publications.
Boulez, P., Cage, J. and Nattiez, J. (1999). The Boulez–Cage Correspondence. Cambridge: Cambridge University Press.
Roads, C. (2000). The Computer Music Tutorial. Cambridge, MA: The MIT Press.
Schoenberg, A. and Stein, L. (1985). Style and Idea. First published 1950. Berkeley: University of California Press.
Wishart, T. (2002). On Sonic Art. London: Routledge.

Lots of words and lots of noises…

This is some of the text part of my PhD thesis, put up here so as to make sense of some of the music that is floating around on the internet and in the real world, and to possibly prove useful for people who are into this sort of thing. It is also here at the request of friends and students who would like to read it and for whom I’ve not got round to running off another ten copies at £28 each (ouch!).

There are other elements to this body of work, such as scores, data sheets, software documentation and audio files, that were submitted and are available either from me or in reference form from the National Library of Wales; some are also gradually making their way onto this site. If you do find any of this useful, or cite any of it, or copy and paste it into your own work, please email me at edwardcwright@hotmail.com.

© Edward Wright and Bangor University 2013. All Rights Reserved. No part of this website or any of its contents may be reproduced, copied, modified or adapted, without the prior written consent of the author, unless otherwise indicated for stand-alone materials. Commercial use and distribution of the contents of the work is not allowed without express and prior written consent of the author. 

Symbiosis:

A portfolio of work focusing on the tensions between electroacoustic and instrumental music.

Ed Wright

Acknowledgements

In no particular order I would like to express my gratitude to: my parents Bryan and Elizabeth; Em, Alena, Ben, Bess & Gizmo; Barbara Clark; Professors Andrew Lewis, Jonty Harrison & Robert Normandeau; Dr Chris Collins, Rosie Skelton & Bangor University Symphony Orchestra, Prof. Graeme Harper, Nigel Shaw, Amici del Canto & Sioned Eleri Roberts; Paul Luke, Marion Garrett, Nigel Gaston, Jean Wild & Egerton Rothesay School; Shelly Oscinberg; Dr. Tim Porter; Rhodri Davies; Edward Davies; Andrew Hill; Bethan Elfyn & Adam Walton at Radio 1, Steve Simms & Owain Llyr at Coast/Champion F.M. (later Heart), Ergo Phizmiz at WFMU; Bill Connor; the Parry Williams Scholarship trust, the Drapers Guild, Bangor University School of Music; All the nurses and staff at Alderhey Children’s Hospital; Liz Dow & Alan Couchman; the staff and patrons of the Alex Penmaenmawr; Roger, Wil, Chris, Mauve, John Lawrence & all of the Snowdonia Nights Out Gazette (SNOG) crew; Malcolm and James Hendre; Jack Corlett & Harry Perry; Blipfonica Records; Rob, Zak, Paul, Stelios and Nikos; and to anyone else who was accidentally left out, it was not intentional!

For Gilbert and Alena Wright, Grandpa and Daughter.

One lost, the other born, during the course of this work.

May your gardens grow well and the sunlight shine in your souls.

With all my love. E

1907-2006    &    2008-

Summary

This portfolio of music, both notated and recorded, focuses on the tensions between electroacoustic and instrumental music. Within this commentary there is discussion of the topics raised by this combination of forces both at a philosophical and a practical level. A number of pieces of music have been composed, realised and performed in an attempt to explore and resolve some of these issues in a practical context.

These works fall within the broader remit of western ‘classical’ music, the instrumental resources ranging in scope from solo performers to six-part choir and up to symphony orchestra. The musical language remains rooted in this idiom, although allowances for exploration in line with the nature of experimental composition have been made. The electronic element is also largely ‘classical’ in style, drawing from the heritage of European experimentation and making reference to the concrète and acousmatic methods of working whilst employing resources from stereo to eight channels. Live processing is also included by way of software design and utilisation, and one of the works contains a pre-recorded visual element in the form of animation.

This portfolio comprises all the performance materials required, recordings and realisations of the works, as well as listing the technical resources used in the creation process, data sheets of software to enable future reconstruction, and a list of all known performances up to summer 2010.

(Supporting audio works can be found at http://www.virtual440.com and http://www.myspace.com/virtual440 when not available in the text and are available to purchase from http://www.blipfonica.com)

Contents:

             1.         Introduction

                         i           Extension of the repertoire, a number of conclusions, and yet more questions.

                         ii          Towards transformation…

                         iii         Causality and magic

                         iv         The visual element, space, and its redefinition.

                         v          Alchemy versus architecture

             2.         Commentaries

                         i           The Way I Saw It

                         ii          Enough~?

                         iii        Harp Set

                         iv         Postcards from Home

                         v          Con-chords

                         vi        Seasons

                         vii        Polarities: Concerto grosso for orchestra and electronics

             3.         Conclusions

             4.         Public output

                         i           Live performances

                         ii          Broadcast works

                         iii         Other academic / peer reviewed work

             5.         Technical Resources

             6.         Bibliography

                         i           Written Material

                         ii          Websites

                         iii         Scores

             7.        Appendix

1.         Introduction

             1.i        Extension of the repertoire, a number of conclusions, and yet more questions.

When beginning this body of work, the intention was to extend the repertoire of the violin through the medium of electronically manipulated sound, focusing largely within the field of electroacoustic composition. The lack of ambition and the limitations of scope associated with this remit soon became apparent. The primary difficulty arose from attempting to unify such differing and seemingly mutually exclusive vehicles of musical expression. This body of work follows a paradigm shift, away from the split axioms of instrumental lattice-based composition (as Wishart would describe it)1 and the idiomatic rigours of the electroacoustic/acousmatic fields, towards an attempted integration of these traditions, in an effort to create a singularity of performance and meaning.

This led to a number of interesting tensions. Some were practical, such as the integration of live with electronic music; some were technical, such as the differing methods of handling pitch-based material; and still others were more the domain of philosophy and aesthetics. These will be discussed and explored further within the following music, and in the commentaries thereon. The progression towards this understanding is clearly demonstrated by the movement from violin and electronics in the form of a relatively simple ‘tape’ piece, through to the exploration of live interaction in Enough~?, the effect of the visual and the ‘cross-pollination’ of fields, culminating in Polarities, a work for symphony orchestra and multichannel live electronics. It may be worth noting that the works in this portfolio represent an attempt to incorporate the instrumental into the electronic. This is not to be viewed as acoustic composition, although there are obvious and significant parallels with such a manner of working. It is hoped that this body of work represents a new contribution to the field, in that it explicitly steps out of the philosophical pools of a number of genres with the aim of forming a method or style that is both unified in its forces and (albeit very subjectively) rewarding and enjoyable to listen to and interact with.

             1.ii       Towards transformation…

The concept of the transformation of sounds is fundamental to the electroacoustic tradition of composition.2 This characteristic operates on at least two levels: the physical and the conceptual. Firstly, and most explicitly, the physical operates by way of the changing of sounds3 from one recognisable real-world object into those of another, generally as a result of some intermediary process such as convolution or vocoding or, more historically, by the use of a bridging ‘object’ containing aspects of both sounds, forming a temporary link between the two main events. On the conceptual level, sources can be transformed by the alteration of the musical/auditory function of a given sound, as most clearly demonstrated by the acousmatic approach. The sound of a door slamming has a clear causality, even when not implied by such a description, but it can also function as a percussive ‘note’, a cadential gesture or as a signifier for the raising or release of emotional tension. This realignment of object and inference can be explored to a certain extent with purely concrète sounds, and to a large extent when making use of heavy processing in which causal links can be eroded to the point of annihilation, as shown in Con-Chords. These two levels of auditory conceptualisation are strongly interrelated, and it is the manipulation of the tensions between the physical and conceptual cognition of a given sound set, and the apparent mutual exclusivity of the transformation process,4 that provides a large amount of the underlying physical tension within the electroacoustic genre.

Sonic transformation and spectro-morphology5 have few parallels within instrumental music beyond the thematic. Starting from the solo instrument, there are many examples of auditory modification, although they are mostly limited to parameters such as pitch or intensity. Within these specialised fields there are many instruments that excel; the myriad of pitches found within the range of the ‘cello are, for practical purposes, infinite, and the subtlety and relative ease of real-time volume control and articulation achievable by a skilled performer is astounding. It is when considering timbre that traditional instruments show weaknesses. There are many ways to modify this property, and in numerous instruments the manipulation of tone is a part of basic playing technique. In some cases timbre can be modified to a more radical extent, such as the use of multiphonics in wind instruments, sul pont. in strings and mutes in brass. However this manipulation is limited to a finite pool of timbral possibilities, albeit mediated by the awareness and knowledge of the listener. The crude nature of timbral control and the fixed patterns of performer interface arise largely from the stylistic demands of notated music and its symbiosis with instrument design and playing technique. It is when stepping into the grounds of such a primarily auditory and a-notational genre as the acousmatic, with its focus on experimentation, that these limitations have the potential to become problematic.

The fixed nature of instrumental timbre becomes less of an issue when working in an ensemble context. Composers as varied as Varèse, Debussy and Mahler are renowned for having used the sound of one instrument to colour another, or different sections of the orchestra in unusual combinations, to create new composite textures and even to incorporate ‘new’ instruments.6 Within their remit they were very successful. They were not trying to match the sonic possibilities afforded by early twenty-first-century computing. The spectral ‘school’ of composition has arguably come closest to this effect, but remains limited in its success in this respect, whilst creating numerous difficulties in terms of instrumental performance. The spectral possibilities of studio-based composition cannot be matched by instrumental performance. Another solution must be found in order to create a union of these two forms of musical expression.

             1.iii      Causality and magic

Throughout its evolution electroacoustic music has been plagued by issues of causality and reality. Musically the works obey the familiar rules of cause and effect, with an object remaining at rest until acted upon by an external impulse. Tension and release, high crashing to low and the calm after the storm are all verbal examples of this, and are experimented with in Postcards From Home. Whether explicitly, or by way of subversion in order to create musical tension, electroacoustic music is bounded by the experiential norms of the listener. It is also this filter of understanding that has led in part to the need for diffusion or ‘performance’ of potentially autonomous works, ‘orchestras of loudspeakers’7 and the questioning of whether it is possible to speak of the performance of electroacoustic music in the traditional understanding of the words.

Conversely, instrumental music has a strong causal link: an instrument is a direct physical extension of the musician’s body and expressive capacity. The inclusion of the instrumental into the electroacoustic space reveals a number of new musical and causal dimensions. The concept of transformation and sonic malleability can be taken much further. With recorded work the listener derives a perception from sound pressure waves created by one or more loudspeakers. There is a clear suspension of disbelief in the understanding of sounds generated in this manner, often brought about by the intentional dismissal of non-auditory sources of information. This is not the case when listening to the performance of a competent musician. The inclusion of the real into the electroacoustic provides a physically tangible causality such as cannot be achieved by ‘tape’ alone (as exploited in The Way I Saw It or the visual manifestation of Harp Set). This is in part dependent on the skill of a performer in working with the rest of the music, and that of the composer in the juxtaposition of the two parts, as is true with any predetermined ensemble.

With careful sound editing, in the live or fixed domains, it is possible to transplant the sound of the performer into the electroacoustic plane. Most simply this can be achieved by pre-recording or by routing a microphone into the diffusion system (as is suggested to augment the mix in Seasons). In a more sophisticated scenario, performance sounds can be processed and transformed in a similar manner to create an electroacoustic sequence, such as in Enough~?. With forward planning, performers can also be recorded prior to an event and their sounds worked into an electronic part. Transformation is no longer the preserve of the electroacoustic field. Music can oscillate in the metaspace between the real and the electronic, mediated by the suspension of disbelief within the listener, and the technical capacities of the composer and performers.

Human beings share more emotional understanding with other humans than they do with electronic equipment. The presence of a live element within a work can provide a strong empathetic bridge between the composer and audience, both in terms of reinforcing the original intent and catalysing other reactions based upon the material and external environmental factors. Having a live performer may not necessarily make people like the work, but it will draw them in, in a manner that a purely fixed medium work is unlikely to achieve.

The inclusion of a live musical element, be that one or more musicians, into the electroacoustic domain has the potential to enhance many of the aspects of the electronic pattern of working, but there are also obvious dangers in introducing something so apparently contradictory into the genre. If human creative interaction is required, it must be matched to the music in scope and intent; it must carry the listener, and provide both focus and foil to the musical ‘other’ of the electronic; the two must be equal in their semiotic dance.

             1.iv      The visual element, space, and its redefinition.

Within any live performance the visual element is generally of great importance to the overall experience of the event (see the Polarities commentary for a photograph of performance). This is often less true within electroacoustic concerts due to their acousmatic nature. Nonetheless the spectacle of amassed speakers, desk and diffusion creates its own impact, invoking an impressively modern and yet strangely ritualistic performance space. This forms a largely static environment for music making which, whilst very impressive to the newcomer, can become familiar over time in much the same way as a concert hall, or even cathedral architecture, can become simply a backdrop for a concert, thus potentially requiring some form of visual focus. Introducing live performers into this equation must be done with care. As discussed above, the physical presence of a performer can lend a sense of purpose to a work; however, the dominance of our visual perception and the identification with another can very easily skew the balance away from an even dialogue between performer and ‘tape’ towards something that is close to karaoke.

This matter can be addressed in a number of ways. Firstly, the work can be staged in such a manner as to ensure that the performer is not visually overpowering within the space, be that by placing them next to large speakers or further back on the stage than normal in order to imply a non-soloistic role, although care is required not to push the balance too far and ‘lose’ the human performer. Secondly, panning and diffusion can be brought into play. In a reverberant space an instrument will create a diffused sound. By definition this will engender a spatial element to the live sound, and, although obviously not as malleable as that of a multitrack piece, this forms auditory and conceptual connections into multidimensional sound, in spite of arising from an arguably monophonic source. This can be exploited. Careful speaker placement can give the illusion of live and electronic sources emanating from the same point, but also, more importantly, the electronic then has the capacity to move, expand, diversify, interact with and subvert the live; provided that the initial unification has been established.

Musical unification can be achieved in a number of ways; panning the sound into alignment with the performer can achieve this, as can easily be done with the ‘performer + diffused stereo’ works The Way I Saw It and Enough~? and by staging the performers and speakers suitably with the more fixed eight channel works. As a tool for achieving unification panning remains relatively crude when used in isolation. Other aspects such as timbre, pitch, volume and temporal effects can be employed to create stronger links between the two elements. Equally the fading in of one source helps to set up a feeling of emergence from the background of the other. Careful consideration must be given to first entries within the music as this can set a strong precedent to the overall nature of the parts.

Be they battling or symbiotic, the nature of the media implies a relatively stationary live element and a potentially highly mobile electroacoustic counterpart. The conventional audience arrangement of facing the performer provides an obvious focus for the unification of the two parts and a front to the electroacoustic mix.8 The capacity for surround sound grants the visually subservient speaker part a dominance not afforded to the more visually powerful performer. Within this space there is therefore the potential for great flexibility of movement, once the performer has been established as the point of reference. An informed classical audience does not have the expectation of performers moving around the space whilst playing, nor does the electroacoustic audience forgive sedentary work. Once links have been established between the two methods of composition, it is implicit that they are very different, and it can therefore be expected that they will exhibit very different characteristics, albeit with a unity of purpose.

             1.v       Alchemy versus architecture

 

It is clear that both the electroacoustic and instrumental media have differing strengths and weaknesses, and yet they both generate a similar output, that of music or sonic art. The lazy argument that some electroacoustic composition is something other than music, and therefore fundamentally differs from instrumental music, is easy to break down when one considers similar experimentation upon an instrument. Traditional patterns of composition such as form, tonality and harmony can easily be subverted, although to ignore such elements is to neglect the mode of expression for which such devices were created.

There is a large amount of shared gestural capacity between electronic and instrumental music; although at times the methods of creation may be very different, the overall effect can be similar. A timpani roll into an orchestral tutti A minor chord exhibits strong musical similarities with a reversed door slam moving into the frozen9 and filtered sound of a pane of glass shattering. The emotive and dynamic (and potentially the frequency) contents can match. It is largely the modes of delivery that differ, most apparently the presence or absence of notes (in the classical understanding of the term), as shown in the gestural writing in the opening of the third movement of Polarities.

The concepts of pitch and harmony are nonetheless of fundamental importance within both the instrumental and concrète traditions. The use of concrète sounds as opposed to purely synthesised sounds ensures that every note or object possesses its own internal harmonicity due to the physical effects of overtones, resonance and reverberation. This holds true from clear sounds such as a resonating wine glass or oboe, through more richly structured sounds, a trumpet or a seagull cry for example, to more complex noise-based sounds such as a drum, bell, or a passing train. These are all bounded by the same physical laws. Furthermore (as exemplified in Polarities) the implied resonances at the octave, twelfth, sixteenth etc. within sonorous musique concrète events create a degree of harmonic tension and a sense of unification between numerous disparate but consonant sounds.10 These factors enforce great infra-compositional similarities in spectral and harmonic structure between electroacoustic composition and tonal orchestration, in a manner not always perceived due to the striking dissimilarities of instrumentation, focus, and musical delivery.

There is much that links electroacoustic and instrumental music beyond their etymological classification. There are also profound differences. Both are powerful vehicles of auditory expression, but are nonetheless limited in scope. Orchestral instruments are innately biased towards western notated music, not only in terms of repertoire and construction, but also to an extent through the training and cultural norms of their practitioners. This is a fact that should not be overlooked when composing for such resources. Electroacoustic composition can bypass this lattice effect and is capable of a far wider degree of control over timbre, pitch and the temporally detailed elements of modern composition, although it lacks many of the facets such as live performance, perceived interaction, reinterpretation and dynamism so highly valued in the instrumental field. These two elements can, and should, be brought together with care. Both have a long tradition, arguably stemming from the same root, but are also incredibly disparate in their methods. It is this disparity that makes such a union possible: where one mode of working is weaker, the other can contribute, and where one excels the other can recede. It is only when this dynamism is harnessed into a singularity of musical purpose that a portfolio such as this one can hope to achieve meaningful, fulfilling and ultimately enjoyable music.

2.         Commentaries

The following commentaries present the pieces chronologically, and I therefore suggest that the music be listened to in the same order. The pieces are meant to stand alone as well as to work as part of a group, so a listening order outside of the context of this text could be seen as arbitrary.

             2.i        The Way I Saw It

             Violin, fixed media and MIDI tempo track

The Way I Saw It was composed with a view to satisfying the original violin-and-electronics remit of this project. It also works as a stepping-off point into the wider area of the following works and provides a logical compositional and philosophical basis for later developments. The piece was composed around the idea of a square. At its centre is a point of equilibrium and at each opposing side lies an equally opposing concept. To the left and right we have order and chaos, and at the top and bottom we find real and synthetic. These values could be substituted for many polar extremes, such as pitched or unpitched, but the two sets chosen permit enough scope to experiment with many of the ideas discussed in the introduction. The music can be pushed and pulled towards any of the edges or corners of the square, but always pulls back towards the central point of equilibrium.

The Way I Saw It ‘square concept’

The sounds used within the piece were recorded largely in the field, working from home to a waterfall in the mountains. A number of other sources have been used, such as the sound of the violin, recorded into the fixed track, to provide a form of link between the two parts, as in bar 15 (see 1.iii). The sounds were edited and augmented in the studio and gradually arranged to form the basic shape of the piece. Structurally it follows very ‘classical’ principles, falling into a kind of sonata form. This is a deliberate step, as it permits the use of some otherwise quite unusual sounds whilst nevertheless providing a feeling of comfort and progression to the listener.

The two parts, violin and tape, were developed simultaneously. At times the electroacoustic side came first, and the violin part was crafted around it. At others the violin part was written and recorded and used as a template for studio composition. In addition to this there are also a number of places where the violin has been recorded into the tape part and treated with similar processes to the other sounds. In the instrumental part there are places where the performer is free to improvise on a set of loosely defined pitches, and violin techniques such as sul ponticello distortion draw the instrumental sound closer to that of the tape part (see score), all of which helps to blur the differences between the two parts.

The work was produced in conjunction with a MIDI tempo track. This is in no way a constant click track as used in pop recordings, but it has several advantages. Firstly it permits accurate plotting of the structure over time, and it facilitates a far greater accuracy of synchronisation between player and recording than could be achieved by familiarity and counting alone. In retrospect this way of working still results in a rather regimented rhythmical performance, hence the adoption and development of other techniques in later works. The score for the player was completed using a combination of notational and graphical techniques. For the most part the violin notation is standard, except where notes of an indeterminate pitch are indicated. These are shown by using only the stem of a note. This clearly indicates time value (in all instances in this work such notes happen to be a dotted minim or shorter) and suggests an approximate pitch to the player. This technique is used for two reasons: firstly, in the places where it occurs, rhythm, timbre and texture are far more important than pitch, thus shifting the emphasis away from it; secondly, this notation provides an outlet for individual performance expression in an otherwise very rigidly ordered piece. The violin score is accompanied by a graphic reduction of the tape part, formatted to work as a two-part score. The graphic was generated by using sonogram analysis of the fixed part and then mapping bar lines to this. From this the main perceivable structures and events could then be traced to the score relative to the note spacing of the violin part, as opposed to the absolute timing of the tape. This was done in a codified manner, providing more ‘musical’ information than could be perceived from an unmediated sonogram printout. Once the audio part was completed it was then transferred to laptop. Both the audio and the MIDI tempo file were loaded into a sequencing program, in this case Logic. This then provides an accurate playback medium that can be easily triggered from on stage and, with the use of the MIDI clock, an accurate tempo display to work to.
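
By way of illustration, the kind of bar-to-time plotting that a tempo track makes possible can be sketched as follows; the tempo map values here are invented for the example and are not those of the piece.

```python
# Sketch: mapping bar numbers to absolute time from a simple tempo map, the kind
# of structural plotting a MIDI tempo track allows. Tempi and metres are illustrative.

tempo_map = [
    # (bar at which this tempo starts, beats per bar, beats per minute)
    (1, 4, 60),
    (17, 3, 90),
    (33, 4, 120),
]

def bar_start_seconds(bar, tempo_map):
    """Absolute start time (in seconds) of a given bar number."""
    seconds = 0.0
    for i, (start_bar, beats, bpm) in enumerate(tempo_map):
        next_start = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else float("inf")
        if bar <= start_bar:
            break
        bars_in_segment = min(bar, next_start) - start_bar
        seconds += bars_in_segment * beats * 60.0 / bpm
    return seconds

print(bar_start_seconds(17, tempo_map))  # 16 bars of 4/4 at 60 bpm -> 64.0 seconds
print(bar_start_seconds(20, tempo_map))  # plus 3 bars of 3/4 at 90 bpm -> 70.0 seconds
```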

Intrinsically The Way I Saw It works as a piece of music or sonic art. This is especially true in its live form with diffusion, as opposed to simply performer plus laptop. However, the rigidity of synchronisation required to perform it well can, potentially, detract from the feeling of a live performance, and, although this is obviously mitigated by improvisational elements and the act of diffusion, many performances with tape carry with them more of the air of a feat of sleight of hand and less of a truly musical experience for both listener and performer.11 In spite of these potential worries the piece has always been well received. See 4.i for a list of performances of this and subsequent works.

             2.ii       Enough~?

             Clarinet and live electronics


Enough~? is a piece for clarinet and live electronics. It was created through a combination of standard acoustic composition and the Max/MSP programming software. There are some fundamental differences between this work and others that are included here. The main one is that, unlike the other electronic pieces, Enough~? is a completely live piece, and in this respect presents a far higher degree of flexibility than other fixed-medium or mixed works. Not only is the tempo of the piece controlled by the performer, as opposed to the synchronisation to tape required for The Way I Saw It, but the material for the electronic part is generated by the clarinet in performance. In this way the fine details of the electronic part, such as timbre and intonation, change with each performance as well as each performer. Combined with this there are a number of patches designed to interact with the performer, controlling processing parameters and, in some very specific cases, randomising the processing parameters of events. The effects of these changes are far greater than one would at first expect, as a sensitive player will react to the changes in musical content around them, and their playing will thus be guided in a specific direction of mood or energy. The same is true of the electronic part to a progressively larger extent as the player cumulatively reacts to the computer’s ‘reaction’ to them.

Enough~? was created on, and is partly performed by, laptop, running copies of Max/MSP and the Finale notation software. The electronic and acoustic parts were worked on literally side by side, with the audio output of Finale being fed directly into Max/MSP to give a passable working impression of the sonic possibilities and results that the processing was giving. The process of composition was helped greatly by clarinettist Sioned Eleri Roberts, who has since performed the work and whose assistance and patience were most helpful.

The clarinet part revolves around a melodic and rhythmic framework set down in the first 32 bars. Within the piece itself there is experimentation using rhythm as a structural element. The work is governed by the movement of the beat. The first section is lively and agitated (a 4/4 9/8 4/4 11/8 cycle), the second is peaceful and tranquil (4/4 throughout), and the third battles to resolve these two conflicting emotional and rhythmical states, changing time inconsistently as the battle towards an equilibrium continues. Finally the work settles into a stable 9/8 which, over four bars, adds up to exactly the same number of quaver beats as the opening passage. Although a metrical rhythmic framework is not strictly necessary for this style of music, it makes performance, reading, and notation less problematic.
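
Counting each bar in quavers (a 4/4 bar containing eight quavers), the correspondence works out as:

$$8 + 9 + 8 + 11 = 36 = 4 \times 9.$$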

Within this framework the melodic material is developed and expanded following the pattern laid down by the rhythm, whilst trying to maintain an air of freedom and spontaneity. As a result there are a number of complex rhythms over, and at times obscuring, the basic pulse. Along with this there are a number of quasi-improvisatory passages taking previous material and reusing it in a slightly altered manner, for example at the beginning of the third section (bars 129–133), as well as the introduction of seemingly completely new material (bar 241 to the end). In addition to this, a number of extended techniques have been included, such as key clicks and multiphonics,12 in order to further extend the sonic range of the clarinet, bringing it away from the sound world commonly associated with the instrument.

The Max/MSP programming was designed to augment and complement the sound of the clarinet score. In some places it simply serves a canonic function, storing incoming audio data and playing it back when appropriate; in other places it carries out more complex tasks, as illustrated in the sub-patch shown below. In this particular patch the sound is going directly into the computer. The incoming data is subject to pitch and volume analysis,13 the results of which are then modified and used to control the parameters of a comb filtering patch. This in turn processes the other half of the audio data for output to the speakers. Some of the outgoing sound is then sent via an attenuated feedback loop back into the comb filter, thus further re-enforcing its effect. This is simply one sub-patch out of a large number of patches involved in this piece.

Screenshot showing the Enough~? comb filtering sub-patch
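
The signal path of that sub-patch can be roughly paraphrased outside Max/MSP. The sketch below is a plain Python illustration under assumed parameter mappings (the detected pitch setting the comb delay, the detected amplitude scaling the feedback); it is not a transcription of the actual patch.

```python
# Sketch of the signal flow described above: an analysed pitch sets the comb-filter
# delay, an analysed amplitude scales the feedback, and some of the output is fed
# back into the filter. Parameter mappings are assumed for illustration only.

import math

SAMPLE_RATE = 44100

def comb_filter(signal, detected_freq, detected_amp, feedback_scale=0.6):
    delay = max(1, int(SAMPLE_RATE / detected_freq))  # one period of the detected pitch
    gain = min(0.95, detected_amp * feedback_scale)   # keep the feedback loop stable
    out = [0.0] * len(signal)
    for n, x in enumerate(signal):
        fed_back = out[n - delay] if n >= delay else 0.0
        out[n] = x + gain * fed_back
    return out

# A 220 Hz test tone standing in for the live clarinet input.
tone = [math.sin(2 * math.pi * 220 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
processed = comb_filter(tone, detected_freq=220.0, detected_amp=0.8)
print(max(abs(s) for s in processed))
```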

Processing was triggered in a number of ways. The first, and perhaps the most subtle, was the triggering of patch changes by note analysis with the fiddle~ object. The other two ways employed in this piece were simple timing and MIDI triggering. MIDI triggering was the mainstay of these events. By using a keyboard sustain pedal (chosen because of its light response and easy availability) in conjunction with the midiin object it was possible to make a program that simply changed patch incrementally every time the pedal was pressed. This makes for a very elegant and simple performance system, only requiring a pedal on stage and an interface to the computer. Events that followed on closely from a synchronised trigger, and which did not need to be absolutely accurate, such as starting a sound with a very long attack portion, could often be set to trigger automatically after 5000 ms using the count or delay objects. This meant that there was less demand on the player and on the hardware, with very little loss of accuracy or effect if employed judiciously and in close conjunction with synchronised events. In addition to this, the computer operator/diffuser can monitor which patch is currently in use, and override the pedal if required. It is entirely possible for a clarinettist to set up and perform this piece with computer, absolutely ‘solo’.
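
The pedal-driven triggering scheme can be paraphrased in code. The sketch below uses the Python mido library purely as an illustration (the original used Max/MSP’s midiin object), and the patch names are invented.

```python
# Sketch of the pedal-driven triggering described above: each press of a sustain
# pedal (MIDI controller 64) steps the performance on to the next patch.
# The mido library and the patch names are assumptions for illustration.

import mido

PATCHES = ["intro", "canon", "comb filter", "granular wash", "coda"]  # illustrative names

def run(port_name=None):
    current = 0
    with mido.open_input(port_name) as port:   # default MIDI input if no name is given
        for msg in port:
            if msg.type == "control_change" and msg.control == 64 and msg.value >= 64:
                current = min(current + 1, len(PATCHES) - 1)
                print(f"switching to patch {current}: {PATCHES[current]}")

if __name__ == "__main__":
    run()
```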

The subservient nature of the electronic part is fundamental to the sampling method of composition, due to the real-time/canonic nature of the act of processing. No matter how intricate the machinations or how powerful the computer, the electronic part is forced into temporal, musical and expressive second place unless the computer is used as a generative synthesiser or plays back pre-recorded material. Enough~? proved to be a useful first step into live processing, and a gateway into object based programming and working with other performers.

             2.iii     Harp Set        

             Sampled harp, octophonic sound and moving image.

Harp Set was my first experiment in working with 8 channels. As such it is an exploration into the use of surround sound, employing instrumental sounds in a purely electronic construct, combined with a strong visual focus as expounded above. It was written specifically for the 2007 Bangor New Music Festival and is designed to take advantage of the octophonic diffusion system that was being developed (2006-7) under the auspices of Electroacoustic Wales. Thanks are due to Dr. Tim Porter for his advice on matters mathematical and fractal.

The piece is based on the ‘carpet’ or ‘square’ devised by Sierpiński.14 In layman’s terms there are two squares, one 1/3 the size of the other. This smaller square is then duplicated, reduced in size by 1/3 again (i.e. to 1/9 of the original) and moved within the space. This shift is defined as a move of three times the size of the new square along any combination of the x and y axes. For example, if the new square is 1 cm long (based on a previous set 3 cm long) then all eight 1 cm squares will be placed 3 cm away from the original. This process is then repeated on the eight 1/9th squares to make 64 1/27th squares, and so on ad infinitum. Harp Set is restricted to the first four generations of this process. See the illustration, the notated score, and the mathematical description below.

Harp Set formula

Harp Set main theme before transposition, augmentation or diminution.
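
The square-generation process described above can be sketched directly; the sizes and units below are arbitrary, and the code simply reproduces the verbal description rather than any part of the piece’s own tooling.

```python
# Sketch of the square-generation process described above: each square of size s
# spawns eight copies of size s/3, offset by s (three times the new size) along
# any combination of the x and y axes. Units are arbitrary.

def next_generation(squares):
    children = []
    for (cx, cy, size) in squares:
        new_size = size / 3.0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue                      # eight offsets, excluding "no move"
                children.append((cx + dx * size, cy + dy * size, new_size))
    return children

generations = [[(0.0, 0.0, 9.0)]]                 # first generation: a single square
for _ in range(3):                                # Harp Set stops at four generations
    generations.append(next_generation(generations[-1]))

print([len(g) for g in generations])              # -> [1, 8, 64, 512]
```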

 

This construct was then used as a basis for musical composition. The main motif was composed and was then assigned to the central square. This was recorded from sampling software (Kontakt 2) rather than utilising a ‘real’ harp, as in the later material it would be physically impossible to play at the required speed (up to 480 bpm) or, in some cases, at the required pitches. By manipulating the MIDI data it was then possible to apply transformations to the auditory material similar to those which Sierpiński applied to the visual. With each new generation the theme was doubled in speed, and as the squares move higher up the visual field so the melody was transposed up, or down if lower. Given that the spatial generational shift works around a 1:3 ratio, so does the pitch shift. The three different heights of second-generation squares are represented by transpositions of 18, 0 and -18 semitones respectively. The third generation has transpositions of 24, 18, 12, 6, 0, -6, -12… For every fourth-generation step there is a 2 semitone shift in pitch, beyond which the concept starts to become musically, if not physically, untenable. From this it was then quite plausible to build an accurate representation of the Sierpiński carpet in sound.
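
A minimal sketch of this generation-to-music mapping, using the semitone steps and speed doubling described above; the base tempo and the exact indexing of vertical position are assumptions for illustration.

```python
# Sketch of the generation-to-music mapping described above: each generation doubles
# the speed, and a square's vertical position sets its transposition in steps whose
# size shrinks with the generation (18, 6, then 2 semitones). Base tempo is assumed.

SEMITONE_STEP = {2: 18, 3: 6, 4: 2}    # semitones per unit of vertical displacement
BASE_BPM = 60                          # assumed first-generation tempo

def voice(generation, vertical_index):
    """vertical_index: 0 at the parent's height, +/-1, +/-2... for squares above/below."""
    tempo = BASE_BPM * 2 ** (generation - 1)
    transposition = SEMITONE_STEP.get(generation, 0) * vertical_index
    return tempo, transposition

print(voice(2, 1))    # -> (120, 18)   second generation, one step up
print(voice(3, -2))   # -> (240, -12)  third generation, two steps down
print(voice(4, 3))    # -> (480, 6)    fourth generation reaches the 480 bpm mentioned above
```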

Spatially the music reflects the physical movements of the visual image. Making use of the octophonic set-up, it is possible to place the audience in the centre of the visual field, so that any changes in the visual image are analogous to those of the auditory (provided that the audience is situated relatively centrally). If an object goes up the screen, the sound goes to the front of the hall; if it goes left, so does the sound image.

Diagram showing the first four generations of the fractal process.

Diagram showing the relationship between the visual and auditory space.
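
The screen-to-hall mapping can be sketched as a simple panner: a screen position is converted to an azimuth (up on the screen becoming the front of the hall) and crossfaded between the two nearest of eight equally spaced speakers. This is an illustration only, not the Bangor Audio Tools implementation used for the piece.

```python
# Sketch of the screen-to-hall mapping: up on the screen becomes the front of the
# hall, left stays left. Speakers are assumed to be numbered clockwise from the front.

import math

N_SPEAKERS = 8

def pan_gains(x, y):
    """x, y in -1..1 relative to the centre of the screen; returns 8 speaker gains."""
    azimuth = math.atan2(x, y) % (2 * math.pi)     # 0 = straight ahead, clockwise
    position = azimuth / (2 * math.pi) * N_SPEAKERS
    lower = int(position) % N_SPEAKERS
    upper = (lower + 1) % N_SPEAKERS
    frac = position - int(position)
    gains = [0.0] * N_SPEAKERS
    gains[lower] = math.cos(frac * math.pi / 2)    # constant-power crossfade between
    gains[upper] = math.sin(frac * math.pi / 2)    # the two nearest speakers
    return gains

print([round(g, 2) for g in pan_gains(0.0, 1.0)])   # top of screen -> front speaker
print([round(g, 2) for g in pan_gains(-1.0, 0.0)])  # left of screen -> left speaker
```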

At 0’56” the main theme is introduced, diffused by pitch around the audience, who are sitting centrally within a circle of eight speakers. The notes are band-filtered via MIDI so that each speaker has a range of up to a fourth. As a result each speaker only plays a number of disjointed pitches; however, the effect of them all working simultaneously is to create the sensation of being inside the main theme.
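
This pitch-banded diffusion can also be expressed compactly; the band width (taken here as five semitones for ‘up to a fourth’), the base note and the example pitches are assumptions for illustration.

```python
# Sketch of the pitch-banded diffusion described above: successive bands of up to a
# fourth are assigned to successive speakers, so each speaker plays only a handful
# of disjointed pitches. Band size and base note are assumed values.

BAND_SEMITONES = 5     # "up to a fourth"
BASE_NOTE = 36         # lowest MIDI note of the theme, assumed for illustration
N_SPEAKERS = 8

def speaker_for(midi_note):
    return ((midi_note - BASE_NOTE) // BAND_SEMITONES) % N_SPEAKERS

theme = [60, 62, 64, 67, 72, 55, 48]
print([speaker_for(n) for n in theme])   # which of the eight speakers voices each note
```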

After this statement of the theme the audio field is matched exactly to the visual field. Within the previous diagram the audience would hear 3 sets of melody: the 1st generation in the centre; the low 2nd generation sounds behind them; the originally pitched 2nd generation material beside them and the high 2nd and 3rd generation sounds from in front.

Spatial effects were achieved by using a combination of Pro Tools, Kontakt 2, Rewire, and Max/MSP. Individual ‘squares’ of melody were realised using the Kontakt 2 software and sequenced via Pro Tools. These were then run through a version of the Bangor Audio Tools (BAT) application which was edited to include a jpeg image of the carpet, over which mouse and cursor movements could be traced. This spatialised information was then fed by the Rewire protocol back into Pro Tools.

Initially the carpet was voiced incrementally from ‘square one’, up to the full four generations. This proved to be as predictable and symmetrical as the initial image. As music occupies a very different experiential time domain to the visual, this was unacceptable. Instead, different aspects of the pattern are used to build up different facets of the piece, working towards a final build-up of the entire pattern. Accompanying this, there are a number of more processed sounds, all of which are based on the harp. This provides a greater sense of variety in terms of timbre, emotional and spectral content, as well as affording a degree of freedom to a rather contrived system. It is the decisions described in the last few paragraphs that rule the emotional content of the piece. There is an underlying emotional current there. It ebbs and flows and has its own high points and facets, but it is not written about any particular set of events or subjects. Rather it is as much of a construct as the rest of the piece, but this time, instead of evolving on a strictly mathematical basis, the music started from that point and was then guided into places that sounded pleasing or interesting. The material itself covers a number of different ‘tonal’ areas. The main theme itself is very melodic and rhythmical, although some of the transpositions are somewhat more abstract or avant-garde in their connotations, such as the step up of 18 semitones (a compound tritone). Here we have both ends of the spectrum, the dark and the light, just as in the video element. In this respect it is a truly abstract work.

The video element was created in tandem with the audio. As new elements were added within the Pro Tools session, such as the introduction of the second-generation squares, a note was made of the time code, both as a marker within the software and on paper. From this a storyboard was then compiled. The initial images were created in Photostudio and were animated in iMovie. By necessity the video works around the cutting and crossfading of a large number of static images, in a manner not dissimilar to mainstream animation. It is effective largely because of the simplicity and uncluttered representation of the sound and image. The video was then run alongside the audio within Pro Tools and re-edited to ensure the tightest sync possible. The audio was then compressed by element/track and a small amount of reverb was added. This is common practice within most commercial music; however, this application is not particularly widespread, and is even viewed with suspicion, within this genre, as it is reasonable to expect the work to sound as the composer intended from the studio. In moderation, mastering is a useful ‘finishing’ process. All subsequent works exist in a number of mixes: one as a concert mix with a full dynamic and frequency range, and at least one other mix more attuned to working on a home system. The ‘home’ versions have been created by boosting the RMS by way of parallel compression, and a slight EQ tip-up to boost midrange clarity and bass response. Eight-channel versions have also been flattened to stereo. This has been achieved by panning them into a four-channel format (L, R, Ls, Rs) using simple trigonometry to try to ensure ‘correct’ placement, adding a slight reverb and high-frequency cut to the satellite speakers to add a greater depth, and then summing L+Ls and R+Rs into stereo. In some cases, such as that of Postcards from Home, there also exists a third mix designed for .mp3 internet streaming via sites such as http://www.myspace.com.
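
The fold-down described here can be sketched as follows. The speaker azimuths, the weighting scheme and the satellite attenuation (standing in for the reverb and high-frequency cut) are simplifications of the process described above, not the actual mix settings.

```python
# Sketch of the stereo fold-down described above: the eight channels are weighted
# into a four-channel L, R, Ls, Rs bed using simple trigonometric weights based on
# each speaker's azimuth, the satellite pair is attenuated (standing in for the
# reverb and high-cut treatment), and the pairs are then summed to stereo.

import math

# Assumed ring layout: speaker azimuths in degrees, clockwise from front centre.
SPEAKER_ANGLES = [22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5]

def fold_to_stereo(frame, satellite_gain=0.7):
    """frame: list of 8 samples, one per speaker. Returns (left, right)."""
    l = r = ls = rs = 0.0
    for sample, angle in zip(frame, SPEAKER_ANGLES):
        rad = math.radians(angle)
        front = max(math.cos(rad), 0.0)        # how much this speaker faces the front
        back = max(-math.cos(rad), 0.0)
        right_side = max(math.sin(rad), 0.0)
        left_side = max(-math.sin(rad), 0.0)
        l += sample * front * left_side
        r += sample * front * right_side
        ls += sample * back * left_side
        rs += sample * back * right_side
    return (l + satellite_gain * ls, r + satellite_gain * rs)

print(fold_to_stereo([1, 0, 0, 0, 0, 0, 0, 0]))  # a front-right speaker leans right
```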

As there is very little software (especially at the time of writing) that reliably plays 8-channel audio and video, it was necessary to create something that did. As a result a program was written that would take a QuickTime movie file (.mov) and play it in sync with an 8-channel interleaved audio file (.aiff) over a specified audio interface (see Appendix C, DVD 1). The application is relatively inflexible, but it gives a very tight synchronisation, has proven to be very stable, and delivers sound and video at the maximum quality available on the system on which it is running.
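
The audio half of such a player can be sketched with present-day libraries; the soundfile and sounddevice packages and the file name below are assumptions, and the video synchronisation that the original application handled is omitted.

```python
# Sketch of the audio half of such a player: reading an eight-channel interleaved
# AIFF and sending it to a chosen multichannel interface. The soundfile and
# sounddevice libraries are assumptions here; video sync is not handled.

import soundfile as sf
import sounddevice as sd

def play_octophonic(path, device=None):
    data, samplerate = sf.read(path)          # data shape: (frames, channels)
    if data.ndim != 2 or data.shape[1] != 8:
        raise ValueError("expected an 8-channel interleaved file")
    sd.play(data, samplerate, device=device)  # device selects the audio interface
    sd.wait()                                 # block until playback finishes

if __name__ == "__main__":
    play_octophonic("harp_set_8ch.aiff")      # hypothetical file name
```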

Harp Set is an exploration of the instrumental within the electronic and addresses one of the key visual issues of electroacoustic composition discussed above in sections 1.iii and 1.iv, providing a sense of causality (or corroboration at the very least). Musically, it bridges between the tonal and the gestural, something that will be shown to be of great importance for the successful amalgamation of the electronic and instrumental. Intrinsically, Harp Set provides a staged visual element and an apparently connected surround electroacoustic element.

             2.iv      Postcards from Home          

             8 channel fixed medium

Postcards from Home was written as a response to a request for a piece for the Sonic Arts Research Centre (SARC) in Belfast. It is my first attempt at a fully integrated octophonic acousmatic piece. Harp Set proved a useful foray into the world of multichannel work but by its nature cannot be defined as acousmatic, even in the loose sense that the term has come to be applied. Postcards from Home is a work intended to develop skills in utilising the forces available in a solely 8-channel (i.e. non-instrumental) scenario. It is reasonable to assume that a degree of technical proficiency is required in both the electronic and instrumental fields in order to take full advantage of their joint possibilities. Postcards from Home works with more traditionally electroacoustic techniques than any of the other pieces in this portfolio, but in doing so attempts to create a ‘melodically’ and ‘harmonically’ rich piece from non-instrumental sounds.

The ‘sonic postcard’ concept has been around for some time.15 However, the temptation as a result of moving into a new house at the time of composition was too great, and this form was experimented with. The overriding musical aesthetic of the concert was to be more of the concrète/acousmatic persuasion as opposed to ‘soundscape’. It soon became apparent that these two schools of thought are not as contradictory as they at first appear. A number of sounds were collected from around the area, except for the thunder claps,16 which could not be obtained to order, and then used as the basis for composition. The naturalistic feel of the start builds on the experience gained with The Way I Saw It, but in this case the focus remains on the recorded sounds rather than shifting away to the instrumental.

Structurally, the piece is designed to open a concert, and as a result can be described as a through-composed wedge shape. This statement can be applied to a number of parameters, most notably the volume and number of voices, which gradually increase through the piece to open out into a dramatic fanfare effect with the final rolls of thunder (see Appendix A for sonogram and waveform analysis). Within this, the material oscillates between areas of familiar ‘arranged’ sounds that are often left relatively unprocessed, relying instead on a more concrète style of sound presentation, whilst also pushing out into areas of more electronically generated or heavily processed explorations of the source material. A lot of the processing was achieved through fairly conventional means such as the G.R.M. Tools comb filter or freezing plugins, but in many cases large amounts of parameter automation were carried out, be that through programmed event changes, H.U.I. or auto-latch techniques, in order to provide a greater sonic variety and a more natural sensation of causality brought about by physical human control. There are a number of other elements that could be said to accompany these general movements, such as the introduction of a ‘block’ of rhythmical material; many of these are to do with achieving musical balance within the overall shape. The defining interest of the piece is in the sounds used, the transformations that they achieve, and the emotional and sonic journey on which the piece takes the listener.

In order to take full advantage of the diffusion system in SARC a number of elements were recorded in the studio with eight microphones mirroring the arrangement of a standard circle of eight speakers. This is successful to a point. A lot of the time it proved impractical to record with microphones far enough apart to create much noticeable separation in sound placement, and erecting baffles created problems of reflected sound and EQ coloration. This, coupled with the sheer impossibility of getting a forest into the studio, resulted in many of the sounds being recorded in stereo and then processed with the BAT to create 8-channel events.

In situ in SARC the piece worked well with the five layers of speakers. Performance was essentially an act of balancing: ensuring that the more bass-heavy elements gave the impression of being routed in the basement, that some of the more high-frequency sounds had the space to sparkle overhead, and that everything came together in a tutti for the end. The only notable exception to this arrangement was a circle of eight speakers set up to run from hard left, overhead to hard right and under the feet back to the left again. This was used at the very start, where the rotating nature of the material suits this approach.

Whilst the surround method of using 8 channels is becoming more widespread, and indeed 16 and 24, such localised spatialisation and rigid formatting is not suited to the progressively larger performance theatres we are beginning to see, such as SARC and ZKM, and a more fluid approach may be called for to take full advantage of such systems.

             2.v      Con-chords

             8 channel fixed medium

Con-chords is an electroacoustic piece which draws exclusively on the following excerpts for orchestral sounds and textures as its compositional material:

BERG: 3 Orchesterstücke, Op. 6

BARTOK: Violin Concerto No. 2, BB117 – Movements 2 and 3

ELGAR: The Dream Of Gerontius, Op. 38 – Movement 21. Softly & Gently

MESSIAEN: Des canyons aux étoiles – Movement 1. ‘Le Désert’

(permission sought from copyright holders 10/08/09)

These samples were used in order to create an electroacoustic piece that draws on the sound world of the twentieth century symphony orchestra, utilising a wide range of timbre, whilst still retaining many of the traits of standard orchestral writing. As such the sounds are a source in the acousmatic sense rather than being intended as quotes or a form of explicit reference.

Due to the musical and harmonic strength of this material, careful handling is required. A large amount of creative input is necessitated on the part of the electroacoustic composer when dealing with such well-known sources, to validate the intrusion into the musical integrity of these works. It is a perfectly acceptable home musical experience to play back a CD recording of any of the sources above, without any incursion by a third-party composer. It is therefore imperative that, as a basis for composition, such resources be handled delicately, respectfully, and yet with enough audacity and technique to convince the listener of the merits of such an undertaking.

As a result of this philosophy the work opens with a largely unprocessed but carefully edited fanfare figure, placing the listener instantly into a strongly acoustic listening scenario. Within this remit the sounds are treated in a number of ways, using techniques one might regard more as editing, such as repetition or reversal, rather than heavier processing. This approach continues throughout the piece, where sounds are sometimes found almost in their natural state. Often, on close inspection, it becomes apparent that they have been altered quite significantly although they still retain some of their intrinsic nature. The violin extract around 2’20” is a case in point. Whilst retaining its signification as an acoustic instrument, the violin sound has been repeated, delayed, pitch-shifted and equalised, resulting in the phrase being extended out until around 2’35”, when it is fragmented into the ensuing electronic texture.

In some cases the original extracts have been processed to the point of being unrecognisable. This has been achieved in four main ways. Firstly, by drawing on a number of plugins, notably Digidesign’s Recti-fi and Moogerfooger Lowpass and G.R.M.’s FREQ Warp, to drastically alter timbral qualities, such as at 7’30”. Secondly, many objects were edited in the time domain by compressing, expanding or freezing to create ‘new’ material. Sounds have also been spatialised within the octophonic field by placement within Pro Tools or external processing in Max/MSP. In addition to this, processed sounds were edited and triggered via sampler, thus creating not only new sounds but new passages, such as that underlying the texture from 1’25” to 2’40”.

The act of using sonically very rich but relatively stable sounds for processing led to a number of interesting results. The great variety of information contained within a tightly controlled window of timbre, pitch and dynamic meant that when processing to extremes there were often very coherent results to be found within the fine detail of events. Time expansion by factors of up to 64 times the original length led to the uncovering of musical artefacts such as harmonics and articulations previously unnoticed in the recording, and transpositions and spectral manipulation also yielded unexpectedly pertinent results.
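
By way of illustration, the sketch below shows the general idea of such extreme time expansion using a phase-vocoder approach. It is a minimal sketch only, not the studio tools actually used for the piece; it assumes the librosa and soundfile Python libraries are available and that ‘source.wav’ stands in for a hypothetical orchestral excerpt.

    # Minimal sketch: phase-vocoder time expansion by a factor of 64.
    # 'source.wav' is a hypothetical excerpt, not a file from the piece.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("source.wav", sr=None)               # keep the original sample rate
    stretched = librosa.effects.time_stretch(y, rate=1 / 64)  # rate < 1 lengthens the sound
    sf.write("source_x64.wav", stretched, sr)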

Even when heavily processed, the work needed to retain a natural feel. As a result, wherever possible, processing automation has been rendered by hand rather than with line-drawn parameters, and has been ‘performed’ on the computer in the same way that it is possible to speak of playing a software sampler. Taking this to a deeper level, the work draws on transpositions around the classical perfect intervals of the 4th, 5th and octave because they combine well harmonically. This couples with the notion that a culturally informed listener would expect to hear such intervallic relationships played on the real instruments depicted in this electronic piece.
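
A short arithmetic check makes the harmonic point concrete: in equal temperament the chosen intervals sit very close to the simple frequency ratios 4:3, 3:2 and 2:1, which is why the transposed layers blend rather than beat. The snippet below is purely illustrative.

    # Equal-tempered transposition ratios for the intervals favoured in the piece.
    for name, semitones, simple in [("perfect 4th", 5, 4 / 3),
                                    ("perfect 5th", 7, 3 / 2),
                                    ("octave", 12, 2 / 1)]:
        ratio = 2 ** (semitones / 12)
        print(f"{name:11s}: x{ratio:.4f}  (simple ratio {simple:.4f})")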

Con-chords sets up a tension between the quasi-realistic sound worlds of the edited samples and the alien nature of the processed material. This spirals throughout the piece creating something that could be regarded as sonata form, although it is more the result of conflict and implied resolution between two defined ideas. This sentiment echoes much of the philosophy of the piece. It is intrinsically quite traditional; it draws from established sound worlds, exhibits traits of classical form and carries emotional as well as musical motifs within it. It demonstrates how electroacoustic music can fit into an acoustic context but also how notated music can be understood as a gestural medium as well as the result of notated composition. The work also shows how hard it is to achieve both of these phenomena in one object at one time as expressed in 1.ii.

We are no longer simply dealing with the perceived transformation of a sonic entity from one form to another as outlined by spectro-morphology, but, in addition to this, from one perception of musical discourse to that of a differing standpoint. Con-chords shows that discourse and theoretical structures can be objectified. They can be transformed, they can be strengthened and they can be broken down, provided that there is an alternative system with which to interact.

             2.vi      Seasons

             SSAATB choir and surround electronics (Text by Graeme Harper)

             Download pdf score

Seasons was written as a response to a commission by Llandudno Festival of the Arts. The brief was to create a piece of work for choir and electronics lasting approximately fifteen minutes. The music is based upon a poem written by Graeme Harper specifically for the piece, and is a very literal reflection of the overall theme of the festival: that of the seasons. Within the writing there was a deliberate attempt to leave leeway for musical creativity, and the text presents numerous opportunities and challenges at a variety of levels.

Seasons

                                                  New flocks, sighted on old stones

                                                  Recall a cycle, oceans swelling

                                                  The teetering tones of trees

                                                  And in the blossom ends bloom.

                                                  A child and mother,

                                                  Two beginnings, two endings

                                                  Meet between movements.

                                                  Larger than one life, larger than you

                                                  Or me, together, apart

                                                  Two, or four, or many cycles more.

                                                  Until one day, one day,

                                                  Away it starts, again.

                                                  New flocks on old stones,

                                                  Beginning and ending, many more.

© Graeme Harper 2008.

Duplicated with permission.

The music is based upon this text, and the interaction starts at the structural level. The repetition of the ‘new flocks on old stones’ imagery implies a sense of recapitulation, being both ‘beginning and ending’. This analogy is further exploited in that it serves to create an arched form, moving from the atonality of bars 47-71 and 217-226 into the fugal ‘summer’ figure around bars 106-121. Implicit within the structural similarity is the concept of a direct, literal or causal link between the text, choral part and electronics, or, to condense the classifications further, between music and lyrics.

There are a number of places where the electronic part is used to reinforce the reality created by the text. For instance ‘the teetering tones of trees’ are accompanied by bird song, and the winter section has a ‘wind’ sound within it (albeit wind created in the studio due to the practical difficulties of recording such a sound). With the recognition of the source and its identification with the text, the elements of text and music are pulled into a mutually supportive role. In a number of cases such as the two mentioned, the sounds also possess musical as well as environmental implications, the bird song being transformed by the use of lowpass and comb filtering to become a textural element. This demonstrates the movement beyond the structural and mimetic use of concrète sounds into the realms of the more traditionally musical.

Within the ‘musical’ listening space a dialogue is set up between the choir and electronics which is at times complementary, but occasionally antagonistic. Sonically, the two parts start from a very similar point. The opening features an atmospheric and winter-like soundscape based upon a wind sound which focuses on a number of pitches, most especially a written E. The choir then enters quietly on a unison E, and the similarities between the wind of the electronics and the air flow of the voice are clear. From this point of unison the two parts diversify and maintain a relationship not unlike that of music and image, in that they are symbiotic but not necessarily the same at any particular moment. This is especially true around the fugue, where the timbres of the two parts are vastly different, representing a mid-point in the piece, but retaining a unity through their matched energy. The analogy with music and film can be further extended in that both the vocal and the electronic media have their own particular strengths and weaknesses; different effects and intentions can be realised by each, and where something cannot be achieved musically, environmentally, structurally or emotively with one, it can often be done with the other, or by a synergy of both.

Time constraints necessitated that the vocal score be prepared before much of the electroacoustic material was created. This was not excessively problematic in that the score was written with an electroacoustic part in mind and, in many places, annotated on the rough score in a similar manner to The Way I Saw It. Following this pattern of working, three MIDI/.wav files were created from the notation of the vocal part: the first at tempo, the second twenty crotchets per minute under tempo, and the third twenty crotchets per minute above the expected performance speed. The first file was used as a template against which to set the electroacoustic part. The other two were retained to rehearse the performance of the electroacoustic part in the studio at different speeds and to ensure that the various sound files would still maintain musical coherence when triggered at different time intervals. Seasons was created in 8-channel surround but was then converted into 4 channels for the concert (left, right, left back and right back), by using the 8-channel file and the 4.0 pan control in Pro Tools. This was necessary owing to the limitations of the performance venue, equipment and budget.

The electroacoustic part was developed in layers of events, which were bounced as individual objects. Having individual sound files for separate events greatly increases the capacity for synchronisation between choir and electronics, because a given file can be triggered in line with a given performance and is not fixed in the time domain as it would be in a ‘tape’ piece. This approach proves useful although it does have limitations. Most pressing is the need to ensure that a slow performance does not run out of electronic material. This can be resolved in a number of ways: firstly, ensuring that sound files overlap gives greater flexibility of time base; secondly, percussive and resonant objects can be manufactured to include reverb tails rather than always stopping abruptly; and thirdly, blending the timbre of the electronics with the choir can blur the sound world sufficiently to mask moments of disjuncture. It is still easier than trying to persuade an amateur choir to keep time with a click track, and arguably a more musical undertaking. (See below for a screen shot of overlapping regions prior to file export.)
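
The timing concern can be expressed very simply: once the trigger points scale with a slower performance but the bounced files keep their fixed lengths, any pair of consecutive events either still overlaps or leaves a gap. The sketch below checks this for a handful of hypothetical event times and durations; it is a planning aid in spirit only and not part of the actual Seasons session.

    # Check that consecutive event files still overlap when the choir runs slow.
    # Event start times (s) and file lengths (s) below are hypothetical.
    events = [(0.0, 30.0), (20.0, 45.0), (55.0, 40.0)]
    slowdown = 100 / 80   # e.g. a performance dropping from 100 to 80 crotchets per minute

    for (start_a, dur_a), (start_b, _) in zip(events, events[1:]):
        gap = start_b * slowdown - (start_a * slowdown + dur_a)
        status = "overlap holds" if gap < 0 else f"gap of {gap:.1f}s: extend the tail or re-cut"
        print(f"event at {start_a:5.1f}s -> next at {start_b:5.1f}s: {status}")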

Such an approach required an interface. Were it only two sets of stereo audio files running concurrently, a quasi-DJ approach and two varispeed sound sources could have been used. Unfortunately this was not the case. As a result the Key Linked Audio Noise Generating System17 was developed, a computer program designed to facilitate sound file triggering in installation or other live scenarios. It was programmed using Max/MSP and permits the user to assign any given audio file (up to 4 interleaved channels of 64 bit; aiff, sd II, NeXT/SUN (.au), wave, and Raw Data files) to any given letter key of the keyboard for playback. For instance, if the sound of a casserole dish were assigned to the letter c, the dish would sound every time the letter c was pressed (in addition to any other sounds already initiated by other keys). Just as any sound file can be started or re-triggered by pressing its related key, in this case c for casserole, so any sound can be stopped by keying alt and the appropriate key, for instance alt+c. Although this is not very important for short duration sound events, it becomes a very significant consideration when dealing with longer, more drone-based sounds. As the software sources audio directly from the designated hard drive, the upper limit of audio file length is largely governed by system specification and performance. This can lead to very long audio files which, coupled with the real possibility of accidental triggering, necessitated some form of file-specific stop button.
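
The triggering logic itself is very small, and is sketched below in Python purely to make the behaviour explicit; the real KLANGS program is a Max/MSP patch, and the file names here are hypothetical.

    # Logic-only sketch of the KLANGS behaviour: a letter key starts or
    # re-triggers its sound file, alt+letter stops it. Playback is stubbed with prints.
    SOUNDS = {"c": "casserole.aif", "w": "wind_drone.aif"}
    playing = {}   # letter -> file currently sounding

    def key_pressed(letter, alt=False):
        if letter not in SOUNDS:
            return
        if alt:                                   # alt+c stops the casserole file
            if playing.pop(letter, None):
                print(f"stop  {SOUNDS[letter]}")
        else:                                     # c starts it, or re-triggers from the top
            playing[letter] = SOUNDS[letter]
            print(f"start {SOUNDS[letter]}")

    key_pressed("c")             # start
    key_pressed("c")             # re-trigger
    key_pressed("c", alt=True)   # stop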

 

Screen shot showing the overlapping of sound files for Seasons.

 

The audio interface structure was, for pragmatic reasons, built around an Emagic 6-out, 2-in interface. The first four channels are therefore used to provide audio outputs for a (non-specialist-event-friendly) 4.0 surround sound system. Whilst this obviously does not take full advantage of higher-specification systems, it can provide something close to the overall effect on a fraction of the equipment. This leaves the last two outputs of the interface free; these were allocated the role of P.F.L. cueing. Any given sound is summed to stereo and sent to these outputs by simply pressing the shift key and the relevant letter, so that c plays the casserole dish whilst C plays the same sound on the P.F.L. channels only. The ‘rehearsal letters’ in the score show where a given letter is pressed.
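
The P.F.L. routing amounts to a simple downmix of each sound’s four channels into the spare stereo pair. As a rough sketch of that sum, with a hypothetical buffer and a nominal -6 dB of headroom rather than the patch’s actual gain structure:

    import numpy as np

    # Hypothetical 4.0 buffer, shape (frames, 4), ordered L, R, Ls, Rs.
    quad = np.random.randn(48000, 4).astype(np.float32) * 0.1

    # Sum left-side and right-side channels to a stereo cue at -6 dB,
    # destined for outputs 5 and 6 of the interface.
    pfl_left = 0.5 * (quad[:, 0] + quad[:, 2])
    pfl_right = 0.5 * (quad[:, 1] + quad[:, 3])
    pfl = np.stack([pfl_left, pfl_right], axis=1)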

The software was first used to create an installation as part of an event, Yr Alban Hefin, in the Gwydr forest; it has since been adapted into a live triggering, processing and diffusion interface as part of a composition project working with young people with special needs.

In rehearsal and performance with the choir, a number of factors became apparent. Most noticeable was the very different spectral content between studio-based music and live musicians in a performing, rather than recording, environment. Approximately 3-5 dB of attenuation was required in the 4 kHz region to balance the electronics to the choir and to enable the extreme bass and treble capacity to come through without drowning the choir in high-mid range frequencies. Also of note was the narrow crest ratio18 of the choral dynamic range when compared to the capabilities of 24-bit sampling (although this would increase with a larger group), and a boost to the R.M.S. signal level of the sound files was required after the first rehearsal. Within the live setting, reverb played an important role. The natural reverberation of the building helped to blend the vocal and electroacoustic parts into one another in a similar way to its use in recorded music production, hinting at an extension of the ideas outlined in 1.iv. Far from thinking about the electroacoustic part as an extension of scoring and orchestration techniques, it becomes tempting rather to think about instrumental and notated music in terms of mixing techniques, regarding the layering of frequencies, depth and presence, spatialisation and stereo spread as just as vital as, if not more important than, the attention to chord/object voicing or harmonicity.
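
The crest ratio and R.M.S. adjustment referred to here reduce to a few lines of arithmetic, sketched below with a hypothetical mono buffer; this is not the metering used at the time, merely an illustration.

    import numpy as np

    x = np.random.randn(48000) * 0.1                 # hypothetical mono signal in [-1, 1]
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    crest_db = 20 * np.log10(peak / rms)             # crest factor: peak over R.M.S., in dB
    print(f"crest factor: {crest_db:.1f} dB")

    # Raising the file's R.M.S. level by, say, 3 dB is a linear gain of 10**(3/20).
    boosted = x * 10 ** (3 / 20)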

The work was performed using a four-channel set-up, including a coincident pair of microphones aimed at the choir to allow some of the choral sound to blend further with the electronics by sending it at a very low level through the speakers. Little diffusion took place, in that the work is spatialised within the sound files. However, the triggering of the audio files in synchrony with the choir, and the delicate balancing of volume (via faders) and timbre (via EQ and speaker placement) required to make the music work, still leave much to the electroacoustic performer.

Regrettably, owing to battery failure in the recording equipment during the concert, there is no record of the performance except for a video recording of poor sound quality made by one of the choir members. There follows a review/critique by Roy Woods (M.A. electroacoustic composition student) who was in attendance at the first performance, written at my request as a result of these issues. It is hoped that the work can be recorded with live choir in the future.

Seasons, Ed Wright

St John’s Methodist Church, Llandudno. 1st November 2008.

Seasons, a piece for choir and live surround electronics, received its premiere as part of the autumn session of the Llandudno Festival of Music and the Arts. The piece was specially commissioned for the festival and featured Amici del Canto – a local choir – with the composer, Ed Wright, performing live at the mixing desk. The electroacoustic elements had been pre-recorded and ‘treated’ (to use Brian Eno’s terminology) as part of the compositional process. Nigel Shaw conducted the choir, allowing the composer to concentrate on triggering the electronic sounds and sending them to one (or more) of the five [four ed.] channel speaker system set up around the inner perimeters of the church.

The overall acoustic of St John’s helped enormously to diffuse the sound and add a quantum of reverberation, without overwhelming or muddying the sound. The electronic elements were, therefore, a part of the overall mix, rather than discrete elements poking into the listeners’ consciousness from specific corners of the church. This worked well in creating an overall ambience and an integrated sonic experience.

The piece was, to this listener, impressionistic; it was described as notionally following the four seasons through a year, “….. winter giving way to spring, and a new world gradually emerging to the light….” (according to the programme notes). This it did, inasmuch that the piece was not performed in four separate distinct parts, such as in Vivaldi’s famous work, more as a gradual segue through the impressions of a year unfolding. I thought this worked well, rather than clumsily underlining “This is winter!” etc.

The text sung by the choir was written for the piece and, although difficult to follow from the actual singing, could be read in the programme notes.  I was somewhat ambivalent about the words, feeling that the sound of the choir was sufficient to make the text possibly extraneous. The sound of the choir itself, in the church acoustic, was sufficient to carry the piece. I suspect that the music would have had greater impact with a much larger choir – allowing the music to have greater dynamic range using sections of the choir to build to a greater climax.

The electronic sounds were of interest. They, in the most part, worked well with the choral sound, in some cases underpinning with low bass sounds and in others adding the natural sounds of thunder and birdsong. I think the use of birdsong – perhaps too loud in the mix, should have been more subliminal. In fact once or twice, there was an electronic sound from the rear of the church which didn’t quite fit the homogeneity of the whole work.

I enjoyed the piece. I thought it worked well – apart from the occasional identifiable ‘electronic’ sound from the rear channels. As was said beforehand: “It’ll probably be hated by lovers of choral music and hated by lovers of electronic music – satisfying neither audience.” It certainly was not the case to my ears, integrating well, without the electronics grating on the ear and standing out from the overall perception as a separate identity. In this way it succeeded very well.

Roy Woods

Duplicated with permission.

             2.vii     Polarities: Concerto grosso for orchestra and electronics

             Orchestra, eight channel electroacoustic music and live processing.

             listen to extracts online in stereo at http://www.myspace.com/virtual440 

             Download pdf score

Polarities is the final piece in this portfolio and to some extent represents a culmination of much of the previous work. The work falls into a very classical three-movement concerto format (i. On Points of Departure – Allegretto, ii. Cycle – Andante, iii. Tarantella – Vivace) and includes solo elements from seven orchestral instruments as well as the electronics, hence the reference to ‘grosso’ in the title.

The piece is constructed around three main resources: the orchestra, live sound processing and eight-channel electroacoustic music. Philosophically it attempts to prove (or disprove) many of the concepts and ideas set out in section 1, and therefore this commentary will focus more on the compositional process, as the conceptual framework has for the most part been stated previously. In some ways much of the work behind Polarities has already been explained; the piece is the result of scaling up, culminating and refining ideas and techniques used earlier in this portfolio.

Like The Way I Saw It, this work was created on computer with the electronic and instrumental parts formed in tandem. In many cases the instrumental part was also worked out on paper with the electronics drawn in as a graphic aide-mémoire, but in addition to this there was the formulation of the processing element within Max/MSP. By necessity this had to be created last in the process, or at the very least it had to be the final element within any given section, because the instrumental sound had to be present in order to be processed. Whilst this is slightly awkward, it is not overly problematic. In much the same way as a composer can mentally hear a given orchestration, so it is possible to roughly approximate a given sound process. Thus, the Pro Tools session, Sibelius file and numerous pieces of manuscript paper became heavily annotated in order to bridge the gaps between the worlds.

The instrumental part is deliberately motivically strong. This not only helps to balance some of the strong, real-world connotations of the electronic part but also enables the audience to grasp the nature of the work and its methods of development. Within Polarities the instrumental music is mostly tonal, in that it relates strongly to the fifth and octave and to other touchstones of western art music, such as the tritone in bar 1. However, it is also concerned with development in an electroacoustic manner. The opening of the third movement is a deliberate attempt to mimic downward pitch/time shifting and reversal, the end of the first movement emulates freezing or an ‘inf.’ reverb setting, and the tune from bars 285 and 480 is used as a timbral and contextual transformation. If the music were too atonal this could not be as clear, nor as sonically vibrant.

The ‘fixed’ part was created from a number of four-channel and stereo recordings, one of which was made, most memorably, standing waist-deep in the sea. These were processed in Pro Tools and Max/MSP and orchestrated to create the mainstay of the electronic part of the work. Many of the processing techniques are similar to those used in Con-chords (see above), albeit carried out on largely non-instrumental sounds, with the intention of fitting them into an instrumental framework. These regions were layered using similar techniques to those used in Seasons, so as to create a number of overlapping sound objects to accompany the less temporally rigid orchestral material.

In addition to this, a number of processing patches were created in Max/MSP to transform the sounds of several solo instruments which, in combination with the electronics, comprised the concertino part (flute, oboe, saxophone, trumpet, 1st horn, marimba and harp). The overall orchestral output is also processed and added to the ‘tape’ material to provide a tutti ripieno. In the live context the concertino sounds were captured by close-mic techniques and the tutti with one microphone set back from the stage. In the studio, the output from Sibelius was used to feed developmental patches in order to check processing output. The instrumental processing consisted of a variety of techniques building on the work of Enough~?, from simple delays up to more complex pitch/volume interactive modulation patches. The processing was designed to augment the sounds from the orchestra and fixed part, and was kept deliberately unobtrusive so as not to overpower the dialogue already created. On a pragmatic level this also meant that processing intensity could be held down in a live situation.
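
As an indication of the starting point of that range of techniques, the sketch below implements the simplest case, a single feedback delay line, in Python with numpy; the actual patches were built in Max/MSP and the parameter values here are only illustrative.

    import numpy as np

    def feedback_delay(x, sr, delay_s=0.35, feedback=0.4, mix=0.3):
        """One feedback delay line: y[n] = x[n] + feedback * y[n - d]."""
        d = int(delay_s * sr)
        y = x.copy()
        for n in range(d, len(x)):
            y[n] += feedback * y[n - d]
        return (1 - mix) * x + mix * y           # blend dry and delayed signals

    sr = 48000
    dry = np.random.randn(sr) * 0.1              # a hypothetical one-second input
    wet = feedback_delay(dry, sr)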

In order to make the project work in a live situation, considerable work was required to build a performance interface. As previously stated, the fixed part can be broken down into a number of objects and events; however, they were too numerous to be playable with the KLANGS software, as there were more sound files than letters on a qwerty keyboard. Wishing to build on the success of the KLANGS approach, another form of hardware interface was required. The answer was found in the form of a MIDI keyboard, providing six octaves (therefore over 66 possibilities and many permutations) and easy availability within a concert scenario. Each sound object was mapped to a key, as was each processing event, and with the use of the midiin and sfplay~ objects these could be triggered simply by watching the conductor and playing the appropriate ‘note’ indicated by the score at the correct time. Accidental triggering can be cancelled by holding down the sustain pedal and pressing the erroneous note. This creates a relatively simple interface for repeat performances with or without the composer present.
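
A sketch of that mapping follows, standing in for the midiin and sfplay~ objects: MIDI notes index a dictionary of sound files, and a note played while the sustain pedal is held cancels rather than triggers. It assumes the mido Python library; the note numbers and file names are hypothetical.

    import mido

    SOUNDS = {60: "object_A.aif", 62: "object_B.aif"}   # MIDI note -> sound object
    sustain_held = False

    def handle(msg):
        global sustain_held
        if msg.type == "control_change" and msg.control == 64:    # sustain pedal
            sustain_held = msg.value >= 64
        elif msg.type == "note_on" and msg.velocity > 0 and msg.note in SOUNDS:
            action = "cancel" if sustain_held else "trigger"       # pedal + note undoes a slip
            print(f"{action} {SOUNDS[msg.note]}")

    for m in [mido.Message("note_on", note=60, velocity=100),
              mido.Message("control_change", control=64, value=127),
              mido.Message("note_on", note=60, velocity=100)]:
        handle(m)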

In performance, Polarities draws strongly on the format first explored in Harp Set and later developed musically in Postcards from Home and Con-chords. The audience is again seated within a circle of eight speakers with a strong visual/musical stimulus at the front, but in this instance, instead of an animation apparently connecting with the music, there is an orchestra. The orchestra is to be seated just outside the speakers, i.e. with the front speakers on the very edge of the stage or close to this position, thus enabling a solid interaction and spatial conjoining of the sound worlds. To avoid excessive warping of mix and balance, speakers should be kept close to the walls and stage and out of the audience area, provided that an angle of 45 degrees is maintained between them. This ensures that rotational elements are not offset to any great degree and, given that the performance space is not too irregular, satisfactory results can be achieved with simple trigonometry. Experiments utilising time delays and volume attenuation have been undertaken, but the effects have been found to be of minimal improvement in a ‘normal’ concert hall situation.
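
The geometry is straightforward: eight speakers at 45-degree intervals around the audience, with the front pair straddling the stage edge. The sketch below computes nominal positions for a hypothetical 6-metre listening radius; real placements were adjusted by ear to the room.

    import math

    radius = 6.0   # metres, hypothetical
    for i in range(8):
        angle = math.radians(22.5 + i * 45)            # 45 degrees between adjacent speakers
        x, y = radius * math.sin(angle), radius * math.cos(angle)
        print(f"speaker {i + 1}: {x:+.2f} m left/right, {y:+.2f} m front/back of centre")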

At its premiere, two subwoofers were added to improve bass response and impact: one at the front of the hall taking a summed feed from the front four speakers and one at the back, working in a similar manner. Two speakers were also placed on stage for the benefit of the orchestra, relaying a stereo feed of the audience’s eight speakers. This was more for the enjoyment and interaction of the orchestra than for any synchronisational need, although the psychological effects of such matters should not be understated. As in the case of Seasons, a small amount of high-frequency attenuation of the fixed part was required. The work has since been given a repeat performance in London on 7/11/09, in which similar observations were made in spite of the presence of a different orchestra, venue, speakers and mixing desk. A studio performance of Polarities was released by Blipfonica Records on the same day.

Polarities in performance: the diffusion point, audience placement, front two speakers, microphone setup, front subwoofer and onstage monitoring are visible.

3. Conclusions

This body of work was, and is, an attempted integration of the fields of electroacoustic and instrumental music, ‘in an effort to create a singularity of performance and meaning’. There are individual works displaying movements towards this goal, and a number clearly showing the combination of forces required to meet it. The success or failure of these works from an artistic perspective is implicitly bound up with the cultural and social norms and expectations of the individual listener. This is not to say that judgement is therefore impossible, as works can be assessed by those with expertise or compared to similar works that are held in general high regard. The reactions of concert audiences and radio listeners, and the incidence of repeat performances, are also indicative of general success or failure in terms of accessibility and enjoyment (as in part attested to by the list of performances in 4.i).

I have attempted to contribute to the integration process and the development of this ‘mixed’ style: producing works (both experimental and commissioned), giving papers, running schools workshops, setting up music events, teaching and, above all, performing. It is very rare that musical trends and movements can be catalysed by a single work or composer; rather, history shows us that landmark moments tend to happen as a result of cumulative processes.

The combination of forces runs as a link through this portfolio and has a direct bearing on the sonic techniques and intricacies of each piece, and in many cases the juxtaposition of forces facilitates effects that could not be achieved in isolation. This concept is not an end in itself. Work has begun in synchronising electronic and ‘live’ music, in truly understanding the gestural use of space, and in writing for and creating instrument/computer interfaces. Musical culture has yet to codify, normalise and absorb these innovations, and there are many others. This is a time of great musical and technological progression, and as yet society has not come to terms with the new capabilities afforded to it.

The work in this portfolio shows that unification is possible between these diverse materials and methods of working, and that there is much yet to be discovered in the details of numerous individual and composite elements and techniques, but that these are not ends in themselves. Each piece stands as music and has narrative, emotion and development external to the mode of musical delivery. In this context we have layers of media and methods of delivery which in turn contain other forms of information; these can be brought together, intertwined, juxtaposed and even set against each other: ‘the medium is [not] the [only] message’.19

Through this body of work, at times produced under difficult circumstances, this writer has learned not only discipline, but also that, in spite of all the rigours of art music and its various formats, one needs to balance the various elements into one entity. Such music requires not only diversity of forces, intellectual application and cultural understanding, but also those things that drive the basic human condition, such as love, fear, hunger and joy. Thus composing becomes more than simply the output vehicle for sounds, be they electronic, instrumental or both. It is when we can achieve this balance that integration and juxtaposition at an instrumental/electronic level can be truly successful in a musical sense. This is my intention.

Edward Wright, Penmaenmawr 2009.

1  Wishart, T. On Sonic Art (Harwood Academic Publishers, 1998),  p. 25.

2  Wishart, T. On Sonic Art (Harwood Academic Publishers, 1998), p. 155.

3 Examples of this are numerous, such as Pete Stollery’s Altered Images, released on Legacies: Works from BEAST vol. 2, various artists (Sargasso, SCD 28046).

4 As is graphically depicted by Wittgenstein’s famous ‘Duck-rabbit’ to be found in Wittgenstein, L. Philosophical Investigations (Blackwell, 1993). Section XI part II.

5 Smalley, D. in Emmerson, S. (ed.) The Language of Electroacoustic Music (Macmillan, 1986), pp. 61-93.

6 Such as in Varèse E. (rev. 1927) Amériques (Hal Leonard Corp.)

7  Description paraphrased from Robert Normandeau 2008.

8 This should not always be assumed to be the case, as demonstrated by a number of the works of Jonty Harrison in which the eight channels are used more as eight stems or possibilities rather than eight physical locations. Such as can be seen in performances of Streams available on Harrison, J. Environs (empreintes DIGITALes, IMED 0788 DVD-A).

9 The word ‘frozen’ is used here in the temporal sense but also in reference to the freezing process/plugin developed by G.R.M.

10 Such as can be found in Smalley’s Wind Chimes on “Impacts intérieurs” Dennis Smalley empreintes DIGITALes IMED 0409 Tk. 5.

11 This unease is not the sole preserve of this writer, see Emmerson, S. Living Electronic Music (Ashgate, 2007) , pp. 107-8.

12 Fletcher, N. The Physics of Musical Instruments (Springer, 1998), p. 144.

13 Analysis carried out using fiddle~ v1.2, and bonk~  v1.2 (University of California, 1994).

14 For further specialised details see: Falconer, K. Fractal Geometry: Mathematical Foundations and Applications (Wiley Blackwell, 1997).

15  e.g. Rob Mackay’s Postcards from the Summer on his as yet unpublished Soundworks CD made available by Mackay as a CD-R short run ‘demo’ disk.

16 Roll of thunder sourced from http://www.therecordist.com.

17 A slightly satirical nod towards Jonty Harrison’s Klang; the KLANGS software is available to download from http://www.virtual440.com from spring 2010 (Mac OS).

18 Katz, B. Mastering Audio (Focal Press, 2007), p. 322.

19 Refuting the sentiments of McLuhan, M. Understanding Media (Routledge, 1964).

4.         Public Output

The tables below list performances of works in this portfolio, and works relating to them, during the period of registration.

            4.i        Live performances

Date Piece Event
9/3/05 The Way I Saw It Bangor New Music Festival
17/6/05 The Way I Saw It Sonic Arts Network-Expo
1/7/05 Passage (Sound/image installation) Art Forum, Antwerp
9/3/06 En Masse (stereo electroacoustic) Bangor New Music Festival
8/6/06 Botany (SATB choir) Llandudno Festival of the Arts
26/10/06 Enough~? Electroacoustic Wales
15/3/07 Harp Set Bangor New Music Festival
26/8/07 Broken Glass (String quartet, electronics and image), Passage & Harp Set S.N.O.G. Fest (Snowdonia Nights Out Gazette)
25/10/07 Postcards from Home SARC – Queens University Belfast
15/11/07 Postcards from Home (revised) Electroacoustic Wales
28/2/08 Con-chords Bangor New Music Festival
13/3/08 Harp Set Risk of Shock
8/4/08 Harp Set PACE at DMU
28/6/08 Klangs project (live triggering/interactive event) Alban Hefin
26/8/08 The Way I Saw It & En Masse S.N.O.G. fest
1/11/08 Seasons Llandudno Festival of the Arts
28/2/09 Polarities Bangor University Symphony Orchestra
5/4/09 Castell (composition project with Ysgol Pendalar) Bangor New Music Festival
11/7/09 Con-Chords, En Masse, Passage & Postcards from Home Blipfonica, Speakers Corner
7/11/09 Polarities Philharmonia Britannica, St. James’ Piccadilly

             4.ii       Broadcast works

Date Piece Method
1/10/07 Botany Storm FM (student radio)
2/3/08 Broken Glass WFMU (internet radio)
5/6/08 Postcards from Home BBC Radio 1 Wales
18/10/08 Con-chords and Postcards from Home Coast/Champion FM with interview
5/4/09 Castell (composition project with BNMF and Ysgol Pendalar) Y Sioe Gelf S4C t.v.

            4.iii      Other academic / peer reviewed work

Date Event
1/9/07- Module co-ordinator / lecturer for UWB ‘Practical Music Technology’ and ‘Recording Techniques’ modules.
8/4/08 Performed and delivered paper on Harp Set at PACE, De Montfort University.
25/6/08 Broken Glass performed and delivered as part of ‘Religion and the Arts’ conference, Bangor University.
2/7/08 Con-chords achieves ‘mention’ at Prix Bourges

5.         Technical Resources

This list is a combination of university equipment and the composer’s home studio.

Function Make/model
Workstations Mac Pro quad core 3 GHz OS X 10.5 with Digidesign 96 I/O; PowerBook G4 12″ with eMagic 6|2
Software Pro Tools 8 (including GRM Tools Classic ST, Waves Gold TDM), Max/MSP 5, BAT (Bangor Audio Tools),
Sibelius, iMovie, iDVD, Photo Studio, Logic Audio 6, Finale 2004, Sound Hack, Audio Sculpt.
Monitoring ATC SCM50A (main left and right), ATC SCM20A (surround), Genelec 7070A subwoofer.
Mixing desk Soundcraft Spirit Studio 16-8-2, Soundcraft M12.
Keyboard Yamaha SY99 with MOTU Fastlane USB interface.
Microphones AKG 414, C1000s, C2000, Schoeps CM5U/MK4, Shure SM57, Rode NT1 and Samson C02.
Location recording Tascam HD-P2 and Zoom H2.

For live events a variety of equipment was used often incorporating some of the above, but for outside events other speakers / desks etc. have been used.

6.         Bibliography

             6.i        Written Material

Cox, C. & Warner, D. (eds.) Audio Culture: Readings in Modern Music (Continuum, 2004).

Bartlett, B. & J. Practical Recording Techniques (Focal Press, 2005).

Emmerson, S. (ed.) The Language of Electroacoustic Music (Macmillan, 1986).

Emmerson, S. Living Electronic Music (Ashgate 2007).

Falconer, K. Fractal Geometry: Mathematical Foundations and Applications (Wiley Blackwell, 1997).

Fauvel, Flood & Wilson (eds.) Music and Mathematics: From Pythagoras to Fractals (Oxford University Press, 2006).

Fletcher, N. The Physics of Musical Instruments (Springer, 1998).

Katz, B. Mastering Audio (Focal Press, 2007).

McLuhan, M. Understanding Media (Routledge, 1964).

Rumsey, F. Spatial Audio (Focal Press, 2005).

Schopenhauer, A. The World as Will and Idea (Phoenix, 1995).

Toop, D. Ocean of Sound: Aether talk, ambient sound and imaginary worlds (Serpent’s Tail, 1995).

Wishart, T. Audible Design: A plain and easy introduction to practical sound composition (Orpheus the Pantomime Ltd, 1994).

Wishart, T. On Sonic Art  (Harwood Academic Publishers, 1998).

Wittgenstein, L. Philosophical Investigations (Blackwell, 1993).

             6.ii       Websites

http://bangor.ac.uk/music/studios/BAT

http://www.digitalmusicarchives.com

http://www.therecordist.com

             6.iii      Scores

Berio, L.  Sequenza VI (Universal, 1967).

Holst, G. The Planets (Dover Publications Inc., 1916).

Ligeti, G. Lux Aeterna (Edition Peters, 1966).

Stravinsky, I. The Rite of Spring (Boosey and Hawkes, 1913).

Varèse, E.  Amériques (Hal Leonard Corp., 1913).

             6.iv      Recordings (CD and Audio DVD)

Boulez, P. Boulez: Sur Incises. Ensemble Intercontemporain (Deutsche Grammophon, 463 475-2).

Elgar, E. The Dream Of Gerontius: Boult/LPO (EMI, HMVD 5 72758 2).

Harrison, J. Articles indéfinis (empreintes DIGITALes, IMED 9627).

Harrison, J. Environs (empreintes DIGITALes, IMED 0788 DVD-A).

Mackay, R. Soundworks (unpublished at present).

Messiaen, O. Olivier Messiaen: Des canyons aux étoiles: Leeuw/ASKO Ensemble (Disques Montaigne, MO782 138).

Smalley, D. Impacts intérieurs (empreintes DIGITALes, IMED 0409).

Various artists. Cultures Électroniques: 24ème Concours International de Musique Electroacoustique (Unesco, LCD 278063/64).

Various artists. Legacies: Works from BEAST vol. 2 (Sargasso CD, SCD 28046).

Various artists. Schoenberg, Berg, Webern; Orchestral Works: Karajan/Berlin Philharmonic (Deutsche Grammophon, 419 781-2).

Various artists. Sonic art from… (MPS, MPSCD013).

Wishart, T. Red Bird/Anticredos (EMF, CD022).

Various artists. Stravinsky Bartok – Violin Concertos: Salonen/Mullova/Los Angeles Philharmonic New Music Group (Philips, 456 542-2).

Appendix available at www.symbiosisphd.org