Lots of words and lots of noises…

This is the text part of my PhD thesis, put up here so as to make sense of some of the music that is floating around on the internet and in the real world, and possibly prove useful for people who are into this sort of thing. It is also here at the request of friends and students who would like to read it and for whom I’ve not got round to running off another 10 copies at £28 each (ouch!).

There are other elements to this body of work, such as scores, data sheets, software documentation and audio files, that were submitted and are available either from me or in reference form from the National Library of Wales; some are also gradually making their way onto this site. If you do find any of this useful, or cite any of it, or copy and paste it into your own work, please email me at edwardcwright@hotmail.com.

© Edward Wright and Bangor University 2013. All Rights Reserved. No part of this website or any of its contents may be reproduced, copied, modified or adapted, without the prior written consent of the author, unless otherwise indicated for stand-alone materials. Commercial use and distribution of the contents of the work is not allowed without express and prior written consent of the author. 


A portfolio of work focusing on the tensions between electroacoustic and instrumental music.

Ed Wright


In no particular order I would like to express my gratitude to: my parents Bryan and Elizabeth; Em, Alena, Ben, Bess & Gizmo; Barbara Clark; Professors Andrew Lewis, Jonty Harrison & Robert Normandeau; Dr Chris Collins, Rosie Skelton & Bangor University Symphony Orchestra, Prof. Graeme Harper, Nigel Shaw, Amici del Canto & Sioned Eleri Roberts; Paul Luke, Marion Garrett, Nigel Gaston, Jean Wild & Egerton Rothesay School; Shelly Oscinberg; Dr. Tim Porter; Rhodri Davies; Edward Davies; Andrew Hill; Bethan Elfyn & Adam Walton at Radio 1, Steve Simms & Owain Llyr at Coast/Champion F.M. (later Heart), Ergo Phizmiz at WFMU; Bill Connor; the Parry Williams Scholarship trust, the Drapers Guild, Bangor University School of Music; All the nurses and staff at Alderhey Children’s Hospital; Liz Dow & Alan Couchman; the staff and patrons of the Alex Penmaenmawr; Roger, Wil, Chris, Mauve, John Lawrence & all of the Snowdonia Nights Out Gazette (SNOG) crew; Malcolm and James Hendre; Jack Corlett & Harry Perry; Blipfonica Records; Rob, Zak, Paul, Stelios and Nikos; and to anyone else who was accidentally left out, it was not intentional!

For Gilbert and Alena Wright, Grandpa and Daughter.

One lost, the other born, during the course of this work.

May your gardens grow well and the sunlight shine in your souls.

With all my love. E

1907-2006    &    2008-


This portfolio of music, both notated and recorded, focuses on the tensions between electroacoustic and instrumental music. Within this commentary there is discussion of the topics raised by this combination of forces both at a philosophical and a practical level. A number of pieces of music have been composed, realised and performed in an attempt to explore and resolve some of these issues in a practical context.

These works fall within the broader remit of western ‘classical’ music, the instrumental resources ranging in scope from solo performers to six-part choir and up to symphony orchestra. The musical language remains rooted in this idiom, although allowances for exploration in line with the nature of experimental composition have been made. The electronic element is also largely ‘classical’ in style, drawing from the heritage of European experimentation and making reference to the concrète and acousmatic methods of working, whilst drawing on resources from stereo to eight channels. Live processing is also included by way of software design and utilisation, and one of the works contains a pre-recorded visual element in the form of animation.

This portfolio comprises all the performance materials required, together with recordings and realisations of the works; it also lists the technical resources used in the creation process, data sheets of software to enable future reconstruction, and all known performances up to summer 2010.

(Where not included with the text, supporting audio works can be found at http://www.virtual440.com and http://www.myspace.com/virtual440, and are available to purchase from http://www.blipfonica.com)


             1.         Introduction

                         i           Extension of the repertoire, a number of conclusions, and yet more questions.

                         ii          Towards transformation…

                         iii         Causality and magic

                         iv         The visual element, space, and its redefinition.

                         v          Alchemy versus architecture

             2.         Commentaries

                         i           The Way I Saw It

                         ii          Enough~?

                         iii        Harp Set

                         iv         Postcards from Home

                         v          Con-chords

                         vi        Seasons

                         vii        Polarities: Concerto grosso for orchestra and electronics



             3.         Conclusions

             4.         Public output

                         i           Live performances

                         ii          Broadcast works

                         iii         Other academic / peer reviewed work

             5.         Technical Resources

             6.         Bibliography

                         i           Written Material

                         ii          Websites

                         iii         Scores

             7.        Appendix

1.         Introduction

             1.i        Extension of the repertoire, a number of conclusions, and yet more questions.

When beginning this body of work, the intention was to extend the repertoire of the violin through the medium of electronically manipulated sound, focusing largely within the field of electroacoustic composition. The lack of ambition and the limitations of scope associated with this remit soon became apparent. The primary difficulty arose from attempting to unify such differing and seemingly mutually exclusive vehicles of musical expression. This body of work follows a paradigm shift, away from the split axioms of instrumental lattice-based composition (as Wishart would describe it)1 and the idiomatic rigours of the electroacoustic/acousmatic fields, towards an attempted integration of these traditions, in an effort to create a singularity of performance and meaning.

This led to a number of interesting tensions. Some were practical, such as the integration of live with electronic music; some were technical, such as the differing methods of handling pitch-based material; and still others were more the domain of philosophy and aesthetics. These will be discussed and explored further within the following music, and in the commentaries thereon. The progression towards this understanding is clearly demonstrated by the movement from violin and electronics in the form of a relatively simple ‘tape’ piece, through the exploration of live interaction in Enough~?, the effect of the visual and the ‘cross-pollination’ of fields, culminating in Polarities, a work for symphony orchestra and multichannel live electronics. It may be worth noting that the works in this portfolio represent an attempt to incorporate the instrumental into the electronic. This is not to be viewed as acoustic composition, although there are obvious and significant parallels with such a manner of working. It is hoped that this body of work represents a new contribution to the field, in that it explicitly steps out of the philosophical pools of a number of genres with the aim of forming a method or style that is both unified in its forces and (albeit very subjectively) rewarding and enjoyable to listen to and interact with.

             1.ii       Towards transformation…

The concept of the transformation of sounds is fundamental to the electroacoustic tradition of composition.2 This characteristic operates on at least two levels: the physical and the conceptual. Firstly, and most explicitly, the physical operates by way of the changing of sounds3 from one recognisable real-world object into those of another, generally as a result of some intermediary process such as convolution or vocoding, or, more historically, by the use of a bridging ‘object’ containing aspects of both sounds, forming a temporary link between the two main events. On the conceptual level, sources can be transformed by the alteration of the musical/auditory function of a given sound, as most clearly demonstrated by the acousmatic approach. The sound of a door slamming has a clear causality, even when not implied by such a description, but it can also function as a percussive ‘note’, a cadential gesture or as a signifier for the raising or release of emotional tension. This realignment of object and inference can be explored to a certain extent with purely concrète sounds, and to a large extent when making use of heavy processing in which causal links can be eroded to the point of annihilation, as shown in Con-Chords. These two levels of auditory conceptualisation are strongly interrelated, and it is the manipulation of the tensions between the physical and conceptual cognition of a given sound set, and the apparent mutual exclusivity of the transformation process,4 that provides a large amount of the underlying physical tension within the electroacoustic genre.

Sonic transformation and spectro-morphology5 have few parallels within instrumental music beyond the thematic. Starting from the solo instrument, there are many examples of auditory modification, although they are mostly limited to parameters such as pitch or intensity. Within these specialised fields there are many instruments that excel; the myriad pitches found within the range of the ‘cello are, for practical purposes, infinite, and the subtlety and relative ease of real-time volume control and articulation achievable by a skilled performer is astounding. It is when considering timbre that traditional instruments show weaknesses. There are many ways to modify this property, and in numerous instruments the manipulation of tone is a part of basic playing technique. In some cases timbre can be modified to a more radical extent, such as the use of multiphonics in wind instruments, sul pont. in strings and mutes in brass. However, this manipulation is limited to a finite pool of timbral possibilities, albeit mediated by the awareness and knowledge of the listener. The crude nature of timbral control and the fixed patterns of performer interface arise largely from the stylistic demands of notated music and its symbiosis with instrument design and playing technique. It is when stepping into such a primarily auditory and a-notational genre as the acousmatic, with its focus on experimentation, that these limitations have the potential to become problematic.

The fixed nature of instrumental timbre becomes less of an issue when working in an ensemble context. Composers as varied as Varèse, Debussy and Mahler are renowned for having used the sound of one instrument to colour another, or different sections of the orchestra in unusual combinations, to create new composite textures and even to incorporate ‘new’ instruments.6 Within their remit they were very successful; they were not trying to match the sonic possibilities afforded by early twenty-first century computing. The spectral ‘school’ of composition has arguably come closest to this effect, but remains limited in its success in this respect, whilst creating numerous difficulties in terms of instrumental performance. The spectral possibilities of studio-based composition cannot be matched by instrumental performance; another solution must be found in order to create a union of these two forms of musical expression.

             1.iii      Causality and magic

Throughout its evolution electroacoustic music has been plagued by issues of causality and reality. Musically the works obey the familiar rules of cause and effect, with an object remaining at rest until acted upon by an external impulse. Tension and release, high crashing to low, and the calm after the storm are all verbal examples of this, and are experimented with in Postcards From Home. Whether explicitly, or by way of subversion in order to create musical tension, electroacoustic music is bounded by the experiential norms of the listener. It is also this filter of understanding that has led in part to the need for diffusion or ‘performance’ of potentially autonomous works, ‘orchestras of loudspeakers’7 and the questioning of whether it is possible to speak of the performance of electroacoustic music in the traditional understanding of the words.

Conversely, instrumental music has a strong causal link: an instrument is a direct physical extension of the musician’s body and expressive capacity. The inclusion of the instrumental into the electroacoustic space reveals a number of new musical and causal dimensions, and the concept of transformation and sonic malleability can be taken much further. With recorded work the listener derives a perception from sound pressure waves created by one or more loudspeakers. There is a clear suspension of disbelief in the understanding of sounds generated in this manner, often brought about by the intentional dismissal of non-auditory sources of information. This is not the case when listening to the performance of a competent musician. The inclusion of the real into the electroacoustic provides a physically tangible causality that cannot be achieved by ‘tape’ alone (as exploited in The Way I Saw It or the visual manifestation of Harp Set). This is in part dependent on the skill of a performer in working with the rest of the music, and that of the composer in the juxtaposition of the two parts, as is true with any predetermined ensemble.

With careful sound editing, in the live or fixed domains, it is possible to transplant the sound of the performer into the electroacoustic plane. Most simply this can be achieved by pre-recording, or by routing a microphone into the diffusion system (as is suggested to augment the mix in Seasons). In a more sophisticated scenario, performance sounds can be processed and transformed in a similar manner to create an electroacoustic sequence, as in Enough~?. With forward planning performers can also be recorded prior to an event, and their sounds worked into an electronic part. Transformation is no longer the preserve of the electroacoustic field. Music can oscillate in the metaspace between the real and the electronic, mediated by the suspension of disbelief within the listener, and the technical capacities of the composer and performers.

Human beings share more emotional understanding with other humans than they do with electronic equipment. The presence of a live element within a work can provide a strong empathetic bridge between the composer and audience, both in terms of reinforcing the original intent and catalysing other reactions based upon the material and external environmental factors. Having a live performer may not necessarily make people like the work, but it will draw them in, in a manner that a purely fixed medium work is unlikely to achieve.

The inclusion of a live musical element, be it one musician or several, into the electroacoustic domain has the potential to enhance many aspects of the electronic pattern of working, but there are also obvious dangers in introducing something so apparently contradictory into the genre. If human creative interaction is required, it must be matched to the music in scope and intent; it must carry the listener, and provide both focus and foil to the musical ‘other’ of the electronic; the two must be equal in their semiotic dance.

             1.iv      The visual element, space, and its redefinition.

Within any live performance the visual element is generally of great importance to the overall experience of the event (see the Polarities commentary for a photograph of performance). This is often less true within electroacoustic concerts due to their acousmatic nature. Nonetheless, the spectacle of amassed speakers, desk and diffusion creates its own impact, evoking an impressively modern and yet strangely ritualistic performance space. This forms a largely static environment for music making which, whilst very impressive to the newcomer, can become familiar over time in much the same way as a concert hall, or even cathedral architecture, can become simply a backdrop for a concert, thus potentially requiring some form of visual focus. Introducing live performers into this equation must be done with care. As discussed above, the physical presence of a performer can lend a sense of purpose to a work; however, the dominance of our visual perception and the identification with another can very easily skew the balance away from an even dialogue between performer and ‘tape’ towards something that is close to karaoke.

This matter can be addressed in a number of ways. Firstly, the work can be staged in such a manner as to ensure that the performer is not visually overpowering within the space, be that by placing them next to large speakers or further back on the stage than normal in order to imply a non-soloistic role, although care is required not to push the balance too far and ‘lose’ the human performer. Secondly, panning and diffusion can be brought into play. In a reverberant space an instrument will create a diffused sound. By definition this will engender a spatial element to the live sound and, although obviously not as malleable as that of a multitrack piece, this forms auditory and conceptual connections into multi-dimensional sound, in spite of arising from an arguably monophonic source. This can be exploited. Careful speaker placement can give the illusion of live and electronic sources emanating from the same point, but also, more importantly, the electronic then has the capacity to move, expand, diversify, interact with and subvert the live, provided that the initial unification has been established.

Musical unification can be achieved in a number of ways: panning the sound into alignment with the performer is one, as can easily be done in the ‘performer + diffused stereo’ works The Way I Saw It and Enough~?, and by staging the performers and speakers suitably in the more fixed eight-channel works. As a tool for achieving unification, panning remains relatively crude when used in isolation. Other aspects such as timbre, pitch, volume and temporal effects can be employed to create stronger links between the two elements. Equally, the fading in of one source helps to set up a feeling of emergence from the background of the other. Careful consideration must be given to first entries within the music, as these can set a strong precedent for the overall nature of the parts.

Be they battling or symbiotic, the nature of the media implies a relatively stationary live element and a potentially highly mobile electroacoustic counterpart. The conventional audience arrangement of facing the performer provides an obvious focus for the unification of the two parts and a front to the electroacoustic mix.8 The capacity for surround sound grants the visually subservient speaker part a dominance not afforded to the more visually powerful performer. Within this space there is therefore the potential for great flexibility of movement, once the performer has been established as the point of reference. An informed classical audience does not expect performers to move around the space whilst playing; neither does the electroacoustic audience forgive sedentary work. Once links have been established between the two methods of composition, it is implicit that they are very different, and it can therefore be expected that they exhibit very different characteristics, albeit with a unity of purpose.

             1.v       Alchemy versus architecture


It is clear that both the electroacoustic and instrumental media have differing strengths and weaknesses, and yet they both generate a similar output, that of music or sonic art. The lazy argument that some electroacoustic composition is something other than music, and therefore fundamentally differs from instrumental music, is easy to break down when one considers similar experimentation upon an instrument. Traditional patterns of composition such as form, tonality and harmony can easily be subverted, although to ignore such elements is to neglect the mode of expression for which such devices were created.

There is a large amount of shared gestural capacity between electronic and instrumental music; although at times the methods of creation may be very different, the overall effect can be similar. A timpani roll into an orchestral tutti A minor chord exhibits strong musical similarities with a reversed door slam moving into the frozen9 and filtered sound of a pane of glass shattering. The emotive and dynamic (and potentially the frequency) contents can match. It is largely the modes of delivery that differ, most apparently the presence or absence of notes (in the classical understanding of the term), as shown in the gestural writing in the opening of the third movement of Polarities.

The concepts of pitch and harmony are nonetheless of fundamental importance within both the instrumental and concrète traditions. The use of concrète sounds as opposed to purely synthesised sounds ensures that every note or object possesses its own internal harmonicity due to the physical effects of overtones, resonance and reverberance. This holds true from clear sounds such as a resonating wine glass or oboe, through more richly structured sounds, a trumpet or a seagull cry for example, to more complex noise-based sounds such as a drum, bell, or a passing train. These are all bounded by the same physical laws. Furthermore (as exemplified in Polarities) the implied resonances at the octave, twelfth, fifteenth etc. within sonorous musique concrète events create a degree of harmonic tension and a sense of unification between numerous disparate but consonant sounds.10 These factors enforce great infra-compositional similarities in spectral and harmonic structure between electroacoustic composition and tonal orchestration, in a manner not always perceived due to the striking dissimilarities of instrumentation, focus, and musical delivery.
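These overtone relationships can be verified with a little arithmetic. The sketch below (Python, used purely for illustration; the function names are my own) computes the interval, in equal-tempered semitones, of each harmonic above its fundamental:

```python
import math

def harmonic_intervals(highest_harmonic):
    """Semitone distance of each overtone above the fundamental:
    the 2nd harmonic is an octave (12 semitones), the 3rd a twelfth
    (~19), the 4th a fifteenth (24), the 5th a seventeenth (~28)."""
    return [12 * math.log2(n) for n in range(2, highest_harmonic + 1)]

def harmonic_frequencies(f0, highest_harmonic):
    """Absolute overtone frequencies (Hz) for a fundamental f0."""
    return [n * f0 for n in range(1, highest_harmonic + 1)]
```

Rounding the results for the first five harmonics gives 12, 19, 24 and 28 semitones, i.e. the octave, twelfth, fifteenth and seventeenth referred to above.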

There is much that links electroacoustic and instrumental music beyond their etymological classification. There are also profound differences. Both are powerful vehicles of auditory expression, but both are nonetheless limited in scope. Orchestral instruments are innately biased towards western notated music, not only in terms of repertoire and construction, but also to an extent through the training and cultural norms of their practitioners. This is a fact that should not be overlooked when composing for such resources. Electroacoustic composition can bypass this lattice effect and is capable of a far wider degree of control over timbre, pitch and the temporally detailed elements of modern composition, although it lacks many of the facets, such as live performance, perceived interaction, reinterpretation and dynamism, so highly valued in the instrumental field. These two elements can, and should, be brought together with care. Both have a long tradition, arguably stemming from the same root, but they are also incredibly disparate in their methods. It is this disparity that makes such a union possible: where one mode of working is weaker, the other can contribute, and where one excels the other can recede. It is only when this dynamism is harnessed into a singularity of musical purpose that a portfolio such as this one can hope to achieve meaningful, fulfilling and ultimately enjoyable music.

2.         Commentaries

The following commentaries present the pieces chronologically, and I therefore suggest that they be listened to in the same order. The pieces are, however, meant to stand alone as well as to function as part of a group, so a listening order outside the context of this text could be seen as arbitrary.

             2.i        The Way I Saw It

             Violin, fixed media and MIDI tempo track

The Way I Saw It was composed with a view to satisfying the original violin and electronics remit of this project. It also works as a stepping-off point into the wider area of the following works and provides a logical compositional and philosophical basis for later developments. The piece was composed around the idea of a square. At its centre is a point of equilibrium, and at each pair of opposing sides lies an equally opposing concept: to the left and right we have order and chaos, and at the top and bottom we find real and synthetic. These values could be substituted for many polar extremes, such as pitched or unpitched, but the two sets chosen permit enough scope to experiment with many of the ideas discussed in the introduction. The music can be pushed and pulled towards any of the edges or corners of the square, but always pulls back towards the central point of equilibrium.

The Way I Saw It ‘square concept’

The sounds used within the piece were recorded largely in the field, working from home to a waterfall in the mountains. A number of other sources have been used, such as the sound of the violin, recorded into the fixed track, to provide a form of link between the two parts, as in bar 15 (see 1.iii). The sounds were edited and augmented in the studio and gradually arranged to form the basic shape of the piece. Structurally it follows very ‘classical’ principles, falling into a kind of sonata form. This is a deliberate step, as it permits the use of some otherwise quite unusual sounds whilst nevertheless providing a feeling of comfort and progression for the listener.

The two parts, violin and tape, were developed simultaneously. At times the electroacoustic side came first, and the violin part was crafted around it; at others the violin part was written and recorded and used as a template for studio composition. In addition there are a number of places where the violin has been recorded into the tape part and treated with similar processes to the other sounds. In the instrumental part there are places where the performer is free to improvise on a set of loosely defined pitches, and violin techniques such as sul ponticello distortion draw the instrumental sound closer to that of the tape part (see score), all of which helps to blur the differences between the two parts.

The work was produced in conjunction with a MIDI tempo track. This is in no way a constant click track as used in pop recordings, but it has several advantages. Firstly, it permits accurate plotting of the structure over time; secondly, it facilitates a far greater accuracy of synchronisation between player and recording than could be achieved by familiarity and counting alone. In retrospect this way of working still results in a rather regimented rhythmical performance, hence the adoption and development of other techniques in later works. The score for the player was completed using a combination of notational and graphical techniques. For the most part the violin notation is standard, except where notes of an indeterminate pitch are indicated. These are shown by using only the stem of a note. This clearly indicates time value (in all instances in this work such notes happen to be a dotted minim or shorter) and suggests an approximate pitch to the player. This technique is used for two reasons: firstly, in the places where it occurs, rhythm, timbre and texture are far more important than pitch, thus shifting the emphasis away from it; secondly, this notation provides an outlet for individual performance expression in an otherwise very rigidly ordered piece. The violin score is accompanied by a graphic reduction of the tape part, formatted to work as a two-part score. The graphic was generated by using sonogram analysis of the fixed part, and then mapping bar lines onto this. From this the main perceivable structures and events could be traced onto the score relative to the note spacing of the violin part, as opposed to the absolute timing of the tape. This was done in a codified manner, providing more ‘musical’ information than could be perceived from an unmediated sonogram printout. Once the audio part was completed it was transferred to a laptop, and both the audio and the MIDI tempo file were loaded into a sequencing program, in this case Logic. This provides an accurate playback medium that can be easily triggered from on stage and, with the use of the MIDI clock, an accurate tempo display to work to.
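The mapping between the absolute timing of the tape and the bar lines of the tempo track can be sketched as follows. This is an illustrative reconstruction only, not the software used for the piece; the tempo values and function names are hypothetical:

```python
import bisect

def bar_start_times(tempo_map, n_bars, beats_per_bar=4):
    """Start time (seconds) of each bar under a stepwise tempo map.
    tempo_map: {bar_index: bpm}; bar 0 must be given, and later
    entries take effect from that bar onwards."""
    times, t, bpm = [], 0.0, tempo_map[0]
    for bar in range(n_bars):
        bpm = tempo_map.get(bar, bpm)
        times.append(t)
        t += beats_per_bar * 60.0 / bpm
    return times

def time_to_bar(t, times):
    """Locate the bar containing an event at absolute time t,
    so that a sonogram feature can be traced onto the score."""
    return bisect.bisect_right(times, t) - 1
```

For example, under a constant 120 bpm in 4/4 each bar lasts two seconds, so an event at 5.0 s falls in the third bar (index 2); a tempo change simply stretches the bars that follow it.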

Intrinsically The Way I Saw It works as a piece of music or sonic art. This is especially true in its live form with diffusion, as opposed to simply performer plus laptop. However, the rigidity of synchronisation required to perform it well can potentially detract from the feeling of a live performance, and, although this is obviously mitigated by improvisational elements and the act of diffusion, many performances with tape carry with them more of the air of a feat of sleight of hand and less of a truly musical experience for both listener and performer.11 In spite of these potential worries the piece has always been well received. See 4.i for a list of performances of this and subsequent works.

             2.ii       Enough~?

             Clarinet and live electronics

Enough~? is a piece for clarinet and live electronics. It was created through a combination of standard acoustic composition and programming in Max/MSP. There are some fundamental differences between this work and the others included here. The main one is that, unlike the other electronic pieces, Enough~? is a completely live piece, and in this respect it presents a far higher degree of flexibility than the fixed-medium or mixed works. Not only is the tempo of the piece controlled by the performer, as opposed to the synchronisation to tape required for The Way I Saw It, but the material for the electronic part is generated by the clarinet in performance. In this way the fine details of the electronic part, such as timbre and intonation, change with each performance as well as each performer. Combined with this there are a number of patches designed to interact with the performer by controlling processing parameters, and in some very specific cases randomising the processing parameters of events. The effects of these changes are far greater than one would at first expect, as a sensitive player will react to the changes in musical content around them, and their playing will thus be guided in a specific direction of mood or energy. The same is true of the electronic part to a progressively larger extent, as the player cumulatively reacts to the computer’s ‘reaction’ to them.

Enough~? was created on, and is partly performed by, laptop, running copies of Max/MSP and the Finale notation software. The electronic and acoustic parts were worked on literally side by side, with the audio output of Finale being fed directly into Max/MSP to give a passable working impression of the sonic possibilities and results that the processing was giving. The process of composition was helped greatly by clarinettist Sioned Eleri Roberts, who has since performed the work and whose assistance and patience were most helpful.

The clarinet part revolves around a melodic and rhythmic framework set down in the first 32 bars. Within the piece itself there is experimentation using rhythm as a structural element, and the work is governed by the movement of the beat. The first section is lively and agitated (a 4/4 9/8 4/4 11/8 cycle), the second is peaceful and tranquil (4/4 throughout), and the third battles to resolve these two conflicting emotional and rhythmical states, changing time inconsistently as the struggle towards an equilibrium continues. Finally the work settles into a stable 9/8 which, over four bars, adds up to exactly the same number of quaver beats as the opening passage. Although a metrical rhythmic framework is not strictly necessary for this style of music, it makes performance, reading, and notation less problematic.
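That balance of quaver beats is easily checked (a trivial sketch, not part of the piece itself): one cycle of the opening metres contains the same 36 quavers as four bars of 9/8.

```python
from fractions import Fraction

def quavers_per_bar(numerator, denominator):
    """Number of quaver (eighth-note) beats in one bar of a time
    signature, kept exact with Fraction (e.g. 4/4 -> 8 quavers)."""
    return Fraction(numerator * 8, denominator)

# The opening cycle: 4/4, 9/8, 4/4, 11/8
opening = sum(quavers_per_bar(n, d) for n, d in [(4, 4), (9, 8), (4, 4), (11, 8)])
# The closing passage: four bars of the stable 9/8
closing = 4 * quavers_per_bar(9, 8)
```

Both sums come to 36 quavers, confirming the symmetry between the opening and closing passages.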

Within this framework the melodic material is developed and expanded following the pattern laid down by the rhythm, whilst trying to maintain an air of freedom and spontaneity. As a result there are a number of complex rhythms over, and at times obscuring, the basic pulse. Along with this there are a number of quasi-improvisatory passages taking previous material and reusing it in a slightly altered manner, for example at the beginning of the third section (bars 129-133), as well as the introduction of seemingly completely new material (bar 241 to the end). In addition to this, a number of extended techniques have been included, such as key clicks and multiphonics,12 in order to further extend the sonic range of the clarinet, bringing it away from the sound world commonly associated with the instrument.

The Max/MSP programming was designed to augment and complement the sound of the clarinet score. In some places it simply serves a canonic function, storing incoming audio data and playing it back when appropriate; in other places it carries out more complex tasks, as illustrated in the sub-patch shown below. In this particular patch the sound is going directly into the computer. The incoming data is subject to pitch and volume analysis,13 the results of which are then modified and used to control the parameters of a comb filtering patch. This in turn processes the other half of the audio data for output to the speakers. Some of the outgoing sound is then sent via an attenuated feedback loop back into the comb filter, thus further reinforcing its effect. This is simply one sub-patch out of a large number of patches involved in this piece.
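The signal path described above can be sketched in outline. The following Python fragment is an illustration only (the piece itself is built from Max/MSP objects, and the function names here are hypothetical): a feedback comb filter whose delay length is derived from a detected pitch, standing in for the fiddle~-style analysis feeding the filter parameters.

```python
def delay_from_pitch(freq_hz, sample_rate=44100):
    """Derive a comb-filter delay length (in samples) from a detected pitch."""
    return round(sample_rate / freq_hz)

def comb_filter(signal, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    out = []
    for n, x in enumerate(signal):
        y = x + (feedback * out[n - delay] if n >= delay else 0.0)
        out.append(y)
    return out
```

Tuning the delay to the played pitch reinforces the harmonics of the incoming note, which is broadly the effect the sub-patch exploits; keeping the feedback coefficient well below 1 corresponds to the attenuated feedback loop described above.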

Comb filtering patch

Screen shot showing Enough~? Sub patch

Processing was triggered in a number of ways. The first, and perhaps the most subtle, was the triggering of patch changes by note analysis with the fiddle~ object. The other two ways employed in this piece were simple timing and MIDI triggering. MIDI triggering was the mainstay of these events. By using a keyboard sustain pedal (chosen because of its light response and easy availability) in conjunction with the midiin object it was possible to make a program that simply changed patch incrementally every time the pedal was pressed. This makes for a very elegant and simple performance system, requiring only a pedal on stage and an interface to the computer. Events that followed on closely from a synchronised trigger and did not need to be absolutely accurate, such as starting a sound with a very long attack portion, could often be set to trigger automatically after 5000 ms using count or the delay object. This meant that there was less demand on the player and on the hardware, with very little loss of accuracy or effect if employed judiciously and in close conjunction with synchronised events. In addition to this the computer operator/diffuser can monitor which patch is currently in use, and override the pedal if required. It is entirely possible for a clarinettist to set up and perform this piece with the computer, absolutely ‘solo’.
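The pedal logic itself is simple enough to model in a few lines. This Python sketch is purely illustrative (the actual implementation uses midiin and related Max objects; the class and cue names are hypothetical): the cue list is a sequence of patches, some of which carry an automatic follow-on delay such as the 5000 ms described above.

```python
class PatchSequencer:
    """Each press of the sustain pedal advances one patch; a patch may carry
    an automatic follow-on event delay in milliseconds (or None)."""

    def __init__(self, cues):
        self.cues = list(cues)        # e.g. [("intro", None), ("comb", 5000)]
        self.index = -1

    def pedal_press(self):
        if self.index < len(self.cues) - 1:
            self.index += 1           # advance incrementally, never wrap
        return self.cues[self.index]
```

Holding the cue list in one place is also what allows the operator/diffuser to monitor the current patch and override the pedal if required.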

The subservient nature of the electronic part is fundamental to the sampling method of composition, due to the real-time/canonic nature of the act of processing. No matter how intricate the machinations or how powerful the computer, the electronic part is forced into temporal, musical and expressive second place unless the computer is used as a generative synthesiser or plays back pre-recorded material. Enough~? proved to be a useful first step into live processing, and a gateway into object based programming and working with other performers.

             2.iii     Harp Set        

             Sampled harp, octophonic sound and moving image.

Harp Set was my first experiment in working with 8 channels. As such it is an exploration into the use of surround sound, employing instrumental sounds in a purely electronic construct, combined with a strong visual focus as expounded above. It was written specifically for the 2007 Bangor New Music Festival and is designed to take advantage of the octophonic diffusion system that was being developed (2006-7) under the auspices of Electroacoustic Wales. Thanks are due to Dr. Tim Porter for his advice on matters mathematical and fractal.

The piece is based on the ‘carpet’ or ‘square’ devised by Sierpinski.14 In layman’s terms there are two squares, one 1/3 of the other in size. This smaller square is then duplicated, reduced in size by 1/3 again (i.e. to 1/9 of the original) and moved within the space. This shift is defined as a move of 3 times the size of the new square along any combination of the x–y axes. For example, if the new square is 1 cm long (based on a previous square 3 cm long) then all eight 1 cm squares will be placed 3 cm away from the original. This process is then repeated on the eight 1/9th squares to make sixty-four 1/27th squares, and so on ad infinitum. Harp Set is restricted to the first 4 generations of this process. See the illustration, the notated score, and the mathematical description below.
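The construction is easily stated algorithmically. A short Python sketch (illustrative only; the piece itself was realised with sampling and sequencing software) generates the square positions generation by generation, each square spawning eight copies a third of its size, displaced by three times the new size along the x and y axes:

```python
def carpet(generations):
    """Return a list of generations; each square is (x, y, size)."""
    gens = [[(0.0, 0.0, 1.0)]]                  # the central unit square
    for _ in range(generations - 1):
        current = []
        for x, y, size in gens[-1]:
            new = size / 3.0
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:                # the eight surrounding positions
                        current.append((x + 3 * new * dx,
                                        y + 3 * new * dy, new))
        gens.append(current)
    return gens
```

Four generations yield 1, 8, 64 and 512 squares respectively, which is the extent used in Harp Set.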

Harp Set formula

Harp Set main theme before transposition, augmentation or diminution.


This construct was then used as a basis for musical composition. The main motif was composed and was then assigned to the central square. This was recorded from sampling software (Kontakt 2) rather than utilising a ‘real’ harp, as in the later material it would be physically impossible to play at the required speed (up to 480 bpm) or, in some cases, at the required pitches. By manipulating the MIDI data it was then possible to apply transformations to the auditory material similar to those which Sierpinski applied to the visual. With each new generation the theme was doubled in speed, and as the squares move higher up the visual field so the melody was transposed up, or down if lower. Given that the spatial generational shift works around a 1:3 ratio, so does the pitch shift. The 3 different heights of 2nd generation squares are represented by transpositions of 18, 0 and -18 semitones respectively. The 3rd generation have pitches 24, 18, 12, 6, 0, -6, -12… For every 4th generation step there is a 2 semitone shift in pitch, beyond which the concept starts to become musically, if not physically, untenable. From this it was then quite plausible to build an accurate representation of the Sierpinski carpet in sound.
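The mapping from geometry to music can be summarised numerically. In this sketch the base tempo of 60 bpm for the first generation is an assumed figure (it would give the 480 bpm quoted above at the fourth generation); each generation doubles the speed, and vertical displacement is converted to semitones at the step stated for each generation:

```python
SEMITONE_STEP = {1: 0, 2: 18, 3: 6, 4: 2}    # per row of vertical displacement

def generation_transform(generation, rows_above_centre, base_bpm=60):
    """Return (transposition in semitones, tempo in bpm) for one square."""
    semitones = SEMITONE_STEP[generation] * rows_above_centre
    bpm = base_bpm * 2 ** (generation - 1)   # tempo doubles each generation
    return semitones, bpm
```

So a second-generation square one row above centre plays the theme up 18 semitones at double speed, while a third-generation square four rows up reaches the +24 semitones listed above.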

Spatially the music reflects the physical movements of the visual image. Making use of the octophonic set-up, it is possible to place the audience in the centre of the visual field, so that any changes in the visual image are analogous to those of the auditory (provided that the audience is situated relatively centrally). If an object goes up the screen, the sound moves to the front of the hall; if it goes left, so does the sound image.

Diagram showing the first four generations of the fractal process.

Diagram showing the relationship between the visual and auditory space.

At 0’56” the main theme is introduced, diffused by pitch around the audience, who are sitting centrally within a circle of 8 speakers. The notes are band filtered via MIDI so that each speaker has a range of up to a fourth. As a result each speaker only plays a number of disjointed pitches; however, the effect of them all working simultaneously is to create the sensation of being inside the main theme.
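The band allocation might be modelled as follows. This Python fragment is a sketch only (the actual routing was done via MIDI, and the lowest note and band width here are assumed values): each pitch is assigned to one of the eight speakers in consecutive bands of a perfect fourth (five semitones).

```python
def speaker_for(midi_note, lowest=48, band=5, n_speakers=8):
    """Assign a note to a speaker: consecutive bands of up to a fourth."""
    return ((midi_note - lowest) // band) % n_speakers
```

Each speaker thus receives only a disjointed handful of pitches, yet the full circle sounds the complete theme.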

After this statement of the theme the audio field is matched exactly to the visual field. Within the previous diagram the audience would hear 3 sets of melody: the 1st generation in the centre; the low 2nd generation sounds behind them; the originally pitched 2nd generation material beside them; and the high 2nd and 3rd generation sounds from in front.

Spatial effects were achieved by using a combination of Pro Tools, Kontakt 2, Rewire, and Max/MSP. Individual ‘squares’ of melody were realised using the Kontakt 2 software and sequenced via Pro Tools. These were then run through a version of the Bangor Audio Tools (BAT) application which was edited to include a JPEG image of the carpet, over which mouse and cursor movements could be traced. This spatialisation information was then fed via the Rewire protocol back into Pro Tools.

Initially the carpet was voiced incrementally from ‘square one’ up to the full four generations. This proved to be as predictable and symmetrical as the initial image. As music occupies a very different experiential time domain from the visual, this was unacceptable. Instead, different aspects of the pattern are used to build up different facets of the piece, working towards a final build up of the entire pattern. Accompanying this, there are a number of more processed sounds, all of which are based on the harp. This provides a greater sense of variety in terms of timbre, emotional and spectral content, as well as affording a degree of freedom within a rather contrived system. It is the decisions described in the last few paragraphs that rule the emotional content of the piece. There is an underlying emotional current there. It ebbs and flows and has its own high points and facets, but it is not written about any particular set of events or subjects. Rather it is as much of a construct as the rest of the piece, but this time instead of evolving on a strictly mathematical basis the music started from that point and was then guided into places that sounded pleasing or interesting. The material itself covers a number of different ‘tonal’ areas. The main theme is very melodic and rhythmical, although some of the transpositions are somewhat more abstract or avant-garde in their connotations, such as the step up 18 semitones (a compound tritone). Here we have both ends of the spectrum, the dark and the light, just as in the video element. In this respect it is a truly abstract work.

The video element was created in tandem with the audio. As new elements were added within the Pro Tools session, such as the introduction of the second generation squares, a note was made of the time code, both as a marker within the software and on paper. From this a storyboard was then compiled. The initial images were created in Photostudio and animated in iMovie. By necessity the video works around the cutting and crossfading of a large number of static images, in a manner not dissimilar to mainstream animation. It is effective largely because of the simplicity and uncluttered representation of the sound and image. The video was then run alongside the audio within Pro Tools and re-edited to ensure the tightest sync possible. The audio was then compressed by element/track and a small amount of reverb was added. This is common practice within most commercial music; however, this application is not particularly widespread, and is even viewed with suspicion within this genre, as it is reasonable to expect the work to sound as the composer intended it from the studio. In moderation, mastering is a useful ‘finishing’ process. All subsequent works exist in a number of mixes: one as a concert mix with a full dynamic and frequency range, and at least one other mix more attuned to working on a home system. The ‘home’ versions have been created by boosting the RMS by way of parallel compression, and a slight EQ tip up to boost midrange clarity and bass response. Eight channel versions have also been flattened to stereo. This has been achieved by panning them into a 4 channel format (L, R, Ls, Rs) using simple trigonometry to try to ensure ‘correct’ placement, adding a slight reverb and high frequency cut to the satellite speakers to add a greater depth, and then summing L+Ls and R+Rs into stereo. In some cases, such as that of Postcards from Home, there also exists a third mix designed for .mp3 internet streaming via sites such as http://www.myspace.com.
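The fold-down arithmetic can be illustrated briefly. This Python sketch uses assumed values (the exact gains used in the mixes are not documented here): a constant-power pan, and the summing of front and satellite channels into stereo with the satellites attenuated.

```python
import math

def constant_power_pan(sample, position):
    """position 0.0 = hard left, 1.0 = hard right; power is preserved."""
    theta = position * math.pi / 2.0
    return sample * math.cos(theta), sample * math.sin(theta)

def fold_quad_to_stereo(fl, fr, sl, sr, satellite_gain=1.0 / math.sqrt(2.0)):
    """Sum L+Ls and R+Rs, attenuating the satellite (surround) channels."""
    return fl + satellite_gain * sl, fr + satellite_gain * sr
```

The -3 dB satellite gain shown here is a common convention for keeping summed power roughly constant; any reverb and high frequency cut on the satellites would be applied before this summing stage.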

As there is very little software (especially at the time of writing) that reliably plays 8 channel audio and video, it was necessary to create something that did. As a result a program was written that would take a Quicktime movie file (.mov) and play it in sync with an 8 channel interleaved audio file (.aiff) over a specified audio interface (see Appendix C DVD 1). The application is relatively inflexible but it gives a very tight synchronisation, has proven to be very stable, and delivers video and audio at the maximum quality available on the system on which it is running.

Harp Set is an exploration of the instrumental within the electronic and addresses one of the key visual issues of electroacoustic composition discussed above in sections 1.iii and 1.iv, providing a sense of causality (or corroboration at the very least). Musically, it bridges between the tonal and the gestural, something that will be shown to be of great importance for the successful amalgamation of the electronic and instrumental. Intrinsically, Harp Set provides a staged visual element and an apparently connected surround electroacoustic element.

             2.iv      Postcards from Home          

             8 channel fixed medium

Postcards from Home was written as a response to a request for a piece for the Sonic Arts Research Centre (SARC) in Belfast. It is my first attempt at a fully integrated octophonic acousmatic piece. Harp Set proved a useful foray into the world of multichannel work but by its nature cannot be defined as acousmatic, even in the loose sense that the term has come to be applied. Postcards from Home is a work intended to develop skills in utilising the forces available in a solely 8 channel (i.e. non-instrumental) scenario. It is reasonable to assume that a degree of technical proficiency is required in both the electronic and instrumental fields in order to take full advantage of their joint possibilities. Postcards from Home works with more traditionally electroacoustic techniques than any of the other pieces in this portfolio but in doing so attempts to create a ‘melodically’ and ‘harmonically’ rich piece from non-instrumental sounds.

The ‘sonic postcard’ concept has been around for some time.15 However, the temptation presented by moving into a new house at the time of composition was too great, and this form was experimented with. The overriding musical aesthetic of the concert was to be more of the concrète/acousmatic persuasion as opposed to ‘soundscape’. It soon becomes apparent that these two schools of thought are not as contradictory as they at first appear. A number of sounds were collected from around the area, except for thunder claps16 which could not be obtained to order, and then used as the basis for composition. The naturalistic feel of the start builds on the experience gained with The Way I Saw It but in this case the focus remains on the recorded sounds rather than shifting away to the instrumental.

Structurally, the piece is designed to open a concert, and as a result can be described as a through-composed wedge shape. This statement can be applied to a number of parameters, most notably the volume and number of voices gradually increasing through the piece to open out into a dramatic fanfare effect with the final rolls of thunder (see Appendix A for sonogram and waveform analysis). Within this, the material oscillates between areas of familiar ‘arranged’ sounds that are often left relatively unprocessed, relying instead on a more concrète style of sound presentation, whilst also pushing out into areas of more electronically generated or heavily processed explorations of the source material. A lot of the processing was achieved through fairly conventional means such as the G.R.M. tools comb filter or freezing plugins, but in many cases large amounts of parameter automation were carried out, be that through programmed event changes, H.U.I. or auto-latch techniques, in order to provide a greater sonic variety and a more natural sensation of causality brought about by physical human control. There are a number of other elements that could be said to accompany these general movements, such as the introduction of a ‘block’ of rhythmical material; many of these are to do with achieving musical balance within the overall shape. The defining interest of the piece is in the sounds used, the transformations they undergo, and the emotional and sonic journey on which the piece takes the listener.

In order to take full advantage of the diffusion system in SARC a number of elements were recorded in the studio with 8 microphones mirroring the arrangement of a standard circle of 8 speakers. This is successful to a point. A lot of the time it proved impractical to record with microphones far enough apart to create much noticeable separation in sound placement, and erecting baffles created problems of reflected sound and EQ coloration. This, coupled with the sheer impossibility of getting a forest into the studio, resulted in many of the sounds being recorded in stereo and then processed with the BAT to create 8 channel events.

In situ in SARC the piece worked well with the five layers of speakers. Performance was essentially an act of balancing: ensuring that the more bass heavy elements had the impression of being routed in the basement, that some of the more high frequency sounds had the space to sparkle overhead, and tutti for the end. The only notable exception to this arrangement was a circle of 8 speakers set up to run from hard left, overhead to hard right and under the feet back to the left again. This was used at the very start, where the rotating nature of the material suits this approach.

Whilst the surround method of using 8 channels is becoming more widespread (and indeed 16 and 24), such localised spatialisation and rigid formatting is not suited to the progressively larger performance theatres we are beginning to see, such as SARC and ZKM, and a more fluid approach may be called for to take full advantage of such systems.

             2.v      Con-chords

             8 channel fixed medium

Con-chords is an electroacoustic piece which draws exclusively on the following excerpts for orchestral sounds and textures as its compositional material:

BERG: 3 Orchesterstücke, Op. 6

BARTOK: Violin Concerto No. 2, BB117 – Movements 2 and 3

ELGAR: The Dream Of Gerontius, Op. 38 – Movement 21. Softly & Gently

MESSIAEN: Des canyons aux étoiles – Movement 1. ‘Le Désert’

(permission sought from copyright holders 10/08/09)

These samples were used in order to create an electroacoustic piece that draws on the sound world of the twentieth century symphony orchestra, utilising a wide range of timbre, whilst still retaining many of the traits of standard orchestral writing. As such the sounds are a source in the acousmatic sense rather than being intended as quotes or a form of explicit reference.

Due to the musical and harmonic strength of this material careful handling is required. A large amount of creative input is required of the electroacoustic composer when dealing with such well known sources, in order to justify the intrusion into the musical integrity of these works. It is a perfectly acceptable home musical experience to play back a CD recording of any of the sources above, without any incursion by a third party composer. It is therefore imperative that as a basis for composition such resources be handled delicately, respectfully, and yet with enough audacity and technique to convince the listener of the merits of such an undertaking.

As a result of this philosophy the work opens with a largely unprocessed but carefully edited fanfare figure, placing the listener instantly into a strongly acoustic listening scenario. Within this remit the sounds are treated in a number of ways, using techniques one might regard more as editing, such as repetition or reversal, rather than heavier processing. This approach continues throughout the piece, where sounds are sometimes found almost in their natural state. Often, on close inspection, it becomes apparent that they have been altered quite significantly although they still retain some of their intrinsic nature. The violin extract around 2’20” is a case in point. Whilst retaining its signification as an acoustic instrument, the violin sound has been repeated, delayed, pitch shifted and equalised, resulting in the phrase being extended until around 2’35”, when it is fragmented into the ensuing electronic texture.

In some cases the original extracts have been processed to the point of being unrecognisable. This has been achieved in four main ways. Firstly, by drawing on a number of plugins, notably Digidesign’s Recti-fi and Moogerfooger Lowpass and G.R.M.’s FREQ Warp, to drastically alter timbral qualities, such as at 7’30”. Secondly, many objects were edited in the time domain by compressing, expanding or freezing to create ‘new’ material. Thirdly, sounds have been spatialised within the octophonic field by placement within Pro Tools or external processing in Max/MSP. Finally, processed sounds were edited and triggered via sampler, thus creating not only new sounds but new passages, such as that underlying the texture from 1’25” to 2’40”.

The act of using sonically very rich but relatively stable sounds for processing led to a number of interesting results. The great variety of information contained within a tightly controlled window of timbre, pitch and dynamic meant that when processing to extremes there were often very coherent results to be found within the fine detail of events. Time expansion by factors of up to 64 times the original length led to the uncovering of musical artefacts such as harmonics and articulations previously unnoticed in the recording, and transpositions and spectral manipulation also yielded unexpectedly pertinent results.

Even when heavily processed the work needed to retain a natural feel. As a result, wherever possible, processing automation has been rendered by hand as opposed to with line-drawn parameters, and has been ‘performed’ on the computer in the same way that it is possible to speak of playing a software sampler. Taking this to a deeper level, the work draws on transpositions around the classical perfect intervals of the 4th, 5th and octave for the reason that they combine well harmonically. This couples with the notion that a culturally informed listener would expect to hear such intervallic relationships played on the real instruments depicted in this electronic piece.

Con-chords sets up a tension between the quasi-realistic sound worlds of the edited samples and the alien nature of the processed material. This spirals throughout the piece creating something that could be regarded as sonata form, although it is more the result of conflict and implied resolution between two defined ideas. This sentiment echoes much of the philosophy of the piece. It is intrinsically quite traditional; it draws from established sound worlds, exhibits traits of classical form and carries emotional as well as musical motifs within it. It demonstrates how electroacoustic music can fit into an acoustic context but also how notated music can be understood as a gestural medium as well as the result of notated composition. The work also shows how hard it is to achieve both of these phenomena in one object at one time as expressed in 1.ii.

We are no longer simply dealing with the perceived transformation of a sonic entity from one form to another as outlined by spectro-morphology, but, in addition to this, from one perception of musical discourse to that of a differing standpoint. Con-chords shows that discourse and theoretical structures can be objectified. They can be transformed, they can be strengthened and they can be broken down, provided that there is an alternative system with which to interact.

             2.vi      Seasons

             SSAATB choir and surround electronics (Text by Graeme Harper)

             Download pdf score

Seasons was written as a response to a commission by Llandudno Festival of the Arts. The brief was to create a piece of work for choir and electronics lasting for approximately fifteen minutes. The music is based upon a poem written by Graeme Harper specifically for the piece, and is a very literal reflection of the overall theme of the festival; that of the seasons. Within the writing there was a deliberate attempt to leave leeway for musical creativity, and it presents numerous opportunities and challenges at a variety of levels.


                                                  New flocks, sighted on old stones

                                                  Recall a cycle, oceans swelling

                                                  The teetering tones of trees

                                                  And in the blossom ends bloom.

                                                  A child and mother,

                                                  Two beginnings, two endings

                                                  Meet between movements.

                                                  Larger than one life, larger than you

                                                  Or me, together, apart

                                                  Two, or four, or many cycles more.

                                                  Until one day, one day,

                                                  Away it starts, again.

                                                  New flocks on old stones,

                                                  Beginning and ending, many more.

© Graeme Harper 2008.

Duplicated with permission.

The music is based upon this text and this interaction starts at the structural level. The repetition of the ‘new flocks on old stones’ imagery implies a sense of recapitulation being both ‘beginning and ending’. This analogy is further exploited in that it serves to create an arched form, moving from the atonality of bars 47-71 and 217-226 into the fugal ‘summer’ figure around bars 106-121. Implicit within the structural similarity is the concept of a direct literal or causal link between the text, choral part and electronics, or to condense the classifications further, between music and lyrics.

There are a number of places where the electronic part is used to reinforce the reality created by the text. For instance ‘the teetering tones of trees’ are accompanied by bird song, and the winter section has a ‘wind’ sound within it (albeit wind created in the studio due to the practical difficulties of recording such a sound). With the recognition of the source and its identification with the text, the elements of text and music are pulled into a mutually supportive role. In a number of cases such as the two mentioned, the sounds also possess musical as well as environmental implications, the bird song being transformed by the use of lowpass and comb filtering to become a textural element. This demonstrates the movement beyond the structural and mimetic use of concrète sounds into the realms of the more traditionally musical.

Within the ‘musical’ listening space a dialogue is set up between the choir and electronics which is at times complementary, but occasionally antagonistic. Sonically, the two parts start from a very similar point. The opening features an atmospheric and winter-like soundscape based upon a wind sound which focuses on a number of pitches, most especially that of a written E. The choir then enters quietly upon a unison E, and the similarities between the wind of the electronics and the air flow of the voice are clear. From this point of unison the two parts diversify and maintain a relationship not unlike music and image, in that they are symbiotic but not necessarily the same at any particular given moment. This is especially true around the fugue, where the timbres of the two parts are vastly different, representing a mid point through the piece, but retaining a unity through their matched energy. The analogy with music and film can be further extended in that both the vocal and the electronic media have their own particular strengths and weaknesses; different effects and intentions can be realised by each, and where something cannot be achieved musically, environmentally, structurally or emotively with one, it can often be done with the other, or by a synergy of both.

Time constraints necessitated that the vocal score was prepared before much of the electroacoustic material was created. This was not excessively problematic in that the score was written with an electroacoustic part in mind, and in many places noted on the rough score in a similar manner to The Way I Saw It. Following this pattern of working, three MIDI/.wav files were created from the notation of the vocal part: the first at tempo, the second twenty crotchets per minute under tempo, and the third twenty crotchets above the expected performance speed. The first file was used as a template to set the electroacoustic part. The other two were retained to rehearse the performance of the electroacoustic part in the studio at different speeds and to ensure that the various sound files would still maintain musical coherence when triggered at different time intervals. Seasons was created in 8 channel surround but was then converted into 4 for the concert (left, right, left back and right back) by utilising the 8 channel file and the 4.0 pan control in Pro Tools. This was necessary owing to the limitations of the performance venue, equipment and budget.

The electroacoustic part was developed in layers of events. These were bounced as individual objects. Having individual sound files for separate events greatly increases the capacity for synchronisation between choir and electronics, due to the fact that a given file can be triggered in line with a given performance and is not fixed in the time domain as it would be in a ‘tape’ piece. This approach proves useful although it does have limitations. Most pressing is the need to ensure that a slow performance does not run out of electronic material. This can be resolved in a number of ways. Firstly, ensuring that sound files overlap gives a greater flexibility of time base, secondly percussive and resonant objects can be manufactured to include reverb tails rather than stopping abruptly all of the time, and thirdly blending the timbre of the electronics with the choir can blur the sound world sufficiently to mask moments of disjuncture. It is still easier than trying to persuade an amateur choir to keep to time with a click track and arguably a more musical undertaking. (See below for screen shot of overlapping regions prior to file export.)

Such an approach required an interface. Were it only 2 sets of stereo audio files running concurrently, a quasi-DJ approach and two varispeed sound sources could be used. Unfortunately this was not the case. As a result the Key Linked Audio Noise Generating System17 was developed, a computer program designed to facilitate sound file triggering in installation or other live scenarios. It was programmed using Max/MSP and permits the user to assign any given audio file (up to 4 interleaved channels of 64 bit; aiff, sd II, NeXT/SUN(.au), wave, and Raw Data files) to any given letter key of the keyboard for playback. For instance, if the sound of a casserole dish were assigned to the letter c, the dish would sound every time the letter c was pressed (in addition to any other sounds already initiated by other keys). Just as any sound file can be started or re-triggered by pressing its related key, in this case c for casserole, so any sound can be stopped by keying alt and the appropriate key, for instance alt+c. Although this is not very important for short duration sound events, it becomes a very significant consideration when dealing with longer, more drone-based sounds. As the software sources audio directly from the designated hard drive, the upper limit of audio file length is largely governed by system specification and performance. The resulting possibility of very long audio files, coupled with the real possibility of accidental triggering, necessitated some form of file-specific stop button.


Screen shot showing the overlapping of sound files for Seasons.


For pragmatic reasons the audio structure was built around an Emagic six-out, two-in interface. The first four channels provide audio outputs for a (non-specialist-event-friendly) 4.0 surround sound system. Whilst this obviously does not take full advantage of higher-specification systems, it can provide something close to the overall effect on a fraction of the equipment. This leaves the last two outputs of the interface free, and these were allocated the role of P.F.L. cueing. Any given sound is summed to stereo and sent to these outputs by simply pressing the shift key and the relevant letter, so that c plays the casserole dish whilst C plays the same sound on the P.F.L. channels only. The ‘rehearsal letters’ in the score show where a given letter is pressed.
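The six-output routing can be summarised in a small sketch (again Python standing in for the Max/MSP patch): lowercase letters feed the four surround outputs, shifted letters feed the stereo P.F.L. pair.

```python
# Sketch of the output routing: outputs 1-4 carry the 4.0 surround image,
# outputs 5-6 carry the summed-to-stereo P.F.L. cue of the same sound.

def route(key):
    if key.isupper():            # shift+letter: cue only
        return {"outputs": (5, 6), "mode": "stereo P.F.L. cue"}
    return {"outputs": (1, 2, 3, 4), "mode": "4.0 surround"}

print(route("c"))  # casserole on the main 4.0 outputs
print(route("C"))  # the same sound, summed, on the P.F.L. pair
```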

The software was first used to create an installation as part of an event, Yr Alban Hefin, in the Gwydr forest; it has since been adapted to serve as a live triggering, processing and diffusion interface as part of a composition project working with young people with special needs.

In rehearsal and performance with the choir, a number of factors became apparent. Most obvious was the marked difference in spectral content between studio-based music and live musicians in a performing, rather than recording, environment. Approximately 3-5 dB of attenuation was required in the 4 kHz region to balance the electronics to the choir and to enable the extreme bass and treble capacity to come through without drowning the choir in high-mid range frequencies. Also of note was the narrow crest ratio18 of the choral dynamic range when compared to the capabilities of 24-bit sampling (although this would increase with a larger group); a boost to the R.M.S. signal level of the sound files was required after the first rehearsal. Within the live setting, reverb played an important role. The natural reverberation of the building helped to blend the vocal and electroacoustic parts into one another, in a similar way to its use in recorded music production, hinting at an extension of the ideas outlined in 1.iv. Far from thinking about the electroacoustic part as an extension of scoring and orchestration techniques, it becomes tempting rather to think about instrumental and notated music in terms of mixing techniques, regarding the layering of frequencies, depth and presence, spatialisation and stereo spread as just as vital as, if not more important than, the attention to chord/object voicing or harmonicity.
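The crest figure mentioned above (peak level relative to R.M.S. level, usually quoted in dB) can be computed with a short sketch; the two signals below are synthetic illustrations, not the Seasons recordings.

```python
# Crest factor sketch: peak over R.M.S., in dB. A dense, sustained signal
# (like massed voices) has a low crest factor; a sparse, spiky one is high.
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_db(samples):
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak / rms(samples))

n = 1000
sine = [math.sin(2 * math.pi * i / 50) for i in range(n)]
spiky = [1.0 if i % 100 == 0 else 0.05 for i in range(n)]
print(round(crest_db(sine), 1))   # 3.0 - a sine's crest factor is ~3 dB
print(round(crest_db(spiky), 1))  # much higher: mostly quiet, rare peaks
```

A narrow crest ratio means the R.M.S. level sits close to the peaks, which is why raising the sound files' R.M.S. level (rather than their peaks) was the appropriate fix.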

The work was performed using a four-channel set-up, including a coincident pair of microphones aimed at the choir to enable some of the choral sound to blend further with the electronics by sending it at a very low level through the speakers. Little diffusion took place, in that the work is spatialised within the sound files; however, the triggering of the synchronous audio files with the choir, and the delicate balancing of volume (via faders) and timbre (via EQ and speaker placement) required to make the music work, still leave much to the electroacoustic performer.

Regrettably, due to battery failure in the recording equipment during the concert, there is no record of the performance except a poor-quality video made by one of the choir members. There follows a review/critique by Roy Woods (M.A. electroacoustic composition student), who was in attendance at the first performance and who wrote it at my request as a result of these issues. It is hoped that the work can be recorded with live choir in the future.

Seasons, Ed Wright

St John’s Methodist Church, Llandudno. 1st November 2008.

Seasons, a piece for choir and live surround electronics, received its premiere as part of the autumn session of the Llandudno Festival of Music and the Arts. The piece was specially commissioned for the festival and featured Amici del Canto – a local choir – with the composer, Ed Wright, performing live at the mixing desk. The electroacoustic elements had been pre-recorded and ‘treated’ (to use Brian Eno’s terminology) as part of the compositional process. Nigel Shaw conducted the choir, allowing the composer to concentrate on triggering the electronic sounds and sending them to one (or more) of the five [four ed.] channel speaker system set up around the inner perimeters of the church.

The overall acoustic of St John’s helped enormously to diffuse the sound and add a quantum of reverberation, without overwhelming or muddying the sound. The electronic elements were, therefore, a part of the overall mix, rather than discrete elements poking into the listeners’ consciousness from specific corners of the church. This worked well in creating an overall ambience and an integrated sonic experience.

The piece was, to this listener, impressionistic; it was described as notionally following the four seasons through a year, “….. winter giving way to spring, and a new world gradually emerging to the light….” (according to the programme notes). This it did, inasmuch that the piece was not performed in four separate distinct parts, such as in Vivaldi’s famous work, more as a gradual segue through the impressions of a year unfolding. I thought this worked well, rather than clumsily underlining “This is winter!” etc.

The text sung by the choir was written for the piece and, although difficult to follow from the actual singing, could be read in the programme notes.  I was somewhat ambivalent about the words, feeling that the sound of the choir was sufficient to make the text possibly extraneous. The sound of the choir itself, in the church acoustic, was sufficient to carry the piece. I suspect that the music would have had greater impact with a much larger choir – allowing the music to have greater dynamic range using sections of the choir to build to a greater climax.

The electronic sounds were of interest. They, in the most part, worked well with the choral sound, in some cases underpinning with low bass sounds and in others adding the natural sounds of thunder and birdsong. I think the use of birdsong – perhaps too loud in the mix, should have been more subliminal. In fact once or twice, there was an electronic sound from the rear of the church which didn’t quite fit the homogeneity of the whole work.

I enjoyed the piece. I thought it worked well – apart from the occasional identifiable ‘electronic’ sound from the rear channels. As was said beforehand: “It’ll probably be hated by lovers of choral music and hated by lovers of electronic music – satisfying neither audience.” It certainly was not the case to my ears, integrating well, without the electronics grating on the ear and standing out from the overall perception as a separate identity. In this way it succeeded very well.

Roy Woods

Duplicated with permission.

             2.vii     Polarities: Concerto grosso for orchestra and electronics

             Orchestra, eight channel electroacoustic music and live processing.

             listen to extracts online in stereo at http://www.myspace.com/virtual440 

             Download pdf score

Polarities is the final piece in this portfolio and to some extent represents a culmination of much of the previous work. The work falls into a very classical three-movement concerto format (i. On Points of Departure – Allegretto, ii. Cycle – Andante, iii. Tarantella – Vivace) and includes solo elements from seven orchestral instruments as well as the electronics, hence the reference to ‘grosso’ in the title.

The piece is constructed around three main resources: the orchestra, live sound processing and eight-channel electroacoustic music. Philosophically it is trying to prove (or disprove) many of the concepts and ideas set out in section 1, and therefore this commentary will focus more on the compositional process, as the conceptual framework has for the most part been stated previously. In some ways much of the work behind Polarities has already been explained; the piece is the result of scaling up, culminating and refining ideas and techniques used earlier in this portfolio.

Like The Way I Saw It, this work was created on computer with the electronic and instrumental parts being formed in tandem. In many cases the instrumental part was also worked out on paper with the electronics drawn in as a graphic aide-mémoire, but in addition to this there was the formulation of the processing element within Max/MSP. By necessity this had to be created last in the process, or at the very least it had to be the final element within any given section, because the instrumental sound had to be present in order to be processed. Whilst this is slightly awkward, it is not overly problematic. In much the same way as a composer can mentally hear a given orchestration, so it is possible roughly to approximate a given sound process. Thus the Pro Tools session, Sibelius file and numerous pieces of manuscript paper became heavily annotated in order to bridge the gaps between the worlds.

The instrumental part is deliberately motivically strong. This not only helps to balance some of the strong, real-world connotations of the electronic part but also enables the audience to grasp the nature of the work and its methods of development. Within Polarities the instrumental music is mostly tonal, in that it relates strongly to the fifth and octave and to other touchstones of western art music, such as the tritone in bar 1. However, it is also concerned with development in an electroacoustic manner. The opening of the third movement is a deliberate attempt to mimic downward pitch/time shifting and reversal; the end of the first movement emulates freezing or an ‘inf.’ reverb setting; and the tune from bars 285 and 480 is used as a timbral and contextual transformation. If the music were too atonal this could not be as clear, nor as sonically vibrant.

The ‘fixed’ part was created from a number of four-channel and stereo recordings, one of which was made, most memorably, standing waist-deep in the sea. These were processed in Pro Tools and Max/MSP and orchestrated to create the mainstay of the electronic part of the work. Many of the processing techniques are similar to those used in Con-chords (see above), albeit carried out on largely non-instrumental sounds, with the intention of fitting them into an instrumental framework. These regions were layered using similar techniques to those used in Seasons, so as to create a number of overlapping sound objects to accompany the less temporally rigid orchestral material.

In addition, a number of processing patches were created in Max/MSP to transform the sounds of the solo instruments, which in combination with the electronics comprised the concertino part (flute, oboe, saxophone, trumpet, 1st horn, marimba and harp). The overall orchestral output is also processed and added to the ‘tape’ material to provide a tutti ripieno. In the live context the concertino sounds were captured by close-mic techniques and the tutti with one microphone set back from the stage. In the studio, the output from Sibelius was used to feed developmental patches in order to check processing output. The instrumental processing consisted of a variety of techniques building on the work of Enough~?, from simple delays up to more complex pitch/volume interactive modulation patches. The processing was designed to augment the sounds from the orchestra and fixed part, so it was kept deliberately unobtrusive so as not to overpower the dialogue already created. On a pragmatic level this also meant that processing intensity could be held down in a live situation.

In order to make the project work in a live situation, considerable work was required to build a performance interface. As previously stated, the fixed part can be broken down into a number of objects and events; however, they were too numerous to be playable with the KLANGS software, as there were more sound files than letters on a qwerty keyboard. Wishing to build on the success of the KLANGS approach, another form of hardware interface was required. The answer was found in the form of a MIDI keyboard, providing six octaves (therefore over 66 possibilities and many permutations) and easy availability within a concert scenario. Each sound object was mapped to a key, as was each processing event; with the use of the midiin and sfplay~ objects, each could be triggered simply by watching the conductor and playing the appropriate ‘note’ indicated in the score at the correct time. Accidental triggering can be cancelled by holding down the sustain pedal and pressing the erroneous note. This creates a relatively simple interface for repeat performances with or without the composer present.
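The keyboard mapping can be modelled with a small sketch (Python standing in for the midiin/sfplay~ patch; the note numbers and file name are invented for illustration).

```python
# Toy model of the MIDI triggering interface: each note number triggers a
# sound object or processing event; replaying a note with the sustain
# pedal held cancels an accidental trigger.

class MidiTrigger:
    SUSTAIN_CC = 64                       # standard MIDI sustain controller

    def __init__(self, note_map):
        self.note_map = note_map          # MIDI note -> sound file / process
        self.sustain = False
        self.active = set()

    def control_change(self, cc, value):
        if cc == self.SUSTAIN_CC:
            self.sustain = value >= 64    # pedal down at values 64-127

    def note_on(self, note):
        name = self.note_map.get(note)
        if name is None:
            return None
        if self.sustain and note in self.active:
            self.active.discard(note)     # pedal held: cancel the trigger
            return "stop " + name
        self.active.add(note)
        return "play " + name

m = MidiTrigger({60: "sea_4ch.aif"})
print(m.note_on(60))                      # play sea_4ch.aif
m.control_change(64, 127)                 # sustain pedal down
print(m.note_on(60))                      # stop sea_4ch.aif
```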

In performance Polarities draws strongly on the format first explored in Harp Set, and later developed musically in Postcards from Home and Con-chords. The audience is again seated within a circle of eight speakers with a strong visual/musical stimulus at the front, but in this instance, instead of an animation apparently connecting with the music, there is an orchestra. The orchestra is to be seated just outside the speakers, i.e. with the front speakers on the very edge of the stage or close to this position, thus enabling a solid interaction and spatial conjoining of the sound worlds. To avoid excessive warping of mix and balance, speakers should be kept close to the walls and stage and out of the audience area, provided that an angle of 45 degrees is maintained between them. This ensures that rotational elements are not offset to any great degree, and, given that the performance space is not too irregular, satisfactory results can be achieved with simple trigonometry. Experiments utilising time delays and volume attenuation have been undertaken, but the effects have been found to be of minimal improvement in a ‘normal’ concert hall situation.
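The ‘simple trigonometry’ amounts to placing the eight speakers on a circle around the central listening position, 45 degrees apart. A sketch, with a purely illustrative radius and orientation:

```python
# Eight-speaker circle: 360/8 = 45 degrees between adjacent speakers.
# The 22.5-degree offset makes the speakers straddle the front axis
# (an illustrative choice, not a prescription from the score).
import math

def speaker_positions(radius, n=8, offset_deg=22.5):
    """(x, y) positions on a circle centred on the listening position."""
    step = 360 / n                       # 45 degrees for n = 8
    positions = []
    for i in range(n):
        a = math.radians(offset_deg + i * step)
        positions.append((radius * math.sin(a), radius * math.cos(a)))
    return positions

for x, y in speaker_positions(radius=5.0):
    print(f"x = {x:+.2f} m, y = {y:+.2f} m")
```

In an irregular room the computed points can then be projected towards the walls, which is where keeping the 45-degree spacing matters more than keeping a perfect radius.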

At its premiere, two subwoofers were added to improve bass response and impact: one at the front of the hall taking a summed feed from the front four speakers, and one at the back working in a similar manner. Two speakers were also placed on stage for the benefit of the orchestra, relaying a stereo feed of the audience's eight speakers. This was more for the enjoyment and interaction of the orchestra than any need for synchronisation; however, the psychological effects of such matters should not be understated. As in Seasons, a small amount of high-frequency attenuation of the fixed part was required. The work has since been given a repeat performance in London (7/11/09) in which similar observations were made, in spite of a different orchestra, venue, speakers and mixing desk. A studio performance of Polarities was released by Blipfonica Records on the same day.

Polarities performance, the diffusion point, audience placement, front 2 speakers, microphone setup, front sub woofer and onstage monitoring are visible.

3. Conclusions

This body of work was, and is, an attempted integration of the fields of electroacoustic and instrumental music, ‘in an effort to create a singularity of performance and meaning’. There are individual works displaying movements towards this goal, and a number clearly showing the combination of forces required to meet it. The success or failure of these works from an artistic perspective is implicitly bound up with the cultural and social norms and expectations of the individual listener. This is not to say that judgement is therefore impossible as works can be viewed by those with expertise or compared to similar works that are held in general high regard. The reactions of concert audiences, radio listeners and repeat performances are also indicative of general success or failure in terms of accessibility and enjoyment (as in part attested to by the list of performances in 4.i).

I have attempted to contribute to the integration process and the development of this ‘mixed’ style; producing works (both experimental and commissioned), giving papers, running schools workshops, setting up music events, teaching and above all performing. It is very rare that musical trends and movements can be catalysed by a single work or composer, rather, history shows us that landmark moments tend to happen as a result of cumulative processes.

The combination of forces runs as a link through this portfolio and has a direct bearing on the sonic techniques and intricacies of each piece, and in many cases the juxtaposition of forces facilitates effects that could not be achieved in isolation. This concept is not an end in itself. Work has begun in synchronising electronic and ‘live’ music, in truly understanding the gestural use of space, and writing for and creating instrument/computer interfaces. Musical culture has yet to codify, normalise and absorb these innovations. There are many others. This is a time of great musical and technological progression and as yet society has not come to terms with the new capabilities afforded to it.

The work in this portfolio shows that unification is possible between these diverse materials and methods of working, and that there is much yet to be discovered in the details of numerous individual and composite elements and techniques; but these are not an end in themselves. Each piece stands as music and has narrative, emotions and development external to the mode of musical delivery. In this context we have layers of media and methods of delivery which in turn contain other forms of information; these can be brought together, intertwined, juxtaposed and even set against each other: ‘the medium is [not] the [only] message’.19

Through this body of work, at times produced under difficult circumstances, this writer has learned not only discipline, but also that, in spite of all the rigours of art music and its various formats, one needs to balance various elements into one entity. Such music requires not only diversity of forces, intellectual application and cultural understanding, but also those things that drive the basic human condition, such as love, fear, hunger and joy. Thus composing becomes more than simply the output vehicle for sounds, be they electronic, instrumental or both. It is when we can achieve this balance that integration and juxtaposition at an instrumental/electronic level can be truly successful in a musical sense. This is my intention.

Edward Wright, Penmaenmawr 2009.

1  Wishart, T. On Sonic Art (Harwood Academic Publishers, 1998),  p. 25.

2  Wishart, T. On Sonic Art (Harwood Academic Publishers, 1998), p. 155.

3  Examples of this are numerous such as Pete Stollery’s Altered Images released on Legacies: Works from BEAST vol. 2 Various artists; (Sargasso, SCD 28046)

4 As is graphically depicted by Wittgenstein’s famous ‘Duck-rabbit’ to be found in Wittgenstein, L. Philosophical Investigations (Blackwell, 1993). Section XI part II.

5 Smalley, D and S. Emmerson (eds.) The Language of Electroacoustic Music (Macmillan, 1986) pp. 61-93.

6 Such as in Varèse E. (rev. 1927) Amériques (Hal Leonard Corp.)

7  Description paraphrased from Robert Normandeau 2008.

8 This should not always be assumed to be the case, as demonstrated by a number of the works of Jonty Harrison in which the eight channels are used more as eight stems or possibilities rather than eight physical locations. Such as can be seen in performances of Streams available on Harrison, J. Environs (empreintes DIGITALes, IMED 0788 DVD-A).

9 The word ‘frozen’ is used here in the temporal sense but also in reference to the freezing process/plugin developed by G.R.M.

10 Such as can be found in Smalley’s Wind Chimes on “Impacts intérieurs” Dennis Smalley empreintes DIGITALes IMED 0409 Tk. 5.

11 This unease is not the sole preserve of this writer, see Emmerson, S. Living Electronic Music (Ashgate, 2007) , pp. 107-8.

12  Fletcher, N. The Physics of Musical Instruments (Springer,1998) p. 144.

13 Analysis carried out using fiddle~ v1.2, and bonk~  v1.2 (University of California, 1994).

14 For further specialised details see: Falconer, K. Fractal Geometry: Mathematical Foundations and Applications (Wiley Blackwell, 1997).

15  e.g. Rob Mackay’s Postcards from the Summer on his as yet unpublished Soundworks CD made available by Mackay as a CD-R short run ‘demo’ disk.

16 Roll of thunder sourced from http://www.therecordist.com.

17 A slightly satirical nod towards Jonty Harrison's Klang. KLANGS software available to download from http://www.virtual440.com from spring 2010 (Mac OS).

18 Katz, B. Mastering Audio (Focal Press,  2007), p. 322.

19 Refuting the sentiments of McLuhan, M. Understanding Media (Routledge, 1964).

4.         Public Output

The tables below list performances of works in this portfolio, and works relating to them, during the time of registration.

            4.i        Live performances

Date Piece Event
9/3/05 The Way I Saw It Bangor New Music Festival
17/6/05 The Way I Saw It Sonic Arts Network-Expo
1/7/05 Passage (Sound/image installation) Art Forum, Antwerp
9/3/06 En Masse (stereo electroacoustic) Bangor New Music Festival
8/6/06 Botany (SATB choir) Llandudno Festival of the Arts
26/10/06 Enough~? Electroacoustic Wales
15/3/07 Harp Set Bangor New Music Festival
26/8/07 Broken Glass (String quartet electronics and image)
Passage & Harp Set
S.N.O.G. Fest (Snowdonia Nights Out Gazette)
25/10/07 Postcards from Home SARC – Queens University Belfast
15/11/07 Postcards from Home (revised) Electroacoustic Wales
28/2/08 Con-chords Bangor New Music Festival
13/3/08 Harp Set Risk of Shock
8/4/08 Harp Set PACE at DMU
28/6/08 Klangs project (live triggering/interactive event) Alban Hefin
26/8/08 The Way I Saw It & En Masse S.N.O.G. fest
1/11/08 Seasons Llandudno Festival of the Arts
28/2/09 Polarities Bangor University Symphony Orchestra
5/4/09 Castell (composition project with Ysgol Pendalar) Bangor New Music Festival
11/7/09 Con-Chords, En Masse, Passage & Postcards from Home Blipfonica, Speakers Corner
7/11/09 Polarities Philharmonia Britannica, St. James’ Piccadilly

             4.ii       Broadcast works

Date Piece Method
1/10/07 Botany Storm FM (student radio)
2/3/08 Broken Glass WFMU (internet radio)
5/6/08 Postcards from Home BBC Radio 1 Wales
18/10/08 Con-chords and Postcards from Home Coast/Champion FM with interview
5/4/09 Castell (composition project with BNMF and Ysgol Pendalar) Y Sioe Gelf S4C t.v.

            4.iii      Other academic / peer reviewed work

Date Event
1/9/07- Module co-ordinator / lecturer for UWB ‘Practical Music Technology’ and ‘Recording Techniques’ modules.
8/4/08 Performed and delivered paper on Harp Set at PACE, De Montfort University.
25/6/08 Broken Glass performed and delivered as part of ‘Religion and the Arts’ conference, Bangor University.
2/7/08 Con-chords achieves ‘mention’ at Prix Bourges

5.         Technical Resources

This list is a combination of university equipment and the composer's home studio.

Function Make/model
Workstations Mac Pro quad-core 3 GHz OS X.5 with Digidesign 96 I/O; PowerBook G4 12″ with eMagic 6|2.
Software Pro Tools 8 (including GRM Tools Classic ST, Waves Gold TDM), Max/MSP 5, BAT (Bangor Audio Tools),
Sibelius, iMovie, iDVD, Photo Studio, Logic Audio 6, Finale 2004, SoundHack, AudioSculpt.
Monitoring ATC SCM50A (main left and right), ATC SCM20A (surround), Genelec 7070A subwoofer.
Mixing desks Soundcraft Spirit Studio 16-8-2; Soundcraft M12.
Keyboard Yamaha SY99 with MOTU Fastlane USB interface.
Microphones AKG 414, C1000S, C2000; Schoeps CM5U/MK4; Shure SM57; Rode NT1; Samson C02.
Location recording Tascam HD-P2 and Zoom H2.

For live events a variety of equipment was used often incorporating some of the above, but for outside events other speakers / desks etc. have been used.

6.         Bibliography

             6.i        Written Material

Cox, C. & Warner, D. (eds.) Audio Culture: Readings in Modern Music (Continuum, 2004).

Bartlett, B. & J. Practical Recording Techniques (Focal Press, 2005).

Emmerson, S. (ed.) The Language of Electroacoustic Music (Macmillan, 1986).

Emmerson, S. Living Electronic Music (Ashgate 2007).

Falconer, K. Fractal Geometry: Mathematical Foundations and Applications (Wiley Blackwell, 1997).

Fauvel, Flood & Wilson (eds.) Music and Mathematics: From Pythagoras to Fractals (Oxford University Press, 2006).

Fletcher, N. The Physics of Musical Instruments (Springer,1998).

Katz, B. Mastering Audio (Focal Press, 2007).

McLuhan, M. Understanding Media (Routledge, 1964).

Rumsey, F. Spatial Audio (Focal Press, 2005).

Schopenhauer, A. The World as Will and Idea (Phoenix, 1995).

Toop, D. Ocean of Sound: Aether talk, ambient sound and imaginary worlds (Serpent's Tail, 1995).

Wishart, T. Audible Design: A plain and easy introduction to practical sound composition (Orpheus the Pantomime Ltd, 1994).

Wishart, T. On Sonic Art  (Harwood Academic Publishers, 1998).

Wittgenstein, L. Philosophical Investigations (Blackwell, 1993).

             6.ii       Websites




             6.iii      Scores

Berio, L.  Sequenza VI (Universal, 1967).

Holst, G. The Planets (Dover Publications Inc., 1916).

Ligeti, G. Lux Aeterna (Edition Peters, 1966).

Stravinsky, I. The Rite of Spring (Boosey and Hawkes, 1913).

Varèse, E.  Amériques (Hal Leonard Corp., 1913).

             6.iv      Recordings (CD and Audio DVD)

Boulez, P. Boulez: Sur Incises Ensemble Intercontemporain. (Deutsche Grammophon, 463 475-2).

Elgar, E The Dream Of Gerontius: Boult/LPO (EMI, HMVD 5 72758 2).

Harrison, J. Articles indéfinis (empreintes DIGITALes, IMED 9627).

Harrison, J. Environs (empreintes DIGITALes, IMED 0788 DVD-A).

Mackay, R. Soundworks (unpublished at present).

Messiaen, O. Olivier Messiaen: Des canyons aux étoiles: Leeuw/ASKO Ensemble (Disques Montaigne, MO782 138).

Smalley, D. Impacts intérieurs (empreintes DIGITALes, IMED 0409).

Various artists. Cultures Électroniques: 24éme Concours International de Musique Electroacoustique (Unesco, LCD 278063/64).

Various artists. Legacies: Works from BEAST vol. 2 (Sargasso CD, SCD 28046).

Various artists. Schoenberg, Berg, Webern; Orchestral Works: Karajan/Berlin Philharmonic (Deutsche Grammophon, 419 781-2).

Various artists. Sonic art from… (MPS, MPSCD013).

Wishart, T. Red Bird/Anticredos (EMF, CD022).

Various artists. Stravinsky Bartok – Violin Concertos: Salonen/Mullova/Los Angeles Philharmonic New Music Group (Philips, 456 542-2).

Appendix available at www.symbiosisphd.org


My blog/life so far…

I have finally dragged myself into the 21st century. The code for a lot of my work is now up at https://github.com/edwardwright/virtual440

Please exercise caution if you decide to buy this book. I am very flattered that people have lifted things to do with my life from Wikipedia but it is NOT worth buying. The information is all up there on-line. If you buy this they will be ripping down my reputation and your bank balance… Sorry 😦


Interactive sound game, Ed Wright 2013.


Drop! is an interactive sound game based around a marble run toy. When someone drops a marble (in this case a steel bearing) down the run, it triggers audio samples from the computer. The pieces which trigger the audio are wired into the system so that when the setup of the run is changed, so are the order and timing of the sounds, thus making a reconfigurable piece of music and fun!

The music is a mix of 11 short audio extracts. These range from close-microphone recordings of marbles doing things in the studio, such as rattling in a bowl or going down the marble run, to more mainstream sounds. One percussive sound was made by taking the sound of a marble dropping on a board, slowing it down four times and then modelling the reverberant acoustic of York Minster around it. Others range from FM synthesis (the default setting for all good sci-fi film scores!) to a more dance-y sounding loop created in Reaktor.

The audio was balanced and mixed in four-channel surround sound to enhance the motion and movement of the tracks. The sounds of a bicycle bell and a duck have been thrown in for good measure.

Things happen in the work when the marble rolls over two pieces of aluminium tape and completes a circuit, effectively closing a switch. With the aid of some home-made electronics and an Arduino board housed in an old Chinese takeaway box, these messages are sent via USB, luck and faery power to the computer.
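The switch-reading idea can be sketched as a software model (this is not the actual Arduino firmware): with a pull-up, the tape contact reads 1 when open and 0 when the marble bridges it, and a short debounce ignores contact chatter.

```python
# Sketch of switch-closure detection for the marble run: a closure must
# hold for `settle` consecutive readings to count (a crude debounce).
# The readings below are invented for illustration.

def closures(readings, settle=2):
    """Return indices where the circuit closes and stays closed."""
    events = []
    run = 0                # consecutive closed (0) readings so far
    was_closed = False
    for i, r in enumerate(readings):
        run = run + 1 if r == 0 else 0
        if run == settle and not was_closed:
            events.append(i - settle + 1)   # index where the closure began
            was_closed = True
        if r == 1:
            was_closed = False
    return events

# open, a single-sample blip of chatter, then a real three-sample closure
print(closures([1, 1, 0, 1, 1, 0, 0, 0, 1, 1]))  # [5]
```

Each detected closure would then be sent up the USB serial link as a trigger message for the corresponding audio sample.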

In the computer, the interface which decodes all these signals, brings up pictures of the active part of the marble run and plays back the audio has been written in Max/MSP. Hopefully that makes sense for those who want it to; the full code is available here.

So there you are. An in depth and subtle composition that questions the nature of sequence in music with some technical rough science thrown in…
Maybe sometimes it’s more important to have some fun!

Many thanks to Alena Grace Wright for her assistance and forbearance with this project.


Today I’ve been at code control in Leicester; good to catch up with the gang, such as Andrew Hill & Richard Garrett. Roughly 5 years ago to the day I delivered my first paper at SSSP, De Montfort Uni, here. I was due to become a Dad pretty much on the day and in true biblical style we trekked over here. It would have been one hell of a ‘keynote snub’ but the birth didn’t happen until a few days later!

Due to the stupid amount of un-seasonal snow, the zip of my Mammut winter jacket (and the rest of it!) has finally broken. New coat, a visit to a dead monarch… and yes, Richard III’s car park is very pretty as car parks go. Gig tomorrow/today, good night all.


Going out recording source sound objects with 21 year 6 pupils from Ysgol y Graig Llangefni for Bangor New Music Festival. The sun is out, the grass is green, let the madness commence!


RIP Dr. David Evans

A sad day, the loss of a teacher, friend, fantastic story teller and an encyclopaedic mind. A truly gentle and generous soul. Thank you David.


Having fun recording a game of snakes and ladders in 6 channel surround sound. I am very grateful to Alena for letting me use the wooden game set that Granny and Granddad gave her for Christmas, and for Mike’s help in setting up, playing, de-rigging and being up for madness!


Snakes and ladders sound recording


I have finally got around to producing a flattened copy of this, it was part of the Bangor New Music Festival last year and the festival workshops are coming up again soon! Enjoy.


Happy Christmas and New Year to all those who celebrate such things, and tons of love to those who don’t. It’s the end of a long year, hopefully the world will not end tomorrow! I have been seriously lucky this year and have had reason to travel to: London, Leeds, Edinburgh, Ljubljana and Toronto to name but a few. I’ve played about making seascape installations, played the violin, diffused acousmatic works, been exhibited in galleries, taught and learned a hell of a lot more. Thanks to all that have helped, listened, watched, (those who’ve read how I’ve staggered on through this blog) put up with, and even to those who’ve been down right hostile, it’s been a blast. Good luck for Christmas, the New Year and the end of the world tomorrow. 🙂

8 bit christmas mario mushroom

First steps in proper Sonification…. Here we go with one I made earlier!

Sonification Max Patch

Revving up for random family Christmas carols! Nice to see the spirit of wonder about the inside of a piano still carries on!

Upright piano open hammers and strings

Dear All, I don’t think I have ever put up a concert on here that I am not playing in, but this is one that I am helping to set up for in Bangor and by a man who I have amazing amounts of respect for, both as a person, for the fanatical precision that goes into his work and a great sense of humour. So come along if you can!


Trevor Wishart: © R.H.Y. Yau / 23five Incorporated

Andrew Lewis (director)

Renowned English composer Trevor Wishart presents his newest work Encounters in the Republic of Heaven, an invisible ‘sonic opera’ in which recordings of everyday stories told by adults and children from north-east England are computer-transformed into a fantastic surround-sound experience.

Trevor Wishart

Tickets: £10 full price; £7 concessions; £5 students. One child admitted free with every accompanying adult. Additional children £2.

Date: 29/11/2012
Time: 8:00 PM
Location: Powis Hall, Main Arts Building, Bangor University LL57 2DG

For more info click here

After the rain, floods and high winds of yesterday, this is the road outside Hendre Hall where my studio is.
Flooding Wales
Flood damage Hendre Hall
Given the size of the boulders (no exaggeration) washed down the hill and the 2 ft of mud that has drifted onto the side of the road I am amazed that the water didn’t get into the studio. Sometimes having to carry stuff up steps can be a blessing after all. My thoughts and sympathies go out to all affected by the weather of the last few days, and with more to come, keep safe.

Still pushing forward, proud to be playing for this back in Bangor with a load of friends, and a super-sexy poster!

Blipfonica underground fat cat

Back to lecturing, it seems we have a very nice bunch of students and it is quite refreshing to be teaching on ways to use stuff such as Max/MSP rather than just how to use it! 🙂

On return to the U.K. it seems the entire world has a cold, oh dear, here we go!

In Ljubljana for ICMC 2012, beautiful city crappy hostel, but it does have internet access and therefore the not quite qwerty (qwertz) keyboards have brilliant ‘new’ letter keys!


Ok I’ll stop now.

Taking a moment out from writing to watch stuff that has been sent to me. Thanks to Andrew Hill for the link to this, and to John Richards and the Dirty Electronics crowd. It’s a really nice little portrait of what they do and the overall feel of what goes on. It’s not overly techy, which is probably both a good and a bad thing, but they are lovely people and I’ve stuck it up here so that hopefully a few more people can get to see their work too!

Charge/Discharge (2012) by John Richards [Documentary & Performance] from Andrew Hill on Vimeo.

Back from Toronto, what a lovely place and people, if they had tea it would be even better! Now hiding re-grouping and getting set for ICMC!

CN tower toronto tourist


Playing violin for a symposium in Toronto whilst staying in a backpacking hostel, whatever I do I seem to always be massively under/over dressed!

Sorting out last odds and ends to fly to Canada in the morning. This will be my first ever trans-Atlantic flight, super excited! It’s for this…

NAISA toronto

Toronto Electroacoustic Symposium 2012
New Adventures in Sound Art (NAISA) and The Canadian Electroacoustic Community (CEC) are pleased to announce the 6th annual Toronto Electroacoustic Symposium 2012. At the heart of this 6th annual symposium will be a keynote lecture by Sound Travels featured artist Trevor Wishart (UK).

All events at the Artscape Wychwood Barns, 601 Christie St in the Christie Studio (#170)
except evening concerts which are in the Wychwood Theatre (#176)

Symposium Concert #1: 8-10 PM Wednesday 15th August 2012
Kerry Hagan – Morphons and Bions
Ethan Hayden – bats with baby faces in the violet light
Yota Kobayashi – Shikisou
Donna Hewitt – One
Ed Wright – Crosswire
Andrew Connor – No. 3: Foghorn Abstraction



Having battled my way along the coast via car, foot, train and bicycle I’m now in the process of mixing down for ICMC in Slovenia yeeeeehaaaa!

For more details and program see http://www.icmc2012.si/program.html

Pro Tools ICMC 2012

Here’s some footage of a piece by Richard Worth entitled Green Man Blue that uses some programming I put together for the electronics. Enjoy!


Recovering… and gearing up for next weekend! It was nice to see my parents though this weekend 🙂


Gearing up for a mad one… Running and performing in Risk of Shock in Bangor (N. Wales) on Thursday evening, then throwing a load of kit in the car and driving down to do stuff at BEAM Festival at Uxbridge (London) Friday – Sunday…. Wish me luck!

Risk of Shock

BEAM Brunel 2012


WOOOOOOOO! Got through to both the Toronto Electroacoustic Symposium AND ICMC 2012. A little giddy right now! And thankful to Prof. Thomas for helping to find a bit of cash to cover travel!!!


Just discovered this flyer, you can just make out ‘Crosswire’ on there, yep that’s me in Edinburgh this time next week! 🙂

Red Note ICC Jam house


Playing at IFIMPAC at Leeds College of Music.

Ed Wright Dale Perkins IFIMPAC Leeds

A great few days! Blogging on the train home, stuck at Manchester Piccadilly waiting for the train to Chester. It’s now 11:15 on Friday; I got to sleep around 3am Thursday, up at 5:30, bed last night at 2am, gigged today and won’t be home till gone 2pm Sat….. IF THE TRAINS ALL MAKE THEIR CONNECTIONS!!! Listening to Squarepusher’s Ultravisitor as a come-down to chill out and show my age. Nos da pawb (good night all) ;)


In Leeds waiting to rehearse…

25/4/12 cont.

Also gearing up for Ifimpac in Leeds tomorrow. Lots of work to do on the train…



Proper geek moment…. Is it wrong to get excited when someone else tweets a picture of them running your software several hundred miles away? Oh well guess it’s just me being sad then, but hey, Richard Worth and the Red Note Ensemble are currently working on a piece of his based on my Crosswire software to be performed in Edinburgh next month.

Crosswire mac


Noisy Nights with ICC
When Wed, May 9, 20:00 – 22:00
Where The Jam House, Queen St, Edinburgh
Description Over 30 expert designers from around the world have contributed a variety of instruments, interfaces and sound generators for the seven resident composers from the University of Edinburgh to work with. These range from 3D-printed spheres that respond to touch, to software modelled on neurons firing inside a human brain. The composers will create chamber works written specifically for their chosen instrument, to be performed by Red Note at a special edition of Noisy Nights at The Jam House, 5 Queen Street, Edinburgh at 8pm on Wednesday 9 May.

Spent most of today preparing 2 lectures; one on logic and its written form, the other on interfacing Arduino boards with Max/MSP. I wonder how much I’ll remember tomorrow?!

Very happy to be part of this. Congratulations to all the artists and organisers for a wonderful few days!

Interactions Bangor


Hello all, sorry for not being here for a while. It has been frantically busy with the Bangor New Music Festival, Y Galeri Caernarfon, Cauldrons and Furnaces, and Interactions to name but a few! Anyway, trying to stay focused and also relaxed, so if you see me running around like a fool please, you have my written permission, tell me to calm down!

My piece Con-chords is playing today (almost yesterday/tomorrow given time differences) as part of W.E.A.L.R. (World ElectroAcoustic Listening Room), an element of California State University’s New Music Festival. Drop in if you can! See EVENTS for more details.

CSUF World Electroacoustic listening room


Amazing moment! I ran the Who Can Hear the Sea? installation in the studio overnight. Wonderful to walk back in to find that, firstly it hadn’t crashed, and, it actually worked. The different loops are all re-triggering correctly and therefore the sounds are continuously shifting in time. It’s like an entirely new piece this morning. YES!


One piece finished… time for pancakes!


Nearing completion with Who Can Hear The Sea? I’ve had to build a player to make the looping work, and realised that it may have ‘normal’ people looking at it, so decided I really should make it pretty!

Who Can Hear the Sea? Ed Wright

6/2/12 – cont.

Complete Bricks, and Parties


I have spent the evening working on Who Can Hear the Sea? There is something there now, I’m not sure what; it still needs a lot of refining, but it is growing into a useful installation piece. To keep the work open and moving it is based around the idea of 8 surround-sound audio loops running at once, with sound objects/musical gestures on each of them. The trick is that when each loop gets to the end it simply starts again, but no two loops are quite the same length. The loops last for:

401, 397, 389, 383, 379, 373, 367 & 359 seconds.

All of these figures are prime numbers, so the loop lengths are pairwise coprime: once the piece has been started, it will take the product of all eight durations, an almost unimaginably long time, for it all to come back into sync. This is all well and good and fits with the image of water: it is never quite the same, even though it may be the same stretch of water.
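The resynchronisation arithmetic can be sanity-checked in a couple of lines. This is just a sketch of mine, not part of the piece: since the eight lengths are all prime, and therefore pairwise coprime, the least common multiple (the time before every loop lines up again) is simply their product.

```python
# Sanity check of the loop arithmetic: with pairwise-coprime lengths,
# the full cycle is the least common multiple, i.e. the product.
import math

loops = [401, 397, 389, 383, 379, 373, 367, 359]  # loop lengths in seconds

period_s = math.lcm(*loops)
assert period_s == math.prod(loops)  # all prime, so lcm == product

print(f"{period_s} seconds, roughly {period_s / (365.25 * 24 * 3600):.3g} years")
```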

From a musical and composing point of view making this work is becoming more interesting. In order to retain much musical sense, at some level each loop must be self contained. As the loops become staggered the different elements of composite sounds will obviously go out of sync, therefore to create a piece that does not simply sound out of sync there must be complete musical gestures contained within each loop. Obviously these can interact with other loops much like phasing in minimalism or fugue and countersubject, but in much the same way there needs to be completeness to each element.

Moving up from this level we get into context and causality, trying to introduce the various elements and gestures in a self contained way. Rather like going to a party and having to introduce yourself to everyone, it is hard to get a system to introduce itself: firstly at all (how do you prepare for something with that something?) and then how to achieve this without becoming oddly repetitive (Hello I’m X and I work in Y, how are you? Have you met Z?)

Once this hull is breached how does one create structure?

I am placing faith in two apparently contradictory points of view. Firstly, the minimalism/fugue analogy: this work is at some level very similar, in that there are complex elements arranged within a very strict (if somewhat less apparent) time structure. Secondly, that I am emulating the stretch-of-water image. Once this is realised the rest falls into place. Perhaps I am not creating ‘musical’ structure but rather a different, time-based structure on which I am hanging various sonic artefacts. These help the listener to navigate through it by mapping time and the relationships of the cyclical loops, and augment the aesthetic experience of this passage of time and memory.

Just a thought. Night night all!

Working towards this:

Blipfonica Caernarfon

I have a poorly dog and a tax return to do 😦

Lots of marking, lecturing and running an open mic night; hard work paying off though, got listed for ICC http://people.ace.ed.ac.uk/students/s9809024/icc/crosswire.htm

Setting up for a long one in Studio 1 Bangor Uni tonight!

can now blog via mobile…

Just been appointed ‘Teaching Fellow in Music’ at Bangor University. Ooooooh

Happily processing sea sound to make new sound objects 🙂

New work in the pipeline to be premiered 10/3/12 at Bangor New Music Festival.

Who Can Hear The Sea?

Sound Installation for: Surround sound audio and max/msp programming.

Who Can Hear The Sea? is a sound installation commissioned by the Bangor New Music Festival. The sea is a major theme of this year’s festival as well as being a fascinating subject for recording. In developing this piece I tried to record the sound of the sea free from outside influences such as wind, road noise and seagulls, but gradually became aware of a number of issues that arise.

Besides the practical issue of trying to find a silent beach on a windless day with big waves, water by itself makes little or no noise; it is the matter and energy around it, in it, acting on it and contained in it that makes it sonorous. In addition to this there is the dynamic but structured nature of the sea: it is constantly changing, in an incredibly complex yet ordered way.

Who Can Hear The Sea? uses recordings of sea waves as its basis, and then uses computer processing and techniques such as subtractive synthesis to shape, confine and restrict these sounds, creating a number of sonic objects or ‘musical’ statements.

These sound objects have then been edited together to create 8 audio loops, the longest of which is 7’02” and the shortest 5’59”. These loops all play simultaneously and repeat as they reach the end of each play-through, so the different loops move in and out of phase. Not only does this make for a constantly changing piece, it means that, just like the sea, you can listen to it and hear ongoing similarities but it will never be quite the same again: for all the loops to come back to the start and line up exactly would take 1875 years, 1 day, and 16 seconds (roughly)!



Happy New Year (and late Christmas!) to all! Best wishes to those who don’t subscribe to the same calendar as I do too! I hope the next 12 months will be even better than the last and that everyone gets what they need, even if not all that they want. Take care.


Starting work on a new sound installation commission for Bangor New Music Festival, to be premiered 10/3/2012, based (loosely) around the theme of the sea. XD



Over the last few days I’ve been thinking about composition and analysing what it is we do, how I want to do it and attempting to clarify my thoughts. So. Here goes….

This is where I am coming from and the direction in which I want to travel.


Manifesto 5 Dec 2011

1. I want to create works utilising sound; for the development and creation of artistic, aesthetic, moral and philosophical structures.

2. These will aim to be of high quality.

3. Where possible the technology used or developed will be open source and carry the potential for reverse engineering.

4. The output will be accessible to listeners, performers and other composers; in terms of performances, materials and syntax.

5. The works must be artistically and economically sustainable.

Ed Wright



Very horrible and rainy out there, so I have tried to compile a load of program notes; click here to see them! PROGRAM NOTES



A couple of videos from the Blipfonica Live concert last weekend.

If you can’t view them here try http://www.youtube.com/user/edwright440



Played my first public event ever on the Theremin last night. Wooop woop!


Going into hibernation as far as new works go for a while. There are several gigs coming up and lots of things I’m working on behind the scenes, but for once there is no instant deadline for a new piece coming up. Several new pieces are in the pipeline and will be unleashed soon. I’m off to the shed with a new set of violin strings and a theremin, ‘dum de dum’.


Wendy Dawson’s recording of the projection of Amser/Time by Tim Pugh, Wendy Dawson and Ed Wright on the side of Conwy Castle for Blinc Festival on Saturday night. We could give you the glossy studio version, but you’d miss the mad and massive atmosphere there. Current estimates put it at 1500 people on the Quay alone!



Well done to all at Risk of Shock! And yes I will take the School of Music kettle back!
Photo of me in mid scribble performing ‘Live Study’ taken by Huw Williams


Running around like a loon trying to sort out three days of events: Risk of Shock and Blinc (see EVENTS for further info). But it turns out I’m not the only one, as Joel’s photo below will prove. This shows a roof, yes a roof, being put on top of one of the towers in the town walls of Conwy in order to house ‘This is Tomorrow’, Dominic McGill’s sculpture and my audio.

This is basically civil engineering in a world heritage site! Feeling a little humble right now. And yes, it will be taken down after!

Great to be part of so many good teams for the upcoming days. Bangor Uni, Blinc and Blipfonica, thank you. And also anyone else who should have been on the list, even if you didn’t begin with ‘B’.

Risk of Shock – Powis Hall Bangor Fri 7:30pm
Blinc – Conwy sat-sun all day.
***All free!***

A wonderful sketch by Dominic mapping out the text and Möbius patterning of the audio for ‘This is Tomorrow’. I’m not quite sure if he did it before or after I played some of the work to him, but it seems to match up with what’s going on very well!

‘This is Tomorrow’ (with audio) will premiere at Blinc Festival 22/10/11

Putting the final touches to the next release: a 7’30” CD single of the electroacoustic work done for Helen and Rhys’ installation.

JACKDAWS – Ed Wright Release date 22/10/11 BLIP006

For the amazing price of £1.00!!! Available on CD (and download soon)

Stuff for the 3 Blinc commissions is completed *sigh* now gearing up for Risk of Shock on the Friday and Blinc on Saturday/Sun. z z z z z z

A wonderful photo montage put together by Tim of the work that he and Wendy have been doing on the Amser (“Time” in Welsh) stop motion animation. I’ve just completed the audio for it and it’s going to be projected on the side of Conwy Castle as part of the Blinc digital arts festival later this month. 🙂
Photo courtesy of Tim’s facebook page!




Photo from the Good Robot ‘Interplay Room’, complete with sofa and ‘Sound Games’ on a big screen (along with ‘Hopscotch’) for people to come and tinker with 🙂 I was honoured to share the room with Mike Cook and Zac De Santos and their wonderful inventions…

I’ve decided I actually quite like collaborating. I’m currently working on a piece with Dominic McGill for Blinc Festival (also, as Tim and Wendy haven’t finished their stop motion animation yet, it seemed sensible to get on with something). Anyway, here is a photo of Dominic’s piece:

‘This is Tomorrow’ by Dominic McGill

It’s great the way that working with other people forces you into new ways of doing things. Use Hachiya’s ‘Hiroshima Diary’ as the basis for a sonic Möbius loop? Hmmm, yep, this can be done, and I’m now in love with Pro Tools elastic audio for speeding things up and down to boot!


In Bangor University’s Studio 2 setting up to do some recording with Tim Pugh and Wendy Dawson. We are collaborating on stuff for the Blinc Festival next month, recording some very quiet ticking clocks. Having got in early to set up, I’m now ready and bored, happily reflecting on the fact that whilst ideas and talent are absolute essentials, a soundproof room, a 96kHz sampling rate and a matched pair of Schoeps microphones will hopefully help a lot too!

A few mins later and here we are: a nice analogue clock, probably rendered completely rubbish by mp3-if-ication, but it was lovely and clear to start with, and also quite nicely irregular if you listen long enough!

Strangely it’s quite nice to be back in the place, I’ll also (apparently) be doing work for the festival with Dominic McGill & Helen Booth…. Busy busy busy!



Sound Games Ed Wright 2011

Structured improvisation for computer and Wii controllers

Sound Games is an improvisational/compositional environment (programming EW). At its heart lies the idea of gesture as fundamental to our understanding of music. This functions on a number of levels; firstly there is the innate physicality of playing an instrument, banging a drum harder to make a louder sound or breathing in to prepare to sing. At the other end of the process there is the aesthetic recognition of the gestural properties of a piece of music, as exemplified in the phrases ‘a descending tune’ or a ‘swirling cacophony’.

Computers have very simple methods of communicating with the outside world. On an average PC one tends to find a keyboard and a mouse as ways of entering information. Whilst this is perfectly acceptable in most cases it does little to reflect or for that matter enhance the music making process, indeed it is almost the digital equivalent of trying to make music with a pencil and paper or various pieces of office hardware!

Sound Games is an attempt to break down some of these barriers.

The program uses two Wii remotes, one to control the left half of the screen and one to control the right. Each long white rectangle represents a sound file, indicated by its name. The position of the cursor within that box (controlled by the remote) dictates the pitch and left/right positioning of the audio in a hopefully intuitive fashion: left makes it go left, up makes it go higher. In addition to this, rotating the controller inwards makes the sound louder (conversely, outwards quieter), the ‘A’ button locks the cursor to the current vertical position, the ‘down’ button disables pitch shifting and the ‘B’ trigger starts or stops overall playback.

It is an inherently simple instrument, but a lot more expressive than a QWERTY keyboard. If one then sets the tolerances fairly fine it becomes harder to control but offers a far greater range of expression: you can do more things, and in so doing the potential for doing ‘wrong’ things is also increased. It is exciting to find a way of creating music with a computer that requires similar levels of practice and motor coordination as you would expect on an acoustic instrument.
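As a rough illustration, the control mapping described above might be sketched like this. This is not the original Max/MSP patch: the function name and all the numeric ranges here are my own assumptions for the sake of a self-contained example.

```python
# Hypothetical sketch of the Sound Games control mapping: cursor position
# sets pan and pitch, inward rotation sets level, 'A' locks the vertical axis.

def map_wiimote(x, y, roll_deg, a_held=False, locked_y=None):
    """x, y: cursor position within the sound's rectangle, 0..1.
    roll_deg: controller roll in degrees, negative = rotated inwards.
    Returns (pan, pitch_shift_semitones, gain)."""
    if a_held and locked_y is not None:
        y = locked_y                       # 'A' locks the current vertical position
    pan = 2.0 * x - 1.0                    # left..right mapped to -1..+1
    pitch_shift = 24.0 * y - 12.0          # up makes it go higher (+/- one octave)
    gain = min(1.0, max(0.0, 0.5 - roll_deg / 180.0))  # inwards = louder
    return pan, pitch_shift, gain

# Centre of the box, controller level: dead centre, no pitch shift, half volume
print(map_wiimote(0.5, 0.5, 0.0))  # (0.0, 0.0, 0.5)
```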



Working all day and then set up for “Live Study” (see score below in 4/9/11) until 10pm :&
It’s shaping up to be a lovely environment to be in, come along on the 9th it should be ace and it’s free entry!


Now showing in an exhibition at the Penmaenan community centre.

Folksong: Ed Wright, 2011

(Soundsculpture from found object, fishing line and piezo transducer)

Folksong is the re-imagining of an old framework. Retaining its innate structure it draws from other resources in its surroundings and incorporates new ideas and materials into itself….

P.S. This hopefully starts to make sense of some of the ideas from 1/8/11 and 15/7/11 (see below)… I’m working on a massive 3D version of this idea: a cube made out of 15mm steel, roughly 2m x 2m x 2m, a similar idea with the fishing line and piezo mics but extending from the edges, less regular and big enough for people to easily get inside and move around in…..



To be premiered on Friday….

LIVE STUDY: Ed Wright 2011
Performance Score

Live Study is an electroacoustic work performed by one person. The performer is called upon to create pictures, drawings and sculptures of their choosing using the materials provided. These actions provide the audio material for the electroacoustic work. The performer must balance the needs of creating visual artifacts with the realization of the musical work, maintaining as high a sound and ‘composition’ quality as possible throughout.


2.1 Materials:
12 HB Pencils with rubbers
3 Permanent markers various colors
3 Sheets of A3 cartridge paper
2 Sheets of clear OHP acetate
1 Pair of Scissors
1 Retractable craft knife
1 30 cm Steel ruler
1 Cutting mat
1 Roll Sellotape
1 Glass jar to store pencils in

2.2 Technical:

2 Tables
2 Boundary microphones
1 DAW running a recording program (Cubase or similar) and some form of live DSP (Pd or similar)
1 mixing desk
2 (or more) powered loudspeakers for playback
1 Set of headphones
Necessary cables and power.


3. Setup

Place 2 tables in an L shape; one is the DAW Table and the other the Visual Table.
2 microphones are placed in a stereo configuration on the Visual Table and fed to the DAW.
Microphones need to be set at an appropriate level so as not to feed back. Due to the differing nature of the sounds to be recorded in the performance mic gain control must be easily accessible.
Headphones are needed for PFL monitoring.
The mixing desk should be set up so that the audio recorded and played back in sections 4.1 – 4.3 can be faded in or out independently of the processed sounds of section 4.5.
The desk should then send sound out to the loudspeakers.
The ‘materials’ are stored on the DAW table until needed.
Soundcheck and rehearse


4.1 Recording
Set the DAW recording; audio from the DAW can be fed to the playback speakers if required, although feedback may be problematic.

4.2 Visual
Move the materials from the DAW Table to the Visual Table and create visual images on the paper and acetate. This process will create audio that is recorded by the computer. The artifacts that are created are mainly at the performer’s discretion; however, these actions must include ripping a piece of paper in half. The more varied the sonic material is, the better. The performer needs to ensure that the sounds are of good audio quality and recorded at suitable volume levels.

4.3 Playback
As the performer is drawing to the end of section 4.2, the recorded audio needs to be normalized and played back through the speakers; this can then remain constant through the rest of the performance if desired.

4.4 Visual and Editing

As per section 4.2, the performer augments the playback sound by creating more visual work. This can be carried out intermittently until the end of the performance. As playback from section 4.3 continues, the performer needs to extract interesting ‘sound objects’ from the recorded audio. This work should be interspersed with more drawing, cutting, folding etc. at the Visual Table.

4.5 Manipulation and Development
Sound objects are then transformed in real time. These actions can include manipulations both in the time and frequency domains. Audio must be triggered, mixed and balanced to provide a suitable counterpart to the visual artifacts that the performer must continue to produce.

4.6 Conclusion

The work is closed by the triggering of an un-manipulated paper-rip sound, at which point all other audio is faded out. This is to be triggered in any of the following situations: there are no more materials to create with; the musical potential of the audio has been exhausted; the performer feels it is time to stop; or the work has extended over 30 minutes.

(c) Ed Wright Blipfonica 2011


Doing this today!



Making noise and visual stuff in Llanfairfechan this morning, and helping to set up an exhibition in the Penmaenan community centre this afternoon as part of Helfa Gelf. Helfa Gelf is a series of events taking place across N. Wales through September. The idea is to have artists working in their studios but to have the studios open to members of the public to pop in and visit (kind of like a friendly cultural zoo!) There are also a number of exhibitions running such as the Penmaenan one (opening times at www.blipfonica.com). My studio will be open weekends in September from 11-4 and I’ll be performing as part of the general festivities on 9th Sept in the Penmaenan Centre and on 16th in Y Galeri Caernarfon.




Rain again today! I guess that is what I get for living by the sea. So no work going ahead on the steel sculpture, as it has to be done outside (owing to the fact that I’m trying not to go to the studio over the holiday and it won’t fit in the living room). However, with sidelines on sidelines going on, work is progressing on a Magnificat and Nunc Dimittis based around the base sequence of the chromosome of bacteriophage phiX174 (the first complete genome ever mapped). On first inspection the code appears rather unpromising:

gagttttatc gcttccatga cgcagaagtt aacactttcg gatatttctg atgagtcgaa…

I’ve re-mapped the sequence to span an octave and imply rhythmic duration, and, oddly, with a little tweaking of the system it comes out with some unexpectedly beautiful results. The table can be read like any ordinary table. Along the top is the genome ‘letter’ and down the side is how many times it comes in succession, e.g. aaaaaa would relate to A x 6.


   a    c    g    t
1  D    A#   G    D#
2  C    B    E    F
3  A#   C    C#   G
4  G#   D    A    G#
5  F#   F    A
6  E    B
7  C#

According to the table, this would result in the note E being sounded. Duration is also mapped by the number of repetitions, although much more simply: the above A x 6 example would mean that the note lasts six quavers, T x 3 three quavers, and so on. This gives us:


the original opening of the string:


the string is then re-transcribed to show individual ‘events’:

g a g tttt a t c g c tt cc a t g a c g c a g aa g tt aa…

the letter and repetition data is then mapped and transformed as per the grid above giving:

G1 D1 G1 G#4 D1 D#1 A#1 G1 A#1 F2 B2 D1 D#1 G1 D1 A#1 G1 A#1 D1 G1 C2 G1 F2 C2…

This can then be translated into standard notation by simply taking the letter and # to indicate pitch and the number to signify the duration in units of a quaver.


The notation shows the resulting opening bars. The resulting monophonic line will form the basis of the organ part; tempo, dynamics, rests and bar lengths can then be composed around this as the choral lines and text are worked into this sub-structure.
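For anyone who wants to play with the idea, the letter-and-repetition mapping can be sketched in a few lines of Python. This is my own reconstruction, with the column order of the table inferred from the worked example, and it only includes the complete rows 1 to 4 (runs longer than four letters do not occur in this extract):

```python
# Reconstruction of the genome-to-pitch mapping described in the post.
# Columns of the table are the bases a, c, g, t; rows are the run length,
# which also gives the duration in quavers.
import re

PITCH = {
    1: {"a": "D",  "c": "A#", "g": "G",  "t": "D#"},
    2: {"a": "C",  "c": "B",  "g": "E",  "t": "F"},
    3: {"a": "A#", "c": "C",  "g": "C#", "t": "G"},
    4: {"a": "G#", "c": "D",  "g": "A",  "t": "G#"},
}

def genome_to_events(seq):
    """Run-length encode the sequence, then map each run to pitch + duration."""
    events = []
    for m in re.finditer(r"(.)\1*", seq):   # each maximal run of one letter
        letter, run = m.group(1), len(m.group(0))
        events.append(PITCH[run][letter] + str(run))
    return events

print(" ".join(genome_to_events("gagttttatcgcttccatgacgcagaagttaa")))
# G1 D1 G1 G#4 D1 D#1 A#1 G1 A#1 F2 B2 D1 D#1 G1 D1 A#1 G1 A#1 D1 G1 C2 G1 F2 C2
```

Running it on the opening of the phiX174 string reproduces the event list given above, which is a reassuring check on the table transcription.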

Philosophically and ideologically I’m quite excited by the idea of combining the word setting and the DNA form. It feels as if there is something a bit subversive and yet also praising about the idea, a tug of war between evolutionary chemistry and the work of divine precision and creation. I can’t quite put my finger on it, but working the piece out feels like it may be one amazing ride!


Thanks for all the birthday wishes!


August is here, which means an attempt at a holiday. So I’m kind of ‘not doing music’ but am chopping up lengths of steel (in a very occasional manner) to build into a sound installation/sculpture. As it’s closer to the role of blacksmith than “Musician” I feel it’s not really cheating….

As a result of a 3 day masterclass/workshop on Sound Art with Manuel Rocha Iturbide, this is my contribution: collaboration with Andrew Hodges, Scott Flesher and Alistair Attard for Bangor Cathedral.

It’s called ‘Wordless Prayer’ and uses a simple relay driven from an Arduino board (click here to view CODE) to make a prayer bowl vibrate, which is in turn fed to a microphone and reverberates and feeds back through the space of the cathedral. It resonates the rhythm of the Lord’s Prayer, which oddly enough seems to have approximately the same intonation and syllable structure whatever language it is in.

Being pretty site-specific and conceptual I probably haven’t done it much credit here, so come along and see it in Bangor Cathedral 27-30th July 2011 (free entry). It’s running all the time except when services are happening. There is a 2 min pause in the loop so if you pop your head around the door on a trip through the city and nothing seems to be happening, please come in, pause and reflect, it probably is running!

On a three-day Sound Art master class with Manuel Rocha Iturbide, courtesy of Bangor Sound City. Nice!

On an aside…
Following on from an initial rush of excitement and inspiration at BEAM festival about a month ago I’ve been playing around with learning a bit more about the electronics side of electronic music. It’s odd that most people just assume that any decent musician must know how to (if not repair, at least) maintain their instrument in good working order but that this somehow does not apply to those working in the field of electronic music. Anyway, to cut a long story short I’m trying to fill in a few gaps in my knowledge as well as experimenting (in a very basic and amateur fashion) with an Arduino board (pictured above with breadboard and LEDs) and attempting to learn ‘stuff’ that will be of use in the long run for making music, performance interfaces and installation mechanisms. Or at least that’s the plan. At the minute I have 3 happy flashy lights, an approximate idea of how to code for an Arduino and a big chart with resistor colours on it. Any advice or tips to the usual email would be great! Thanks Ed

Steel for installation update! 🙂
I’ve got some now, courtesy of Williams the Steel Fabricators in Llandudno. Let battle commence! Thank you all.


Working on developing software to blur the edges a bit and make live performance more ‘live’…. Currently it’s a step sequencer but watch this space.


Starting work on a new piece…. Does anyone know where I can get 10 x 2m lengths of box steel tubing from?!


Recovering, tidying the studio, drinking tea and reading Douglas R. Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid


Not strictly a creative endeavour, but graduation from my PhD in music. Big thank you to my supervisor Andy Lewis for all his help and support!


Lots of people playing ‘Hopscotch’ at Frakture’s Altar Pieces in the Bombed Out Church in Liverpool. Hopscotch is a sound installation piece I created a few months ago. It uses motion tracking to trigger audio and hopefully a fair amount of fun!