New music in the pipeline…

Recently I was given a mixtape on cassette, the first ‘new’ cassette to come into my possession in well over a decade. The sentiment was amazing and it’s a wonderful thing, but listening to it the sound quality was less than great, being a generational copy compiled from several other tapes and recordings from the radio. Oddly, the fluctuations in pitch, the EQ colouration and the stop–start edits did not seem to matter; in the intervening years they have taken on a charm, linked to a time, objects and a set of actions which have been technologically superseded, which now makes them a creative choice.

The wow and flutter on the tape have become like the adze marks on the beams of rustic cottages, once rough but now a mark of the time, the technology and the interaction between human intent and facilitation.

Artefacts is a piece of music made by pushing digital and analogue audio equipment into extremes of its range while feeding a “silent” or null signal. After repeating this process a number of times the quirks of the system gradually come into focus with the glitches and biases of the equipment becoming amplified by repetition, sometimes with surprising results. These experiments were recorded and then used as material to create this piece.


Space To Think

In about September I was commissioned by Bangor Music Festival to compose a piece of electroacoustic music for their February 2018 event, along with a series of education workshops. I really wanted to do this and it was looking like it was going to be an amazing autumn and early spring of creating cool stuff and having fun; then the floor almost literally gave way.

Following a period of ill health my Dad took his own life in mid October and, unsurprisingly, this hit me really hard. It is not so much the sadness which is debilitating but the feelings of numbness, rage and lethargy that suck the capacity for creativity away. My Dad and I got on really well; he was a role model and someone who had a massive influence on me throughout my life. When something so seemingly at odds with everything you have ever known happens, all the basic assumptions that you make in life come into question. I would even look at my feet when walking down stairs, not through shock or physical instability but because I no longer trusted the assumption that I knew where my feet and the steps were. It was certainly no mindset in which to take creative decisions; they are so vague, so intangible and impossible to verify that the simplest starting impetus threw up paralyzing indecision.


It was at this point that I sadly informed Guto, the festival director, that I couldn’t fulfil the commission. I have never had to do this before and it left me feeling awful, but also slightly relieved. There followed a period of calm; I got back to doing some work and I managed to come off the antidepressants (citalopram) which had been prescribed to help me sleep, level me out and stop me catching things out of the corner of my eye. In late December I got a phone call from Guto offering to take some of the festival work back, but once again asking if I would like to compose ‘something’ for the finishing concert.


I find it really hard to sit down and just make noises or compose. Some people start from an initial sound or a feeling; I tend to find some sort of idea or framework to hang something on and can then go from there. I thought about this for about 24 hours; it was an incredibly kind offer which Guto had made, and my head was clearing. I went for a run in the hills. It happened to be early, as I wanted to make the summit of a mountain near us to catch the winter solstice sunrise, and on the way up the ideas just struck me.

The theme of the event this year is space and I am happy to say that the work shared a stage with Birmingham Ensemble for Electroacoustic Research (BEER). BEER had worked in collaboration with the Art@CMS project at CERN in Switzerland, using real-time sonification of data streams from the Large Hadron Collider, the world’s largest and most complex particle accelerator. This is something which it is foolish to compete against; that, and the fact that I literally have Scott Wilson (of BEER)’s book on coding in SuperCollider sitting on my desk. Thus I chose to take a different tack: rather than approach it from an analytical and scientific angle, I went for something closer to home.

Space To Think – Ed Wright 2018 (8 channel surround sound audio)

A lot of what is in the popular imagination about space and space travel is precisely that: imagination. From the Barrons’ Forbidden Planet through to the electronic squelch of radio communication, a lot of what we think of as space related is a very human construct. What fascinates me is how much of what we believe sounds as if it comes from outer space or under the sea (or for that matter any environment outside our direct experience) is actually a result of dubbing and sound design in the media. As a culture we have bought into the idea of rockets rumbling as they go past, even though there may be almost nothing in the void of space to transmit the sound, and the glockenspiel twinkle of stars is almost as real as the piano-wire scrape of the Tardis. This provides a fantastic palette of subverted fantasy with which to create and explore a rich and varied sound world. Apart from the use of two pieces of NASA archive, launch and countdown, the rest of the sounds used I have recorded and shaped myself.

Great delight was taken in recreating a few iconic ‘other worldly’ sound objects and effects along the way: 50 pence pieces were rubbed down piano strings to provide the basis for a Tardis noise before looping and adding spring reverb, humming strip lights were close-miked to create lightsabres, and some generative coding brought about drone progressions similar to the opening of Star Trek. These and many other sounds were used as the raw materials of the piece and then developed as the different timbres interact and evolve. The result is an acousmatic work utilizing a wide variety of sounds, from analogue synthesis through simple dislocation to generative algorithms, creating an apparently extra-terrestrial environment in which our earthbound ears and minds can roam.


Many thanks to Guto Puw and the Bangor Music Festival for their kindness, understanding and faith.

In memory of Brian Wright

Ricercar (for electric violin and stochastic step-sequencer) 2016

Ricercar was created as a way to generate evolving polyphonic textures in a ‘solo’ violin piece. An electric violin is fed into a piece of software written specifically for the work, with a MIDI foot controller (currently a Behringer FCB1010) controlling it.


Testing at home before the first performance
foot controller, violin and laptop visible in the
chaos of pre-event preparations

The work is essentially fugal, relying on live sampling to create the layers around the instrumental part, and uses a three-layered sequencer to achieve this. Each layer contains a series of buffers which can be recorded into. The top layer has seven 5-second samples, the middle has five 17-second samples and the lower has three 71-second samples. These are all prime durations, so when left to their own devices they will take a long time to repeat in precisely the same configuration. A foot controller triggers when to start recording and selects which buffer to record to or overwrite, so it is possible to gradually build up a four-part texture using the three sampler ‘lines’ and the live violin.


Overlapping prime blocks 5,17 and 71
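As a quick sanity check on those prime durations (sketched here in Python, outside the actual patch): the loops only return to exactly the same alignment after their lowest common multiple, which for prime lengths is simply their product.

```python
from math import lcm

# Loop lengths in seconds for the three sampler layers
layers = [5, 17, 71]

realign = lcm(*layers)   # all prime, so lcm = 5 * 17 * 71
print(realign)           # 6035 seconds before the exact pattern repeats
print(realign / 60)      # roughly 100 minutes
```

So even before any randomisation is added, the combined loop pattern takes over an hour and a half to repeat exactly.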

In itself this is passable, but it could be rather recursive and very quickly get repetitive unless loops are overwritten regularly, which in itself could become an oft-repeated task. This has been overcome in two ways. The first is that each time a sample is due to be played there is a 50% chance that it will not be, in which case the sequencer just moves on, waits until the next step is ready and then re-rolls its virtual dice. When scaled up over each of the three layers of fast, medium and slow samples this means that there will probably always be something playing (depending on how many samples have had audio recorded into them!) but the thickness of the texture will vary. The second way that diversity is added is by changing the playback speed of each sample. As a sample is triggered the sequencer takes a decision as to what pitch/speed to play the audio. This varies over four octaves (two up and two down) and results in a shift in playback of between quarter speed and four times as fast. The speed changes are in discrete equal-tempered steps and thus a broader sense of harmony and polyphonic/contrapuntal line can be achieved.
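The two randomising decisions described above can be sketched as follows. This is an illustrative Python fragment with names of my own, not code from the piece itself (which runs as a patch):

```python
import random

SEMITONE = 2 ** (1 / 12)  # one equal-tempered step

def sequencer_step(sample_loaded=True):
    """One sequencer step: 50% chance of silence; otherwise choose a
    playback speed in discrete equal-tempered steps across +/- two octaves."""
    if not sample_loaded or random.random() < 0.5:
        return None                       # skip; re-roll at the next step
    semitones = random.randint(-24, 24)   # two octaves down .. two up
    return SEMITONE ** semitones          # 0.25x .. 4x playback speed
```

Each non-silent step returns a speed factor between quarter and four times normal, quantised to semitones, which is what gives the layered loops a sense of harmony rather than continuous glissandi.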

In addition to this audio-focused processing there are two global controls. The first is how fast the sequencer triggers each step. This in itself does not affect the length of individual samples, so at extreme speeds loops will overlap in a densely fired texture, or conversely there may be a lot of space and silence at very slow speeds. The last of the controls is the volume of playback, which when used carefully can not only help to balance the live and processed material but can greatly increase the sensation of articulation and dialogue.

Within this framework the performer is free to improvise, guided by the constraints and freedoms of the instrument, themselves, and the hardware and software.


Video extract taken by Diane Evans at OscilloScope 28/6/15


Max MSP Basics

Here is some stuff I put together a while back. It may be useful for some people 🙂

ground loop

Contents (click to jump to section):

What is Max/MSP?
Working with audio
Numbers, messages and lists
Programme flow and control
Making patches simpler
Playing audio samples
Simple MIDI control
Enveloping and cross-fading
Designing the user interface
Soundfile playback and timing.
Refining soundfile playback control and timing.
More elegant approaches to additive synthesis.
A better way to create polyphony: poly~
Breaking out; physical I/O and Arduino.
Working with acoustic instruments
Audio processing
Further audio manipulation
Jitter II
Javascript in Max
Algorithmic composition

What is Max/MSP?

Max/MSP (often just called ‘Max’) is a ‘multimedia programming environment’ which will allow you to create pretty much any kind of music or audio software you can think of. It can also handle video using a built-in extension called ‘Jitter’.

To get more of an idea of what Max can do, visit the website and click on the ‘projects made with Max’ section.



If you need the loo, please go before listening to this piece!

Anatomy of a Mountain Stream: quadraphonic fixed media, binaural version.

Field recording & processing programming: Edward Wright


We often take things for granted until we look carefully at them. During the cold snap of the winter of 2011 when the snow had dampened all the sounds I bravely/foolishly balanced a surround sound recorder on a rock in the middle of the stream that runs down from the Carneddau, through Parc Plas Mawr in Penmaenan on its way to the sea.

Through manipulating the playback speed of the resulting sound-file it was possible to expose many details of the sound that are often obscured by our ‘normal’ way of listening. By simply changing the sampling rate, we can enter what feel like different sonic dimensions where the very large, and minutely small sonic structures become far more apparent.

Effectively this means that I have slowed down and sped up the audio. More precisely, a computer or CD player reads through audio in a number of steps, much like you would go through pages in a flick book but a lot faster (a CD reads at 44,100 ‘steps’ per second).

All that I have changed is how quickly those steps are taken. This not only changes the pitch of the audio as the sound waves become compressed or stretched out, but also changes the perception of events. Things that seemed a blur become a series of discrete happenings, and events that seemed slow and random start to show striking regularity when viewed over a longer (or more compressed) time frame.
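The flick-book idea can be sketched in a few lines of code. This is a naive varispeed in Python/NumPy (an illustration of the principle, not the Max/MSP processing actually used for the piece): reading through the samples with a larger stride speeds the audio up and raises its pitch, while a smaller stride does the opposite.

```python
import numpy as np

def varispeed(audio, factor):
    """Read through the sample 'pages' at a different rate.
    factor 2.0 -> double speed, up an octave; 0.5 -> half speed, down an octave."""
    positions = np.arange(0, len(audio) - 1, factor)  # fractional read points
    idx = positions.astype(int)
    frac = positions - idx
    # linear interpolation between neighbouring samples
    return audio[idx] * (1 - frac) + audio[idx + 1] * frac
```

Note that pitch and duration are locked together here, exactly as on a varispeed tape machine: halving the speed doubles the length and drops everything an octave.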

In this piece the original 8-minute sound-file is played in its entirety, but the speed has been mapped and composed to create the framework of the piece. The audio was recorded on a Zoom H2 at 4 channels × 48 kHz × 24 bit, and processed in Max/MSP 5.

This version of the work has been mixed to give an approximation of the quadraphonic set-up of the speakers (i.e. two in front and two behind) on headphones.

This has been achieved by playing a sound impulse from each speaker and measuring how it gets to a listener’s ears. A sound from the extreme left will take slightly longer to reach the right ear than it takes to get to the closer left ear. Also, the journey that the sound waves have to take around the head will absorb some of the energy, although not evenly across all pitches, fractionally changing the tone of the sound by the time it reaches the further ear.



If you play a sound from the right front speaker (for instance) and record that sound from microphones placed in the ears of a listener, you can compare the signal sent to the speaker with those that have arrived at the ears. From this point we can work out what the change is and can also apply that change to any other sound.

This has been scaled up to the four separate audio channels and processed (impulse convolved) with the spatial data collected from the in-ear microphones, hence why you need to listen on headphones to get the full effect.
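In outline, the fold-down convolves each speaker feed with the impulse response measured from that speaker to each ear, then sums the results per ear. This Python/NumPy fragment is a sketch with names of my own (the actual processing was done in Max/MSP):

```python
import numpy as np

def binaural_mix(channels, irs_left, irs_right):
    """channels: list of speaker feeds; irs_left/irs_right: per-speaker
    impulse responses measured at the left and right ear."""
    n = max(len(c) + max(len(hl), len(hr)) - 1
            for c, hl, hr in zip(channels, irs_left, irs_right))
    left, right = np.zeros(n), np.zeros(n)
    for ch, hl, hr in zip(channels, irs_left, irs_right):
        l = np.convolve(ch, hl)   # speaker -> left ear path
        r = np.convolve(ch, hr)   # speaker -> right ear path
        left[:len(l)] += l
        right[:len(r)] += r
    return left, right
```

With four feeds and eight measured impulse responses this collapses the quadraphonic mix to two headphone channels, which is why the binaural version only works properly on headphones.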



Tuning a violin….

This is from a piece I have written; hopefully it will be of use to some people, as I know tuning is a cause of much difficulty and frustration. Also, any feedback from more experienced violinists out there would be great!

* * *


Tuning your violin is one of the most important things to get right: if the instrument is not in tune, getting a good sound is a lot harder than it already is. So here are a few hints and ideas for those of you starting out.

STAY SAFE! Don’t over tighten your strings as this can cause them to snap. Keep your face, and especially eyes, out of the range of a potentially snapping string.

Look after your violin! Only try to tune one string at a time and always maintain some tension on at least one string. The bridge of your violin, and to a lesser extent the sound post (inside the violin, roughly under the E-side foot of the bridge), are largely held in place by the pressure of the strings. If you slacken everything off they can fall down!

Tuning a stringed instrument relies on some basic physics: the tighter a string is pulled, the higher the pitch it will give you. Think of it like a very expensive rubber band.

Two possible ways to tune

First there are the main tuning pegs, which are very useful especially if your instrument is a long way out of tune.

Secondly, your violin may have one or more fine adjusters attached to, or built into, the tail-piece. These are arguably easier to use, in that the control is a lot more precise, and they have the additional bonus of being less likely to cause string breakages because they can only tension so far before you run out of thread.

From this point the rest is simple in theory: if the string is too low in pitch (flat) you tighten it; if it is too high (sharp) you loosen it. However, putting this into practice can sometimes require a few practical skills.

Reference Pitches

In order to get your violin in tune it helps to know what you are aiming at. The four strings of the violin are G D A & E.

Here are the four notes that the violin is tuned to. G is the lowest sounding note and usually has the thickest string; with the violin placed on your shoulder it is the string furthest to the left. The strings are then arranged G D A E from left to right.

You can check these against anything that you know to be in tune, such as a piano, tuning fork or for that matter an electronic tuner.

Click the letter to hear each pitch and check that the sound coming out of your violin matches that of the clip. I have included two different types of sound: one a piano, given that is what is often tuned to in the ‘real world’, and the other a sine tone, which is theoretically the purest pitch reference.

Sine Tones:




Click the letter to hear the audio.
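As a quick aside for those with a computer to hand, the four open-string reference pitches can be derived from A440 in equal temperament; this short Python sketch (names of my own) prints them:

```python
A4 = 440.0  # concert pitch reference

# Violin open strings as equal-tempered semitone offsets from A440
strings = {"G": -14, "D": -7, "A": 0, "E": 7}

for name, semitones in strings.items():
    freq = A4 * 2 ** (semitones / 12)
    print(f"{name}: {freq:.2f} Hz")
# G: 196.00 Hz, D: 293.66 Hz, A: 440.00 Hz, E: 659.26 Hz
```

These are the same frequencies an electronic tuner set to A440 will show for each open string.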

Assuming that your violin gives notes that sound like those above, you don’t really need to read on right now; get on and play!

Tuning with adjusters

Assuming that your violin is only slightly out, it is probably easiest to tune it with the fine adjusters at the tail-piece. Pluck or bow the string to determine if it needs to be made higher or lower, and tighten or loosen the screw head of the adjuster connected to that string. It is better to move it only by small increments between re-checking the note, to prevent over-tightening. When you are happy with the note, proceed to the next string.

Tuning with the pegs

Often tuning with the pegs will be the only option. As before, it is imperative to move the peg only by small increments between re-checking the note, to prevent over-tightening and snapping the string. It is best to check the pitch by plucking or bowing the string whilst turning the peg to the right position. Pegs are a very old method of controlling the tension on a string; they rely on friction and the fact that they are tapered to make them work, so the trick to making them stick is to twist and push, like you would when using a screwdriver or corkscrew.

Bowing vs. Plucking

In order to see if your violin is in tune you need to hear it, often as you are tuning it. There are two ways to do this: plucking the string or bowing it. Plucking is far easier, in that you can rest the instrument on your knee or lap and turn the peg/adjuster with one hand whilst plucking with the other. Bowing requires that you hold the violin under your chin and turn the peg/adjuster whilst bowing with your remaining free hand! This is not for the inexperienced or faint-hearted and should be tried out either with assistance or over a very soft landing, as if the violin slips from under your chin it is almost impossible to catch by yourself.

The reason why so many people bother with learning how to bow and tune at once is that it is far more reliable. Not only can a bowed sound give a more sustained volume, but the plucked tone is harmonically less stable and tends to get noticeably lower in pitch as the sound decays.

However until you need the level of performance that bowing whilst tuning provides it is probably not worth the risk of dropping your violin!

Problem Solving/Other Considerations

Once you have got your strings in tune it is best to check each in turn. As the strings are all tensioned on the same instrument, the pull acts over the entire instrument, so big changes to one string can result in smaller changes to others.

If a peg does not stick it could be for three key reasons:

  1. The string is pulling the peg out. This is by far the most common! Look at where the string winds onto the peg. If the string meets the peg significantly towards the narrow end of the taper, it will effectively be pulling the peg out of the peg box. To remedy this, simply slacken the string off by a couple of turns and then tension back up, whilst winding onto a fatter part of the taper closer to the wood of the peg box.
  2. You may not be pushing the peg far enough in, please be delicate, but it does need enough force for friction to do its thing.
  3. The peg does not fit the peg box properly, in which case ask a reputable specialist to look at it.

If you run out of thread on your adjuster, simply loosen the main peg, twist the adjuster back so that the screw thread is at about 50% depth, then take up the main slack on the tuning peg. You may have to fine-tune with the adjuster a little to complete the process.

Make sure the bridge stays vertical. Its feet should be approximately between the cross bars of the f-holes, and the top of it should be in line too. Repeated tuning, and especially changing of strings, tends to drag the top of the bridge up the instrument. This can be detrimental to the sound and playing, and in extreme cases can lead to the bridge falling over. If your bridge does start to lean, simply slide it back into place, loosening the strings slightly if required.


Crosswire paper and piece…



Edward Wright

Studio demo version.


This paper documents a piece of practice-based research, enquiring into the extent that certain computer programs can be regarded as musical compositions in themselves. I will argue that the works set down by composers are sets of parameters or restrictions within which performers work and create new, fresh interpretations. As a computer program Crosswire builds on these phenomena; rather than being a computer program designed as a way to facilitate the creation or execution of a piece of music, I shall argue, it is the composition, and the act of improvising upon that framework is the work’s musical realisation.


Composition has been traditionally defined as:

1a, the act of putting together, formation or construction…

2a, a literary or musical work b, the act of producing such a work.

This rather abstract definition does not adequately express the nuances of the various methods of generating and shaping material. A composition need not be written down to still be recognisable as such; a brief foray into the fields of traditional or acousmatic music will uphold this axiom. Secondly, composition defines a set of restrictions. These may be as obvious as defining which instrument should play, the dynamics and pitches that should be performed, or they may be more subtle such as the cultural expectation that dynamics should be performed in baroque music even when there are often very few, if any, notated.

It would be deceptive to view composition simply as the act of setting up restrictions. It can really only be understood in the context of musical realisation and performance, and in trying to define what is or is not a composition an antithesis may be useful. Below is a diagram of the composition–performance chain. It is greatly simplified from reality but hopefully will help to define some of the key stages.

Set definition → Realisation → Perception

‘Set definition’ covers the act and artefact of composition. Intrinsically, (music) composition defines a set of events that must be executed as part of its realisation. Thus a performance may be a correct realisation of the composition, or alternatively may contain errors such as wrong notes. Nonetheless, a composition is rather more than a list of systematically quantified auditory events.

By its nature, a composition (especially notated works) cannot define every specific element to an infinite degree of accuracy. When asked to ‘give an A’ an oboist may play loudly or more quietly, with a greater or lesser attack, ever so slightly more sharp or flat, or possibly even sing it. Such actions would still constitute giving an A.

It becomes clear that rather than creating a defined event list, composition delineates a set of narrow pathways to which a realisation must adhere. Some of these pathways are wider than others and allow more room for interpretation, or indeed error. For example, in Classical music pitch is strictly defined in terms of the correct note and intonation; however, vibrato is a notable exception to this, and volume is rather loosely defined. In punk, meanwhile, it can be observed that far less emphasis is placed on specific intonation, but, interestingly, glissandi form an integral part of many vocal deliveries.

Therefore composition defines a multidimensional set of musical trajectories which a realisation must follow. These can shape low-level elements such as pitch, timbre and volume, but also more complex aspects such as interpretation and style. In some cases these trajectories must be adhered to very strictly, and in others they simply provide a guide as to the intentions of the composer.

Based on this concept of trajectory, it becomes clear that all music is, to a greater or far lesser extent, improvised. This goes as far as apparently fixed ‘tape’ pieces, where the art of diffusion has blurred and reinforced the intended spatial and dynamic trajectories, let alone inconsistencies of speaker placement. Even if a piece of music is improvised it can still be viewed as composed, provided that a set of parameters has been defined. If a performer were to play a Bach sonata with sympathetic dynamics even though there were none in the score it would still be Bach; equally, Thelonious Monk’s jazz standard ’Round Midnight is recognisable from the chord progression alone, even when there is some debate as to the fine detail of what that progression is.

This suggests that rather than producing an empirical list of operations, a composition defines a set of boundaries. At a given time a realisation of these boundaries can be perceived to be adhering to the rules closely, or not, depending upon the acuity and cultural norms of the listener. When perceived through time this produces music based on a set of multi-dimensional trajectories. This is not what the line graph accuracy of notated music would seem to imply.

‘Realisation’ covers the act of taking the composition and performing it, either by traditional instrumental methods, loudspeaker diffusion or indeed simply playing back a recording. Realisation is an increasingly vital part of the communication of composition. Based upon the preceding paragraphs it is clearly fundamental to our understanding of music. There are many people who can read notated scores in their heads and gain a working understanding of the eventual sound. This may not be possible with more codified methods of sound creation such as a large script written in SuperCollider or indeed visual inspection of a simple CD-R.

‘Perception’ includes the hearing, and in many cases seeing, of the realisation of the set, by an observer. The observer may be the performer, who in turn may also be the composer, or alternatively a listener may be an audience member largely outside the rest of the chain. While there are obvious points of feedback in the composition-performance dialogue, such as the effect that a performer’s perception of their work, or their perception of an audience’s reaction to it will have on their realisation, the flow of information is overwhelmingly one way.

To illustrate this point, an instrument such as a piano would be viewed as a catalyst for realisation in the composition–performance chain. It is very unlikely that during a performance the change to a different piano would radically alter the piece of music (except to trigger a slight pause in proceedings); however, to swap compositions midway through would be striking!

I would therefore argue that in order to judge whether an entity is a composition or not, the location of its primary function in the composition–performance chain must be determined. A piano in this instance is clearly a vehicle for realisation, even though its use may be a compositional concern in that the perception of the composition is through a piano ‘filter’. It is primarily sited in the realisation part of the chain: consider trying to perform a piece written for piano without the instrument…


There are many examples of innovation in composition, especially in its content. Content evolutions such as the development of sonata form or the inclusion of glissandi are numerous; examples of progress in composition itself are less easy to find.

One such example would be the development of Musique Concrète. Through the development of new technology the early Concrète composers were able to work physically with analogue sound recordings. This is undeniably a different approach to composition, but it also presents us with a duality of function. Musique Concrète defines a very strict composition (being created from and set down in a recorded form), but also strongly guides its own realisation. Ironically, for a genre that promises so much freedom, with the possibility of any sound as the basis for musical material, it has become one of the least flexible performance formats. To follow the composition–perception model, the realisation is so bound up with the composition that the two are largely one and the same. A lot of the ‘performance’ is carried out in the studio at the point of composition, far from the observation of the audience, leaving only the spatial interpretation through the act of diffusion. This blurring of the strict definitions between set definition, realisation and especially instrument/performer becomes more acutely apparent as we move through the last century of music making towards the present day.

In keeping with this trend, the field of Algorithmic Composition again crosses the boundaries between being a strict composition and oftentimes providing its own realisation. Whilst being a very broad category, arguably ranging from Dufay’s Nuper rosarum flores to Steve Reich’s phase works, these particular phenomena are abundantly clear in works such as Nikos Stavropoulos’ and Kingsley Ash’s Livecell. The digital implementation of algorithmic processes enables the musical results to be generated in real time and in the performance sphere. In the case of Livecell many of the compositional decisions are made by the computer (albeit guided and mediated by a controller); the resulting output is then scored by the program and fed to a ‘traditional’ string quartet for realisation. In this case there is a distinct malleability of roles. The Livecell software takes a large number of the compositional decisions and provides the score; however, the controller also has a lot of control over the higher-level actions of how and when Livecell is triggered and also how it interprets its output when creating a score. In this case the Livecell program and interface partly defines the composition set and is partly a realisation tool; the controller is part composer and part instrumental performer.

This blurring of composition and realisation can be seen further in Cage’s Sonatas and Interludes. The use of the prepared piano as an extended instrument is well documented but in this particular case (perhaps the most well known of the prepared piano canon) something different is happening. In this series of works Cage clearly sets out which notes and rhythms should be played, but also gives strict instructions as to how the piano should be prepared. This precise specification of the timbre and amplitude of individual notes is as much a part of composition as the marking of a given note tenuto or marcato, or the voicing between different instruments of a given chord.

The use of a piano as a basis for these pieces can cause confusion. The piano is one of the iconic instruments of western music; it has a wide (non-continuous) frequency range, it is capable of a large range of dynamics, it can create a number of polyphonic lines and it can be controlled by one or more players. Most importantly, a wide variety of composed works can be performed on it. Fundamentally it appears to be an instrument. However, Cage’s preparation directions are so specific to his pieces that they are tied far more closely to his composition than to any transferable evolution of piano technology. It is this context specificity that keeps the piano preparations for Sonatas and Interludes part of a set of compositions rather than becoming an extended instrument.

The tension between new instrument and new composition is further demonstrated by Marco Donnarumma’s Xth Sense (hailed as the ‘world’s most innovative musical instrument’). Clearly, when a new instrument is developed, new music can be composed with it, as is the case with Donnarumma’s Music for Flesh, which presented the Xth Sense in its initial form. Since then the Xth Sense has been revised and improved, as could be expected with any design. The next composition was Music for Flesh II, which is remarkable in a number of ways. Firstly, it truly sets the Xth Sense outside of the remit of composition and thoroughly into the instrumental sphere, having overcome any specificity that could have been linked to the first Music for Flesh. More interestingly, the pre-set ‘scenes’ within the software of the Xth Sense have been modified to accommodate the new work. This does not necessitate low-level changes to the program but rather a change in a set of variables, and therefore a different reaction to the input from the hardware and performer. It is still debatable to what extent this is a change of ‘set definition’/composition or a modification of the instrument.

Perhaps one of the clearest deliberate breaks from the rigid differentiation of discrete composition can be found in Bloom. This was created by Brian Eno and Peter Chilvers for the iPhone/iPad and claims to be ‘part instrument, part composition and part artwork’. Within this the user has limited control over a self-generating ambient A/V work, with the option for automated play. It is a clear example of a work that can be realised with two different, albeit virtually identical, controllers, namely the iPhone or iPad. It also provides, and can within reason modify, its compositional definition set and through the hardware create its own realisation.

The exploration outlined above would suggest that an instrument or sound controller can be defined as part of a composition by a specificity to a particular musical work. In the case of a traditional instrument the expressive qualities and gestural mapping can be transferred to a different definition set. This is not without limitations such as in the fields of tessitura and timbre, but within reason the instrument remains a vehicle for output realisation and is easily substituted without changing compositional content.


It is no longer possible to consider instruments and compositions as two separate entities; rather they are points along a continuum. The specificity of an instrument is innately bound to the composition: a work can be composed idiomatically for an instrument, or, at the opposite end of the spectrum, the construction of an ‘instrument’ (be that hardware or software) can be so symbiotically linked to a composition as to make it unsuitable for use with another work.

* * *

An entity can be viewed as part of the composition if it is active in the set definition and realisation links of the composition-performance chain, and if it is specific to that work to such an extent that it cannot be used to its full potential in a large number of other situations.


Crosswire is a software composition created to test these ideas. The set-up requires a DAW with a minimum of mono in and stereo out, as well as a projector feed. An electric violin must be plugged into the mono input, with the Crosswire standalone patch running, feeding the stereo out to speakers and video out to the projector. The violinist is expected to improvise within the remit of the reactions of the patch. The audio going into the patch is processed based upon parameters defined by that audio. The performer and audience can hear the resulting audio and see the processing decisions via the projector feed. Processing decisions are taken based upon data relating to the pitch, timbre and volume content of the current sound. Some of the processing settings are implemented to be very sensitive so as to encourage variety and development in performance and, in some cases, a lack of human repeatability.
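As an illustration only (the actual patch is built in Max/MSP around the fiddle~ object), the per-frame analysis stage could be sketched in Python roughly as follows; the function name and the crude zero-crossing pitch estimate are stand-ins, not the real analysis:

```python
import math

def analyse_frame(samples, sample_rate=44100):
    """Very crude per-frame analysis: RMS loudness plus a
    zero-crossing pitch estimate. Illustrative only -- the real
    patch relies on fiddle~ for pitch and sinusoidal analysis."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    pitch = crossings * sample_rate / (2 * len(samples))
    return {"pitch": pitch, "rms": rms}

# A 440 Hz sine frame should analyse to roughly 440 Hz, RMS ~0.707.
frame = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
print(analyse_frame(frame))
```

Estimates like these (pitch, loudness, brightness) are what then drive the on/off and routing decisions described below.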

The following text is an extract from the program note to the first performance:

“Crosswire is a piece of music, manifested as a computer program. The violin part is ‘freely’ improvised, and the output of the instrument is fed into the computer. Within Crosswire the sound of the violin is analysed to provide information on: the pitch of the note being played, how loud it is, and its harmonic content. This is then used to turn on or off a number of different types of processors and form or break links between them. This is displayed to the performer and audience as a hexagonal constellation (see below). Each dot represents a processor; when a dot is small the processor is off, when it is large it is on, and lines between dots symbolise links sending audio out of one process to another.

Fig 1. Crosswire performance display.

From top left clockwise: clean output (no processing, shown here on), delays, additive synth, panning, sampler and comb filter. A number of links can also be seen, for instance from panning to the sampler and comb filter. Each of these processors is in turn controlled by the pitch/volume/harmonic analysis. In the case of the delays, the delay time and feedback amount are set in real time as a result of the analysis. Within this system the violin is played and its output is processed. The performer makes judgements as to how to progress based upon the sound coming out and the visual display, thus entering a ‘strange loop’ of interaction as the music played not only creates sonic material but also controls how that material is transformed and presented to the world.”

This program note would therefore suggest that the violin part is improvised, and in some senses it is, but within a very strict, albeit reactive, framework. In another sense it simply realises and works within these parameters as mediated by the cultural norms of the performer. I would therefore argue that Crosswire sits primarily in the ‘Set Definition’ area of the chain and is therefore a composition.

Crosswire is currently implemented in Max/MSP and makes use of the fiddle~ external object.


Anecdotally Crosswire functions well as a self-contained piece. The work provides an environment in which the performer can create numerous and varied gestures. The inclusion of sampling and synthesis helps to provide some sort of accompaniment or duet feel, while the comb filter extends the timbral range as well as the pitch potential. The tessitura is further extended by the use of variable playback speed within the sampler; this effect and the effects of other processors can also be heightened by the use of feedback delays. There are also a number of more global controls; for instance, playing between 328 and 332 Hz, arguably the closest thing the work has to a key note, disengages all the processing except the clean output.
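That ‘key note’ behaviour amounts to a simple rule sitting on top of the analysis. A minimal Python sketch, with hypothetical processor names standing in for the patch’s actual objects:

```python
PROCESSORS = ["clean", "delays", "additive_synth", "panning",
              "sampler", "comb_filter"]

def update_states(pitch_hz, states):
    """A detected pitch in the 328-332 Hz 'key note' band disengages
    all processing except the clean output; any other pitch leaves
    the current on/off states untouched."""
    if 328.0 <= pitch_hz <= 332.0:
        return {name: name == "clean" for name in states}
    return dict(states)

states = {name: True for name in PROCESSORS}
states = update_states(330.0, states)   # inside the key-note band
print(states)  # only "clean" is left on
```

In the real patch many such rules run at once, which is part of what makes the system so sensitive to small variations in playing.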

The player therefore has control over a lot of different parameters and the potential to make a very beautiful performance, but also to create something very bad. This is important as it allows the work the flexibility to evolve and progress ‘as mediated by the cultural norms of the performer’, in that the performer can choose to perform in one way and discard other (to them less palatable) options.

The work has been performed several times and has undergone revisions and improvements. These performances have lasted between 5 and 20 minutes and so far have been exclusively in a concert environment.

An exception from the normal format of performance was as part of the ICCoaction event in Edinburgh (Scotland), for which Richard Worth composed and notated Green Man Blue for the Red Note Ensemble and Crosswire. The performance used Crosswire to process the summed microphone signals of an ensemble (violin, bass clarinet, trombone and double bass). Having initially defined Crosswire as an investigation into composition, its integration into another composition, at an event focused on composition for new instruments, seemed to pose some insurmountable problems for the underlying theory of the piece.

A composition is of a low order of flexibility, in that it exists as a relatively fixed form. Moving along the chain we are met with phenomena of increasing flexibility, such as performance and eventually perception by an audience. An instrument-performer combination is usually capable of realising more than one composition in spite of a generally rigid architecture, and is therefore possessed of a higher order of flexibility. Equally it is quite possible to perceive numerous instruments at once. Retracing our steps we hit a wall with human processing capacity, in that it is not possible to perceive a comparable number of compositions played by multiple instruments at once.

Therefore one test of an artefact’s status as a composition would be to test its flexibility within that role. This must also be tempered by the transferability of the digital form along the composition-performance chain, irrespective of inflexibility within the specific function of composition.

As it is contradictory to say that one piece of music is a different piece of music at the same time, I would suggest that in this situation one of the following has happened: either Crosswire has changed in function, acting not as a composition but as an effects processor for Worth’s composition (fitting with the idea of transferability); or there is now an incredibly detailed transcribed improvisation based upon the Crosswire definition set; or something else is going on, such as its use as an instrument, a combination of the above, or it never was a composition.

Starting with the easiest argument, Crosswire is not a standalone instrument in that it requires another sound source to activate it. It could be considered an extended or hyper-instrument, but the interaction between the violin input sound and the Crosswire processing combined with the precision to which some of the control data is sampled, make some actions unrepeatable with any degree of accuracy and the system too unpredictable to be regarded as a reliable instrument in the conventional performative sense.

Returning to the idea of specificity, in the case of Green Man Blue the Crosswire software and its definition set were proven to be linked to the wider Crosswire realisation environment. The nature of the written material could not fully exploit the available range of the processing. This becomes clear when considering decisions based upon pitch. Many of Crosswire’s settings are based on microtonal differentiations which make use of one of the great strengths of a solo (especially ‘improvised’) violin. This is far harder to achieve with standard notation, and especially with more than one instrument. Equally the measured time that is so helpful for group performance is not necessarily suited to working with software that operates in milliseconds and a very fluid time base rather than in bars.

This is not to take anything away from Worth’s piece, which is a good one; rather, only a part of Crosswire was suited to his purposes. Crosswire ‘can be viewed as part of the composition [as it is] active in the set definition and realisation regions of the composition-performance chain, [and it] is specific to that work to such an extent that it cannot be used to its full potential in a large number of other situations.’

Clearly one work does not qualify as ‘a large number’ of compositions, but it is at the least a corroboration of the theory.

In conclusion the boundaries between what constitutes composition and what constitutes realisation seem to be becoming less polarised, to the point where there is room for a lot of creativity within this new space. It may not be long until it becomes meaningless to talk of composition and performance, in our traditional understanding of them, as two separate entities. Rather I predict that we will see a growing trend towards a more unified method of composition-performance as styles and facilities continue to evolve.

* * *

My thanks to Richard Worth and Lauren Hayes for their patience and help in my first attempt at making the Crosswire patch suitable for use by people other than myself.


This extract is from the definition in the Concise Oxford Dictionary.

One such example would be the recorded vocals on The Scream by Siouxsie and the Banshees.

Debate still rages as to the superiority of the Miles Davis version or the Monk version, and Cootie Williams and Bernie Hanighen have received co-credits for their contributions. The addition of numerous simplified versions further complicates the issue. However, all of these versions are popularly acknowledged as ’Round Midnight.

Although not necessarily ‘algorithmic’ in the modern understanding of the phrase these works rely very heavily upon the use of number and transformation as a compositional element.

This quote was said of Xth Sense as part of the Margaret Guthman Musical Instrument Competition (Georgia Tech, US 2012).

This quotation is from the iPhone ‘App Store’ marketing text.

In many cases playability of a given instrument depends upon its links to physical gestures and their repeatability. Playing a quiet note on the piano has a direct physical gesture, and there is an obvious correlation with playing the same note forte. Equally on many instruments you move in one direction to raise the pitch and in the opposite to lower it.

In this context strange loop refers to when moving up or down through a hierarchical system, one finds oneself back at the start as popularised by Hofstadter.

fiddle~ is an object for pitch following and sinusoidal decomposition devised by Miller Puckette, with the MSP port by Ted Apel and David Zicarelli.

For full details of performances see



Collins, N, Cottle, D & Wilson, S. (eds.) 2011. The SuperCollider Book. Cambridge, MA: MIT Press.

Gourse, L. 1998. Straight, No Chaser: The Life and Genius of Thelonious Monk. New York: Schirmer Books.

Hofstadter, D. R. (1999). Gödel, Escher, Bach: An eternal golden braid (20th anniversary ed.). New York: Basic Books.

Adams, R., Levitin, D. J. & McAdams, S. 2002. Control parameters for musical instruments: a foundation for new mappings of gesture to sound. Organised Sound 7(2): 171–189.

Warren, C. 1973. Brunelleschi’s Dome and Dufay’s Motet. The Musical Quarterly 59: 92–105.

Donnarumma, M. Xth Sense. Accessed on 29/5/12.

Wright, E. Accessed on 1/06/12.

Bloom App Store page. Accessed on 14/5/12.

Crosswire on composers’ channel. Accessed on 2/5/12.

fiddle~. Accessed on 29/5/12.

ICCoaction. Accessed on 15/5/12.

Livecell. Accessed on 15/2/12.

Round Midnight forum debate. Accessed on 10/5/12.

Ash, K. M. & Stavropoulos, N. 2011. Livecell: Real-Time Score Generation Through Interactive Generative Composition. Proceedings of the 2011 International Computer Music Conference. Huddersfield.

Donnarumma, M. 2011. Xth Sense: researching biological sounds of human body for an experimental paradigm of musical performance. Proceedings of the 2011 International Computer Music Conference. Huddersfield.

Bach, J.S. 19–?. Sonaten und Partiten, violin, BWV 1001–1006. London: Augener, score 7943.

Cage, J. 1960. Sonatas and Interludes: prepared piano. New York: Henmar Press, score Edition Peters 6755.


Thelonious Monk. 1944. ’Round Midnight. On Round Midnight (1999 re-issue). Spain: Definitive, DRCD 11133, CD.

Siouxsie and the Banshees. 1978. Jigsaw Feeling. On The Scream. London: Polydor, POLD 5009, vinyl.


A tribute

A touching tribute in the Guardian today to my teacher and friend David Evans. I was fortunate enough to have not only studied harmony with him, but in recent years have regularly been treated to cake, orchestration texts, a rhubarb plant, infinite good humour and kindness. Thank you David.


A binaural mix…

This is me dipping my toe into the world of binaural mixing of electroacoustic music…

so please do wear headphones to listen to this otherwise it will sound awful 🙂

Jackdaws was written for the first Blinc Digital festival (situated in Conwy, Wales U.K.).

It was originally made in four-channel (quad) surround sound and was designed to complement the visual work of Helen Booth and poetry by Rhys Trimble.

This mix has been made to go on headphones and is my first proper venture into the field of binaural mixing, so any feedback (especially on the spatialisation) would be appreciated.

Thanks to Lorenzo Picinali for helping me into this insane 3D sound world.

I hope you enjoy it:)

P.S. Having listened back to it, I made a mistake in the coding; you may or may not notice a hole in the middle of the mix. Once I get onto a computer where I can sort this, I will. Sorry, call me picky, but!


My blog/life so far…

I have finally dragged myself into the 21st century. The code for a lot of my work is now up at

Please exercise caution if you decide to buy this book. I am very flattered that people have lifted things to do with my life from Wikipedia but it is NOT worth buying. The information is all up there on-line. If you buy this they will be ripping off my reputation and your bank balance… Sorry 😦


Interactive sound game, Ed Wright 2013.


Drop! is an interactive sound game based around a marble run toy. When someone drops a marble (in this case a steel bearing) down the run, it triggers audio samples from the computer. The pieces which trigger the audio are wired into the system so that when the setup of the run is changed, so is the order and timing of the sound, thus making a reconfigurable piece of music and fun!

The music is a mix of 11 short audio extracts. These range from close-microphone recordings of marbles doing things in the studio, such as rattling in a bowl or going down the marble run. One percussive sound was made by taking the sound of a marble dropping on a board, slowing it down four times and then modelling the reverberant acoustic of York Minster around it. Other sounds are more mainstream, ranging from FM synthesis (the default setting for all good sci-fi film scores!) to a more dance-y sounding loop created in Reaktor.

The audio was balanced and mixed in 4-channel surround sound to enhance the motion and movement of the tracks. The sounds of a bicycle bell and a duck have been thrown in for good measure.

Things happen in the work when the marble rolls over two pieces of aluminium tape and completes a circuit, effectively closing a switch. With the aid of some home made electronics and an Arduino circuit board-chip housed in an old Chinese takeaway box these messages are sent via USB, luck and faery power, to the computer.

In the computer the interface which de-codes all these signals, brings up pictures of the active part of the marble run and plays back the audio has been written in Max MSP. Hopefully that makes sense for those that want it to, the full code is available here.
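For anyone without Max to hand, the decoding step can be sketched in Python. Note that the message format (‘SW:n’) and the sample file names below are hypothetical stand-ins, not the actual Arduino protocol or assets:

```python
# Map each switch on the marble run to an audio sample; re-wiring the
# run simply changes the order in which these switch IDs arrive.
SAMPLES = {
    0: "marbles_in_bowl.wav",
    1: "marble_run_close.wav",
    2: "york_minster_drop.wav",
    3: "fm_scifi.wav",
    4: "reaktor_loop.wav",
}

def decode(line):
    """Turn one incoming serial line like 'SW:2' into a sample name,
    ignoring anything malformed (loose connections happen!)."""
    if line.startswith("SW:"):
        try:
            return SAMPLES.get(int(line[3:]))
        except ValueError:
            return None
    return None

played = [decode(msg) for msg in ["SW:1", "SW:4", "noise", "SW:2"]]
print([p for p in played if p])
```

The same idea holds in the Max patch: the sequence of the music is whatever sequence the marble happens to close the switches in.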

So there you are. An in depth and subtle composition that questions the nature of sequence in music with some technical rough science thrown in…
Maybe sometimes it’s more important to have some fun!

Many thanks to Alena Grace Wright for her assistance and forbearance with this project.


Today I’ve been at code control in Leicester, good to catch up with the gang such as Andrew Hill & Richard Garrett. Roughly 5 years ago to the day I delivered my first paper at SSSP at De Montfort Uni here. I was due to become a Dad pretty much on the day and in true biblical style we trekked over here. It would have been one hell of a ‘keynote snub’ but the birth didn’t happen until a few days later!

Due to the stupid amount of un-seasonal snow the zip of my Mammut winter jacket (and the rest of it!) has finally broken. New coat, a visit to a dead monarch… and yes, Richard III’s car park is very pretty as car parks go. Gig tomorrow/today, good night all


Going out recording source sound objects with 21 year 6 pupils from Ysgol y Graig Llangefni for Bangor New Music Festival. The sun is out, the grass is green, let the madness commence!


RIP Dr. David Evans

A sad day, the loss of a teacher, friend, fantastic story teller and an encyclopaedic mind. A truly gentle and generous soul. Thank you David.


Having fun recording a game of snakes and ladders in 6 channel surround sound. I am very grateful to Alena for letting me use the wooden game set that Granny and Granddad gave her for Christmas, and for Mike’s help in setting up, playing, de-rigging and being up for madness!


Snakes and ladders sound recording


I have finally got around to producing a flattened copy of this, it was part of the Bangor New Music Festival last year and the festival workshops are coming up again soon! Enjoy.


Happy Christmas and New Year to all those who celebrate such things, and tons of love to those who don’t. It’s the end of a long year, hopefully the world will not end tomorrow! I have been seriously lucky this year and have had reason to travel to: London, Leeds, Edinburgh, Ljubljana and Toronto to name but a few. I’ve played about making seascape installations, played the violin, diffused acousmatic works, been exhibited in galleries, taught and learned a hell of a lot more. Thanks to all that have helped, listened, watched, (those who’ve read how I’ve staggered on through this blog) put up with, and even to those who’ve been down right hostile, it’s been a blast. Good luck for Christmas, the New Year and the end of the world tomorrow. 🙂

8 bit christmas mario mushroom

First steps in proper Sonification…. Here we go with one I made earlier!

Sonification Max Patch
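For the curious, the core trick of parameter-mapping sonification can be sketched in a few lines of Python (a generic illustration, not the Max patch above): scale each data value into an audible frequency range.

```python
def sonify(values, lo_hz=220.0, hi_hz=880.0):
    """Linearly map a data series onto a frequency range:
    parameter-mapping sonification in its simplest form."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0
    return [lo_hz + (v - vmin) / span * (hi_hz - lo_hz) for v in values]

print(sonify([3, 7, 11]))  # → [220.0, 550.0, 880.0]
```

Everything beyond that (choice of scale, timing, timbre) is where the musical decisions start.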

Revving up for random family Christmas carols! Nice to see the spirit of wonder about the inside of a piano still carries on!

Upright piano open hammers and strings

Dear All, I don’t think I have ever put up a concert on here that I am not playing in, but this is one that I am helping to set up for in Bangor, by a man I have amazing amounts of respect for: both as a person, for the fanatical precision that goes into his work, and for his great sense of humour. So come along if you can!


Trevor Wishart: © R.H.Y. Yau / 23five Incorporated

Andrew Lewis (director)

Renowned English composer Trevor Wishart presents his newest work Encounters in the Republic of Heaven, an invisible ‘sonic opera’ in which recordings of everyday stories told by adults and children from north-east England are computer-transformed into a fantastic surround-sound experience.

Trevor Wishart

Tickets: £10 full price; £7 concessions; £5 students. One child admitted free with every accompanying adult. Additional children £2.

Date: 29/11/2012
Time: 08:00:00 PM
Location: Powis Hall, Main Arts Building, Bangor University LL57 2DG

For more info click here

After the rain, floods and high winds of yesterday, this is the road outside Hendre Hall where my studio is.
Flooding WalesFlood damage hendre hall
Given the size of the boulders (no exaggeration) washed down the hill and the 2 ft of mud that has drifted onto the side of the road I am amazed that the water didn’t get into the studio. Sometimes having to carry stuff up steps can be a blessing after all. My thoughts and sympathies go out to all affected by the weather of the last few days, and with more to come, keep safe.

Still pushing forward, proud to be playing for this back in Bangor with a load of friends, and a super-sexy poster!

Blipfonica underground fat cat

Back to lecturing, it seems we have a very nice bunch of students and it is quite refreshing to be teaching on ways to use stuff such as Max/MSP rather than just how to use it! 🙂

On return to the U.K. it seems the entire world has a cold, oh dear, here we go!

In Ljubljana for ICMC 2012; beautiful city, crappy hostel, but it does have internet access and therefore the not-quite-qwerty (qwertz) keyboards have brilliant ‘new’ letter keys!


Ok I’ll stop now.

Taking a moment out from writing to watch stuff that has been sent to me. Thanks to Andrew Hill for the link to this and to John Richards and the Dirty Electronics crowd. It’s a really nice little portrait of what they do and the overall feel of what goes on. It’s not overly techy, which is probably both a good and a bad thing, but they are lovely people and I’ve stuck it up here so that hopefully a few more people can get to see their work too!

Charge/Discharge (2012) by John Richards [Documentary & Performance] from Andrew Hill on Vimeo.

Back from Toronto, what a lovely place and people, if they had tea it would be even better! Now hiding re-grouping and getting set for ICMC!

CN tower toronto tourist


Playing violin for a symposium in Toronto whilst staying in a backpacking hostel, whatever I do I seem to always be massively under/over dressed!

Sorting out last odds and ends to fly to Canada in the morning. This will be my first ever trans-Atlantic flight, super excited! It’s for this…

NAISA toronto

Toronto Electroacoustic Symposium 2012
New Adventures in Sound Art (NAISA) and The Canadian Electroacoustic Community (CEC) are pleased to announce the 6th annual Toronto Electroacoustic Symposium 2012. At the heart of this 6th annual symposium will be a keynote lecture by Sound Travels featured artist Trevor Wishart (UK).

All events at the Artscape Wychwood Barns, 601 Christie St in the Christie Studio (#170)
except evening concerts which are in the Wychwood Theatre (#176)

Symposium Concert #1: 8-10 PM Wednesday 15th August 2012
Kerry Hagan – Morphons and Bions
Ethan Hayden – bats with baby faces in the violet light
Yota Kobayashi -Shikisou
Donna Hewitt – One
Ed Wright – Crosswire
Andrew Connor – No. 3: Foghorn Abstraction


Having battled my way along the coast via car, foot, train and bicycle I’m now in the process of mixing down for ICMC in Slovenia yeeeeehaaaa!

For more details and program see

Pro Tools ICMC 2012

Here’s some footage of a piece by Richard Worth entitled Green Man Blue that uses some programming I put together for the electronics. Enjoy!


Recovering… and gearing up for next weekend! It was nice to see my parents though this weekend 🙂


Gearing up for a mad one… Running and performing in Risk of Shock in Bangor (N. Wales) on Thursday evening, then throwing a load of kit in the car and driving down to do stuff at BEAM Festival at Uxbridge (London) Friday – Sunday…. Wish me luck!

Risk of Shock

BEAM Brunel 2012


WOOOOOOOO! Got through to both Toronto Electroacoustic Symposium AND ICMC2012. A little giddy right now! And thankful to Prof. Thomas for helping to find a bit of cash to cover travel!!!


Just discovered this flyer, you can just make out ‘Crosswire’ on there, yep that’s me in Edinburgh this time next week! 🙂

Red Note ICC Jam house


Playing at IFIMPAC at Leeds College of Music.

Ed Wright Dale Perkins IFIMPAC Leeds

A great few days! Blogging on the train home, stuck at Manchester Piccadilly waiting for the train to Chester. It’s now 11:15 on Friday; I got to sleep around 3 am Thursday, up at 5:30, bed last night at 2 am, gigged today and won’t be home till gone 2 pm Sat… IF THE TRAINS ALL MAKE THEIR CONNECTIONS!!! Listening to Squarepusher’s Ultravisitor as a come-down to chill out and show my age. Nos da pawb ;)


In Leeds waiting to rehearse…

25/4/12 cont.

Also gearing up for Ifimpac in Leeds tomorrow. Lots of work to do on the train…


Proper geek moment…. Is it wrong to get excited when someone else tweets a picture of them running your software several hundred miles away? Oh well guess it’s just me being sad then, but hey, Richard Worth and the Red Note Ensemble are currently working on a piece of his based on my Crosswire software to be performed in Edinburgh next month.

Crosswire mac

Noisy Nights with ICC
When Wed, May 9, 20:00 – 22:00
Where The Jam House, Queen St, Edinburgh
Description Over 30 expert designers from around the world have contributed a variety of instruments, interfaces and sound generators for the seven resident composers from the University of Edinburgh to work with. These range from 3D-printed spheres that respond to touch, to software modelled on neurons firing inside a human brain. The composers will create chamber works written specifically for their chosen instrument, to be performed by Red Note at a special edition of Noisy Nights at The Jam House, 5 Queen Street, Edinburgh at 8pm on Wednesday 9 May.

Spent most of today preparing 2 lectures; one on logic and its written form, the other on interfacing Arduino boards with Max/MSP. I wonder how much I’ll remember tomorrow?!

Very happy to be part of this. Congratulations to all the artists and organisers for a wonderful few days!

Interactions Bangor


Hello all, sorry for not being here for a while. It has been frantically busy with the Bangor New Music Festival, Y Galeri Caernarfon, Cauldrons and Furnaces, and Interactions to name but a few! Anyway, trying to stay focused and also relaxed, so if you see me running around like a fool please, you have my written permission, tell me to calm down!

My piece Con-chords is playing today, almost yesterday/tomorrow given time differences, as part of W.E.A.L.R. (World ElectroAcoustic Listening Room), an element of California State University’s New Music Festival. Drop in if you can! See EVENTS for more details.

CSUF World Electroacoustic listening room


Amazing moment! I ran the Who Can Hear the Sea? installation in the studio overnight. Wonderful to walk back in to find that, firstly it hadn’t crashed, and, it actually worked. The different loops are all re-triggering correctly and therefore the sounds are continuously shifting in time. It’s like an entirely new piece this morning. YES!


One piece finished… time for pancakes!


Nearing completion with Who Can Hear The Sea? I’ve had to build a player to make the looping stuff work and realised that it may have ‘normal’ people looking at it, so I decided I really should make it pretty!

Who Can Hear the Sea? Ed Wright

6/2/12 – cont.

Complete Bricks, and Parties


I have spent the evening working on Who Can Hear the Sea. There is something there now, I’m not sure what, it still needs a lot of refining but it is growing into being a useful installation piece. To keep the work open and moving it is based around the idea of 8 surround sound audio loops running at once with sound objects/musical gestures on each of them. The trick is that when each loop gets to the end it simply starts again, but no two loops are quite the same length. The loops last for:

401, 397, 389, 383, 379, 373, 367 & 359 seconds.

All of these figures by my reckoning are prime numbers. Once the piece has been started it will take around 1800 years for it all to come back into sync. This is all well and good and fits with the idea of water and the image that it is never the same, even though it may be the same stretch of water.
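The re-sync interval for simultaneously started loops is their least common multiple (for pairwise co-prime lengths, simply their product); a toy Python sketch, with illustrative numbers rather than the installation’s own:

```python
import math
from functools import reduce

def resync_seconds(loop_lengths):
    """Simultaneously started loops all line up again after the
    least common multiple of their lengths (in whole seconds)."""
    return reduce(math.lcm, loop_lengths)

# Toy example: 6 s and 10 s loops realign every 30 s.
print(resync_seconds([6, 10]))  # → 30
```

Picking loop lengths that share no factors is what pushes that realignment point so far into the future.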

From a musical and composing point of view making this work is becoming more interesting. In order to retain much musical sense, at some level each loop must be self contained. As the loops become staggered the different elements of composite sounds will obviously go out of sync, therefore to create a piece that does not simply sound out of sync there must be complete musical gestures contained within each loop. Obviously these can interact with other loops much like phasing in minimalism or fugue and countersubject, but in much the same way there needs to be completeness to each element.

Moving up from this level we get into context and causality, trying to introduce the various elements and gestures in a self contained way. Rather like going to a party and having to introduce yourself to everyone, it is hard to get a system to introduce itself: firstly at all (how do you prepare for something with that something?) and then how to achieve this without becoming oddly repetitive (Hello I’m X and I work in Y, how are you? Have you met Z?)

Once this hull is breached how does one create structure?

I am placing faith in two apparently contradictory points of view. The first is the minimalism/fugue analogy: this work is at some level very similar, in that there are complex elements arranged within a very strict (if somewhat less apparent) time structure. The second is that I am emulating the stretch-of-water image. Once this is realised the rest falls into place. Perhaps I am not creating ‘musical’ structure but rather a different, time-based structure on which I am hanging various sonic artefacts that help the listener to navigate through it by mapping time and the relationships of the cyclical loops, and augmenting the aesthetic experience of this passage of time and memory.

Just a thought. Night night all!

Working towards this:

Blipfonica Caernarfon

I have a poorly dog and a tax return to do 😦

Lots of marking, lecturing and running of open mic night, hard work paying off though, got listed for icc

Setting up for a long one in Studio 1 Bangor Uni tonight!

can now blog via mobile…

Just been appointed ‘Teaching Fellow in Music’ at Bangor University. Ooooooh

Happily processing sea sound to make new sound objects 🙂

New work in the pipeline to be premiered 10/3/12 at Bangor New Music Festival.

Who Can Hear The Sea?

Sound Installation for: Surround sound audio and max/msp programming.

Who Can Hear The Sea? is a sound installation commissioned by the Bangor New Music Festival. The sea is a major theme of this year's festival as well as being a fascinating subject for recording. In developing this piece I tried to record the sound of the sea free from outside influences such as wind, road noise and seagulls, but gradually became aware of a number of issues that arise.

Besides the practical issues of trying to find a silent beach on a windless day with big waves, water by itself makes little or no noise; it is the matter and energy around it, in it, acting on it and contained in it that make it sonorous. In addition to this there is the dynamic but structured nature of the sea: it is constantly changing, in an incredibly complex yet ordered way.

Who Can Hear The Sea? uses recordings of sea waves as its basis, then uses computer processing and techniques such as subtractive synthesis to shape, confine and restrict these sounds into a number of sonic objects or 'musical' statements.

These sound objects have then been edited together to create 8 audio loops, the longest of which is 7'02" and the shortest of which is 5'59". These loops all play simultaneously, each repeating as it reaches the end of a play-through. Thus the different loops move in and out of phase. Not only does this make for a constantly changing piece, it means that, just like the sea, you can listen and hear ongoing similarities but it will never be quite the same again; for all the loops to come back to the start and line up exactly would take 1875 years, 1 day, and 16 seconds (roughly)!
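The "never quite the same again" property falls out of simple arithmetic: the combined cycle of the eight loops is the least common multiple of their lengths. Here is a minimal sketch in Python using made-up loop lengths (the post only gives the two extremes, 5'59" and 7'02", so the printed figure will not match the 1875-year one quoted above):

```python
from math import lcm

def combined_cycle_seconds(durations_s):
    """Seconds until every loop simultaneously returns to its start:
    the least common multiple of all the loop lengths."""
    return lcm(*durations_s)

# Hypothetical whole-second loop lengths between the stated extremes
# of 5'59" (359 s) and 7'02" (422 s); the real eight are not listed.
loops = [359, 367, 373, 383, 397, 401, 409, 422]
total = combined_cycle_seconds(loops)
print(total, "seconds, roughly", round(total / (365.25 * 24 * 3600)), "years")
```

Nearly coprime lengths push the combined cycle out enormously, which is presumably why the eight durations were chosen the way they were.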



Happy New Year (and late Christmas!) to all! Best wishes to those who don’t subscribe to the same calendar as I do too! I hope the next 12 months will be even better than the last and that everyone gets what they need, even if not all that they want. Take care.


Starting work on a new sound installation commission for Bangor New Music Festival, to be premiered 10/3/2012, based (loosely) around the theme of the sea. XD



Over the last few days I’ve been thinking about composition and analysing what it is we do, how I want to do it and attempting to clarify my thoughts. So. Here goes….

This is where I am coming from and the direction in which I want to travel.


Manifesto 5 Dec 2011

1. I want to create works utilising sound; for the development and creation of artistic, aesthetic, moral and philosophical structures.

2. These will aim to be of high quality.

3. Where possible the technology used or developed will be open source and carry the potential for reverse engineering.

4. The output will be accessible to listeners, performers and other composers; in terms of performances, materials and syntax.

5. The works must be artistically and economically sustainable.

Ed Wright



Very horrible and rainy out there, so I have tried to compile a load of program notes; click here to see them: PROGRAM NOTES



A couple of videos from the Blipfonica Live concert last weekend.

If you can’t view them here try



Played my first public event ever on the Theremin last night. Wooop woop!

Click here to view older blog posts.

Going into hibernation as far as new works go for a while. There are several gigs coming up and lots of things I’m working on behind the scenes, but for once there is no instant deadline for a new piece coming up. Several new pieces are in the pipeline and will be unleashed soon. I’m off to the shed with a new set of violin strings and a theremin, ‘dum de dum’.


Wendy Dawson's recording of the projection of Amser/Time by Tim Pugh, Wendy Dawson and Ed Wright on the side of Conwy Castle for Blinc Festival on Saturday night. We could give you the glossy studio version but you'd miss the mad and massive atmosphere there. Current estimates put it at 1500 people on the Quay alone!



Well done to all at Risk of Shock! And yes I will take the School of Music kettle back!
Photo of me in mid scribble performing ‘Live Study’ taken by Huw Williams


Running around like a loon trying to sort out for three days of events: Risk of Shock and Blinc (see EVENTS for further info). But it turns out I'm not the only one, as Joel's photo below proves. This shows a roof, yes a roof, being put on top of one of the towers in the town walls of Conwy in order to house 'This is Tomorrow', Dominic McGill's sculpture, and my audio.

This is basically civil engineering in a world heritage site! Feeling a little humble right now. And yes, it will be taken down after!

Great to be part of so many good teams for the upcoming days. Bangor Uni, Blinc and Blipfonica, thank you. And also anyone else who should have been on the list, even if you didn’t begin with ‘B’.

Risk of Shock – Powis Hall Bangor Fri 7:30pm
Blinc – Conwy sat-sun all day.
***All free!***

A wonderful sketch by Dominic mapping out the text and Möbius patterning of the audio for 'This is Tomorrow'. I'm not quite sure if he did it before or after I played some of the work to him, but it seems to match up with what's going on very well!

‘This is Tomorrow’ (with audio) will premiere at Blinc Festival 22/10/11

Putting final touches to the next release: a 7'30" CD single of the electroacoustic work done for Helen and Rhys' installation.

JACKDAWS – Ed Wright Release date 22/10/11 BLIP006

For the amazing price of £1.00!!! Available on CD (and download soon)

Stuff for the 3 Blinc commissions is completed *sigh* now gearing up for Risk of Shock on the Friday and Blinc on Saturday/Sun. z z z z z z

A wonderful photo montage put together by Tim of the work that he and Wendy have been doing on the Amser (“Time” in Welsh) stop motion animation. I’ve just completed the audio for it and it’s going to be projected on the side of Conwy Castle as part of the Blinc digital arts festival later this month. 🙂
Photo courtesy of Tim’s facebook page!




Photo from the Good Robot 'Interplay Room', complete with sofa and 'Sound Games' on a big screen (along with 'Hopscotch') for people to come and tinker with 🙂 I was honoured to share the room with Mike Cook and Zac De Santos and their wonderful inventions…

I've decided I actually quite like collaborating. I'm currently working on a piece with Dominic McGill for Blinc Festival (also, as Tim and Wendy haven't finished their stop motion animation yet, it seemed sensible to get on with something). Anyway, here is a photo of Dominic's piece:

‘This is Tomorrow’ by Dominic McGill

It's great the way that working with other people forces you into new ways of doing things. Use Hachiya's 'Hiroshima Diary' as the basis for a sonic Möbius loop…. Hmmm, yep, this can be done, and I'm now in love with Pro Tools elastic audio for speeding things up and down to boot!


In Bangor University's Studio 2 setting up to do some recording with Tim Pugh and Wendy Dawson. We are collaborating on stuff for the Blinc Festival next month. Recording some very quiet ticking clocks. Having got in early to set up, I'm now ready and bored, happily reflecting on the fact that whilst ideas and talent are absolute essentials, a soundproof room, 96kHz sampling rate and a matched pair of Schoeps microphones will hopefully help a lot too!

A few mins later and here we are: a nice analogue clock, probably rendered completely rubbish by mp3-if-ication, but it was lovely and clear to start with, and also quite nicely irregular if you listen long enough!

Strangely it’s quite nice to be back in the place, I’ll also (apparently) be doing work for the festival with Dominic McGill & Helen Booth…. Busy busy busy!



Sound Games Ed Wright 2011

Structured improvisation for computer and Wii controllers

Sound Games is an improvisational/compositional environment (programming EW). At its heart lies the idea of gesture as fundamental to our understanding of music. This functions on a number of levels; firstly there is the innate physicality of playing an instrument, banging a drum harder to make a louder sound or breathing in to prepare to sing. At the other end of the process there is the aesthetic recognition of the gestural properties of a piece of music, as exemplified in the phrases ‘a descending tune’ or a ‘swirling cacophony’.

Computers have very simple methods of communicating with the outside world. On an average PC one tends to find a keyboard and a mouse as ways of entering information. Whilst this is perfectly acceptable in most cases it does little to reflect or for that matter enhance the music making process, indeed it is almost the digital equivalent of trying to make music with a pencil and paper or various pieces of office hardware!

Sound Games is an attempt to break down some of these barriers.

The program uses two Wii remotes, one to control the left half of the screen and one the right. Each long white rectangle represents a sound file, indicated by its name. The position of the cursor within that box (controlled by the remote) dictates the pitch and left/right positioning of the audio in a hopefully intuitive fashion: left makes it go left, up makes it go higher. In addition to this, rotating the controller inwards makes the sound louder (conversely, outwards quieter), the 'A' button locks the cursor to the current vertical position, the 'down' button disables pitch shifting and the 'B' trigger starts or stops overall playback.

It is an inherently simple instrument, but a lot more expressive than a QWERTY keyboard. If one then sets the tolerances fairly fine it becomes harder to control but offers a far greater range of expression: you can do more things, and in so doing the potential for doing 'wrong' things is also increased. It is exciting to find a way of creating music with a computer that requires similar levels of practice and motor coordination as you would expect on an acoustic instrument.
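The control mapping described above can be sketched as a pure function. This is an illustrative guess at ranges and scaling curves, not the actual patch behind Sound Games: cursor position is assumed normalised to 0..1 within a sound's rectangle, roll is in degrees with inward rotation positive, and the pitch range is assumed to be plus or minus one octave.

```python
def map_wiimote(x, y, roll_deg, down_held=False):
    """Map one remote's state to (pan, pitch_ratio, gain).

    x: 0..1 across the sound's rectangle -> pan -1 (left) .. +1 (right)
    y: 0..1 up the rectangle -> pitch ratio, +/- one octave (assumed)
    roll_deg: -90 (outward) .. +90 (inward) -> gain 0..1, inward = louder
    down_held: the 'down' button disables pitch shifting
    """
    pan = x * 2.0 - 1.0
    pitch_ratio = 1.0 if down_held else 2.0 ** (y * 2.0 - 1.0)
    gain = max(0.0, min(1.0, (roll_deg + 90.0) / 180.0))
    return pan, pitch_ratio, gain
```

Feeding these three values to any software sampler's pan, playback-rate and amplitude inputs would reproduce the basic left/up/rotate behaviour; the fine "tolerances" mentioned above correspond to how steeply these curves respond to small movements.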



Working all day and then set up for “Live Study” (see score below in 4/9/11) until 10pm :&
It’s shaping up to be a lovely environment to be in, come along on the 9th it should be ace and it’s free entry!


Now showing in exhibition at Penmaenan community centre.

Folksong: Ed Wright, 2011

(Soundsculpture from found object, fishing line and piezo transducer)

Folksong is the re-imagining of an old framework. Retaining its innate structure it draws from other resources in its surroundings and incorporates new ideas and materials into itself….

P.S. This hopefully starts to make sense of some of the ideas from 1/8/11 and 15/7/11 (see below)… I'm working on a massive 3D version of this idea: a cube made out of 15mm steel, roughly 2m x 2m x 2m, with a similar use of fishing line and piezo mics but extending from the edges, less regular and big enough for people to easily get inside and move around in…..



To be premiered on Friday….

LIVE STUDY: Ed Wright 2011
Performance Score

Live Study is an electroacoustic work performed by one person. The performer is called upon to create pictures, drawings and sculptures of their choosing using the materials provided. These actions provide the audio material for the electroacoustic work. The performer must balance the needs of creating visual artifacts with the realization of the musical work, maintaining as high a sound and 'composition' quality as possible throughout.


2.1 Materials:
12 HB Pencils with rubbers
3 Permanent markers various colors
3 Sheets of A3 cartridge paper
2 Sheets of clear OHP acetate
1 Pair of Scissors
1 Retractable craft knife
1 30 cm Steel ruler
1 Cutting mat
1 Roll Sellotape
1 Glass jar to store pencils in

2.2 Technical:

2 Tables
2 Boundary microphones
1 DAW running a recording program (Cubase or similar) and some form of live DSP (Pd or similar)
1 mixing desk
2 (or more) powered loudspeakers for playback
1 Set of headphones
Necessary cables and power.


Place 2 tables in an L shape, one is the DAW Table and the other the Visual Table.
2 microphones are placed in a stereo configuration on the Visual Table and fed to the DAW.
Microphones need to be set at an appropriate level so as not to feed back. Due to the differing nature of the sounds to be recorded in the performance mic gain control must be easily accessible.
Headphones are needed for PFL monitoring.
The mixing desk should be set up so that the audio recorded and played back in sections 4.1 – 4.3 can be faded in or out independently of the processed sounds of section 4.5.
The desk should then send sound out to the loudspeakers.
The ‘materials’ are stored on the DAW table until needed.
Soundcheck and rehearse


4.1 Recording
Set the DAW recording, audio from the DAW can be fed to playback speakers if required although feedback may be problematic.

4.2 Visual
Move the materials from the DAW Table to the Visual Table and create visual images on the paper and acetate. This process will create audio that is recorded by the computer. The artifacts that are created are mainly at the performer's discretion; however, these actions must include ripping a piece of paper in half. The more varied the sonic material is, the better. The performer needs to ensure that the sounds are of good audio quality and recorded at suitable volume levels.

4.3 Playback
As the performer is drawing to the end of stage 4.2, the recorded audio needs to be normalized and played back through the speakers; this can then remain constant through the rest of the performance if desired.

4.4 Visual and Editing

As per section 4.2, the performer augments the playback sound by creating more visual work. This can be carried out intermittently until the end of the performance. As playback from section 4.3 continues, the performer needs to extract interesting 'sound objects' from the recorded audio. This work should be interspersed with more drawing, cutting, folding etc. at the Visual Table.

4.5 Manipulation and Development
Sound objects are then transformed in real time. These actions can include manipulations both in the time and frequency domains. Audio must be triggered, mixed and balanced to provide a suitable counterpart to the visual artifacts that the performer must continue to produce.

4.6 Conclusion

The work is closed by the triggering of an un-manipulated paper-rip sound, at which point all other audio is faded out. This is to be triggered in any of the following situations: there are no more materials to create with, the musical potential of the audio has been exhausted, the performer feels it is time to stop, or the work has extended over 30 minutes.

(c) Ed Wright Blipfonica 2011


Doing this today!


Making noise and visual stuff in Llanfairfechan this morning, and helping to set up an exhibition in the Penmaenan community centre this afternoon as part of Helfa Gelf. Helfa Gelf is a series of events taking place across N. Wales through September. The idea is to have artists working in their studios but to have the studios open to members of the public to pop in and visit (kind of like a friendly cultural zoo!) There are also a number of exhibitions running such as the Penmaenan one (opening times at My studio will be open weekends in September from 11-4 and I’ll be performing as part of the general festivities on 9th Sept in the Penmaenan Centre and on 16th in Y Galeri Caernarfon.



Rain again today! I guess that is what I get for living by the sea. So no work is going ahead on the steel sculpture as it has to be done outside (owing to the fact I'm trying not to go to the studio over the holiday and it won't fit in the living room). However, with sidelines on sidelines going on, work is progressing on a Magnificat and Nunc Dimittis based around the base sequence of the chromosome of bacteriophage phiX174 (the first completely mapped genome). On first inspection the code appears rather unpromising

gagttttatc gcttccatga cgcagaagtt aacactttcg gatatttctg atgagtcgaa…

I've re-mapped the sequence to span an octave and imply rhythmic duration and, oddly, with a little tweaking of the system it comes out with some unexpectedly beautiful results. The table can be read like any ordinary table. Along the top is the genome 'letter' and down the side is how many times it comes in succession, e.g. aaaaaa would relate to A x 6.


     a    c    g    t
1    D    A#   G    D#
2    C    B    E    F
3    A#   C    C#   G
4    G#   D    A    G#
5    F#   F    A
6    E    B
7    C#

According to the table this would result in the note E being sounded. Duration is also mapped by the number of repetitions, although a lot more simply: the above A x 6 example would mean that note lasts six quavers, T x 3 three quavers, and so on. This gives us:


the original opening of the string: gagttttatcgcttccatgacgcagaagttaa…


the string is then re-transcribed to show individual 'events':

g a g tttt a t c g c tt cc a t g a c g c a g aa g tt aa…

the letter and repetition data is then mapped and transformed as per the grid above giving:

G1 D1 G1 G#4 D1 D#1 A#1 G1 A#1 F2 B2 D1 D#1 G1 D1 A#1 G1 A#1 D1 G1 C2 G1 F2 C2…

This can then be translated into standard notation by simply taking the letter (and #) to indicate pitch and the number to signify the duration in units of a quaver.
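As a sanity check, the whole letter-and-repetition mapping can be expressed in a few lines of Python. The lookup table below is transcribed from the grid above, with the columns read as a, c, g, t (which is what the worked example implies); rows 5-7 contain only the entries given.

```python
import re

# (base letter, run length) -> pitch, transcribed from the grid above;
# columns read as a, c, g, t (inferred from the worked example).
PITCH = {
    ('a', 1): 'D',  ('c', 1): 'A#', ('g', 1): 'G',  ('t', 1): 'D#',
    ('a', 2): 'C',  ('c', 2): 'B',  ('g', 2): 'E',  ('t', 2): 'F',
    ('a', 3): 'A#', ('c', 3): 'C',  ('g', 3): 'C#', ('t', 3): 'G',
    ('a', 4): 'G#', ('c', 4): 'D',  ('g', 4): 'A',  ('t', 4): 'G#',
    ('a', 5): 'F#', ('c', 5): 'F',  ('g', 5): 'A',
    ('a', 6): 'E',  ('c', 6): 'B',
    ('a', 7): 'C#',
}

def genome_to_events(seq):
    """Run-length encode a base string; map each run to pitch + quaver count."""
    events = []
    for m in re.finditer(r'(.)\1*', seq):
        letter, length = m.group(1), len(m.group(0))
        events.append(f"{PITCH[(letter, length)]}{length}")
    return events

print(' '.join(genome_to_events('gagttttatcgcttccatgacgcagaagttaa')))
# -> G1 D1 G1 G#4 D1 D#1 A#1 G1 A#1 F2 B2 D1 D#1 G1 D1 A#1 G1 A#1 D1 G1 C2 G1 F2 C2
```

Running this on the opening of the phiX174 sequence reproduces the event string given above, which is a reassuring check that the grid and the worked example agree.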


The notation shows the resulting opening bars. This monophonic line will form the basis of the organ part; tempo, dynamics, rests and bar lengths can then be composed around it as the choral lines and text are worked into this sub-structure.

Philosophically and ideologically I'm quite excited by the idea of combining the word setting and the DNA form. It feels as if there is something a bit subversive and yet also praising about the idea: a tug of war between evolutionary chemistry and the work of divine precision and creation. I can't quite put my finger on it, but working the piece out feels like it may be one amazing ride!


Thanks for all the birthday wishes!


August is here, which means an attempt at a holiday. So I’m kind of ‘not doing music’ but am chopping up lengths of steel (in a very occasional manner) to build into a sound installation/sculpture. As it’s closer to the role of blacksmith than “Musician” I feel it’s not really cheating….

As a result of a three-day masterclass/workshop on Sound Art with Manuel Rocha Iturbide, this is my contribution: a collaboration with Andrew Hodges, Scott Flesher and Alistair Attard for Bangor Cathedral.

It’s called ‘Wordless Prayer’ and uses a simple relay driven from an Arduino board (click here to view CODE) to make a prayer bowl vibrate, which is in turn fed to a microphone and reverberates and feeds back through the space of the cathedral. It resonates the rhythm of the Lord’s Prayer, which oddly enough seems to have approximately the same intonation and syllable structure whatever language it is in.

Being pretty site-specific and conceptual I probably haven’t done it much credit here, so come along and see it in Bangor Cathedral 27-30th July 2011 (free entry). It’s running all the time except when services are happening. There is a 2 min pause in the loop so if you pop your head around the door on a trip through the city and nothing seems to be happening, please come in, pause and reflect, it probably is running!

On a three-day Sound Art masterclass with Manuel Rocha Iturbide, courtesy of Bangor Sound City. Nice!

On an aside…
Following on from an initial rush of excitement and inspiration at BEAM festival about a month ago I’ve been playing around with learning a bit more about the electronics side of electronic music. It’s odd that most people just assume that any decent musician must know how to (if not repair, at least) maintain their instrument in good working order but that this somehow does not apply to those working in the field of electronic music. Anyway, to cut a long story short I’m trying to fill in a few gaps in my knowledge as well as experimenting (in a very basic and amateur fashion) with an Arduino board (pictured above with breadboard and LEDs) and attempting to learn ‘stuff’ that will be of use in the long run for making music, performance interfaces and installation mechanisms. Or at least that’s the plan. At the minute I have 3 happy flashy lights, an approximate idea of how to code for an Arduino and a big chart with resistor colours on it. Any advice or tips to the usual email would be great! Thanks Ed

Steel for installation update! 🙂
I’ve got some now, courtesy of Williams the Steel Fabricators in Llandudno. Let battle commence! Thank you all.


Working on developing software to blur the edges a bit and make live performance more ‘live’…. Currently it’s a step sequencer but watch this space.


Starting work on a new piece…. Does anyone know where I can get 10 x 2m lengths of box steel tubing from?!


Recovering, tidying the studio, drinking tea and reading Douglas R. Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid


Not strictly a creative endeavour, but graduation from my PhD in music. Big thank you to my supervisor Andy Lewis for all his help and support!


Lots of people playing ‘Hopscotch’ at Frakture’s Altar Pieces in the Bombed Out Church in Liverpool. Hopscotch is a sound installation piece I created a few months ago. It uses motion tracking to trigger audio and hopefully a fair amount of fun!