Turbo, Maths and Music

I have already written about some of the creative aspects of Turbo, the piece of music I am currently working on, here: https://edvirtual440.wordpress.com/2019/07/22/turbo/. Following interest from a number of people, here is a deeper dive into the mathematics of the piece.

Under the gloss and shine of the main body of the work there is a continuous recording of a Ferrari 412 T2 F1 car. The recording starts played back very slowly, gradually getting quicker until it reaches normal (breakneck!) speed at the end.
Artistically this was done to draw the listener in; there is a slow but impending inexorability about it. However, I wanted it to work creatively as more than just a cheap trick.

The playback speed is modelled on a golden spiral (i.e. a logarithmic spiral whose growth factor is φ, the golden ratio), a design found not only in some engine turbos but also in the humble snail from which the component took its name.

A golden spiral with initial radius 1 has the following polar equation:

r = \varphi^{\frac{2\theta}{\pi}}
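The defining property of this spiral is that every quarter turn multiplies the radius by φ. A quick sketch (Python, purely illustrative) confirms this from the polar equation:

```python
# Sketch: verify the quarter-turn property of the golden spiral r = phi^(2*theta/pi).
# Each 90-degree step outward multiplies the radius by phi (the golden ratio).
import math

PHI = (1 + math.sqrt(5)) / 2

def radius(theta):
    """Radius of a golden spiral with initial radius 1, at angle theta (radians)."""
    return PHI ** (2 * theta / math.pi)

# One quarter turn (pi/2 radians) scales the radius by exactly phi:
for theta in (0.0, 1.0, 2.5):
    ratio = radius(theta + math.pi / 2) / radius(theta)
    assert abs(ratio - PHI) < 1e-9
```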

If one were to travel inwards on such a spiral at a constant speed, the time it takes to describe a full rotation around the centre decreases. While your speed would stay the same, in terms of RPM you would be accelerating. The manipulated Ferrari recording follows this pattern, making 3 full rotations of the spiral and appearing to get faster towards the centre at the end of the piece.

At a simple level, digital audio works a lot like animation. A snapshot of a soundwave is taken numerous times a second (44100 for CD quality) and these snapshots are then played back in order to create a close likeness to the original continuous sound. Just like animation and flip-books, you can change how fast you leaf through the individual ‘pages’, and thus speed up, slow down or even appear to reverse time.
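The flip-book idea can be sketched in a few lines of code: step through the stored samples at a non-integer rate and interpolate between neighbouring ‘pages’. This is an illustration only (a real resampler would also filter to avoid aliasing), not the code used in the piece:

```python
# Sketch of variable-speed playback: read through the "pages" (samples) of a
# waveform at a fractional step, linearly interpolating between neighbours.
import math

def resample(samples, speed):
    """Read through `samples` at `speed` times the original rate.
    speed < 1 slows down (and lowers pitch); speed > 1 speeds up."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += speed
    return out

# A second of a 440 Hz sine at 44100 Hz, played at half speed,
# lasts twice as long and sounds an octave lower.
sr = 44100
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
slow = resample(tone, 0.5)
assert abs(len(slow) - 2 * len(tone)) <= 2
```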

The piece of music was to be 8 mins 39 seconds long, and I wanted the Ferrari to be at normal playback speed at 8:38. Hence it was a task of mapping the sample rate to time, or its equivalent distance along the spiral. Working backwards, I was aiming for a sampling rate of 44100 Hz at 8:38, with that rate decreasing in the proportions of the golden spiral as the time elapsed from the beginning of the piece trended to 0.

Several cups of tea later I came up with the following, which gives sample rate (Hz) relative to time (s). It is copied from the code, which I had left deliberately un-simplified so that I could still almost see what I was doing!

Sample rate =44100(1/(pow(((1+sqrt(5))/2),((LOG(1-((1-pow((1+sqrt(5))/2,(12)))/(1-(1+sqrt(5))/2)-TIME)+(((1+sqrt(5))/2)*((1-pow((1+sqrt(5))/2,(12)))/(1-(1+sqrt(5))/2)-TIME))))/(log((1+sqrt(5))/2)))-1)))

This resulted in a gradual and inevitable winding up of speed, but it was still very ugly, not to mention the processing power it drew before even handling audio. It turns out that feeding a digital-to-analogue converter floating-point values for sample rate (as opposed to integers) is not a great idea and leads to clicks and glitches. This brought down my need for fine-grained resolution by a significant factor. In addition, a difference of plus or minus half a sample is inaudible when 44100 are going past in a second!

This meant that rather than calculate it all on the fly, I stuck the formula into an Excel spreadsheet and told it to calculate relative to the rows while rounding to the nearest integer; hence an instant lookup table with a revised sample rate for every second. Incidentally, it started off *really* slowly, at 221 samples in the first second!
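For the curious: the long expression above folds down algebraically to rate(t) = 44100·φ / (φ¹² − (φ−1)·t), since the geometric-series term (1−φ¹²)/(1−φ) is just the total duration in seconds (about 519.4 s, i.e. roughly the 8:39 length of the piece). A Python sketch of the per-second lookup table, standing in for the spreadsheet:

```python
# Sketch of the per-second lookup table. The un-simplified expression in the
# post folds down to rate(t) = 44100*phi / (phi**12 - (phi-1)*t), which hits
# exactly 44100 Hz one second before the total duration
# TOTAL = (1 - phi**12) / (1 - phi) ~= 519.4 s.
import math

PHI = (1 + math.sqrt(5)) / 2

def sample_rate(t):
    """Golden-spiral sample rate (Hz) at time t seconds from the start."""
    return 44100 * PHI / (PHI ** 12 - (PHI - 1) * t)

TOTAL = (1 - PHI ** 12) / (1 - PHI)   # ~519.4 s, roughly 8:39
table = [round(sample_rate(t)) for t in range(int(TOTAL) + 1)]

assert int(sample_rate(0)) == 221                    # ~221 samples in the first second
assert abs(sample_rate(TOTAL - 1) - 44100) < 1e-6    # full speed near 8:38
```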

I then cooked up a simple playback and record program (it has had 2 incarnations, initially built in Max/MSP and later SuperCollider due to cross-platform issues). It takes the Excel table and reads the values over time, interpolating between the 1-second breakpoints, and applies this to the playback speed of the audio while simultaneously recording the output to disk.
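The breakpoint interpolation is just a linear ramp between neighbouring table entries. A minimal sketch (the values below are illustrative, not the actual Turbo table):

```python
# Minimal sketch of ramping between one-second breakpoints in a lookup table.
def rate_at(table, t):
    """Linearly interpolate a per-second breakpoint table at time t (seconds)."""
    if t >= len(table) - 1:
        return float(table[-1])
    i = int(t)
    frac = t - i
    return table[i] * (1 - frac) + table[i + 1] * frac

breakpoints = [221, 224, 228, 233]         # illustrative sample rates (Hz)
assert rate_at(breakpoints, 0.5) == 222.5  # halfway between 221 and 224
```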

The net result is a lookup table and a bit of ramping between values; far more computationally efficient than the junk equation above! Sonically it produces a seamless transition from inaudible, through rumbling sub-bass and murmuring rumbles, up to an urgent build and finally a howling drive-by.

Turbo is due for release in October on Spotify, iTunes, Bandcamp etc. (click the links to go direct to the artist page). For more info see www.virtual440.


Sound-Sculpture commission for Golygfa Gwydyr

Susurrate (Latin): a murmur or whisper of leaves, wind etc.

The piece works with the shapes of the woodland to create a reflective space. Resonant metal bars are suspended from various trees and branches sonically and visually amplifying the apparent characteristics of each.

With the notes being suspended from the trees, the branches dictate something of the visual shape. Thus the impression of a majestic or skeletal specimen becomes heightened.

This is exaggerated by the notes used. In the case of a gnarled tree, if the notes create something bright, high pitched and major, it all of a sudden feels like a cheerful, plucky little tree; whereas if darker, diminished tones are chosen, the atmosphere and presence feel more menacing.

By itself the sculpture is largely silent, requiring a strong gust or outside intervention to create a sound. For the most part it remains quiet, highlighting the shapes of the canopy in an alien, almost ritualistic way.


On exploring the space, members of the public can find beaters secured in place with which to tap the bars and bring the space to life. If audience members want to hear more than one tree at once, then due to the distances involved they will have to cooperate and play as an ensemble.

Each tree has a specific chord cluster, the sounds and appearance of which reflect the artists’ response to working with that particular organism. Some are sunlit and beautiful, others are brooding or foreboding, and everywhere in between.

The work is as much about music as the potential for sound. It is about heightening awareness of the environment, the magnification of space and listening and the pockets of existence between, before and after notes.

The work will be on display in Golygfa Gwydyr Llanrwst from 7th August throughout the Eisteddfod week and for longer if nature permits.


Turbo is a piece of music which I have been working on for a while. It draws on three separate sound sources, namely two Formula One cars which raced when I was young (the Lotus 97T and the Ferrari 412 T2) as well as a Scalextric set. These three sources form the basis of the track, with the remaining sounds derived from them or synthesised from first principles.


The engines of that era sounded very different. It was a less regulated time, before the advent of widespread electronics, the introduction of hybrid engines or the level of concern about petrochemicals and their environmental impact. While there have been many changes for the better over the last thirty years in F1, the sound of the grid is not one of them.


While working with the audio to create something musical rather than a historical document, several things jumped out at me. Firstly, it is amazing how melodic these vehicles are. From the chord of D and A created by a snapshot sample of a car in the distance, to the almost choir-like textures created by zooming right in on only a very narrow set of frequencies, it really should not have been a surprise how ‘tuned’ race cars are. Secondly, the sounds are highly cyclical, but in an evolving way which can be exploited not only to find glitchy textures but also to build on and augment.


As well as drawing on the sounds of the turbo engine the idea of a turbo spiral extends into the structure of the work. There are four main sections to the music which repeat and develop with increasing rapidity. If you travelled around a spiral at a constant speed you would find a new section of music every ninety degrees. As the spiral tightens the sections become shorter, leading to a whirling inexorability in the final stages.

This idea of structural acceleration extends to a Ferrari engine sound heard throughout the work. It starts very low and slow, at five-thousandths of its original speed, then gradually ramps up to become audible, first as a bassy rumble, before transforming into glitchy background antics and finally flying past at full speed at the end.


Creating this piece has been an interesting challenge: trying to balance something which is intellectually rigorous (e.g. the golden-spiral mapping and computing) with creating something which is engaging to listen to. I have tried to convey something of the excitement and nostalgia the source sounds bring to me, but also that tranquil space that you find in split seconds of extreme focus, the loudness and brutality of the spectacle, and hopefully something beautiful.

Turbo is dedicated to my Dad who was always a massive fan of motorsport, particularly Formula One and occasionally Scalextric.

Available online on iTunes, Spotify etc. from 15th October 2019.




New music in the pipeline…

Recently I was given a mixtape on cassette, the first ‘new’ cassette to come into my possession in well over a decade. The sentiment was amazing and it’s a wonderful thing, but listening to it the sound quality was less than great, being a generational copy compiled from several other tapes and recordings from the radio. Oddly, the fluctuations in pitch, EQ colouration and stop-start edits did not seem to matter, as in the intervening years they had taken on a charm; linked to a time, objects and set of actions which have been technologically superseded, they now feel like a creative choice.

The wow and flutter on the tape have become like the adze marks on the beams of rustic cottages, once rough but now a mark of the time, the technology and the interaction between human intent and facilitation.

Artefacts is a piece of music made by pushing digital and analogue audio equipment to the extremes of its range while feeding it a “silent” or null signal. After repeating this process a number of times, the quirks of the system gradually come into focus, with the glitches and biases of the equipment amplified by repetition, sometimes with surprising results. These experiments were recorded and then used as material to create this piece.

Space To Think

In about September I was commissioned by Bangor Music Festival to compose a piece of electroacoustic music for their February 2018 event, along with a series of education workshops. I really wanted to do this and it was looking like it was going to be an amazing autumn and early spring of creating cool stuff and having fun; then the floor almost literally gave way.

Following a period of ill health my Dad took his own life in mid October and unsurprisingly this hit me really hard. It is not so much the sadness which is debilitating but the feelings of numbness, rage and lethargy that suck the capacity for creativity away. In my case, my Dad and I got on really well; he was a role model and someone who had a massive influence on me throughout my life. When something so seemingly at odds with everything you have ever known happens, all the basic assumptions that you make in life come into question. I would even look at my feet when walking down stairs, not through shock or physical instability but because I no longer trusted the assumption that I knew where my feet and the steps were. It was certainly no mindset in which to take creative decisions; they are so vague, so intangible and impossible to verify that the simplest starting impetus threw up paralysing indecision.


It was at this point that I sadly informed Guto, the festival director, that I couldn’t fulfil the commission. I have never had to do this before and it left me feeling awful, but also slightly relieved. There followed a period of calm; I got back to doing some work and I managed to get off the antidepressants (citalopram) which had been prescribed to help me sleep, level me out and stop catching things out of the corner of my eye. In late December I got a phone call from Guto offering to take some of the festival work back, but once again asking if I would like to compose ‘something’ for the finishing concert.


I find it really hard to sit down and just make noises or compose; some people start from an initial sound or a feeling, whereas I tend to find some sort of idea or framework to hang something on and can go from there. I thought about this for about 24 hours; it was an incredibly kind offer which Guto had made, and my head was clearing. I went for a run in the hills. It happened to be early, as I wanted to make the summit of a mountain near us to catch the winter solstice sunrise, and on the way up the ideas just struck me.

The theme of the event this year is space and I am happy to say that the work shared a stage with Birmingham Ensemble for Electroacoustic Research (BEER). BEER had worked in collaboration with the Art@CMS project at CERN in Switzerland, using real-time sonification of data streams from the Large Hadron Collider, the world’s largest and most complex particle accelerator. This is something which it is foolish to compete against; that, and the fact that I literally have Scott Wilson (of BEER)’s book on coding in Supercollider sat on my desk. Thus I chose to take a different tack and rather than approach it from an analytical and scientific angle I went for something closer to home.

Space To Think – Ed Wright 2018 (8 channel surround sound audio)

A lot of what is in the popular imagination about space and space travel is precisely that: imagination. From the Barrons’ Forbidden Planet through to the electronic squelch of radio communication, a lot of what we think of as space-related is a very human construct. What fascinates me is how much of what we believe sounds as if it comes from outer space or under the sea (or for that matter any environment out of our direct experience) is actually a result of dubbing and sound design in the media. As a culture we have bought into the idea of rockets rumbling as they go past, even though there may be almost nothing in the void of space to transmit the sound, and the glockenspiel twinkle of stars is almost as real as the piano-wire scrape of the Tardis. This provides a fantastic palette of subverted fantasy with which to create and explore a rich and varied sound world. Apart from the use of two pieces of NASA archive, launch and countdown, the rest of the sounds used I have recorded and shaped myself.

Great delight was taken in recreating a few iconic ‘other-worldly’ sound objects and effects along the way: 50 pence pieces were rubbed down piano strings to provide the basis for a Tardis noise before looping and adding spring reverb, humming strip lights were close-miked to create lightsabres, and some generative coding brought about drone progressions similar to the opening of Star Trek. These and many other sounds were used as the raw materials of the piece and then developed as the different timbres interact and evolve. The result is an acousmatic work utilizing a wide variety of sounds, from analogue synthesis through simple dislocation to generative algorithms, creating an apparently extra-terrestrial environment in which our earthbound ears and minds can roam.


Many thanks to Guto Puw and the Bangor Music Festival for their kindness, understanding and faith.

In memory of Brian Wright

Ricercar (for electric violin and stochastic step-sequencer) 2016

Ricercar was created as a way to generate evolving polyphonic textures in a ‘solo’ violin piece. It uses a midi foot controller (currently a Behringer FCB1010) to control an electric violin through a piece of software created specifically for this piece.


Testing at home before the first performance
foot controller, violin and laptop visible in the
chaos of pre-event preparations

The work is essentially fugal, relying on live sampling to create the layers around the instrumental part, and uses a 3-layered sequencer to achieve this. Each layer contains a series of buffers which can be recorded into. The top layer has seven 5-second samples, the middle has five of 17 seconds and the lower has three of 71 seconds. These are all prime durations, so when left to their own devices they will take a long time to ever repeat in precisely the same configuration. A foot controller triggers when to start recording and which buffer to record to or overwrite; thus it is possible to gradually build up a four-part texture using the three sampler ‘lines’ and the live violin.
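A quick sketch (Python, not part of the original patch) shows why prime durations were chosen: because 5, 17 and 71 share no factors, the three loop lengths only realign after their product, and the full buffer cycles realign even more rarely than that.

```python
# Sketch: why prime loop lengths (5 s, 17 s, 71 s) take so long to realign.
# The combined pattern only repeats after the least common multiple.
import math

lengths = [5, 17, 71]
lcm = math.lcm(*lengths)   # math.lcm requires Python 3.9+
assert lcm == 5 * 17 * 71  # pairwise coprime, so the LCM is the product
print(lcm, "seconds, i.e. about", lcm // 60, "minutes")  # 6035 seconds, i.e. about 100 minutes
```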


Overlapping prime blocks 5, 17 and 71

In itself this is passable, but it could be rather recursive and very quickly get repetitive unless loops are overwritten regularly, which in itself could become an oft-repeated task. This has been overcome in 2 ways. The first is that each time a sample is due to be played there is a 50% chance that it will not be, in which case the sequencer just moves on, waits until the next step is ready and then re-rolls its virtual dice. When scaled up over each of the 3 layers of fast, medium and slow samples, this means that there will probably always be something playing (depending on how many samples have had audio recorded into them!) but the thickness of the texture will vary. The second way that diversity is added is by changing the playback speed of each sample. As a sample is triggered, the sequencer takes a decision as to what pitch/speed to play the audio. This varies over four octaves (two up and two down) and results in playback at between quarter speed and four times as fast. The speed changes are in discrete equal-tempered steps, and thus a broader sense of harmony and polyphonic/contrapuntal line can be achieved.
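The two per-step decisions described above can be sketched in a few lines. This is an illustrative Python rendering, not the actual patch (which lives in Max/SuperCollider); the semitone range and skip probability are as described in the text:

```python
# Sketch of the sequencer's two per-step decisions: a 50% chance of skipping
# the sample, and a playback speed drawn from equal-tempered semitone steps
# across four octaves (two up, two down).
import random

def step_decision(rng):
    """Return None (skip this step) or a playback-speed ratio."""
    if rng.random() < 0.5:
        return None                      # skip: wait for the next step
    semitones = rng.randint(-24, 24)     # two octaves down .. two octaves up
    return 2 ** (semitones / 12)         # 0.25x .. 4x playback speed

rng = random.Random(0)
speeds = [s for s in (step_decision(rng) for _ in range(1000)) if s is not None]
assert all(0.25 <= s <= 4.0 for s in speeds)
```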

In addition to this audio focused processing there are 2 global controls. The first is how fast the sequencer triggers each step. This in itself does not affect the length of individual samples so at extreme speeds loops will overlap in a densely fired texture or conversely there may be a lot of space and silence at very slow speeds. The last of the controls is volume of playback, which when used carefully can not only help to balance the live and processed material but can greatly increase the sensation of articulation and dialogue.

Within this framework the performer is free to improvise, guided by the constraints and freedoms of the instrument, themselves the hardware and software.


Video extract taken by Diane Evans at OscilloScope 28/6/15

Max MSP Basics

Here is some stuff I put together a while back. It may be useful for some people 🙂


Contents (click to jump to section):

What is Max/MSP?
Working with audio
Numbers, messages and lists
Programme flow and control
Making patches simpler
Playing audio samples
Simple MIDI control
Enveloping and cross-fading
Designing the user interface
Soundfile playback and timing.
Refining soundfile playback control and timing.
More elegant approaches to additive synthesis.
A better way to create polyphony: poly~
Breaking out; physical I/O and Arduino.
Working with acoustic instruments
Audio processing
Further audio manipulation
Jitter II
Javascript in Max
Algorithmic composition

What is Max/MSP?

Max/MSP (often just called ‘Max’) is a ‘multimedia programming environment’ which will allow you to create pretty much any kind of music or audio software you can think of. It can also handle video using a built-in extension called ‘Jitter’.

To get more of an idea of what Max can do, visit the website www.cycling74.com and click on the ‘projects made with Max’ link.


If you need the loo, please go before listening to this piece!

Anatomy of a Mountain Stream: quadraphonic fixed media, binaural version.

Field recording & processing programming: Edward Wright


We often take things for granted until we look carefully at them. During the cold snap of the winter of 2011 when the snow had dampened all the sounds I bravely/foolishly balanced a surround sound recorder on a rock in the middle of the stream that runs down from the Carneddau, through Parc Plas Mawr in Penmaenan on its way to the sea.

Through manipulating the playback speed of the resulting sound-file it was possible to expose many details of the sound that are often obscured by our ‘normal’ way of listening. By simply changing the sampling rate, we can enter what feel like different sonic dimensions where the very large, and minutely small sonic structures become far more apparent.

Effectively this means that I have slowed down and sped up the audio. More precisely, a computer or CD player reads through audio in a number of steps, much like you would go through pages in a flick-book but a lot faster (CD reads at 44100 ‘steps’ per second).

All that I have changed is how quickly those steps are taken. This not only changes the pitch of the audio as the sound waves become compressed or stretched out, but also changes the perception of events. Things that seemed a blur become a series of discrete happenings, and events that seemed slow and random start to show striking regularity when viewed over a longer (or more compressed) time frame.
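The trade-off can be stated numerically: reading the same samples k times faster divides the duration by k and multiplies every frequency by k (one octave up per doubling). A tiny illustrative sketch, not from the original patch:

```python
# Sketch of the varispeed pitch/duration trade-off: playback at k times the
# original rate divides duration by k and multiplies every frequency by k.
def varispeed(duration_s, freq_hz, k):
    """New (duration, frequency) after playback at k times the original rate."""
    return duration_s / k, freq_hz * k

dur, freq = varispeed(8 * 60, 100.0, 0.5)   # the 8-minute file at half speed
assert dur == 960.0    # twice as long
assert freq == 50.0    # an octave lower
```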

In this piece the original 8-minute sound-file is played in its entirety, but the speed has been mapped and composed to create the framework of the piece. The audio was recorded on a Zoom H2 at 4 channels × 48 kHz × 24-bit, and processed in Max/MSP 5.

This version of the work has been mixed to give an approximation of the quadraphonic set-up of the speakers (i.e. 2 in front and 2 behind) on headphones.

This has been achieved by playing a sound impulse from each speaker and measuring how it gets to a listener’s ears. A sound from the extreme left will take slightly longer to reach the right ear than it takes to get to the closer left ear. Also, the journey that the sound waves have to take around the head will absorb some of the energy, although not evenly across all pitches, fractionally changing the tone of the sound by the time it reaches the further ear.


Image from http://www.cns.nyu.edu/~david/courses/perception/lecturenotes/localization

If you played a sound from the right front speaker (for instance) and recorded that sound from microphones placed in the ears of a listener, you could compare the signal from the speaker with those that arrived at the ears. From this point we can work out what the change is and can also apply that change to any other sound.

This has been scaled up to the 4 separate audio channels and processed (impulse convolved) with the spatial data collected from the in-ear microphones, which is why you need to listen on headphones to get the full effect.
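The binaural rendering step can be sketched as follows: each of the four speaker channels is convolved with the measured left-ear and right-ear impulse responses for that speaker position, and the results are summed into two headphone channels. This Python sketch uses made-up placeholder impulse responses and direct-form convolution purely for illustration (real impulse responses are long, and FFT convolution would be used in practice):

```python
# Sketch of 4-channel-to-binaural rendering: convolve each channel with its
# per-ear impulse response and sum into left/right headphone feeds.

def convolve(signal, impulse):
    """Direct-form convolution (illustrative; real IRs use FFT convolution)."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def binaural_mix(channels, irs):
    """channels: list of 4 mono signals; irs: per-channel (left_ir, right_ir)."""
    n = max(len(c) for c in channels) + max(len(l) for l, _ in irs) - 1
    left, right = [0.0] * n, [0.0] * n
    for ch, (l_ir, r_ir) in zip(channels, irs):
        for out, ir in ((left, l_ir), (right, r_ir)):
            for k, v in enumerate(convolve(ch, ir)):
                out[k] += v
    return left, right

# Toy check: an identity impulse on the left ear passes the channel through.
chans = [[1.0, 0.5], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
irs = [([1.0], [0.0])] + [([0.0], [0.0])] * 3
L, R = binaural_mix(chans, irs)
assert L[:2] == [1.0, 0.5] and all(v == 0.0 for v in R)
```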


Tuning a violin….

This is from a piece I have written for internetviolin.com; hopefully it will be of use to some people, as I know it is a cause of much difficulty and frustration. Also, any feedback from more experienced violinists out there would be great!

* * *


Tuning your violin is one of the most important things to get right, as if the instrument is not in tune it makes getting a good sound a lot harder than it already is. So here are a few hints and ideas for those of you starting out.

STAY SAFE! Don’t over tighten your strings as this can cause them to snap. Keep your face, and especially eyes, out of the range of a potentially snapping string.

Look after your violin! Only try to tune one string at a time and always maintain some tension on at least one string. The bridge of your violin, and to a lesser extent the sound post (inside the violin roughly under the E side foot of the bridge) are largely held in place by the pressure of the strings. If you slacken everything off they can fall down!

Tuning a stringed instrument relies on some basic physics: the tighter a string is pulled, the higher the pitch it will give you. Think of it like a very expensive rubber band.

Two possible ways to tune

First there are the main tuning pegs, which are very useful especially if your instrument is a long way out of tune.

Secondly, your violin may have one or more fine adjusters attached to, or built into, the tail-piece. These are arguably easier to use, in that the control is a lot more precise, and they have the additional bonus of being less likely to cause string breakages because they can only tension so far before you run out of thread.

From this point the rest is simple in theory: if the string is too low in pitch (flat) you tighten it; if it is too high (sharp) you loosen it. However, putting this into practice can sometimes require a few practical skills.

Reference Pitches

In order to get your violin in tune it helps to know what you are aiming at. The four strings of the violin are G D A & E.

Here are the four notes that the violin is tuned to. G is the lowest sounding note and usually has the thickest string; with the violin placed on your shoulder it is the string furthest to the left. The strings are then arranged G D A E from left to right.

You can check these against anything that you know to be in tune, such as a piano, tuning fork or for that matter an electronic tuner.

Click the letter to hear each pitch and check that the sound coming out of your violin matches that of the clip. I have included two different types of sounds, one being a piano, given that is what is often tuned to in the ‘real world’ and the other being a sine tone which is theoretically the purest pitch reference.
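For reference, the four string pitches can be derived from the A-440 standard, since neighbouring strings sit a perfect fifth (a 3:2 frequency ratio) apart. A small sketch using pure fifths (an equal-tempered piano's fifths differ from these by a couple of cents):

```python
# Sketch: violin string frequencies derived from A440 by pure 3:2 fifths.
A4 = 440.0
FIFTH = 3 / 2

strings = {
    "A": A4,
    "E": A4 * FIFTH,       # a fifth above A
    "D": A4 / FIFTH,       # a fifth below A
    "G": A4 / FIFTH ** 2,  # two fifths below A
}

assert strings["E"] == 660.0
assert abs(strings["D"] - 293.33) < 0.01
assert abs(strings["G"] - 195.56) < 0.01
```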

Sine Tones:




Click the letter to hear the audio.

Assuming that your violin gives notes that sound like those above, then you don’t really need to read on right now; get on and play!

Tuning with adjusters

Assuming that your violin is only slightly out, it is probably easiest to tune it with the fine adjusters at the tail-piece. Pluck or bow the string to determine if it needs to be made higher or lower, and tighten or loosen the screw head of the adjuster connected to that string. It is probably better to only move it by small increments, re-checking the note between each, to prevent over-tightening. When you are happy with the note, proceed to the next string.

Tuning with the pegs

Often tuning with the pegs will be the only option. As before, it is imperative to only move the peg by small increments, re-checking the note as you go, to prevent over-tightening and snapping the string. It is best to check the pitch by plucking or bowing the string whilst turning the peg to the right position. Pegs are a very old method of controlling the tension on a string and rely on friction and the fact that they are tapered to make them work, so the trick to making them stick is to twist and push, like you would when using a screwdriver or corkscrew.

Bowing vs. Plucking

In order to see if your violin is in tune you need to hear it, often as you are tuning it. There are two ways to do this; plucking the string or bowing it. Plucking is far easier in that you can rest the instrument on your knee or on your lap and turn the peg/adjuster with one hand whilst plucking with the other. Bowing requires that you hold the violin under your chin and turn the peg/adjuster whilst bowing with your remaining free hand! This is not for the inexperienced or faint hearted and should be tried out either with assistance or over a very soft landing as if it slips from under your chin it is almost impossible to catch by yourself.

The reason why so many people bother learning how to bow and tune at once is that it is far more reliable. Not only can a bowed sound give a more sustained volume, but the plucked tone is harmonically less stable and tends to get noticeably lower in pitch as the sound decays.

However until you need the level of performance that bowing whilst tuning provides it is probably not worth the risk of dropping your violin!

Problem Solving/Other Considerations

Once you have got your strings in tune it is best to check each in turn. As the strings are all tensioned on the same instrument, the pull acts over the entire instrument, so big changes to one string can result in smaller changes to others.

If a peg does not stick it could be for three key reasons:

  1. The string is pulling the peg out. This is by far the most common! Look at where the string winds onto the peg. If the string sits significantly towards the narrow end of the taper of the peg, it will effectively be pulling the peg out of the peg box. To remedy this, simply slacken the string off by a couple of turns and then tension back up, whilst winding onto a fatter part of the taper closer to the wood of the peg box.
  2. You may not be pushing the peg far enough in, please be delicate, but it does need enough force for friction to do its thing.
  3. The peg does not fit the peg box properly, in which case ask a reputable specialist to look at it.

If you run out of thread on your adjuster, simply loosen the main peg, twist the adjuster back so that the screw thread is at about 50% depth, then take up the main slack on the tuning peg. You may have to fine-tune with the adjuster a little to complete the process.

Make sure the bridge stays vertical. Its feet should be approximately between the cross bars of the f-holes, and the top of it should be in line too. Repeated tuning and especially changing of strings tends to drag the top of the bridge up the instrument. This can be detrimental to the sound and playing and in extreme cases can lead to the bridge falling over. If your bridge does start to lean, then simply slide it back into place, loosening the strings slightly if required.

Crosswire paper and piece…



Edward Wright

Studio demo version.


This paper documents a piece of practice-based research, enquiring into the extent that certain computer programs can be regarded as musical compositions in themselves. I will argue that the works set down by composers are sets of parameters or restrictions within which performers work and create new, fresh interpretations. As a computer program Crosswire builds on these phenomena; rather than being a computer program designed as a way to facilitate the creation or execution of a piece of music, I shall argue, it is the composition, and the act of improvising upon that framework is the work’s musical realisation.


Composition has been traditionally defined as:

1a, the act of putting together, formation or construction…

2a, a literary or musical work b, the act of producing such a work.

This rather abstract definition does not adequately express the nuances of the various methods of generating and shaping material. Firstly, a composition need not be written down to be recognisable as such; a brief foray into the fields of traditional or acousmatic music will uphold this axiom. Secondly, composition defines a set of restrictions. These may be as obvious as defining which instrument should play, or the dynamics and pitches that should be performed, or they may be more subtle, such as the cultural expectation that dynamics should be performed in baroque music even though few, if any, are notated.

It would be deceptive to view composition simply as the act of setting up restrictions. It can really only be understood in the context of musical realisation and performance, and in trying to define what is or is not a composition, an antithesis may be useful. Below is a diagram of the composition-performance chain. It is greatly simplified from reality but hopefully will help to define some of the key stages.

Set definition → Realisation → Perception

‘Set definition’ covers the act and artefact of composition. Intrinsically, a (musical) composition defines a set of events that must be executed as part of its realisation. Thus a performance may be a correct realisation of the composition or alternatively may contain errors such as wrong notes. Nonetheless, a composition is rather more than a list of systematically quantified auditory events.

By its nature, a composition (especially notated works) cannot define every specific element to an infinite degree of accuracy. When asked to ‘give an A’ an oboist may play loudly or more quietly, with a greater or lesser attack, ever so slightly more sharp or flat, or possibly even sing it. Such actions would still constitute giving an A.

It becomes clear that rather than creating a defined event list, composition delineates a set of narrow pathways to which a realisation must adhere. Some of these pathways are wider than others and allow more room for interpretation or indeed error. For example, in Classical music pitch is strictly defined in terms of the correct note and intonation (vibrato being a notable exception), while volume is rather loosely defined. In punk, meanwhile, far less emphasis is placed on specific intonation, but, interestingly, glissandi form an integral part of many vocal deliveries.

Therefore composition defines a multidimensional set of musical trajectories which realisation must follow. These can shape low-level elements such as pitch, timbre and volume, but also more complex aspects such as interpretation and style. In some cases these trajectories must be adhered to very strictly and in others they simply provide a guide as to the intentions of the composer.

Based on this concept of trajectory, it becomes clear that all music is, to a greater or far lesser extent, improvised. This goes as far as apparently fixed ‘tape’ pieces, where the art of diffusion has blurred and reinforced the intended spatial and dynamic trajectories, let alone inconsistencies of speaker placement. Even if it is the case that a piece of music is improvised, it can still be viewed as composed provided that a set of parameters has been defined. If a performer were to play a Bach sonata with sympathetic dynamics even though there were none in the score it would still be Bach; equally Thelonious Monk’s jazz standard Round Midnight is recognisable from the chord progression alone, even when there is some debate as to the fine detail of what that progression is.

This suggests that rather than producing an empirical list of operations, a composition defines a set of boundaries. At a given time a realisation of these boundaries can be perceived to be adhering to the rules closely, or not, depending upon the acuity and cultural norms of the listener. When perceived through time this produces music based on a set of multi-dimensional trajectories. This is not what the line graph accuracy of notated music would seem to imply.

‘Realisation’ covers the act of taking the composition and performing it, either by traditional instrumental methods, loudspeaker diffusion or indeed simply playing back a recording. Realisation is an increasingly vital part of the communication of composition. Based upon the preceding paragraphs it is clearly fundamental to our understanding of music. There are many people who can read notated scores in their heads and gain a working understanding of the eventual sound. This may not be possible with more codified methods of sound creation such as a large script written in SuperCollider or indeed visual inspection of a simple CD-R.

‘Perception’ includes the hearing, and in many cases seeing, of the realisation of the set, by an observer. The observer may be the performer, who in turn may also be the composer, or alternatively a listener may be an audience member largely outside the rest of the chain. While there are obvious points of feedback in the composition-performance dialogue, such as the effect that a performer’s perception of their work, or their perception of an audience’s reaction to it will have on their realisation, the flow of information is overwhelmingly one way.

To illustrate this point, an instrument such as a piano would be viewed as a catalyst for realisation in the composition-performance chain. It is very unlikely that during a performance the change to a different piano would radically alter the piece of music (except to trigger a slight pause in proceedings); to swap compositions midway through, however, would be striking!

I would therefore argue that in order to judge whether an entity is a composition or not, the location of its primary function in the composition-performance chain must be determined. A piano in this instance is clearly a vehicle for realisation, even though its use may be a compositional concern in that the perception of the composition will be through a piano ‘filter’. It is primarily sited in the realisation part of the chain: consider trying to perform a piece written for piano without the instrument…


There are many examples of innovation in composition, especially in its content. Content evolutions such as the development of sonata form or the inclusion of glissandi are numerous; examples of progress in composition itself are less easy to find.

One such example would be the development of Musique Concrète. Through the development of new technology the early Concrète composers were able to work physically with analogue sound recordings. This is undeniably a different approach to composition, but it also presents us with a duality of function. Musique Concrète defines a very strict composition (being created from and set down in a recorded form), but also strongly guides its own realisation. Ironically, for a genre that promises so much freedom, with the possibility of any sound as the basis for musical material, it has become one of the least flexible performance formats. To follow the composition-perception model, the realisation is so bound up with the composition that the two are largely one and the same. Much of the ‘performance’ is carried out in the studio at the point of composition, far from the observation of the audience, leaving only the spatial interpretation through the act of diffusion. This blurring of the strict definitions between set definition, realisation and especially instrument/performer becomes more acutely apparent as we move through the last century of music making towards the present day.

In keeping with this trend, the field of Algorithmic Composition again crosses the boundaries between being a strict composition and oftentimes providing its own realisation. Whilst being a very broad category, arguably ranging from Dufay’s Nuper rosarum flores to Steve Reich’s phase works, these particular phenomena are abundantly clear in works such as Nikos Stavropoulos’ and Kingsley Ash’s Livecell. The digital implementation of algorithmic processes enables the musical results to be generated in real time and in the performance sphere. In the case of Livecell many of the compositional decisions are made by the computer (albeit guided and mediated by a controller); the resulting output is then scored by the program and fed to a ‘traditional’ string quartet for realisation. In this case there is a distinct malleability of roles. The Livecell software takes a large number of the compositional decisions and provides the score; however the controller also has a lot of control over the higher-level actions of how and when Livecell is triggered and also how it interprets its output when creating a score. In this case the Livecell program and interface partly define the composition set and are partly a realisation tool; the controller is part composer and part instrumental performer.

This blurring of composition and realisation can be seen further in Cage’s Sonatas and Interludes. The use of the prepared piano as an extended instrument is well documented but in this particular case (perhaps the most well known of the prepared piano canon) something different is happening. In this series of works Cage clearly sets out which notes and rhythms should be played, but also gives strict instructions as to how the piano should be prepared. This precise specification of the timbre and amplitude of individual notes is as much a part of composition as the marking of a given note tenuto or marcato, or the voicing between different instruments of a given chord.

The use of a piano as a basis for these pieces can cause confusion. The piano is one of the iconic instruments of western music; it has a wide (non-continuous) frequency range, it is capable of a large range of dynamics and of creating a number of polyphonic lines, and it can be controlled by one or more players. Most importantly, a wide variety of composed works can be performed on it. Fundamentally it appears to be an instrument. However, Cage’s preparation directions are so specific to his pieces that they are tied far more closely to his composition than to any transferable evolution of piano technology. It is this context specificity that keeps the piano preparations for Sonatas and Interludes part of a set of compositions rather than becoming an extended instrument.

The tension between new instrument and new composition is further demonstrated by Marco Donnarumma’s Xth Sense (hailed as the ‘world’s most innovative musical instrument’). Clearly when a new instrument is developed, new music can be composed with it, as is the case with Donnarumma’s Music for Flesh, which presented the Xth Sense in its initial form. Since then the Xth Sense has been revised and improved, as could be expected with any design. The next composition was Music for Flesh II, which is remarkable in a number of ways. Firstly, it truly sets the Xth Sense outside of the remit of composition and thoroughly into the instrumental sphere, having overcome any specificity that could have been linked to the first Music for Flesh. More interestingly, the pre-set ‘scenes’ within the software of the Xth Sense have been modified to accommodate the new work. This does not necessitate low-level changes to the program but rather a change in a set of variables and therefore a different reaction to the input from the hardware and performer. It is still debatable to what extent this is a change of ‘set definition’/composition or a modification of the instrument.

Perhaps one of the clearest deliberate breaks from the rigid differentiation of discrete composition can be found in Bloom. This was created by Brian Eno and Peter Chilvers for the iPhone/iPad and claims to be ‘part instrument, part composition and part artwork’. Within this the user has a limited control over a self-generating ambient A/V work, with the option for automated play. It is a clear example of a work that can be realised with two different, albeit virtually identical, controllers, namely the iPhone or iPad. It also provides, and can within reason modify, its compositional definition set and, through the hardware, create its own realisation.

The exploration outlined above would suggest that an instrument or sound controller can be defined as part of a composition by a specificity to a particular musical work. In the case of a traditional instrument the expressive qualities and gestural mapping can be transferred to a different definition set. This is not without limitations such as in the fields of tessitura and timbre, but within reason the instrument remains a vehicle for output realisation and is easily substituted without changing compositional content.


It is no longer possible to consider instruments and compositions as two separate entities, rather as points along a continuum. The specificity of an instrument is innately bound to the composition, a work can be composed idiomatically for an instrument, or at the opposite end of the spectrum, the construction of an ‘instrument’ (be that hardware or software) can be so symbiotically linked to a composition as to make it unsuitable for use with another work.

* * *

An entity can be viewed as part of the composition if it is active in the set definition and realisation links of the composition-performance chain, and if it is specific to that work to such an extent that it cannot be used to its full potential in a large number of other situations.


Crosswire is a software composition created to test these ideas. The set-up requires a DAW with a minimum of mono in and stereo out as well as a projector feed. An electric violin must be plugged into the mono input, with the Crosswire standalone patch running, feeding the stereo out to speakers and video out to the projector. The violinist is expected to improvise within the remit of the reactions of the patch. The audio going into the patch is processed based upon parameters defined by that audio. The performer and audience can hear the resulting audio and see the processing decisions via the projector feed. Processing decisions are taken based upon data relating to the pitch, timbre and volume content of the current sound. Some of the processing settings are implemented to be very sensitive so as to encourage variety and development in performance and, in some cases, a lack of human repeatability.

The following text is an extract from the program note to the first performance:

“Crosswire is a piece of music, manifested as a computer program. The violin part is ‘freely’ improvised, and the output of the instrument is fed into the computer. Within Crosswire the sound of the violin is analysed to provide information on: the pitch of the note being played, how loud it is, and its harmonic content. This is then used to turn on or off a number of different types of processors and form or break links between them. This is displayed to the performer and audience as a hexagonal constellation (see below). Each dot represents a processor; when a dot is small the processor is off, when it is large it is on, and lines between dots symbolise links sending audio out of one process to another.

Fig 1. Crosswire performance display.

From top left clockwise: clean output (no processing, shown here on), delays, additive synth, panning, sampler and comb filter. A number of links can also be seen, for instance from panning to the sampler and comb filter. Each of these processors is in turn controlled by the pitch/volume/harmonic analysis. In the case of the delays, the delay time and feedback amount are set in real time as a result of the analysis. Within this system the violin is played and its output is processed. The performer makes judgements as to how to progress based upon the sound coming out and the visual display, thus entering a ‘strange loop’ of interaction as the music played not only creates sonic material but also controls how that material is transformed and presented to the world.”
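The analysis-driven routing described in the program note can be illustrated in outline. The sketch below is not the actual Max/MSP patch; the thresholds, rules and function names are invented for demonstration, and only the six processor names are taken from the text:

```python
# Illustrative sketch of analysis-driven routing of the kind described
# above. NOT the actual Max/MSP patch: the thresholds and rules here
# are invented; only the processor names come from the paper.

PROCESSORS = ["clean", "delays", "additive_synth", "panning", "sampler", "comb_filter"]

def route(pitch_hz, volume, harmonicity):
    """Map one frame of (pitch, volume, harmonic content) analysis to
    processor on/off states and inter-processor audio links."""
    state = {name: False for name in PROCESSORS}
    links = []

    state["clean"] = True                      # dry signal shown as 'on' in Fig 1
    if volume > 0.5:                           # hypothetical rule: loud playing engages delays
        state["delays"] = True
    if harmonicity < 0.3:                      # hypothetical rule: noisy timbre -> comb filter
        state["comb_filter"] = True
    if pitch_hz > 660.0:                       # hypothetical rule: high register -> sampler
        state["sampler"] = True
        links.append(("panning", "sampler"))   # a link like the one visible in Fig 1

    return state, links

# One frame of analysis: high, loud, noisy playing.
state, links = route(pitch_hz=700.0, volume=0.8, harmonicity=0.2)
```

The point of the sketch is the architecture, not the specific rules: a continuous analysis stream is quantised into discrete routing decisions, which is also what makes some of the patch's behaviour so sensitive to small performance differences.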

This program note would therefore suggest that the violin part is improvised, and in some senses it is, but within a very strict, albeit reactive, framework. In another sense it simply realises and works within these parameters as mediated by the cultural norms of the performer. I would therefore argue that Crosswire sits primarily in the ‘Set Definition’ area of the chain and is therefore a composition.

Crosswire is currently implemented in Max/MSP and makes use of the fiddle~ external object.


Anecdotally Crosswire functions well as a self-contained piece. The work provides an environment in which the performer can create numerous and varied gestures. The inclusion of sampling and synthesis help to provide some sort of accompaniment or duet feel, while the comb filter extends the timbral range as well as the pitch potential. The tessitura is further extended by the use of variable playback speed within the sampler; this effect and the effects of other processors can also be heightened by the use of feedback delays. There are also a number of more global controls, for instance playing between 328-332 Hz, arguably the closest thing the work has to a key note, disengages all the processing except the clean output.
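The global control mentioned above, where playing between 328 and 332 Hz disengages all processing except the clean output, amounts to a simple band check on the detected pitch. The sketch below is illustrative only (the function and state names are invented, not taken from the patch); the band itself sits around the violin's open E string (E4, roughly 329.6 Hz):

```python
# Illustrative sketch of the global control described above: a detected
# pitch between 328 and 332 Hz disengages all processing except the
# clean output. NOT the actual patch code; names are invented.

RESET_BAND = (328.0, 332.0)  # Hz, from the text; around the violin's open E4

def apply_reset(pitch_hz, state):
    """If the detected pitch falls in the reset band, switch every
    processor off except the clean (unprocessed) output."""
    low, high = RESET_BAND
    if low <= pitch_hz <= high:
        state = {name: (name == "clean") for name in state}
    return state

state = {"clean": True, "delays": True, "sampler": True}
state = apply_reset(330.0, state)   # 330 Hz is inside the band
```

A band rather than an exact frequency is what makes the control playable at all: a violinist can reliably land within a 4 Hz window but not on a single exact frequency.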

The player therefore has control over a lot of different parameters and the potential to make a very beautiful performance, but also to create something very bad. This is important as it allows the work the flexibility to evolve and progress ‘as mediated by the cultural norms of the performer’, in that the performer can choose to perform in one way and discard other (to them less palatable) options.

The work has been performed several times and has undergone revisions and improvements. These performances have lasted between 5 and 20 minutes and so far have been exclusively in a concert environment.

An exception from the normal format of performance was as part of the ICCoaction event in Edinburgh (Scotland), for which Richard Worth composed and notated Green Man Blue for the Red Note Ensemble and Crosswire. The performance used Crosswire to process the summed microphone signals of an ensemble (violin, bass clarinet, trombone and double bass). Having initially defined Crosswire as an investigation into composition, its integration into another composition, in an event focused on composition for new instruments, seemed to pose some insurmountable problems for the underlying theory of the piece.

A composition is of a low order of flexibility, in that it exists as a relatively fixed form. Moving along the chain we are met with phenomena of increasing flexibility, such as performance and eventually perception by an audience. An instrument-performer combination is usually capable of realising more than one composition in spite of a generally rigid architecture, and is therefore possessed of a higher order of flexibility. Equally, it is quite possible to perceive numerous instruments at once. Retracing our steps, we hit a wall with human processing capacity, in that it is not possible to perceive many compositions played by multiple instruments at once.

Therefore one test of an artefact’s status as a composition would be to test its flexibility within that role. This must also be mitigated by the transferability of the digital form along the composition-performance chain irrespective of inflexibility within the specific function of composition.

As it is contradictory to say that one piece of music is a different piece of music at the same time, I would suggest that in this situation one of the following has happened: either Crosswire has changed in function to act not as a composition but as an effects processor for Worth’s composition (fitting in with the idea of transferability); or there is now an incredibly detailed transcribed improvisation based upon the Crosswire definition set; or something else is going on, such as its use as an instrument, a combination of the above, or the possibility that it never was a composition.

Starting with the easiest argument, Crosswire is not a standalone instrument in that it requires another sound source to activate it. It could be considered an extended or hyper-instrument, but the interaction between the violin input sound and the Crosswire processing combined with the precision to which some of the control data is sampled, make some actions unrepeatable with any degree of accuracy and the system too unpredictable to be regarded as a reliable instrument in the conventional performative sense.

Returning to the idea of specificity, in the case of Green Man Blue the Crosswire software and its definition set were proven to be linked to the wider Crosswire realisation environment. The nature of the written material could not fully exploit the available range of the processing. This becomes clear when considering decisions based upon pitch. Many of Crosswire’s settings are based on microtonal differentiations which make use of one of the great strengths of a solo (especially ‘improvised’) violin. This is far harder to achieve with standard notation and especially with more than one instrument. Equally, the measured time that is so helpful for group performance is not necessarily suited to working with software that operates in milliseconds and over a very fluid time base rather than in bars.

This is not to take anything away from Worth’s piece, which is a good one; rather, only a part of Crosswire was suited to his purposes. Crosswire ‘can be viewed as part of the composition [as it is] active in the set definition and realisation regions of the composition-performance chain, [and it] is specific to that work to such an extent that it cannot be used to its full potential in a large number of other situations.’

Clearly one work does not qualify as ‘a large number’ of other situations, but it is at the least a corroboration of the theory.

In conclusion the boundaries between what constitutes composition and what constitutes realisation seem to be becoming less polarised, to the point where there is room for a lot of creativity within this new space. It may not be long until it becomes meaningless to talk of composition and performance, in our traditional understanding of them, as two separate entities. Rather I predict that we will see a growing trend towards a more unified method of composition-performance as styles and facilities continue to evolve.

* * *

My thanks to Richard Worth and Lauren Hayes for their patience and help in my first attempt at making the Crosswire patch suitable for use by people other than myself.


This extract is from the definition in the Concise Oxford Dictionary.

One such example would be the recorded vocals on The Scream by Siouxsie and the Banshees.

Debate still rages as to the superiority of the Miles Davis version or the Monk version, and Cootie Williams and Bernie Hanighen have received co-credits for their contributions. The addition of numerous simplified versions further complicates the issue. However, all of these versions are popularly acknowledged as ‘Round Midnight’.

Although not necessarily ‘algorithmic’ in the modern understanding of the phrase these works rely very heavily upon the use of number and transformation as a compositional element.

This quote was said of Xth Sense as part of the Margaret Guthman Musical Instrument Competition (Georgia Tech, US 2012).

This quotation is from the iPhone ‘App Store’ marketing text.

In many cases playability of a given instrument depends upon its links to physical gestures and their repeatability. Playing a quiet note on the piano has a direct physical gesture, and there is an obvious correlation with playing the same note forte. Equally on many instruments you move in one direction to raise the pitch and in the opposite to lower it.

In this context strange loop refers to when moving up or down through a hierarchical system, one finds oneself back at the start as popularised by Hofstadter.

fiddle~ is an object for pitch following and sinusoidal decomposition and was devised by Miller Puckette; MSP port by Ted Apel and David Zicarelli.

For full details of performances see




Collins, N, Cottle, D & Wilson, S. (eds.) 2011. The SuperCollider Book. Cambridge, MA: MIT Press.

Gourse, L. 1998. Straight, No Chaser: The Life and Genius of Thelonious Monk. New York: Schirmer Books.

Hofstadter, D. R. (1999). Gödel, Escher, Bach: An eternal golden braid (20th anniversary ed.). New York: Basic Books.

Adams, R., Levitin, D. J. & McAdams, S. Control parameters for musical instruments: a foundation for new mappings of gesture to sound. Organised Sound.7(2):171 -189.

Warren, C. Brunelleschi’s Dome and Dufay’s Motet. The Musical Quarterly 59:92–105.

Donnarumma, M. Accessed on 29/5/12. Xth Sensehttp://marcodonnarumma.com/works/xth-sense/ .

Wright, E. Accessed on 1/06/12. http://www.virtual440.com.

Bloom app store page. Accessed on 14/5/12. http://www.generativemusic.com/ .

Crosswire on composers’ channel. Accessed on 2/5/12 www.youtube.com/watch?v=Hv6B4yBSwQ86.

fiddle~ Accessed on 29/5/12. http://crca.ucsd.edu/~tapel/software.html

ICCoaction. Accessed on 15/5/12. http://www.inventorcomposer.net

Livecell. Accessed on 15/2/12. http://www.youtube.com/watch?v=qHC6VE_c7H0

Round Midnight forum debate. Accessed on 10/5/12. http://forums.allaboutjazz.com/showthread.php?t=33951

Ash, K. M. & Stavropoulos, N. 2011. Livecell: Real-Time Score Generation Through Interactive Generative Composition. Proceedings of the 2011 International Computer Music Conference. Huddersfield.

Donnarumma, M. 2011. Xth Sense: researching biological sounds of human body for an experimental paradigm of musical performance. Proceedings of the 2011 International Computer Music Conference. Huddersfield.

Bach, J.S. 19–?. Sonaten und Partiten, violin BWV 1001-1006 . London : Augener, score 7943.

Cage, J. 1960. Sonatas and Interludes: prepared piano. New York: Henmar Press, score Edition Peters 6755.


Thelonious Monk. 1944. Round Midnight. On Round Midnight (1999 re-issue). Spain: Definitive, DRCD 11133 – CD.

Siouxsie and the Banshees. 1978. Jigsaw Feeling. On The Scream. London: Polydor POLD5009 – Vinyl.