This week marks four years since I started Oscilloscope, a monthly event focusing on experimental electronic music. I am writing this as Wales is under lockdown due to the Covid-19 pandemic and, as such, any form of gathering is impossible.
Over the last four years we have been completely voluntary, surviving on donations of time, energy, talent and sometimes even performance space. It has been an amazing experience. With a growing team of organisers, a supportive and articulate audience, and performances by over 50 different artists from at least 12 different countries producing genuinely world-class work, I am truly humbled.
I could go on singling out highlights and kindnesses, but that would seem to neglect those not mentioned, and perhaps I will return at a later date to update this blog and expand on some of those memories. At this stage, though, I think the anniversary double bill of streams, put together with next to no budget and a lot of goodwill on the last day of June and the 1st of July 2020, speaks for itself.
Thank you to everyone in these videos of the two live streams from 30/6/20 and 1/7/20, and a massive thank you to all those involved in the four years leading up to them.
It is common for composers to consider their work through a number of dimensions or axes; examples can be found in fugue technique, Wishart’s lattice model (Wishart, 2002) or, more extremely, embodied throughout the methodology of 12-tone serialism (Schoenberg and Stein, 1950).
This planar thinking lends itself to the invention of various compositional techniques, from simply running variables in retrograde fashion through to the nested recursion of the Art of Fugue (Bach, 1750) and Markov-chain compositional techniques (Roads, 2000). This is often linear, looking at a set of pitches or rhythms; interrogating and manipulating a stream of data points. From there it is only a small step to link extra information to this data. For instance, at a given time a certain note could be played at a specified volume by a given instrument (Boulez, Cage and Nattiez, 1999). Thus, in computing terms, we arrive at a two-dimensional array of data, something potentially very close to a square if you were to tabulate it.
With the addition of other data and relationships comes the opportunity for higher dimensions within this array, popping an extra axis into a cube, followed by a tesseract, up to the point at which it would be difficult to perceive within an audible work.
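As a sketch of this idea (the note values below are hypothetical, chosen only for illustration), such linked parameters tabulate naturally as a small two-dimensional array:

```python
# Hypothetical note data tabulated as a two-dimensional array:
# each row is one event, each column a linked parameter.
notes = [
    # (time_s, midi_pitch, volume, instrument)
    (0.0, 60, 96, "violin"),
    (0.5, 67, 80, "flute"),
    (1.0, 64, 64, "violin"),
]

# A linear technique such as retrograde is then just a reversed
# traversal of one column of the array.
retrograde_pitches = [pitch for _, pitch, _, _ in reversed(notes)]
```

Adding further columns (articulation, spatial position, and so on) is what pushes the array toward the cube and tesseract described above.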
What if, rather than dealing with data points the composer was able to manipulate the dimensional space of the larger work?
Precession is an exploration of these ideas. It draws on the pitch-volume-timbre space defined by Wishart but folds in one side of the ‘cube’ to form a pyramid. Working from the assumption that pitch is mapped to x, volume to y and timbre to z, notes can be placed anywhere within the body of the pyramid. This simple translation brings a great deal of creative perturbation to the system: when any datum is near the apex of the pyramid (z axis), its potential placement within the two remaining axes (x, y) across the square base is severely limited.
Where this becomes especially interesting is being able to roll the pyramid like a die within compositional space. If the viewpoint remains fixed, with x for pitch and so on, any movement of the pyramid re-voices the notes therein, potentially in strikingly different ways. Thus, phrases are cast and re-cast within the rolling die of the shape, causing precessionary transformations of the material.
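A minimal sketch of the pyramid constraint (the coordinates and function name here are my own illustration, not the code behind Precession): with a unit square base and the apex at z = 1, the admissible pitch/volume cross-section shrinks as timbre approaches the apex.

```python
# Illustrative only: x = pitch, y = volume on a unit square base; z = timbre,
# with the apex of the pyramid at z = 1 above the centre of the base.
def in_pyramid(x, y, z):
    """True if the point lies inside the square-based pyramid."""
    if not 0.0 <= z <= 1.0:
        return False
    half = (1.0 - z) / 2.0   # half-width of the square cross-section at height z
    return abs(x - 0.5) <= half and abs(y - 0.5) <= half
```

At the base (z = 0) any pitch/volume placement passes; near the apex almost none do, which is the "creative perturbation" described above. Rolling the pyramid then amounts to rotating the solid before testing the same fixed axes against it.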
Instrument design realised in SuperCollider
Visual components and data transformation in Processing
Composition and array transformations in Excel
System communication via OSC
Edward Wright 3/12/19
Bach, J.S. (2013). The Art of the Fugue & A Musical Offering. Composed c. 1740-50. Newburyport: Dover Publications.
Boulez, P., Cage, J. and Nattiez, J. (1999). The Boulez-Cage correspondence. Cambridge [England]: Cambridge University Press.
Roads, C. (2000). The computer music tutorial. Cambridge, MA: The MIT Press.
Schoenberg, A. and Stein, L. (1985). Style and idea. First published 1950. Berkeley: University of California Press.
Wishart, T. (2002). On sonic art. London: Routledge.
A quick chat given before a performance at Modulate, 20/11/19
A few months ago a friend lent me a book he thought I might like to read. It was obviously a precious thing, and the worry that it was nearly falling apart was only emphasised by the fact that its cover declared it had once cost two shillings and sixpence.
It turned out to be written by Fred Hoyle, whom some of you may have heard of as one of the ‘grand old men’ of UK astronomy, although sadly today he is mostly remembered for his opposition to the then fledgling concept of the Big Bang. Fewer people are aware that he also used to write sci-fi, and at his best was compared favourably to Arthur C. Clarke and Isaac Asimov. He was sadly not as consistent, but on the flip side Asimov and Clarke were never knighted for services to astronomy either.
So I started reading the book, in a state of terror that it might fall apart on me at any moment and constantly envisaging having to return it in several pieces. After a while the fear got the better of me and I downloaded the audiobook, listened to that, and having finished it (so that I could answer the relevant questions that might be asked by the owner) returned the book in one piece.
After that it fell off the radar for a while, until another friend lent me a Kaoss Pad. For those who don’t know what that is, it is essentially a touch pad that you can use to sample and process live sound. So there I was, sat at the living room table with the manual on my phone, trying to figure this piece of kit out.

I needed some sort of audio to feed in, so, as my phone was in my hand, I just used the first thing that came up, and that turned out to be the audiobook of The Black Cloud by Fred Hoyle.
It already had a slightly 1960s UK B-movie quality to it, but when you start messing around with the sound and doing basic synthesis with it, it lands emphatically in Radiophonic Workshop, Delia Derbyshire retro-sci-fi-horror territory, which was intriguing.
So what you are going to hear this evening is a half-hour abridged version of the text, which I edited down and built a performance around.
While editing the recording of the voiceover I also did some sound design on the computer, generating a few bits that would be impractical to create in real time.
Ironically, a lot of this was done in SuperCollider, which is a text-based programming environment, the name of which is just so close to the research done at CERN into the fundamental physics of the universe that it feels completely appropriate to work into the mix.
So I had two streams of sound: the voiceover and the SuperCollider material. For ease of use I recorded them to a single audio file. Normal stereo files carry two streams of information, one for the left speaker and one for the right, and this gives spatialization between the two. In this case I recorded the voiceover effectively to one speaker and the SuperCollider to the other; not so much left and right as synchronous A and B. This means I can isolate and play with the two things separately or in different ways while still being able to simply play the file out from a normal mp3 player.
Alongside that I have also got the modular synth setup, which is smaller than many you see, and this is for several reasons. The first is lack of money! Sometimes it is hard to justify spending on something when you could download a free app and get very similar sounds. The second is that I am interested in the ‘live-ness’ of performance and I can only control so many things at once. Finally, and I think this is more important, is the issue of combinations.
As a teenager I had a couple of electric guitar stomp-boxes hooked up to an electric violin and became amazed by the delay pedal and the differences you could achieve with only three variables: delay volume, number of repeats and delay time. It is a bit like trying to work through a combination lock when you have forgotten the code; there may only be four digits, but after 0000, 0001, 0002… you quickly realise that there are a lot of options. So what fascinates me about modular kit is really delving deep, getting to know each component and finding a few unexpected or elegant solutions and sounds along the way.
So, we have the two audio streams, the voiceover and the SuperCollider, as well as the synth, going into the desk. These can be routed out, either together or individually, to the Kaoss Pad for sampling and transformation, which in turn goes back into the desk. All of this then goes out to the speakers, and I will attempt to make this into a cohesive performance.
I am going to play straight through as a half hour set so grab a drink, sit back and enjoy!
I have already written about some of the creative aspects of Turbo, the piece of music I am currently working on, at https://edvirtual440.wordpress.com/2019/07/22/turbo/, but following interest from a number of people, here is a deeper dive into the mathematics of the piece.
Under the gloss and shine of the main body of the work there is a continuous recording of a Ferrari 412 T2 F1 car. The recording starts played back very slowly, gradually getting quicker until it is at normal (breakneck!) speed at the end. Artistically this was done to draw the listener in; there is a slow but impending inexorability about it. But I wanted it to work creatively as more than just a cheap trick.
The playback speed is modelled on a golden spiral (i.e. a logarithmic spiral whose growth factor is φ, the golden ratio), a design found not only in some engine turbos but also in the humble snail from which the component took its name.
A golden spiral with initial radius 1 has the polar equation r = φ^(2θ/π), where θ is the angle in radians.
If one were to travel inwards on such a spiral at a constant speed, the time it takes to describe a full rotation around the centre decreases. While your speed would stay the same, in terms of RPM you would be accelerating. The manipulated Ferrari recording follows this pattern, making three full rotations of the spiral and appearing to get faster towards the centre at the end of the piece.
At a simple level digital audio works a lot like animation. A snapshot of a soundwave is taken numerous times a second (44100 for CD quality) and these snapshots are essentially then played back in order to create a close likeness to the original continuous sound. Just like animation and flip-books you can change how fast you leaf through the individual ‘pages’, and thus gain mastery of speeding up, slowing down or even appearing to reverse time.
The piece of music was to be 8 mins 39 seconds long. I wanted the Ferrari to be at normal playback speed at 8:38. Hence it was a task of mapping the sample rate to time or its equivalent distance along the spiral. Looking at it backwards I was aiming for a sampling rate of 44100 Hz at 8:38 with that rate decreasing by the proportions of the golden spiral as the time elapsed from the beginning of the piece of music trended to 0.
Several cups of tea later I came up with the following, which gives sample rate (Hz) relative to time (s). This is copied from the code, which I had left deliberately un-simplified so that I could still almost see what I was doing!
This resulted in a gradual and inevitable winding up of speed, but it was still very ugly, not to mention the processing power it drew before even handling audio. It turns out that trying to feed a digital-to-analogue converter floating-point values for sample rate (as opposed to integers) is not a great idea and leads to clicks and glitches. This brought down my need for fine-grained resolution by a significant factor. In addition, a difference of plus or minus half a sample is inaudible when 44100 are going past in a second!
This meant that rather than calculate it all on the fly, I stuck the formula into an Excel spreadsheet and told it to calculate relative to the rows while rounding to the nearest integer; hence an instant lookup table with a revised sample rate for every second. Incidentally, it started off *really* slowly, at 221 samples in the first second!
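The original un-simplified formula is not reproduced here, but the lookup-table idea can be sketched under one simplifying assumption: that the sample rate tracks the golden-spiral radius r = φ^(2θ/π), with the angle increasing linearly through three full rotations so that full speed lands at 8:38. (This toy model starts at about 137 samples per second rather than the 221 of the actual piece, so the real curve clearly differed in detail.)

```python
import math

# Sketch only, NOT the piece's actual formula: assume the playback sample rate
# tracks the golden-spiral radius r = phi**(2*theta/pi), with theta increasing
# linearly through three rotations (6*pi radians) to reach full speed at 518 s.
PHI = (1 + math.sqrt(5)) / 2    # golden ratio, ~1.618
FULL_RATE = 44100               # CD-quality sample rate in Hz
T_END = 518                     # seconds (8:38), where playback hits full speed

def rate_at(t):
    """Sample rate in Hz after t seconds, scaled so rate_at(T_END) == FULL_RATE."""
    theta = 6 * math.pi * (t / T_END)                     # angle travelled so far
    return FULL_RATE * PHI ** ((2 / math.pi) * (theta - 6 * math.pi))

# One rounded integer rate per elapsed second: the Excel-style lookup table.
table = [round(rate_at(t)) for t in range(T_END + 1)]
```

Rounding to integers per second is exactly the step that sidestepped the floating-point sample-rate glitches described above.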
I then cooked up a simple playback-and-record program (it has had two incarnations, initially built in Max/MSP and later SuperCollider due to cross-platform issues). It takes the Excel table, looks at the values over time, interpolates between the one-second breakpoints and applies this to the playback speed of the audio, while simultaneously recording the output to disk.
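The ramping between breakpoints is plain linear interpolation; a sketch (the function name and example table are mine, the real program being Max/MSP and later SuperCollider):

```python
def rate_between(table, t):
    """Linearly interpolate a playback rate between whole-second breakpoints."""
    i = int(t)
    if i >= len(table) - 1:
        return float(table[-1])          # hold the final rate past the last breakpoint
    frac = t - i                         # position between breakpoints i and i+1
    return table[i] + frac * (table[i + 1] - table[i])

# e.g. halfway between breakpoints of 200 Hz and 300 Hz:
halfway = rate_between([100, 200, 300], 1.5)   # 250.0
```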
The net result is a lookup table and a bit of ramping between values; far more computationally efficient than the junk equation above! Sonically it produces a seamless transition from inaudible, through rumbling sub-bass and murmuring textures, up to an urgent build and finally a howling drive-by.
Turbo is a piece of music which I have been working on for a while. It draws on three separate sound sources, namely two Formula One cars which raced when I was young, the Lotus 97T and the Ferrari 412 T2, as well as a Scalextric set. These three sources form the basis of the track, with the remaining sounds derived from them or synthesised from first principles.
The engines of that era sounded very different. It was a less regulated time, before the advent of widespread electronics, the introduction of hybrid engines or the level of concern about petrochemicals and their environmental impact. While there have been many changes for the better over the last thirty years in F1, the sound of the grid is not one of them.
While working with the audio to create something musical rather than a historical document, several things jumped out at me. Firstly, it is amazing how melodic these vehicles are. From the chord of D and A created by a snapshot sample of a car in the distance, to the almost choir-like textures created by zooming right in on only a very narrow set of frequencies, it really should not have been a surprise how ‘tuned’ race cars are. Secondly, the sounds are highly cyclical, but in an evolving way which can be exploited not only to find glitchy textures but also to build on and augment them.
As well as drawing on the sounds of the turbo engine the idea of a turbo spiral extends into the structure of the work. There are four main sections to the music which repeat and develop with increasing rapidity. If you travelled around a spiral at a constant speed you would find a new section of music every ninety degrees. As the spiral tightens the sections become shorter, leading to a whirling inexorability in the final stages.
This idea of structural acceleration extends to a Ferrari engine sound heard throughout the work. It starts very low and slow; five-thousandths of its original speed and then gradually ramping up to become audible, first as a bassy rumble before transforming into glitchy background antics before flying past at full speed at the end.
Creating this piece has been an interesting challenge; trying to balance something which is intellectually rigorous (e.g. the golden-spiral mapping and computing), along with creating something which is engaging to listen to. I have tried to convey something of the excitement and nostalgia the source sounds bring to me, but also that tranquil space that you find in split seconds of extreme focus, the loudness and brutality of the spectacle, and hopefully something beautiful.
Turbo is dedicated to my Dad who was always a massive fan of motorsport, particularly Formula One and occasionally Scalextric.
Available online on iTunes, Spotify etc. from 15th October 2019.
I have been asked to go and perform some music at the New York City Electroacoustic Music Festival in July. In terms of pure fun, career progression, networking and exposure it looks like it could be astounding! To give this a bit of context, I live in Wales (UK).
Earlier this year I was commissioned by Bangor Music Festival to compose and perform a piece of electroacoustic music / sonic art. This concert went really well and the piece is now available on iTunes, Spotify etc and is beginning to get noticed. The work was originally composed in 8 channel surround sound and is designed to be diffused in performance on even larger 3D sound arrays.
Above is a stereo mix down of the piece and here is a bit of writing about it.
I have now been asked to give the work a repeat performance on the other side of the Atlantic. My performance would consist of diffusing the piece at the Abrons Arts Centre, New York (466 Grand St, Lower East Side), using their 16-channel surround-sound auditorium. This will take place as part of a week-long series of concerts which also includes practice and rehearsal time in the venue. This will not only further develop my creative and performance skills, but will give me the opportunity to learn from, perform to and engage with world experts in the field, while sharing a tiny bit of UK culture with the world at large.
The festival is happy to provide accommodation etc. but I would need to cover my travel and subsistence.
At the moment I can’t afford to get there. I have applied for various arts/music/culture grants but don’t know if I will be successful. So I am also attempting to crowdfund my way to (and from) New York with a little help from friends, fans and family, in case the formal funding doesn’t come through 😉 The Crowdfunding link is here: https://www.crowdfunder.co.uk/trying-2-fund-flights-2-play-a-concert-in-new-york/. If you have a few pounds to spare it would be hugely appreciated!
It would mean a massive amount, both on a personal (especially if you read the blog entry about the piece) and professional level if I could get there.
Recently I was given a mixtape on cassette, the first ‘new’ cassette to come into my possession in well over a decade. The sentiment was amazing and it is a wonderful thing, but listening to it, the sound quality was less than great, being a generational copy compiled from several other tapes and recordings from the radio. Oddly, the fluctuations in pitch, EQ colouration and stop-start edits did not seem to matter: in the intervening years they have taken on a charm, linked to a time, objects and a set of actions which have been technologically superseded, and that now makes them a creative choice.
The wow and flutter on the tape have become like the adze marks on the beams of rustic cottages, once rough but now a mark of the time, the technology and the interaction between human intent and facilitation.
Artefacts is a piece of music made by pushing digital and analogue audio equipment to the extremes of its range while feeding it a “silent” or null signal. After repeating this process a number of times, the quirks of the system gradually come into focus, with the glitches and biases of the equipment amplified by repetition, sometimes with surprising results. These experiments were recorded and then used as material to create this piece.
In about September I was commissioned by Bangor Music Festival to compose a piece of electroacoustic music for their February 2018 event, along with a series of education workshops. I really wanted to do this and it was looking like it was going to be an amazing autumn and early spring of creating cool stuff and having fun; then the floor almost literally gave way.
Following a period of ill health my Dad took his own life in mid October, and unsurprisingly this hit me really hard. It is not so much the sadness which is debilitating but the feelings of numbness, rage and lethargy that suck the capacity for creativity away. My Dad and I got on really well; he was a role model and someone who had a massive influence on me throughout my life, and when something so seemingly at odds with everything you have ever known happens, all the basic assumptions you make in life come into question. I would even look at my feet when walking down stairs, not through shock or physical instability but because I no longer trusted the assumption that I knew where my feet and the steps were. It was certainly no mindset in which to take creative decisions; they are so vague, so intangible and impossible to verify that the simplest starting impetus threw up paralyzing indecision.
It was at this point that I sadly informed Guto, the festival director, that I couldn’t fulfil the commission. I have never had to do this before and it left me feeling awful, but also slightly relieved. There followed a period of calm; I got back to doing some work and managed to come off the antidepressants (citalopram) which had been prescribed to help me sleep, level me out and stop me catching things out of the corner of my eye. In late December I got a phone call from Guto offering to take some of the festival work back, but once again asking if I would like to compose ‘something’ for the closing concert.
I find it really hard to sit down and just make noises or compose. Some people start from an initial sound or a feeling; I tend to find some sort of idea or framework to hang something on and can then go from there. I thought about this for about 24 hours; it was an incredibly kind offer which Guto had made, and my head was clearing. I went for a run in the hills. It happened to be early, as I wanted to make the summit of a mountain near us to catch the winter solstice sunrise, and on the way up the ideas just struck me.
The theme of the event this year is space, and I am happy to say that the work shared a stage with the Birmingham Ensemble for Electroacoustic Research (BEER). BEER had worked in collaboration with the Art@CMS project at CERN in Switzerland, using real-time sonification of data streams from the Large Hadron Collider, the world’s largest and most complex particle accelerator. This is something it would be foolish to compete against; that, and the fact that I literally have Scott Wilson (of BEER)’s book on coding in SuperCollider sat on my desk. Thus I chose to take a different tack: rather than approach it from an analytical and scientific angle, I went for something closer to home.
Space To Think – Ed Wright 2018 (8 channel surround sound audio)
A lot of what is in the popular imagination about space and space travel is precisely that: imagination. From the Barrons’ Forbidden Planet through to the electronic squelch of radio communication, a lot of what we think of as space-related is a very human construct. What fascinates me is how much of what we believe sounds as if it comes from outer space or under the sea (or, for that matter, any environment outside our direct experience) is actually a result of dubbing and sound design in the media. As a culture we have bought into the idea of rockets rumbling as they go past, even though there may be almost nothing in the void of space to transmit the sound, and the glockenspiel twinkle of stars is almost as real as the piano-wire scrape of the Tardis. This provides a fantastic palette of subverted fantasy with which to create and explore a rich and varied sound world. Apart from the use of two pieces of NASA archive, launch and countdown, the rest of the sounds used I have recorded and shaped myself.
Great delight was taken in recreating a few iconic ‘other-worldly’ sound objects and effects along the way: 50 pence pieces were rubbed down piano strings to provide the basis for a Tardis noise before looping and adding spring reverb, humming strip lights were close-miked to create lightsabres, and some generative coding brought about drone progressions similar to the opening of Star Trek. These and many other sounds were used as the raw materials of the piece and then developed as the different timbres interact and evolve. The result is an acousmatic work utilizing a wide variety of sounds, from analogue synthesis through simple dislocation and out to generative algorithms, creating an apparently extra-terrestrial environment in which our earthbound ears and minds can roam.
Many thanks to Guto Puw and the Bangor Music Festival for their kindness, understanding and faith.
Ricercar was conceived as a way of creating evolving polyphonic textures in a ‘solo’ violin piece. It uses a MIDI foot controller (currently a Behringer FCB1010) to control software created specifically for this piece, into which an electric violin is fed.
Testing at home before the first performance: foot controller, violin and laptop visible in the chaos of pre-event preparations
The work is essentially fugal, relying on live sampling to create the layers around the instrumental part, and uses a three-layered sequencer to achieve this. Each layer contains a series of buffers which can be recorded into. The top layer has seven 5-second samples, the middle five 17-second samples and the lower three 71-second samples. These are all prime durations, so when left to their own devices they will take a long time to repeat in precisely the same configuration. A foot controller triggers when to start recording and which buffer to record to or overwrite; thus it is possible to gradually build up a four-part texture using the three sampler ‘lines’ and the live violin.
Overlapping prime blocks 5,17 and 71
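As a quick illustration of why the prime durations matter: if each layer simply cycled through its buffers in order, the three layer cycles (7×5 s, 5×17 s and 3×71 s) would only realign after their least common multiple.

```python
from math import lcm

# Full cycle length of each layer in seconds: 7 five-second buffers,
# 5 seventeen-second buffers and 3 seventy-one-second buffers.
layers = [7 * 5, 5 * 17, 3 * 71]        # 35, 85 and 213 seconds

period = lcm(*layers)                   # seconds before all three layers realign
print(period, round(period / 3600, 1))  # 126735 seconds, roughly 35 hours
```

Over 35 hours before the pattern repeats, and that is before the random skipping and re-pitching described below are even taken into account.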
In itself this is passable, but could be rather recursive and very quickly get repetitive unless loops are overwritten regularly, which in itself could become an oft-repeated task. This has been overcome in two ways. The first is that each time a sample is due to be played there is a 50% chance that it will not be, in which case the sequencer just moves on, waits until the next step is ready and then re-rolls its virtual dice. Scaled up over the three layers of fast, medium and slow samples, this means that there will probably always be something playing (depending on how many samples have had audio recorded into them!) but the thickness of the texture will vary. The second way that diversity is added is by changing the playback speed of each sample. As a sample is triggered, the sequencer decides what pitch/speed to play the audio at. This varies over four octaves (two up and two down) and results in a shift in playback of between quarter speed and four times as fast. The speed changes are in discrete equal-tempered steps, and thus a broader sense of harmony and of polyphonic/contrapuntal line can be achieved.
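The two randomising decisions can be sketched like this (a Python illustration with names of my own invention; the actual instrument is built in SuperCollider):

```python
import random

SEMITONE = 2 ** (1 / 12)   # one equal-tempered step

def sequencer_step(rng=random):
    """One step: maybe skip (50% chance), else pick a playback-rate multiplier."""
    if rng.random() < 0.5:
        return None                      # silent step; the sequencer just moves on
    steps = rng.randint(-24, 24)         # up to two octaves down or up, in semitones
    return SEMITONE ** steps             # multiplier between 0.25x and 4x speed
```

Because the multiplier is always a whole number of equal-tempered semitones, the re-pitched loops stay in tune with the live violin rather than drifting into arbitrary detunings.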
In addition to this audio-focused processing there are two global controls. The first is how fast the sequencer triggers each step. This in itself does not affect the length of individual samples, so at extreme speeds loops will overlap in a densely fired texture, or conversely there may be a lot of space and silence at very slow speeds. The last of the controls is playback volume, which when used carefully can not only help to balance the live and processed material but can greatly increase the sensation of articulation and dialogue.
Within this framework the performer is free to improvise, guided by the constraints and freedoms of the instrument, the hardware and software, and themselves.
Video extract taken by Diane Evans at OscilloScope 28/6/15