The Black Cloud – video

A video of a performance on 20/11/19.

For more information about the piece itself please have a look at my previous post 🙂

Many people ask about the kit involved, so here is a quick rundown of what was on the table, if you feel the need to geek out completely!

1. There is a synth which I put together, comprising the following modules:

Manufacturer | Module | Description
2hp | Freez | Voltage-controlled locked looper
2hp | TM | Probabilistic random sequence generator
Doepfer | A-110-1 | VCO
Doepfer | A-124 SE | Wasp 12 dB/oct multimode filter
Doepfer | A-140 | ADSR envelope generator
Doepfer | A-145 | Low frequency oscillator
Erica Synths | Pico CV MIX | 3-channel DC-coupled mixer
Erica Synths | Pico DSP | Custom stereo effects
Expert Sleepers | Disting mk4 | Many-in-1 multifunction module
Gershom Electronics | Trash-80 | One-of-a-kind evolving generative FM
Make Noise | Function | Function generator
Manhattan Analog | CVP | Control voltage processor
Mutable Instruments | Ears | Contact microphone
Mutable Instruments | Veils | Quad VCA

2. This was processed with a Korg Kaoss Pad KP3

3. The audio book sections and prepared sounds were played from a Zoom H2

4. The audio was routed through a Soundcraft Spirit M12 (which was overkill but does have busses and direct sends!)


Prologue To A Performance

A quick chat given before a performance at Modulate 20/11/19

A few months ago a friend lent me a book he thought I might like to read. It was obviously a precious thing, and the worry that it was nearly falling apart was only emphasised by the fact that its cover declared it had once cost two shillings and sixpence.

It turned out to be written by Fred Hoyle, whom some of you may know as one of the ‘grand old men’ of UK astronomy, although sadly today he is mostly remembered for his opposition to the then fledgling concept of the Big Bang. What fewer people are aware of is that he also used to write sci-fi, and at his best he was compared favourably to Arthur C Clarke and Isaac Asimov, although sadly he was not as consistent. On the flip side, Asimov and Clarke were never knighted for services to astronomy either.

So I started reading the book, in a state of terror that it might fall apart on me at any moment and constantly envisaging having to return it in several pieces. After a while the fear got the better of me and I downloaded the audiobook, listened to that, and having finished it (so that I could answer the relevant questions that might be asked by the owner) returned the book in one piece.

Kaoss Pad KP3 holding court in the kitchen

After that it fell off the radar for a while, until another friend lent me a Kaoss Pad. For those who don’t know what that is, it is essentially a touch pad that you can use to sample and process live sound. So there I was, sat at the living room table with the manual on my phone, trying to figure this piece of kit out.

I needed some sort of audio to feed in, so, as my phone was in my hand, I just used the first thing that came up, and that turned out to be the audiobook of The Black Cloud by Fred Hoyle.

It already had a slightly 1960s UK B-movie quality to it, but when you start messing around with the sound and doing basic synthesis with it, it lands emphatically in Radiophonic Workshop, Delia Derbyshire retro-sci-fi-horror territory, which was intriguing.

So what you are going to hear this evening is a half hour abridged version of the text which I edited down and built a performance around.

While editing the recording of the voiceover I also did some sound design on the computer, generating a few bits that would be impractical to create in real time. Ironically, a lot of this was done in SuperCollider, a text-based programming environment whose name is so close to the research done at CERN into the fundamental physics of the universe that it felt completely appropriate to work it into the mix.

SuperCollider

So I had two streams of sound: the voice-over and the SuperCollider material. For ease of use I recorded them to a single audio file. A normal stereo file carries two streams of information, one for the left speaker and one for the right, which gives spatialization between the two. In this case I recorded the voice-over effectively to one speaker and the SuperCollider material to the other, not so much left and right as synchronous A and B. This means I can isolate and play with the two things separately or in different ways, while still being able to simply play the file out from a normal mp3 player.
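If you are curious what that A/B trick looks like in practice, here is a minimal SuperCollider sketch of the idea; the file name is just a placeholder, not my actual recording.

// 1. read the two-channel A/B file (placeholder path)
b = Buffer.read(s, "black_cloud_AB.wav");

// 2. once loaded, play each channel on its own so the two streams stay independent
//    channel 0 = voice-over, channel 1 = SuperCollider material
~voice = { PlayBuf.ar(2, b, BufRateScale.kr(b), loop: 1)[0] ! 2 * 0.8 }.play;
~sc    = { PlayBuf.ar(2, b, BufRateScale.kr(b), loop: 1)[1] ! 2 * 0.8 }.play;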

Alongside that I have also got the modular synth setup, which is smaller than many you will see, and this is for several reasons. The first is lack of money! Sometimes it is hard to justify spending on something when you could download a free app and get very similar sounds. The second is that I am interested in the ‘live-ness’ of performance, and I can only control so many things at once. Finally, and I think this is more important, is the issue of combinations.

As a teenager I had a couple of electric guitar stomp-boxes hooked up to an electric violin and became amazed by the delay pedal and the differences you could achieve with only three variables: delay volume, number of repeats and delay time. It is a bit like trying to work through a combination lock when you have forgotten the code: there may only be 4 variables, but after 0000, 0001, 0002… you quickly realise that there are a lot of options (10,000 of them, in fact). So what fascinates me about modular kit is really delving deep, getting to know each component and finding a few unexpected or elegant solutions and sounds along the way.

So, we have the two audio streams, the voice-over and the SuperCollider material, as well as the synth, going into the desk. From there they can be routed out, either together or individually, to the Kaoss Pad for sampling and transformation, which in turn goes back into the desk. All of this then goes out to the speakers, and I will attempt to make it into a cohesive performance.

A very messy rehearsal in progress!

I am going to play straight through as a half hour set so grab a drink, sit back and enjoy!

Footage from the event

Space To Think

In about September I was commissioned by Bangor Music Festival to compose a piece of electroacoustic music for their February 2018 event, along with a series of education workshops. I really wanted to do this and it was looking like it was going to be an amazing autumn and early spring of creating cool stuff and having fun; then the floor almost literally gave way.

Following a period of ill health my Dad took his own life in mid October and, unsurprisingly, this hit me really hard. It is not so much the sadness which is debilitating but the feelings of numbness, rage and lethargy that suck the capacity for creativity away. In my case my Dad and I got on really well; he was a role model and someone who had a massive influence on me throughout my life, and when something so seemingly at odds with everything you have ever known happens, all the basic assumptions that you make in life come into question. I would even look at my feet when walking down stairs, not through shock or physical instability but because I no longer trusted the assumption that I knew where my feet and the steps were. It was certainly no mindset in which to take creative decisions; they are so vague, so intangible and so impossible to verify that the simplest starting impetus threw up paralyzing indecision.


It was at this point that I sadly informed Guto, the festival director, that I couldn’t fulfil the commission. I have never had to do this before and it left me feeling awful, but also slightly relieved. There followed a period of calm; I got back to doing some work and I managed to come off the antidepressants (citalopram) which had been prescribed to help me sleep, level me out and stop me catching things out of the corner of my eye. In late December I got a phone call from Guto offering to take some of the festival work back, but once again asking if I would like to compose ‘something’ for the finishing concert.


I find it really hard to sit down and just make noises or compose. Some people start from an initial sound or a feeling; I tend to find some sort of idea or framework to hang something on and then go from there. I thought about this for about 24 hours; it was an incredibly kind offer which Guto had made, and my head was clearing. I went for a run in the hills. It happened to be early, as I wanted to make the summit of a mountain near to us to catch the winter solstice sunrise, and on the way up the ideas just struck me.

The theme of the event this year is space, and I am happy to say that the work shared a stage with the Birmingham Ensemble for Electroacoustic Research (BEER). BEER had worked in collaboration with the Art@CMS project at CERN in Switzerland, using real-time sonification of data streams from the Large Hadron Collider, the world’s largest and most complex particle accelerator. This is something it would be foolish to compete against; that, and the fact that I literally have Scott Wilson (of BEER)’s book on coding in SuperCollider sat on my desk. Thus I chose to take a different tack and, rather than approach it from an analytical and scientific angle, I went for something closer to home.

Space To Think – Ed Wright 2018 (8 channel surround sound audio)

A lot of what is in the popular imagination about space and space travel is precisely that, imagination. From the Barrons’ Forbidden Planet through to the electronic squelch of radio communication, a lot of what we think of as space related is a very human construct. What fascinates me is how much of what we believe sounds as if it comes from outer space or under the sea (or for that matter any environment out of our direct experience) is actually a result of dubbing and sound design in the media. As a culture we have bought into the idea of rockets rumbling as they go past, even though there may be almost nothing in the void of space to transmit the sound, and the glockenspiel twinkle of stars is almost as real as the piano wire scrape of the Tardis. This provides a fantastic palette of subverted fantasy with which to create and explore a rich and varied sound world. Apart from two pieces of NASA archive, the launch and the countdown, the rest of the sounds used I recorded and shaped myself.

Great delight was taken in recreating a few iconic ‘other-worldly’ sound objects and effects along the way: 50 pence pieces were rubbed down piano strings to provide the basis for a Tardis noise before looping and adding spring reverb, humming strip lights were close-miked to create light-sabres, and some generative coding brought about drone progressions similar to the opening of Star Trek. These and many other sounds were used as the raw materials of the piece and then developed as the different timbres interact and evolve. The result is an acousmatic work utilizing a wide variety of sounds, from analogue synthesis through to simple dislocation and out to generative algorithms, creating an apparently extra-terrestrial environment in which our earthbound ears and minds can roam.
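For a flavour of what that kind of generative drone coding can look like, here is a loose SuperCollider sketch; it is purely my own illustration, not the code used in the piece. A low root note steps to a new value every so often, an open fifth/octave voicing sits above it with a gentle detune, and a slow filter sweep keeps it evolving.

(
{
    // step to a new low root note roughly every 20 seconds
    var root  = TChoose.kr(Impulse.kr(1/20), [38, 40, 43, 45]);
    // open fifth/octave voicing above the root, gently detuned
    var freqs = (root + [0, 7, 12, 19]).midicps * LFNoise2.kr(0.1 ! 4, 0.003, 1);
    var sig   = Mix(Saw.ar(freqs, 0.05));
    // slow filter sweep to keep the drone moving
    LPF.ar(sig ! 2, LFNoise2.kr(0.05).range(300, 1200));
}.play;
)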


Many thanks to Guto Puw and the Bangor Music Festival for their kindness, understanding and faith.

In memory of Brian Wright

Ricercar (for electric violin and stochastic step-sequencer) 2016

Ricercar was written as a way to create evolving polyphonic textures in a ‘solo’ violin piece. It uses a MIDI foot controller (currently a Behringer FCB1010) to control the processing of an electric violin in a piece of software created specifically for this piece.


Testing at home before the first performance: foot controller, violin and laptop visible in the chaos of pre-event preparations

The work is essentially fugal, relying on live sampling to create the layers around the instrumental part, and uses a 3-layered sequencer to achieve this. Each layer contains a series of buffers which can be recorded into: the top layer has seven 5-second samples, the middle has five 17-second samples and the lower has three 71-second samples. These are all prime durations, so when left to their own devices they will take a long time to ever repeat in precisely the same configuration. A foot controller triggers when to start recording and which buffer to record into or overwrite, so it is possible to gradually build up a four-part texture using the three sampler ‘lines’ and the live violin.


Overlapping prime blocks: 5, 17 and 71 seconds
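The software itself is not reproduced here, but a minimal sketch of that buffer layout, and of recording the violin into a chosen buffer, might look something like the following in SuperCollider. The names and the SynthDef are my own illustration, and the actual piece was not necessarily built this way.

(
// three layers of buffers with prime durations: 7 x 5 s, 5 x 17 s, 3 x 71 s
~layers = [
    7.collect { Buffer.alloc(s, s.sampleRate * 5)  },
    5.collect { Buffer.alloc(s, s.sampleRate * 17) },
    3.collect { Buffer.alloc(s, s.sampleRate * 71) }
];

// record the live violin input into whichever buffer is chosen
SynthDef(\recordLoop, { |buf|
    RecordBuf.ar(SoundIn.ar(0), buf, loop: 0, doneAction: 2);
}).add;
)

// e.g. a foot-controller press could overwrite the fourth buffer of the fast layer:
// Synth(\recordLoop, [\buf, ~layers[0][3]]);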

In itself this is passable, but it could be rather recursive and very quickly get repetitive unless loops are overwritten regularly, which in itself could become an oft-repeated task. This has been overcome in two ways. The first is that each time a sample is due to be played there is a 50% chance that it will not be, in which case the sequencer just moves on, waits until the next step is ready and then re-rolls its virtual dice. When scaled up over each of the three layers of fast, medium and slow samples this means that there will probably always be something playing (depending on how many samples have had audio recorded into them!) but the thickness of the texture will vary. The second way that diversity is added is by changing the playback speed of each sample. As a sample is triggered, the sequencer takes a decision as to what pitch/speed to play the audio. This varies over four octaves (two up and two down) and results in a shift in playback of between quarter speed and four times as fast. The speed changes are in discrete even-tempered steps, and thus a broader sense of harmony and polyphonic/contrapuntal line can be achieved.

In addition to this audio-focused processing there are two global controls. The first is how fast the sequencer triggers each step. This in itself does not affect the length of individual samples, so at extreme speeds loops will overlap in a densely fired texture, or conversely there may be a lot of space and silence at very slow speeds. The last of the controls is the volume of playback, which when used carefully can not only help to balance the live and processed material but can greatly increase the sensation of articulation and dialogue.
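Putting the probability, the even-tempered pitch steps and the global controls together, one layer of the sequencer could be sketched roughly as follows. Again this builds on the ~layers buffers from the sketch above and is an illustration of the behaviour described, not the actual code.

(
// playback of one buffer at a given speed; even-tempered steps come from midiratio below
SynthDef(\playLoop, { |buf, rate = 1, amp = 0.7|
    var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), doneAction: 2);
    Out.ar(0, sig ! 2 * amp);
}).add;

~stepDur = 2.5;   // global control 1: time between sequencer steps
~amp     = 0.7;   // global control 2: playback volume

Routine({
    loop {
        if (0.5.coin) {                      // 50% chance that this step actually sounds
            var semis = rrand(-24, 24);      // up to two octaves up or down, in semitone steps
            Synth(\playLoop, [
                \buf, ~layers[0].choose,     // pick a buffer from the fast layer
                \rate, semis.midiratio,      // quarter speed up to four times as fast
                \amp, ~amp
            ]);
        };
        ~stepDur.wait;                       // wait for the next step
    }
}).play;
)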

Within this framework the performer is free to improvise, guided by the constraints and freedoms of the instrument, of themselves, and of the hardware and software.

Enjoy

Video extract taken by Diane Evans at OscilloScope 28/6/15