

Instrumentation: Electronics

I've been thinking about a piece that would continue the work I started in The Machinery. I presented the early stages of my research at the SOUND/IMAGE 22 conference at the University of Greenwich on 19th November 2022 under the name Stochasticism and Liminality: Over the Garden Sense*. Slides and content from this presentation are below.

Regrettably, my plan for the presentation was informed by a catastrophic misunderstanding of how long 20 minutes is. I cut the section on randomness the night before the conference, but on the day I still only managed to get through the section on my first example. The result was that I never got to the stuff on stochasticism, meaning that the title of the presentation didn't make any sense.


Still, there is plenty of interesting stuff in here that I was and still am excited to share. Since beginning the research, the piece has evolved into two separate pieces.


*The title was supposed to evoke the name of the cartoon Over the Garden Fence (a clever pun really takes an academic talk to the next level). I watched this YouTube video about it a while ago, and its talk of crossing thresholds and going between boundaries really stuck with me. When I started work on this project, I inevitably started thinking about the video again and wanted to reference it in some way. But wait, what's this--the cartoon isn't called Over the Garden Fence at all. It's called Over the Garden Wall. I misremembered the name, and my rigorous standards of quality meant that I didn't think to check the name of the thing I was referencing before I submitted my proposal to the conference. Yet, in true Lord Copper fashion, the banquet must go on, and I had to forge ahead in spite of the error and hope no one would ask about it. Basically, the first part of the title contains references to two concepts, one of which I didn't even get to in the presentation and the other of which I never explicitly explained or referenced, and the second half references the title of a children's cartoon that doesn't exist. Now that's funny.

Me at SoundImage.PNG

Example 1

Everything up to slide 9 is about this example. If you'd like to learn more about it, I recommend listening before reading any of the explanation. It's about 100 seconds long, and I'd invite you to listen closely and see if you can figure out what's happening.

If you have listened, your reaction to it was totally correct no matter what it was. If it just sounded like a recording of a road, that's totally fine. If it actually made you angry because it's clearly just some sounds, that's fine too. If you felt that something was going on but it was difficult to say what, that's fine.

I've been thinking about ways of crossing the boundary between 'sound' and 'music', or 'music' and 'non-music', or anything where something that can be described as musical lies on one side and something you could call non-musical lies on the other. I've sat listening to the ambient sounds of the world before and had a strange moment of doubt when I realised that a noise in the distance was happening at a regular interval. What happens in that moment? I think your brain realises that it should have been using a different part of itself to listen to the sound. Something changes the moment you realise the thing you are listening to isn't random, and I wanted to try to make this feeling the basis of a piece.

Some very careful treading is required, since the danger of wandering into pseudoscience and/or banal-profundity territory is significant indeed. But after conducting some research into neuroscience and studies looking at the things that cause us musical 'pleasure' (elaboration on the studies I looked at can be found in the slides), I really homed in on John Blacking's framing of music as 'humanly organised sound'.


As an experiment, I was looking at ways of evoking and masking the feeling of there being 'order' in sound, to dance right on the threshold without allowing the listener to believe firmly either way that there was or was not human deliberateness present. The sustaining of this feeling of ambiguity was the purpose of this short example.

I grabbed this ambient recording of a road. How could I rearrange or manipulate the content of this recording to achieve my effect? Really, what we're doing is trying to control the ability of the listener to recognise and recall sounds. I reasoned that there are three variables that determine a sound's recallability: its duration, its distinctiveness/complexity, and the number of repetitions it is subjected to. Since I want to be in control, I eliminated the variable of distinctiveness/complexity by choosing portions of the recording with the least activity. I could therefore attempt to control the recallability of the sounds through careful choices of the duration of each sample and the number of times I repeat it.

What is the ideal duration to make a relatively indistinct and unremarkable sound texture become recallable? Two more pieces of information help to answer this. The perceptual present is the duration of time we feel to be the immediate 'present'. Beyond this is what we feel to be the future, and behind it what is, for all reasonable purposes, the past. I'm taking from Diana Deutsch's The Psychology of Music to give an upper estimate of eight seconds for the duration of this moving window. More relevant to our purposes is echoic memory, the short-term memory with which we process sounds. When you listen to music or speech, you're holding sounds in this buffer--kind of analogous to RAM in that respect. The research I've done gives an upper estimate of two seconds for the duration of this memory.

So there's something to work with. With full acknowledgement that more research could come to light that shows these concepts to be junk science, as often happens when you dabble with these things as an amateur, it seems reasonable to say that beyond that eight-second window there's not much risk of the listener being able to retain sounds unless they're really distinct in some way. That is to say, if you take nine seconds of ambience and loop it, only after a fairly large number of repetitions will listeners start to realise that they're hearing the same thing over and over again. If you decrease the duration of that loop below eight seconds, they'll probably realise after fewer repetitions, while loops of around two seconds or less will probably be instantaneously recognisable, since our echoic memory can retain pretty much exact copies of sounds of that duration or less.

The latter fact was demonstrated in an experiment by Guttman and Julesz in 1963, which you can repeat by listening to the three samples of white noise with headphones below:

Example 1 — Luke Madams (01:40)

The sample on the left is c. three seconds of white noise looped. The middle is c. two seconds, and the third one second. Based on my own experience, something remarkable happens when you listen to these three samples carefully--the three-second loop definitely just sounds like white noise. With the one-second loop, however, as you listen, a pattern starts to emerge. Your echoic memory can detect that this isn't simply random white noise and can detect the repetition.
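If you want to recreate the experiment yourself, here's a rough Python sketch (using NumPy and the standard-library wave module; the durations, sample rate and filenames are just my illustrative choices, not the actual files above) that tiles a single noise segment into a loop:

```python
import numpy as np
import wave

SR = 44100  # sample rate in Hz

def looped_noise(segment_seconds, total_seconds=20, seed=0):
    """Generate one segment of white noise and tile it to the total duration."""
    rng = np.random.default_rng(seed)
    segment = rng.uniform(-1.0, 1.0, int(SR * segment_seconds))
    reps = int(np.ceil(total_seconds / segment_seconds))
    return np.tile(segment, reps)[: SR * total_seconds]

def write_wav(path, samples):
    """Write a mono 16-bit WAV file."""
    pcm = (samples * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(pcm.tobytes())

if __name__ == "__main__":
    # One file per loop length, mirroring the three samples above.
    for secs in (3, 2, 1):
        write_wav(f"noise_loop_{secs}s.wav", looped_noise(secs))
```

Listening back to the one-second file, the repetition should start to poke through in just the way Guttman and Julesz describe.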

With all this in mind, I devised the below temporal scheme for Example 1:


The boxed section shows the duration in seconds of a series of samples. The first six samples (colour-coded blue) are taken from the same low-activity section of the recording. Little happens in this part of the recording, though it is not without some identifying details: a car passes at close range and there is a wind sound that sounds a bit like howling. You can see that over the first three repetitions I decrease the duration of the part of this section I'm using. We're edging closer to that two-second echoic-memory duration at which the listener will likely identify immediately that they're hearing looped sounds. The coloured bars indicate where I believe the risk of sounds becoming recallable lies. As it is, the combination of repetition and decreasing duration would, I hope, subtly suggest but not make it irrefutably clear that this is the case. I want to leave some doubt. Therefore, following a repetition of the four-second loop, I upped the duration back to eight seconds, then took a different part of the original recording and let that run for 20 seconds to throw the listener off the scent. To course-correct again, I used two-second loops of this B material at the end, followed by a final five-second sample.

Due to the way these samples are arranged, we hear the howling wind and the car passing at the points in time shown above the boxed section. These serve the purpose of suggesting order just as the suspicion that the general ambience is ordered is waning. Finally, I took the most distinct sound of the entire recording, that of a car honking, and used it as a final suspicion-inducer. This sound is so distinct that it can be repeated at extremes of interval and still be recognisable. Really, as soon as you hear a sound like this for a second time you know something is up. It serves a kind of unifying purpose in the composition, filling in where recallability is at a low point.

It's rewarding and interesting using these parameters in counterpoint like this.

Mean Deviation

That example used recallability to introduce order into sound. Much of the research for this project is shaped by this study, which attempted to understand more clearly exactly how we experience musical pleasure by scanning people's brains while they listened to some carefully selected musical examples. The finding I'm most interested in here is that, at least according to the observations of these researchers, dopamine release is greatest during the anticipation stage of listening, not during the supposedly pleasure-imbuing musical event itself. But as with many things musical, this is something we know intuitively anyway--the joy comes in waiting for the drop, and the discipline of composition itself in large part gives musicians the tools to sustain, delay and toy with tension and release. People don't go quite as apeshit if the drop happens without a noise swell.

You may have noticed that my script is laden with caveats and qualifiers. I'm wary of taking any of these studies too seriously: I've read Ben Goldacre's Bad Science and I'm aware of the perils of dabbling in pop science without full scientific literacy. I'm aware that a subsequent study could trash the findings of this one entirely, but as with Example 1 I'm looking for things that I can say with some certainty. One seems to be that, at least to some extent, this ability to predict and anticipate is what our brains like about music. There has to be some kind of shared understanding of language between the music and the listener. At a very basic level, this is corroborated by this study, which showed that activity in the amygdala (the part of the brain associated with fear and stress responses) increases in mice when they are subjected to what the researchers called 'temporal unpredictability'. The absence of a pulse or regular metric structure actually appears to distress us.

So, continuing my experiments, I'm looking at ways of introducing pulse or a regular metric structure into sound that lacks both of those things. I've had the suspicion that I could use mean deviation to do this for some time, and in fact this is the idea that spawned the entire project. The two lists below show sets of numbers that both have a mean of 50.

Mean 50.jpg

You'll notice, though, that these are two very different lists of numbers. The mean tells us nothing about how 'spread out' the values are. To measure that, you could find the difference between each value and the mean, then average these differences (taking the absolute value in each case--we don't care whether each value is above or below the mean, just how far from it it is).

Mean 50 dev.jpg

On average, the values in list A are 2 away from the mean, and in list B the values are 54.8 away from the mean.
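In code, the calculation is tiny. A quick Python sketch (the two lists here are hypothetical stand-ins with a mean of 50, not the actual lists from the image):

```python
def mean_deviation(values):
    """Average absolute distance of each value from the list's mean."""
    m = sum(values) / len(values)
    return sum(abs(v - m) for v in values) / len(values)

# Hypothetical stand-in lists -- both have a mean of 50,
# but very different spreads:
list_a = [48, 49, 50, 51, 52]   # tightly clustered: mean deviation 1.2
list_b = [2, 10, 50, 88, 100]   # widely spread: mean deviation 35.2
```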


Open your mind real wide here--say you were somehow able to adjust the mean deviation of a set of numbers. As you lower it, the values would start to close in on the mean. You could make a set of values converge on a point, which could have some useful musical applications. It turns out you can do just this. Very briefly: the first list is randomly generated numbers in a given range. The second list adjusts these for a target mean, i.e. the point we'll want to 'converge' on later. (This is just like picking the list up and moving it so the middle is in a different place. The proportions of the list have not been changed.) The third list scales these values based on a set target mean deviation.

Meandev sheet.jpg
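Those three steps can be sketched in Python like this (the function and parameter names are mine; the actual implementation lives in the Max patch shown further down):

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def mean_dev(xs, centre):
    """Average absolute distance of the values from a centre point."""
    return sum(abs(x - centre) for x in xs) / len(xs)

def generate_scaled(n, lo, hi, target_mean, target_md, seed=None):
    rng = random.Random(seed)
    # Step 1: random values in a given range.
    raw = [rng.uniform(lo, hi) for _ in range(n)]
    # Step 2: shift the list so its mean sits on the target
    # ("picking the list up and moving it" -- proportions unchanged).
    shifted = [x - mean(raw) + target_mean for x in raw]
    # Step 3: scale each value's distance from the mean so the
    # list's mean deviation hits the target.
    md = mean_dev(shifted, target_mean)
    factor = target_md / md if md else 0.0
    return [target_mean + (x - target_mean) * factor for x in shifted]
```

Whatever the random input, the output list ends up with the requested mean and mean deviation, which is exactly the convergence control we're after.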

Look through the images below and see what the power of maths does to the graph when the target mean-deviation is changed (blue line is original random values, orange line is scaled values).

The orange line first starts to resemble the blue line, then flattens as it closes in on the mean. When mean deviation is zero, the line is flat, as all values are equal to the mean. Imagine if that graph were rotated by 90 degrees, and the 'mean' were a point in time. The values could be the locations in time of sound events. With a succession of beats and a gradually decreasing mean deviation value, we could gradually bring a pulse into clarity. The first image below is a kind of visual representation of this, while the second squashes it into one dimension so you can see how values start to cluster over time.

Meandev over time.jpg
Meandev clusters.jpg

I conceived of the below design for using this effect for musical purposes.

Piece design.jpg

On the left are four sets of random numbers, scaled each time to fit a target mean. Each row is a bar, and each item in the row a ‘beat’. Four beats per bar. The target mean for each list is the location in time of that beat. Again, we’re just picking the list up and moving it for each beat.


On the right is the same lists scaled for a target mean deviation. This is changing the proportions of the list. We’re either spreading the values out more to obscure any sense of there being a pulse, or forcing them to converge closer to the mean. The target mean deviation decreases over time, so the further down you go the narrower the lists are going to become, as in the earlier visualisations. Since we’re dividing the bar into four equal beats, over time a pulse should come gradually into focus.


But an important feature is this weighting slider, which will determine which list values are chosen from. Including some of the unscaled random values could add some humanising variation, so across the duration you might want to move it some but not all the way to the right.
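A Python sketch of this whole design might look something like the following (the tempo, bar count, hit count and the curves for mean deviation and weighting are all placeholder values of my own, not the ones used in the patch):

```python
import random

rng = random.Random(42)

TEMPO = 120            # bpm
BEAT = 60 / TEMPO      # beat length in seconds
BARS = 8
HITS_PER_BEAT = 5

def onsets_for_beat(beat_time, target_md, weight):
    """Cluster of hit times around one beat.

    `weight` (0..1) is the slider: the probability of using the scaled
    value rather than a raw unscaled random offset.
    """
    raw = [rng.uniform(-BEAT, BEAT) for _ in range(HITS_PER_BEAT)]
    m = sum(raw) / len(raw)
    centred = [x - m for x in raw]            # shift the mean onto the beat
    md = sum(abs(x) for x in centred) / len(centred)
    factor = target_md / md if md else 0.0
    scaled = [x * factor for x in centred]    # force the target spread
    return [beat_time + (s if rng.random() < weight else r)
            for s, r in zip(scaled, raw)]

schedule = []
for bar in range(BARS):
    progress = bar / (BARS - 1)
    target_md = 0.4 * (1 - progress) + 0.02 * progress  # narrow over time
    weight = 0.5 + 0.5 * progress                       # slide rightwards
    for beat in range(4):
        t = (bar * 4 + beat) * BEAT
        schedule.extend(onsets_for_beat(t, target_md, weight))
schedule.sort()
```

Early bars scatter hits widely around each beat; by the final bars the hits have converged on the beat positions and a pulse emerges.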


Here is the top page of the Max patch I built to do all this for me:

Max patch home.png

You can see that I have changeable parameters for the number of values per 'beat', the range of these values, tempo, duration, and the starting and target values for mean deviation. Off page are the controls for the random values 'weighting' aspect of the patch. Here's a short example of how it sounds. I'm using a simple percussion sound and lowering mean deviation from 1.2 to 0.05. See how soon you can sense where the pulse is.

Mean Dev Example — Luke Madams (00:25)

Perhaps you can imagine the possible musical applications of this.


That was one way of introducing order to sound through the use of rhythm: there I drew a predictable pulse out of disorganised sound. Something more akin to utilising that parametric matrix as I did in Example 1 might be to start with a periodic metric scheme, but raise the duration and complexity parameters so that order only becomes apparent after a number of repetitions. I had something like this in mind:

00:00 / 00:31

That's not too easy to count. It's a pattern of 36 semiquavers divided into groups of twos and threes, with some triplets thrown in to obscure the groupings even more. It repeats, but it takes a few goes round to figure out what's going on (or, if you're like me, until you watch a video explaining it to you).

Clockwords scheme_edited.jpg

I put together a little irregular-sounding but still periodic metric scheme. I'm using simple high (H) and low (L) percussion sounds again. Lower-case means half the note value:

Metric phrase.jpg
Main Sequence — Luke Madams (00:04)

This is the phrase I'm trying to bring into clarity. I'll outline the salient facts of the Max patch I built to do this.

First, I've added this additional corollary phrase:

Metric C phrase.jpg

This phrase is similar-sounding to the main phrase, but extends it to a duration far beyond that at which the listener could conceivably commit it to memory and detect its structure even after successive repetitions. To obscure it even more, I've added intermediary high hits between every part of the sequence. Probability determines whether this extra hit will be heard each time or not.


One of three things happens at the end of the main phrase: either we loop back to the start, continue on to the corollary phrase, or we are sent out of this sequence to where some other stuff happens.

This sequence is just one of several possible things we might hear. I'm using a probability gate to send a bang to one of several places (in Max parlance a 'bang' is basically a message for something to start. Like pressing 'go').  It can either be sent to the sequence, or to any of a number of shorter phrases. The probabilities for each of the possible destinations are outlined below.

Metric test probabilities.jpg
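In Python terms, the probability gate amounts to a weighted random choice (the destination names and weights here are stand-ins for illustration, not the actual values from the table above):

```python
import random

rng = random.Random(7)

# Hypothetical destination probabilities -- the real values are in the
# table above; these just illustrate the mechanism.
DESTINATIONS = ["main sequence", "phrase A", "phrase B", "phrase C"]
WEIGHTS = [0.4, 0.25, 0.2, 0.15]   # must sum to 1

def probability_gate():
    """Pick where the next bang goes, like the gate in the Max patch."""
    return rng.choices(DESTINATIONS, weights=WEIGHTS, k=1)[0]

# Each phrase ends by asking the gate again, so a run of the piece
# is just a long sequence of these weighted draws:
history = [probability_gate() for _ in range(1000)]
```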

At the end of each phrase, a bang is sent back to what I've called the 'hub', where the decision about the destination of the next bang is made based on a random number generator. Here's the top page of the Max patch. You can see again that I have options to control the parameters of duration, tempo, and some probabilities:

Metric test mainjpg.png

Over the set duration, the probability of those intermediary phrases being heard decreases, as do the probabilities of the corollary phrase being heard and of the bang being sent back out of the main sequence. Over time, therefore, we start to hear the main sequence more often and more closely resembling its original, shortest format. The additional phrases contain some elements that are not present in the makeup of the main phrase, and their gradually increasing absence also makes the increased frequency of the main phrase more pronounced. Here's a short example.

Metric Example — Luke Madams (01:12)

Here's a different visualisation of the design of the patch:

Metric test Markov.jpg

The mathematically inclined reader may notice that this diagram meets the criteria for a Markov chain (please do correct me on this if I'm wrong): the probabilities out of each state sum to 1, and each outcome is 'memoryless' (probabilities unaffected by past events).

Using Markov chains as a compositional procedure isn't unprecedented by any measure. But their usage ranges from Xenakis, whose approach probably requires a day's work to get to grips with, to this kind of thing, which to my mind is no better than simply randomising your material, since anything that comes out will sound at least vaguely musical if your material is a major scale.

But useful for my purposes is the maths you can do with this information to learn about future events. The next stage of my research will be to get to grips with steady states, but as I understand it a calculation somewhat akin to expected value can be performed to understand the behaviour the chain settles into if you were to continue it to infinity. I'm pretty sure I could choose my material more carefully and pair this knowledge with my research on recallability to make certain things more prevalent over time.
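As a first pass at the steady-state idea, here's a Python sketch that approximates the stationary distribution of a chain by repeatedly stepping a distribution through the transition matrix (the two-state matrix here is entirely hypothetical, not the probabilities from my patch):

```python
def steady_state(matrix, iterations=1000):
    """Approximate the stationary distribution of a Markov chain by
    repeatedly stepping an initial distribution through it."""
    n = len(matrix)
    dist = [1.0 / n] * n
    for _ in range(iterations):
        dist = [sum(dist[i] * matrix[i][j] for i in range(n))
                for j in range(n)]
    return dist

# Hypothetical two-state chain: 'main phrase' vs 'other material'.
# Row i gives the probabilities of moving from state i to each state.
P = [[0.7, 0.3],
     [0.5, 0.5]]
```

For this matrix the chain settles into spending 62.5% of its time on the main phrase regardless of where it starts, which is exactly the kind of long-run guarantee I'd want when choosing material.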


As I've started to think about what I might do with all this research and how I might use it in service of my own compositional aims, the eventual piece has started to take shape. For various reasons, I like giving listeners sensory experiences that border on overwhelming. I want the point of the arrival of order to feel almost like a relief in this piece, so my task is to augment the distance travelled between the two poles as much as possible.

I figured out I can do this by rethinking what exactly these poles are. I conceived of Example 1 as moving from 'sound' to 'music'. Later examples introduced order from disorder. I might do something more calculating by making the initial state not merely the absence of the things our brains like about music, identified in my earlier neuroscience research, but the opposite of them. I'm now conceiving of this as moving to music from 'anti-music', and thinking about what that might mean.


To really eliminate any semblance of predictability from a texture, I've added another research area to the project and investigated the applications of randomness in music. As was the case in the ol' classic anecdote about the iPod's shuffle algorithm, 'random' isn't as clear a word as it seems. There are different types of randomness. Have a look at the below grid of random numbers I generated in Excel and see if you can't spot any patterns.

Random numbers.png

I'm sure you get the idea—in large enough data sets, improbable things start to happen. There may be stuff in that table that our instincts tell us could not happen by chance. We've run into trouble even at this very early stage of attempting to define randomness. Does manually removing all patterns and repetition from a data set make it more random, or is meddling and intervening like this the opposite of random?

Things get more unsettling the more you learn, because it turns out we don't really have a way of generating randomness anyway (more on that later). We are very capable of achieving apparent randomness, which is where the appearance of randomness 'results exclusively from a lack of full knowledge about the system in consideration', but not so much intrinsic randomness, or a system that would produce no patterns or discernible structure even if it were to run to infinity. That is quite a distinction; any random-number generator you use, any means of encryption we have, does not actually produce randomness. It's just that the patterns and repetitions in these systems occur across such a colossal scale that they are beyond our ability to detect them. They are random as far as we're concerned.

I said we don't really have a way of producing randomness, but actually we do. What human beings in their never-ending quest for knowledge have figured out is that truly random stuff does happen on the quantum level—

Uh oh! Disclaimer: this is not about to stray into Deepak Chopra territory. Don't worry.

For precisely how randomness happens in quantum physics, you can look up Rabi oscillations or have a read of this paper. I did what reading I did as part of my research for this project, but it's very complicated and frankly insulting to suggest that you can understand it without having invested years into studying physics. I declare, proudly, that I have no fucking idea how or why it works.

But, handily for us, you don't need to, because you can visit a site like this to get numbers that are generated by quantum randomness. As always, any time you feel you're at the forefront of something, you're actually already behind. That paper I linked to in the previous paragraph (here it is again) is a report on the methods researchers have been using to apply quantum randomness in the timbral domain. Why timbre? Because a domain where things are happening tens of thousands of times per second, i.e. frequency, presents the best possible opportunity to consider the effects of intrinsic randomness in sonic form. Without sufficiently large scale, there would be no discernible difference between it and apparent randomness. Though the research is speculative, you can say pretty certainly that using quantum randomness would be totally pointless if you were using it to generate time signatures, for example. Any of our systems of achieving apparent randomness will do the job just fine, as indeed they do for all but the most obscure of human needs.

It seems to me that there are two possible ways of employing quantum randomness for musical purposes in a meaningful way. The first would be to exploit the benefit of a system that has no periodicity over one that does. Apparently-random systems might have periodicity that only becomes apparent after trillions of years, however, so any work in that regard might need to happen on a cosmic scale; that one is largely theoretical. Second, though, you could take advantage of the fact that apparently-random systems do have periodicity, and periodicity can be controlled. My idea for this project is to splice a quantum-randomness set with a set of decreasing periodicity. As with my earlier sound examples, I'm interested in the point at which the ear starts to detect patterns, when it becomes apparent that the sound being heard is not entirely random. The diagram below gives a possible rough design.

Randomness model.png
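A sketch of this splice in Python (with a pseudo-random generator standing in for the quantum source, and a made-up four-value pattern; in the piece the non-periodic values would come from quantum-generated numbers):

```python
import random

rng = random.Random(3)

PATTERN = [0.0, 0.8, 0.3, 0.8]   # hypothetical periodic values
N = 64                           # length of the spliced stream

def spliced_stream():
    """Per step, choose the periodic value with a probability that
    grows from 0 to 1 across the stream; otherwise take a
    (pseudo-)random value standing in for the quantum source."""
    out = []
    for i in range(N):
        p_periodic = i / (N - 1)
        if rng.random() < p_periodic:
            out.append(PATTERN[i % len(PATTERN)])
        else:
            out.append(rng.random())
    return out
```

The start of the stream is pure randomness, the end is pure pattern, and somewhere in between lies the threshold where the ear should start detecting order.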

On the subject, there is a fascinating history of the applications of randomness in music. The Buchla Source of Uncertainty is something to look into. Here's a demonstration of it:

Sound Transformation

Taking all this stuff together, here's my design for this piece:

  • Starts with a texture that is decidedly ‘anti’-musical.

  • Does all the things our ears don’t want: randomness, no ability to predict, dissonance, harshness, abrasiveness; triggers the amygdala.

  • Gradual transformation.

  • Order is exerted, thresholds crossed. At some point each parameter becomes recognisably ‘musical’.

  • Transformation has to be gradual, imperceptible and stochastic so there are unexpected developments along the way.

  • Ends with a ‘musical’ texture.

  • Deliberately and knowingly does all the things our ears want from music.

  • Humanly organised sound with all the trappings.
