# DREAMSCAPES

My second venture into creating audio and music with code turned into DREAMSCAPES very quickly. In this blog I’ll give a broad write-up of how it came together, and where I’m planning on taking it.

After my first experiments and initial series of NFTONES (rarible.com/hexxaudio), I wanted to build on what I’d learned, and take my code from producing not just a short sound as an output, but a full-blown piece of music. I’m going to acknowledge straight away that this isn’t at all the classic approach to generative art, and I’ve learned a great deal from this process.

Pink Noise themed Dreamscape.

I began by creating my own samples - lots of one-shot notes of instruments in all different rhythms. For ease at this stage, I decided to keep everything within the key of C maj / A min - otherwise the sampling alone would have taken me a lifetime. Using Pydub, I loaded all these samples in a file ‘samples.py’ and saved them as variables:

vibesC34 = AudioSegment.from_wav('vibes-C3-4.wav').fade_out(duration=50)

Now to create some melodies! To do this, I created arrays of all the samples of each instrument (and a list of ‘rests’, or empty samples, of each length). Then, using random.choice(), I was able to select a random sample from each array and add it to a new list - the melody list. Finally, I created an empty ‘playlist’ in Pydub and iterated through the melody list, adding each audio segment one after the other, resulting in a much longer audio segment - our melody! During the random selection, I also played with the volume of each sample to give our melody a bit of life. All in all, the program creates around 10 melodies each time it is run, but only 2 make it onto the final output. More on that later!

The bassline creation works in much the same way, but I split the lists of samples into long rhythms and short rhythms in order to create ‘slow’ and ‘fast’ options for the basslines. 4 are created - 2 slow, 2 fast - and then only 1 is selected for the final song.

```python
import random

def write_melody(list_of_samples, list_of_rests):
    counter = 64
    # Weight the pool so a random choice is twice as likely
    # to be a sample as a rest.
    pool = [list_of_samples, list_of_samples, list_of_rests]
    output = []
    while counter > 0:
        # Pick a list from the weighted pool, then a random sample from it.
        sample_selection = random.choice(random.choice(pool))
        output.append(sample_selection)
        counter = counter - 1
    return output
```

I should also note at this point that Pydub operates in milliseconds - so the obvious choice of tempo was 120 BPM, as this divides evenly into milliseconds - just making the maths easier for myself! I decided that this restriction was worth it at this point, as I felt I could get enough variation in the output elsewhere - and that was where my focus was. I did implement time stretch and pitch shift functions that ran retrospectively at the end of the program, but decided that they were adding too many artifacts and compromising the audio quality. I did utilize the time stretch function to create some pads though!
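The tempo arithmetic behind that choice works out neatly:

```python
# At 120 BPM, all common note lengths divide evenly into whole
# milliseconds - which is what makes the maths easy in Pydub.
BPM = 120
beat_ms = 60_000 / BPM      # quarter note: 500.0 ms
bar_ms = beat_ms * 4        # one 4/4 bar:  2000.0 ms
eighth_ms = beat_ms / 2     # 250.0 ms
sixteenth_ms = beat_ms / 4  # 125.0 ms
```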

For pads - I started with slightly longer samples, and created a function to randomly select these samples, time stretch them, and overlay them with delays to create long loops that would sit underneath the melodies and provide a bed. 9 pads are created in total, but only 3 are selected for the final output.

Which brings me nicely onto the integration of soundscapes. I have been a sound recordist for well over 10 years, and have travelled all over with my recording kit. I try to record a soundscape whenever I am somewhere new, and have built up a little library diary of my travels - this felt like a perfect opportunity to call on them. The early outputs were taking me in a meditative direction, so I thought this would work well. I loaded up 12 soundscapes and used Pydub to make them all the same length (2 mins 30 secs ended up being the optimal duration for the pieces). One is selected for each DREAMSCAPE, and this inspired me on the visuals.

I put a very slow fade in, and quickly worked out that I wanted the functionality of this music to be a ‘rising alarm sound’ for waking up to gently in the morning. This prompted a visual of a rising sun and led to this concept, created by Yard B studios.

The concept by @YardBStudios

The audio reactive waveform sits on the horizon as part of the landscape, the sun rises behind, and we tailored each design to each soundscape to give them a strong unique identity.

The inspiration.

While the visuals were being developed, I continued to tweak the audio. I made some one-shot drum samples and worked out a way to fill empty ‘bars’ with single one-shot hits in random places in order to build up a beat. I turned bars into loops, added fills, and eventually decided that I’d make drums a rarity: there is a 1/4 chance that the final output will have drums.

I found a great library called pysndfx which helped me add a great level of depth and detail to the audio. Every melody, pad and synth is processed with an array of effects but, again, only certain effects are chosen for the final output. Making the effects part of the rarity traits would have got us into a whole new level of labelling, so I decided to leave these out of the trait metadata. Delay, tremolo, reverb and filters are all at play behind the scenes.
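Only the "which effects make the cut" logic is sketched below, in plain Python; the actual processing would go through pysndfx's AudioEffectsChain (a SoX wrapper), with chained calls like .delay(), .tremolo(), .reverb() and .lowpass(). The effect names and the subset sizes here are illustrative assumptions:

```python
import random

# In the real program these names would map to pysndfx calls, e.g.
#   from pysndfx import AudioEffectsChain
#   fx = AudioEffectsChain().reverb().delay()
# Here only the random selection of effects per stem is sketched.
AVAILABLE_EFFECTS = ['delay', 'tremolo', 'reverb', 'lowpass', 'highpass']

def choose_effects(max_effects=3):
    """Pick a random subset of effects for one stem (illustrative)."""
    count = random.randint(1, max_effects)
    return random.sample(AVAILABLE_EFFECTS, count)

chain = choose_effects()
```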

When you run the code, it bounces out around 70 stems of audio in the process, then consolidates, combines and deletes until you are left with just 1 wav file and 1 txt file (the metadata). The soundscape is overlaid with 3 pads, 2 melody lines, 2 synth lines, a bassline and (maybe) some drums. The placement of each element, and the duration it loops for, is also randomised to give variety and pace within the music. The music is then laid under the animation - which triggers its own generative events as it renders - and we are left with the final thing!

gm!

All in all, this is not the way I would do it again - but I knew how to write Python, and I called on some great libraries to help me along the way. AND I’m over the moon with how they have turned out. The audio has a rich and deep quality, and the visuals are stunning. I have learned a great deal about generative art, and about the importance of the txn hash seeding the art. This is why I have begun building an on-chain version of Dreamscapes. It will be coming out later this year, with some similarities to these dreamscapes but also some new ideas. Stay tuned for more, and thank you for reading this far!


White noise.