Category Archives: Code, Data and Network Audio

Granular Synthesis in SuperCollider

Recently I started learning granular synthesis in SuperCollider, reading help files and reviewing patches from other artists. I've mainly been using the GrainBuf UGen, experimenting with different samples and modulators. I've mostly used samples of the human voice, for example some ancient Arabic chants or even recordings of my own voice. With the human voice, the effect of the grains becomes really interesting because it shifts between intelligible words and strange experimental glitches. Slow modulation also creates rich, evolving textures, a great source of ambient material.

Above is a picture of the script I was experimenting with. I used a sample of me reciting a poem in Spanish and fed that buffer into GrainBuf, where different algorithms manipulated it. I used both Impulse and Dust to trigger the grains. The outcome of this experiment was outstanding; I'd never heard sounds like this before, and I thought it could make a great sound effect for the Sound for Screen film.
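As a rough sketch of this kind of patch (the file path and parameter values here are placeholders, not my actual script):

```supercollider
// Load a voice sample into a buffer (replace the path with your own recording)
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

(
{
    var trig, rate, pos;
    trig = Dust.kr(20);                      // ~20 random grain triggers per second
    rate = LFNoise1.kr(0.2).range(0.5, 2);   // slow random pitch modulation
    pos  = LFSaw.kr(0.03).range(0, 1);       // slowly scan through the file
    GrainBuf.ar(2, trig, 0.2, b, rate, pos, pan: LFNoise1.kr(1)) * 0.5;
}.play;
)
```

Swapping `Dust.kr` for `Impulse.kr` gives evenly spaced grains instead of random ones, which is the difference between the two triggers mentioned above.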

I used the same script as above, but with many different samples, to compose the end section of the film. I jammed, testing different parameters while the scene was running, so that I could react to what was happening on screen. For this part I didn't use any generative processes because I wanted to focus more on each individual sound, exploring the possibilities of layering many different granular samples to create a thick texture and a more precise composition. Sound and image intertwined quite well in my last scene: the granular sound helped convey its industrial atmosphere, and its harsh glitches helped express the mood of the main character.

Aether: First Part of the Score

As discussed in previous posts, I will score a scene from a short film created by a director friend of mine. In this project I will use code as my main compositional tool, creating the desired textures through generatively made music. To sum up what happens in the first scene: the main character walks through London to get to his job and then back home. The film deals with the theme of isolation in a big city and the social alienation it can cause. The scene starts with a long shot of the main character walking down a wide street to reach his office. The shot is designed to look like an optical illusion, creating an endless pathway from the character to his destination. For this first scene I composed an ambient generative piece called Aether using SuperCollider.

Aether works as a medium to convey the theme of dreams and reality, creating a trance-inducing feeling. I was inspired to take this approach by David Toop's book Ocean of Sound, which I have discussed in a previous post. I used different techniques to create this aesthetic. First, as already mentioned, I used generative processes to create the music: the hypnotic, static nature of generative music helps convey a feeling of floating, adding to the optical illusion of the walk. The second technique was the pelog scale, characteristic of Gamelan music. I've recently become obsessed with it, and it is also mentioned in Ocean of Sound because of its spiritual use and its place in the ambient canon.

Above is a draft recording of Aether. Heavy compression was used with the same objective of creating the static aesthetic, limiting drastic dynamic changes throughout the composition. Compression also helped regulate the level of each individual synth voice; I have found adjusting volumes in SuperCollider quite hard, which makes it difficult to create a decent mix.
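For anyone curious, a minimal sketch of this kind of compression in SuperCollider, using the Compander UGen (the threshold, ratio and source sound here are illustrative, not the actual settings of Aether):

```supercollider
(
{
    // A quiet, slowly swelling chord as a stand-in for a synth voice
    var sig = Mix(SinOsc.ar([220, 277, 330], 0, LFNoise1.kr(0.3).range(0.05, 0.4)));
    // Compress above -18 dB at roughly 4:1 (slopeAbove = 0.25) to flatten dynamics
    sig = Compander.ar(sig, sig,
        thresh: -18.dbamp,
        slopeBelow: 1,
        slopeAbove: 0.25,
        clampTime: 0.01,
        relaxTime: 0.1
    );
    Pan2.ar(sig * 2.dbamp)  // a little make-up gain after compression
}.play;
)
```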

The picture above is a small snapshot of my script for Aether. I mainly used SynthDefs and Pdefs to create and sequence my sounds, which gave me the opportunity to create multiple synth voices that added texture and harmony to the piece. I also experimented with filters, both inside my oscillator and as an external effect. I discovered MoogFF, an emulation of the Moog ladder filter; it has a great sound and I used it throughout the piece. To develop the use of filters further, I also used random numbers to decide the filter's cutoff frequency, adding a more complex generative technique. The script relies on extensive randomness, the main tool behind its generative nature. I mainly modulated patterns, from the steps used to the output bus; this was useful for playing with panning and making the piece more interesting.
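A hedged sketch of this SynthDef + Pdef approach (the names, envelope and values are made up for illustration, not taken from my script):

```supercollider
(
SynthDef(\pad, { |out = 0, freq = 220, cutoff = 800, amp = 0.2, pan = 0|
    var sig = Saw.ar(freq * [1, 1.003]).sum;        // slightly detuned saws
    var env = EnvGen.kr(Env.perc(0.5, 3), doneAction: 2);
    sig = MoogFF.ar(sig, cutoff, 2.5);              // Moog ladder filter emulation
    Out.ar(out, Pan2.ar(sig * env * amp, pan));
}).add;

// Pdef sequencing the SynthDef; cutoff and pan are randomised per event
Pdef(\aetherish,
    Pbind(
        \instrument, \pad,
        \scale, Scale.pelog,                 // the pelog scale mentioned above
        \degree, Prand((0..4), inf),
        \dur, Prand([1, 2, 4], inf),
        \cutoff, Pwhite(300, 3000, inf),     // generative filter cutoff
        \pan, Pwhite(-0.8, 0.8, inf)
    )
).play;
)
```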

Aether will be used only in the walking scenes, before the character enters the Underground. The music will be mixed very quietly in the overall soundtrack of the scene, so that both the music and the atmospheric field recordings can be heard. The purpose is to use Aether purely as tonal colouring, hinting at a melancholic emotion perceived by the listener mostly subconsciously. For the next scene I will use another piece of music, also created in SuperCollider, but with a more aggressive aesthetic that will contrast with the sound and music of the first scene for the sake of narrative development.

Menschen am Sonntag: Benoit and the Mandelbrots

In 2012, Benoit and the Mandelbrots (a live coding band) rescored the classic 1930 pre-war German film Menschen am Sonntag in a live performance in Karlsruhe. This performance is one of the few examples of a film soundtrack created with code, in this case SuperCollider.

The combination of a 1930 black-and-white German film with modern algorithmic synthesis techniques creates a unique audiovisual composition, with a dreamlike texture that subtly hints at nostalgia and evokes profound emotions in the listener. For most of the film, the soundtrack is composed of long, evolving electronic drones and unreal sound effects that contrast with the film's aesthetic but mirror its mood. The title translates to People on Sunday, and it's a classic romantic comedy featuring the Berlin summer and its lake culture. The evolving, dreamlike drones fit the narrative perfectly and successfully represent the pleasantness of summer. These sounds progress slowly in time, changing their timbre rather than their tones across long stretches of frames. The video was retimed to match the sound, so the effect of the sound on the image can be perceived even unconsciously. The film is a portal back to pre-war times and their utopian character: a Berlin not yet corrupted by the fires of fascism, where people can enjoy a warm Sunday. Even though the video is retimed to the sound, the soundtrack fits the film so well because it portrays the dreamland that Berlin would have been without the war. The hypnotic qualities of the music hint at nostalgia through their dreamlike aesthetic, a distant foggy memory floating in the aether of a pre-war world.

It's rare for a film soundtrack to be composed through an improvised performance instead of pre-composition, and in this case code provides an incredibly versatile source of sound design. This approach inspired me to use improvised live coding to compose some scenes of the short film I'm working on. Electronic music in general can be seen as not very expressive due to its robotic, perfectionist nature. By performing myself instead of just programming the music, the sound will gain a more expressive voice, communicating the themes of the film better.

Hydra

Last Monday in the coding and audio network class we were introduced to Hydra, a free, open-source visual coding environment that can be used both in an internet browser and as standalone software. Hydra has a very simple interface, allowing its users to create graphic visuals with very short lines of code. In my experience, Hydra works like a video modular synth, because the building blocks and terminology are the same as in sound synthesis. It uses oscillators and noise as building blocks, and you can then modulate those sources to create any aesthetic you desire. The language also makes it possible to create generative processes, where the visuals change endlessly over time. JavaScript can also be used in the program, making it easy to create loops and devices like LFOs.
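A tiny sketch of what this looks like in the Hydra editor (the specific values are arbitrary; this is not the patch from my video):

```javascript
// Hydra: an oscillator modulated by noise, patched like a video modular synth
osc(10, 0.1, 0.8)           // frequency, sync, colour offset
  .modulate(noise(3), 0.3)  // warp the oscillator with a noise field
  .rotate(() => time * 0.1) // plain JavaScript arrow function as a slow LFO
  .out()
```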

Below is a short video of a patch I made. The main elements I experimented with were shapes and pixels; I then modulated the parameters of each element, creating an interesting generative visual. The video is overdubbed with a generative ambient piece I created a long time ago. Audio and image fit really well together, and a lot of synchresis occurs, giving the illusion that the music is synced to the visuals when it isn't.

I will continue to experiment with the program and I hope that in the future I will be able to create audio-visual performances with Hydra.

Jam in SuperCollider

The video below is a small snippet of a SuperCollider jam I did. In the jam I explored the capabilities of SynthDefs and Pdefs to create a small generative piece. I'm really interested in generative music, and Pdefs were a great tool for experimenting with randomness and sequencing. What I like most about SuperCollider is its intense capacity for modulation: because every component of the code is so raw, modulation is possible at any point in the algorithm, creating endless sonic timbres and textures. For my little jam I attached an oscilloscope to the server to see the waveforms I was creating; the waves themselves looked very appealing and are a great source of visuals.
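The oscilloscope part is just one line: a sketch of the setup, assuming the default server (the pattern values are placeholders, not my jam):

```supercollider
// Boot the server, open the scope window, and start a randomised Pdef
s.waitForBoot {
    s.scope;   // oscilloscope view of the server's output channels
    Pdef(\jam,
        Pbind(
            \degree, Prand([0, 2, 4, 7], inf),  // random note choice
            \dur, Prand([0.25, 0.5], inf),      // random step length
            \amp, 0.2
        )
    ).play;
};
```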

TOPLAP

The Temporary Organisation for the Promotion of Live Algorithm Programming, or TOPLAP, is an international organisation created for the exploration and promotion of live coding. Its web page features news, forums, exhibitions and upcoming events from the world of live coding. TOPLAP was created in 2004 and has since grown across the world, creating hubs or nodes in different countries where people can work and collaborate on live coding projects and performances. Thanks to TOPLAP I discovered the Mexican live coding scene and joined its Facebook group; I'm looking forward to exploring the live coding scene of my country.

On the TOPLAP page I discovered an artist called Yaxu who uses TidalCycles for his live performances. He combines different electronic genres and experimental practices to create his improvisations. I was amazed by his performances and his way of sequencing sound; I especially thought the structuring of his performances was very successful. One of the hardest things in live coding is the development of ideas and the transitions between sections, and Yaxu manages both really well.

Algoraves

Algoraves are a form of electronic music event where artists write computer code to create music and improvise with it on stage. The Algorave subculture has expanded and spread across the world, creating conferences and festivals where enthusiasts gather to perform and discuss live coding. Various manifestos have been written to define what Algoraves are (Amrani, 2017) and what their objective is. Common features of most manifestos include showing the code on screen during live performances, social inclusivity and diversity, and accepting failure as a normal part of live coding, given the many possible technical problems such as computer crashes. The scene is DIY-oriented by nature and follows an anti-systemic, anti-capitalist stream of thought.

The most common languages and environments for live coding are SuperCollider, oriented towards sound design and synthesis, and TidalCycles, oriented towards sequencing and triggering sound.

I am currently learning SuperCollider and I have tried MiniTidal before. In general it has been quite challenging, because I had never done any kind of coding before, but I am enjoying the learning process a lot and I'm inspired by the sonic capabilities of coding music. My next objective is to learn SuperCollider to a decent level and be able to perform a small improvisation following the rules of the Algorave manifesto.