Recently I started learning granular synthesis in SuperCollider, reading the help files and reviewing patches from other artists. I've mainly been using the GrainBuf UGen, experimenting with different samples and modulators. Most of my source material has been human voice, for example some ancient Arabic chants or even recordings of my own voice. With the human voice the effect of the grains becomes really interesting, because the sound shifts between intelligible words and strange experimental glitches. Slow modulation also creates rich, evolving textures, a great source of ambient material.

Above is a picture of the script I was experimenting with. I used a sample of myself reciting a poem in Spanish, fed that buffer into GrainBuf, and manipulated it with different modulators. I used both Impulse and Dust to trigger the grains. The outcome of this experiment was outstanding: I'd never heard sounds like this, and I thought it could make great sound FX for the Sound for Screen film.
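For anyone who wants to try something similar, here is a minimal sketch of the kind of patch I'm describing. The file path and parameter values are placeholders, not the exact ones from my script, and the buffer position is simply scanned with a slow LFNoise1 as one possible modulator.

// Load a voice sample first; replace the path with your own recording.
~buf = Buffer.read(s, "/path/to/voice_sample.wav");

(
// A simple granular instrument: Impulse gives a steady grain stream,
// while Dust (commented out) gives irregular, glitchier triggering.
SynthDef(\grainVoice, { |out = 0, buf, density = 10, grainDur = 0.1, rate = 1, amp = 0.5|
    var trig, pos, sig;
    trig = Impulse.kr(density);
    // trig = Dust.kr(density);          // swap in for irregular, glitchy triggering
    pos  = LFNoise1.kr(0.1).range(0, 1); // slowly drift through the buffer
    sig  = GrainBuf.ar(2, trig, grainDur, buf, rate, pos, 2, 0, -1, 512);
    Out.ar(out, sig * amp);
}).add;
)

x = Synth(\grainVoice, [\buf, ~buf]);
x.set(\density, 25, \grainDur, 0.05);   // change parameters live while it plays

With a long grain duration and low density you can still hear words; shorter grains and higher densities push it towards glitch territory.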
I used the same kind of script, but with a lot of different samples, to compose the end section of the film. I jammed with different parameters while the scene was running, so I could react to what was happening on screen. For this part I didn't use any generative techniques, because I wanted to focus on each individual sound, exploring what layering several different granular samples could do to create a thick texture and a more precise composition; a rough sketch of that layering approach is below. Sound and image intertwined quite well in my last scene: the granular sound helped convey its industrial atmosphere, and its harsh glitches helped express the mood of the main character.
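This is roughly how the layering worked, reusing the \grainVoice SynthDef from the earlier sketch: several grain streams from different buffers, each with its own density, grain length and rate, summed into one texture. Again, the paths and values here are just illustrative.

(
~buffers = [
    "/path/to/voice1.wav",
    "/path/to/voice2.wav",
    "/path/to/machinery.wav"
].collect { |path| Buffer.read(s, path) };
)

(
// One granular layer per buffer, each with its own character.
~layers = ~buffers.collect { |buf, i|
    Synth(\grainVoice, [
        \buf, buf,
        \density, [8, 20, 40].at(i),
        \grainDur, [0.2, 0.08, 0.03].at(i),
        \rate, [1, 0.5, 2].at(i),
        \amp, 0.3
    ]);
};
)

// Tweak individual layers live while the scene plays, e.g.:
~layers[2].set(\density, 60, \rate, 4);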