Previously I did more work on the visual side of things, refining the basic camera interaction that will form the core of the project. Now I needed to get going on the audio side. I decided to incorporate an audio element into the project to increase its complexity, and also because of my interest in fields such as graphical sound and in the ability to represent the same data in different forms.

I was, however, unaware of exactly how this would be achieved in a practical sense. I set out looking for a library that would help me achieve my desired effect: forming a link between the motion of a light (and therefore the user) and the visual and audio representations of that motion. What I found was SoundCipher, a library for Processing specifically designed for creating music within that environment. It allows ‘notes’ to be played via MIDI and facilitates basic interactive sound design, which is exactly what I need.

I installed SoundCipher, and am now in the process of integrating its functionality into my work. The first thing I did was plug it into the project foundation I produced previously and attempt to play back sounds from the points I was drawing on the screen. To do this I first went through the usual steps of importing the new library and setting it up for use.

SoundCipher added to the growing list of imports.
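In outline, the setup amounts to something like the following. This is a minimal sketch rather than my full code: `arb.soundcipher` is the library's package name, and the video import stands in for the existing camera code.

```java
import processing.video.*;   // already present for the camera work
import arb.soundcipher.*;    // the SoundCipher library

SoundCipher sc;

void setup() {
  size(640, 480);
  sc = new SoundCipher(this);  // tie the SoundCipher instance to this sketch
}
```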

After this, I thought about what I wanted to achieve for an initial test. I looked at how to play a note with SoundCipher and saw that it takes two main arguments: the pitch of the sound and its amplitude, or volume. Thinking about how best to form a connection between the visuals I’ll be producing and the sounds, these two arguments need to relate in some way to the motion of the light. There are several ways to do this, such as taking information from the speed of motion, or perhaps from specific colours the camera can see, among other factors. For these initial tests, I decided to link the sound to the coordinates of the point being displayed on the screen, taking the y coordinate as an input to the pitch of the produced sound and the x coordinate as its amplitude.

The code to play a note using SoundCipher.
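The call looks something like this. SoundCipher's playNote() takes a pitch, a dynamic (volume) value, and a duration; the direct coordinate hookup shown here was my naive first attempt, and the helper name and duration value are purely illustrative.

```java
// Called for each point drawn on screen; x and y come from the camera tracking.
void playPointNote(float x, float y) {
  float pitch = y;    // raw y coordinate as pitch: too crude, as it turned out
  float volume = x;   // raw x coordinate as amplitude
  sc.playNote(pitch, volume, 0.5);  // pitch, dynamic, duration
}
```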

I played around with the specifics of this implementation for a while. Simply using the y coordinate for the pitch didn’t work, as Processing’s coordinate system places higher y-values lower down the canvas, so the pitches came out inverted relative to the height of the visual points being displayed. I also needed to restrict the range of the pitch, as I found that the majority of points on the screen resulted in pitches which were ear-splittingly high. Not, perhaps, something I want to inflict on people. Finally I settled on mapping the y coordinate to a value between 0 and 100 according to its (inverted) height on the screen, and using this as the pitch. This seems to give an acceptable range of pitches, although I may alter it again in the future. The volume of the sound is currently taken directly from the x coordinate of the point, although this may also be subject to change.
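Processing’s built-in map() handles both the inversion and the scaling in one call. A sketch of what I mean, with the helper name again just for illustration:

```java
void playPointNote(float x, float y) {
  // Invert y so points higher on screen give higher pitches,
  // and squeeze the result into a listenable 0-100 range.
  float pitch = map(y, 0, height, 100, 0);
  float volume = x;  // still just the raw x coordinate, for now
  sc.playNote(pitch, volume, 0.5);
}
```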

Here’s a test run of what I’ve got so far.

This has been a useful first look at how to achieve my goals with SoundCipher, and I will continue to refine my implementation and experiment with the library’s capabilities as the project goes on.