A digital media design blog.

Month: December 2014

Getting Started with Audio

Previously I did more work on the visual side of things, refining the basic camera interaction that will be the core of the project. Now I needed to get going on the audio side. I decided to incorporate an audio element into the project to increase its complexity, and also because of my interest in fields such as graphical sound and in the ability to represent the same data in different forms.

I was, however, unaware of exactly how this would be achieved in a practical sense. I set out looking for a library which would help me achieve my desired effect – forming a link between the motion of a light (and therefore of the user) and the visual and audio representations of that motion. What I found was SoundCipher. SoundCipher is a library for Processing specifically designed to allow the creation of music within the environment. It allows ‘notes’ to be played via MIDI, and facilitates basic interactive sound design – which is exactly what I need.

I installed SoundCipher, and am now in the process of integrating its functionality into my work. The first thing I did was plug it into the foundation of the project I produced previously and attempt to play back sounds from the points I was drawing on the screen. To do this I first went through the usual steps of importing the new library and setting it up so that I could use it.

SoundCipher added to the growing list of imports.
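For reference, the addition looks something like this – a minimal sketch rather than my exact code, assuming SoundCipher’s arb.soundcipher package and the existing camera set-up:

    import arb.soundcipher.*;  // SoundCipher library

    SoundCipher sc;  // the object used to play notes

    void setup() {
      // ... existing camera / OpenCV set-up goes here ...
      sc = new SoundCipher(this);
    }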


After this, I thought about what I wanted to achieve for an initial test. I looked at how to play a note with SoundCipher, and saw that it takes two main arguments: the pitch of the sound and its amplitude, or volume. To form a connection between the visuals I’ll be producing and the sounds, these two arguments need to be related in some way to the motion of the light. There are several ways to do this, including taking information from the speed of motion, from specific colours the camera can see, or from any number of other factors. For these initial tests, I decided to link the sound to the coordinates of the point being displayed on the screen, taking the y coordinate as an input to the pitch of the produced sound and the x coordinate as its amplitude.

The code to play a note using SoundCipher.
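The call itself is along these lines – a sketch of the idea, assuming SoundCipher’s playNote(pitch, dynamic, duration) method and a PVector called brightestPoint holding the drawn point’s position:

    // First attempt: pitch from the y coordinate, volume from the x coordinate
    float pitch = brightestPoint.y;   // raw y value used as the pitch
    float volume = brightestPoint.x;  // raw x value used as the volume/amplitude
    sc.playNote(pitch, volume, 0.5);  // play a short note
    // Note: these raw values can fall outside MIDI's 0–127 range – refined below.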


I played around with the specifics of this implementation for a while – simply using the y coordinate for the pitch didn’t work, as Processing’s coordinate system has y values increasing down the canvas, so the pitches came out inverted relative to the height of the points being displayed. I also needed to restrict the range of the pitch, as I found that the majority of points on the screen resulted in pitches which were ear-splittingly high. Not, perhaps, something I want to inflict on people. Finally I settled on mapping the y-coordinate to a value between 0 and 100 relative to its (inverted) height on the screen, and using this as the audio pitch. This seems to give an acceptable range of pitches, although I may alter it again in the future. The volume of the sound is currently taken straight from the x-coordinate of the point, although this may also be subject to change.
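In code, the mapping I settled on looks roughly like this (again a sketch, assuming the same brightestPoint vector):

    // Invert y (Processing's y axis increases downwards) and map it to a 0-100 pitch range
    float pitch = map(brightestPoint.y, 0, height, 100, 0);
    float volume = brightestPoint.x;  // volume still taken straight from x for now
    sc.playNote(pitch, volume, 0.5);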

Here’s a test run of what I’ve got so far.

This has been a useful look into the methods of achieving my goals with SoundCipher, and I will continue to refine my implementation and experiment with the capabilities of the software as I continue with the project.

Processing – Working on my Project

After getting started with the basic brightness tracking code for my project, I’ve been continuing with the coding of my Processing creation. As I pointed out last time, the first thing I needed to sort out was some kind of threshold light level above which the system would start drawing from the brightest input point. The theory behind this is that if I can control, to some extent, the environment the sketch is positioned in, I can aim to ensure that the only light source above a certain level will be the light being controlled by the user as the interaction method.

To figure out a good level to start working with, I made a few small changes to the code I had already written. First, after using OpenCV to find the brightest point of the image each frame, I simply printed this value to the console so that I had an idea of what the brightest ambient light level generally was in the environment I was developing the project in (my room).


The line of code to print the brightest light level.
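That line is essentially the following – a sketch assuming the OpenCV for Processing max() method and a Capture object called cam:

    PVector loc = opencv.max();  // location of the brightest pixel OpenCV found
    println(brightness(cam.get((int) loc.x, (int) loc.y)));  // brightness of that pixel, 0-255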

This code simply takes the vector location OpenCV gives as the brightest point in the image, and then finds the brightness of the pixel at that location in the camera feed. I then ran the code with this line added to it, and monitored the values it was returning. Then, to test the scenario of an input light being in use, I held the LED light on my mobile phone in front of the camera and looked at the brightest point values again.

The values I was given for the brightest point, before and after holding a light in front of the camera.


I found the brightness of the brightest pixel under normal conditions (just ambient levels) to be generally under 200, with the value tending to stay between about 160 and 190. When I held the light in front of the camera to simulate a user being present, the value increased to over 250. With this in mind I chose a starting test threshold of 240, to allow for slight changes in the brightness of the light being used, although I realise this level may need to be changed when the sketch is used in different spaces – unfortunately one of the issues with light tracking in the way I am going about it.
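Applying the threshold then just means comparing against that brightness before drawing anything – a sketch along these lines (the 240 comes from my testing, the rest is assumed):

    float threshold = 240;  // starting threshold chosen from the ambient readings above

    PVector loc = opencv.max();
    float b = brightness(cam.get((int) loc.x, (int) loc.y));

    if (b > threshold) {
      // Only draw when a deliberately bright light (e.g. a phone LED) is present
      float x = map(loc.x, 0, cam.width, 0, width);
      float y = map(loc.y, 0, cam.height, 0, height);
      ellipse(x, y, 5, 5);
    }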

To help test the project, I then overlaid the output of the camera onto the screen so I could see whether the results of the input actions made with the light source line up correctly with the visuals produced. After making these changes, this is what I was left with.

The result of the changes.


As can be seen, these changes ensured that currently the only light bright enough to be recognised as drawing input is the one I intend the user to be holding. One thing I realised, however, is that the input shows up on the screen mirrored from the user’s perspective, due to the nature of the camera’s view. This is something I can fix by simply inverting the x-coordinates of the brightest point before drawing to the screen. For testing purposes I will also mirror the camera output, although I don’t intend this to remain a feature of the final project.
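The mirroring fix is a small change to the x mapping, plus flipping the camera preview for testing – roughly:

    // Flip the x coordinate so the drawing matches the user's point of view
    float x = map(loc.x, 0, cam.width, width, 0);  // note the swapped output range

    // Testing only: draw the camera feed mirrored behind the sketch
    pushMatrix();
    scale(-1, 1);
    image(cam, -width, 0, width, height);
    popMatrix();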

Processing Project – Ramping Up the Complexity

So, I’ve been working on this brightness tracking idea I had for the project, and it’s not going too badly. I’ve got the basics of OpenCV down and I’m working towards implementing the techniques I’ve learnt from this and other exercises in a cohesive interactive product. I don’t think it’s enough though. Looking over what I plan to do, I’m not sure there is enough substance to the project for my liking. That’s not to say I’m displeased with the project thus far, but that it would benefit from an additional layer, both in terms of its theoretical approach and its technical goals and implementation.

I was drawing a blank as to what form this could take, but I think I’ve come up with a second layer to the project which fits well with what I have already set out to produce.

Audio visualisation is a common feature found in media players and other audio software. It is a system which generates visual imagery, often animated, based on a particular audio file or piece of music. This imagery plays in time with the audio and forms a visual representation of the audio file. The reverse of audio visualisation is also possible. Graphical sound is an area which has been explored in the past, with experiments in which graphical markings were made on reels of film carrying optical audio tracks in order to produce sound. I am interested in these techniques, as they showcase examples of how data can be turned into different forms and represented and consumed in different ways. This can open up new ways of thinking about the data and prompt a deeper look into its meanings or purpose.

This exploration of the relationship between visual and audio forms has given me inspiration to expand on my previously laid-out plan for the project. With my work I want to provoke thought about the connection between different forms of data, specifically visual forms and sound. I aim to achieve this in a practical manner, much as before, by having the audience draw visual patterns via motion and light tracking. Where I will expand upon this, however, is by providing not only visual feedback on the user’s actions but also audio feedback. I intend to, in some way, use the input of the user and the associated visual they produce to generate an audio representation of the patterns. If the intended effect is achieved, this will form a strong connection in the mind of the user between their input actions and the different forms of content they generate.

Brightness Tracking with OpenCV

OpenCV, short for Open Source Computer Vision, is a programming library intended for real-time computer vision, a field which revolves around capturing, processing, analysing and understanding images with the aim of producing information. OpenCV can be used within the Processing environment to allow a greater level of camera interaction than with the standard Processing video library alone.

One of the capabilities of the OpenCV library is brightness tracking. Since this is the technology I’m going with for my project, I downloaded OpenCV and started reading up on how to implement it. After getting to grips with the basics, I started writing a piece of code to keep track of the brightest point the camera could see and simply plot a point at the relevant position on the screen. This would help me learn about the basic functions required to utilise brightness tracking.

The set-up of the brightness tracking code.


As you can see from the code above, I started by importing the necessary libraries – OpenCV and the Processing video library. I then declared variables for the OpenCV object, a capture device (the webcam on my laptop), and an image to be used in the absence of a camera input. The setup function then sets the size and background colour of the window, before loading the fallback image and creating a new capture object. I chose a camera resolution of just 160×120, as the intended purpose of the sketch does not require a high-resolution camera. The image is also resized so it’s the same size as the camera input, leaving just the camera to be started and the OpenCV object to be initialised.
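A reconstruction of that set-up, with the library and file names assumed, looks something like this:

    import gab.opencv.*;        // OpenCV for Processing
    import processing.video.*;  // Processing video library

    OpenCV opencv;
    Capture cam;
    PImage img;  // fallback image used when no camera input is available

    void setup() {
      size(640, 480);
      background(0);
      img = loadImage("fallback.jpg");     // placeholder file name
      cam = new Capture(this, 160, 120);   // low-resolution feed is enough here
      img.resize(160, 120);                // match the fallback image to the camera size
      cam.start();
      opencv = new OpenCV(this, 160, 120);
    }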

The main loop of the brightness tracking.


To actually go about finding the brightest point seen by the camera, then, I first replaced the image with the output of the camera. Then, using OpenCV’s inbuilt max() method, I was able to set a vector variable as the coordinates of the point in the image with the brightest colour value. To complete this test sketch I simply took this point, mapped it to the size of the canvas, and drew a white dot at the resultant location.
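The draw loop that does this is short – again a sketch with names assumed:

    void draw() {
      if (cam.available()) {
        cam.read();
      }
      opencv.loadImage(cam);       // replace the working image with the camera frame
      PVector loc = opencv.max();  // brightest point in that frame

      // Map the 160x120 camera coordinates up to the canvas and draw a white dot
      float x = map(loc.x, 0, cam.width, 0, width);
      float y = map(loc.y, 0, cam.height, 0, height);
      stroke(255);
      point(x, y);
    }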

The result of the brightness tracking.


The result was, as I’d expected, a black canvas with white dots regularly drawn at the location of the brightest point. However, since that point was simply the brightest in view of the camera at all times, the dots were basically being drawn around light sources and other bright objects nearby. In order to use this technique for my actual project I will need to constrain the program somehow so that the point is only drawn to the screen when I want it to be, i.e. when a light is specifically being shone to activate the drawing of an image. This first test helped me get to grips with OpenCV and some of its capabilities, and I will continue to test methods of achieving my goals within the project.

Idea Development and Media Concepts

As I’ve previously discussed, my current plan is to base my interactive Processing project around drawing to a display by using physical motions and brightness tracking to manipulate the digital image. Since the last post on the subject, I have developed this concept further and tried to incorporate a relevant media concept to base the project around.

My idea, as it currently stands, is intended to broaden the interaction from simply between the user and the display, to a further interaction between multiple users. I aim to achieve this by allowing not only for people to draw their own patterns or images to the display, but by having the audience collaborate with each other over a period of time to create one ‘community-made’ visual outcome.

This alteration to the idea is intended both to increase the complexity of the project and to tie the work into a concept which I perceive as key to the 21st-century media landscape. Specifically, I have been looking at the ideas of collaboration and open-source culture.

Open-source is a term which gained popularity along with the rise of the internet (Weber 2004), and is generally used in relation to software development. Software that is open-source has its source code made available to the public under a licence which allows anyone to look at, alter and redistribute the code, for any purpose (St. Laurent 2008). This often results in open-source software being developed collaboratively by multiple contributors, and gives the end user a greater level of control and transparency.

The principle of open-source has spread beyond the collaborative development of computer applications, and elements of this way of producing content can be found in many aspects of modern media. Wikipedia, for example, is a mainstay in the landscape of digital media and provides an immensely useful resource in educating people around the globe. The site acts as an encyclopaedia of much of the world’s knowledge, and is available to edit and contribute to by the public. Without this open-source approach to knowledge curation, the site would undoubtedly contain much less information than it does today.

Many other features of the modern World Wide Web can also be seen to have been influenced and helped by open-source principles and ideals. Many of the most-used websites of the 21st century – social media, blogs, video communities and forums – rely on user-generated content. This allows communities to form around these services which produce and consume content for and from each other, collaborating to form a bigger media picture. YouTube is a good example of this, with video crazes and ‘viral’ video phenomena happening all the time. Digital media ‘events’ such as the Harlem Shake series of videos occur, thriving on user submissions and collaborations. In these examples, what starts as a single video soon becomes a much larger popular-culture entity, as the public collaborates and contributes more and more videos until the whole is far bigger than any single video.

This concept could be summed up as ‘open-source culture’, or simply as collaborative media production, but either way it is clear to see its impacts on the way media, and perhaps especially digital media, is consumed in the 21st century.

The project I currently have planned will make use of these principles in allowing the collaboration between audience members to produce a final result. I am choosing, as per the brief, to follow a more abstract route in exploring media concepts, as I feel it is an area that lends itself more to practical demonstration than explicitly giving the audience a message about it.

I will continue to develop my ideas and aims for the project, and likely start initial development and testing of the core functionality shortly.

References:

St. Laurent, A., 2008. Understanding Open Source and Free Software Licensing. Sebastopol: O’Reilly Media, Inc.

Weber, S., 2004. The Success of Open Source. Cambridge, MA: Harvard University Press.

Iterative Design Project – An Idea is Born

In an attempt to come up with an idea for the project, I recently looked back, for inspiration, at all the Processing tasks and mini-projects I’ve worked on since we got the brief. While most of the things I’ve done have been to practise a specific programming concept or technique, like this sketch which dealt with image loading and handling, some have been more general experimentation with the environment or with an idea of my own.

One of the first sketches I wrote in Processing was a simple drawing tool, which would take a mouse input from the user and draw lines or shapes following the mouse location.

A drawing sketch with the ability to change between two colours.
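The sketch was along these lines – a simplified reconstruction rather than the original code, drawing while the mouse is pressed and toggling between two colours on a key press:

    color current;  // the colour currently being drawn with

    void setup() {
      size(640, 480);
      background(0);
      current = color(255);  // start drawing in white
    }

    void draw() {
      if (mousePressed) {
        stroke(current);
        line(pmouseX, pmouseY, mouseX, mouseY);  // follow the mouse location
      }
    }

    void keyPressed() {
      // Toggle between the two colours (white and red)
      current = (current == color(255)) ? color(255, 0, 0) : color(255);
    }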


This is an idea which I think would work well when applied in a project with a brightness-tracking based implementation. The act of physically using a light source to draw digital graphics onto the screen provides a strong sense of interactivity and allows the audience to clearly see their contribution to and impact on the work. This is something I believe to be important in an interactive work such as this. This also appeals to the personal, individual interaction methodology I have in mind for the work.

While I think this ‘light drawing’ idea could form a strong core to the project, I do not think it is enough to simply allow drawing to a canvas. This doesn’t really have enough substance to it and doesn’t pertain to a particular media theory or concept. I’m going to keep the idea in mind then, and aim to expand upon it with possible additional functionality and concepts which will bring me closer to a fully-formed idea for a project which will relate to one of these theories and fit with the given brief.
