Aaron Baker

A digital media design blog.


Processing – The Final Code

I’m going to put a link to the final Processing code for my project here, so that it can be viewed by anyone who wishes to see it.

The code can be viewed on pastebin here.

If you haven’t already, read my final summary of this unit here.

 

Iterative Design – Unit Summary

After displaying my Processing project on the Weymouth House screens, and adapting the project to this environment to the best of my ability, this unit is now coming to an end. In this post I will attempt to look back at my project, and evaluate the strengths and weaknesses of my work.
Looking back at the brief, we were tasked at the beginning of the unit with producing:

A piece of interactive information design for a shared public space, which is intended to elucidate/explain an idea or concept you perceive as key to our 21st century media experience.

We were also told this could take the form of a direct information graphic or a more abstract piece of work, and we were required to produce it in the Processing environment, using some form of camera-based interaction as the primary method of interacting with the audience.

Reviewing my project as a whole, I have both positive and negative things to say about the work I have produced. Early on I considered the choice between creating a direct information graphic and an abstract/artistic piece of work to fit the brief. At the time, I decided that I wanted my work to sit at the more abstract end of the spectrum, as I thought it would allow more opportunities to be creative. I believe that I have stuck with this aim, and that the final Processing sketch I produced conveys a media concept in an abstract manner. However, I think that because I chose to go about the task in this way, the project may have become slightly removed from its initial purpose of explaining these media concepts. Aspects of the media concepts I chose to focus on – primarily collaboration/open-source culture and audio/data-visualisation – can certainly be seen in the work, however I do not know whether this is enough to completely fulfil the given brief. Whether the work actually ‘explains’ these concepts is up for debate, and one could argue that because of the abstract way I tried to touch upon these ideas, the interpretation of the work and the concepts behind it is left up to the audience. This is common of abstract pieces of artwork or design, but it is possible that it limits the project’s effectiveness as a piece of information design.

Where the project succeeds is in the fact that it meets the goals I set out for it. The audiovisual language of the work was specifically designed to speak to its concepts, and every design and implementation choice was made with the goal of improving the quality of the interaction with the audience. The way the piece communicates its message, then, can be found in the intuitive way in which its interaction is presented. A clear link is formed, audibly and visually, between the actions of the user and the result that they experience. The work does not exist without the input of the user, which from the start presents the idea of collaborative media production. The intuitiveness of the relationship between the user’s input and the visual and audio feedback was important, as this is the means by which I tried to convey ideas about data visualisation. The goal was to make it instantly clear exactly how the interaction affects all elements of the display, in an attempt to encourage the user to see that their movements, the visuals and the audio output are all representations of the same information. It is in these ways that I have tried to get across my chosen media concepts with the project, although whether these techniques are effective will vary from person to person, which is why I am unsure if this can be considered a fulfilment of the brief. I do not believe the project in itself has been a failure, then, but I do question whether an abstract piece of work such as the one I have produced can, by nature, be considered an effective method of conveying information.

Technically, I consider the project to be largely a success. I have engaged with the Processing environment, one I had no previous experience with, and learned much about the techniques and best practices of creating interactive displays with the tool. I used multiple external libraries in the creation of my work, and applied principles of object-oriented programming in my code. The work I ultimately produced fits the intended outcome of the unit in that it is an interactive display which uses a method of camera-based interaction. I chose the interaction method I pursued, brightness tracking, for good reasons: I wanted a way to allow the user freedom of interaction with the resources I had available, as opposed to some of the more passive methods of interaction like face tracking, as I felt it would better suit my project. This turned out well for the most part, albeit with a few technical issues arising when the work was moved into the public Weymouth House setting. If I were to attempt the project again, I might give more thought to a more advanced camera interaction technique such as skeleton tracking, although at the time I believed this to be beyond my skills and time-frame. I believe that this may have improved the project overall, as the main issues arose from the nature of brightness tracking in uncontrolled environments.

Overall I’m pleased with the project I have managed to produce, and believe the implementation of my idea to be of a reasonably high standard, whether or not the idea itself is considered entirely in line with the intended outcomes of the brief. The iterative approach taken during development helped me to better understand the cycle of development that designers and developers carry out. It was advantageous because, rather than a single linear pass of analysis, design, testing and evaluation, the iterative nature of the process meant I completed the project in stages, at each stage analysing the requirements for the rest of the project before designing a solution and continually testing my work. This has resulted in, in my view, both a higher quality final product and a better understanding on my part of the good and bad points of my work. The unit as a whole has been a positive learning experience for me, and while there are aspects I would change if I were to undertake the task again, I believe the experience I have gained with both the practical and theoretical sides of the design/development process will help me massively in upcoming projects and in later life.

Displaying the Project in the Intended Space

Yesterday I took the final version of my project into Weymouth House to display it on the public screens in the space. Going into this, I had a piece of visual, camera-interactive software which I was pleased worked well and was effective in the environments in which I had tested it so far. I knew the project would likely face challenges on the public screens that I had not yet encountered, and I prepared to learn from the experience and take feedback on board.

There are multiple screens in the foyer of Weymouth House which were intended for use in displaying this project. The main groups of displays are a long, thin strip of screens high on a wall, and two pairs of horizontal displays, one screen above the other, on either side of a wall close to the entrance of the building. I ended up using the lower of the two screens in the pair facing the entrance.

The lower of these two screens is the one my project was ultimately displayed on.

When I placed the laptop with my project on it in a location that enabled connecting it to the screen, I realised that this location, facing both the front door of the building and some quite bright lights, was not well suited to brightness tracking as it was currently implemented in my project. The brightly lit environment was interfering with the tracking of the light I was using for interaction, and causing other sources of light to be tracked instead. I disconnected the laptop and went into the code for the project, in the hope of adjusting some of the settings to suit the new environment. Specifically, I tested different settings for the threshold brightness at which the brightest point seen by the camera would be interpreted as the light held by the user. I increased this setting several times and tested the sketch with the laptop back in its location below the screen. Eventually, with a combination of a newly increased threshold brightness value and moving closer to the camera with the light, the light could be tracked with reasonable consistency. This did, however, make it necessary for me to prompt users of the project on where to stand for the best results.

Another issue that cropped up was that, unfortunately, I was unable to get hold of a set of speakers more powerful than those built into my laptop. As evidenced by the video above (in combination with the high level of ambient noise in the room and the fact that I was having a conversation about the technical aspects of the project), this made it even more necessary to be close to the laptop in order to hear the audio feedback of the project, which is a crucial aspect of the work.

In addition to this, a piece of feedback I received was that, since the default state of the screen in my project is a blank white background, it may be difficult for people to know how to interact with the project. One possible fix for this issue would be to add a splash-screen style message to the work, instructing people on what it is and how to use the light-based interaction. Rather than this approach, however, I think that if the installation were to be publicly displayed again it would be best to have a physical sign present explaining the project and how to get involved, along with a few torches or lights which the audience could use. This would also solve the current issue of people needing their own light source in order to interact with the work (although admittedly they commonly do possess one, in the form of a mobile phone).

Things didn’t necessarily go 100% according to plan, then. That isn’t to say that the project was a failure, however. The people who did test it, after I’d tweaked the code to deal with the environment as best it could, seemed genuinely interested in the work. People seemed to enjoy interacting with it, something which I feel can be attributed to the ‘involved’ nature of the light-tracking based interaction method (although this caused issues in other areas). I also received feedback from users telling me that the project was aesthetically pleasing when in use, which made me glad of the multiple visual changes I’ve made to the project.

In the future, if I were to display the project publicly again there are changes I’d make. The project works best in a controlled environment, so I’d make sure to either get access to a screen in a more suitably-lit area, or I’d make an effort to set up an artificial environment for the project to be displayed in. This could be accomplished using techniques such as dark fabric backdrops and partitions to keep the light at a better level. I would also strive to make use of some more powerful speakers, as the ones in my laptop left something to be desired and didn’t help towards showcasing the audio aspect of the project.

Overall though, I think I adapted well to the challenges and restrictions that I faced during the screening, and feedback leads me to believe the project had a positive impact on its audience. This, in my view, is grounds to call the day a success.

Processing – The Final Version

After engaging in a session of user testing, I’ve been busy updating my Processing code to reflect the feedback that I received.

The main piece of constructive criticism that I got was that the project would be improved by utilising a more interesting, more colourful colour scheme. I have since come up with a way to do this which works around a previous dilemma I was having – which colour to use. The sketch now uses a different random colour for the visuals each time the drawing function starts (either when the program is first run, or after each playback is over). To implement this in a way that still allowed for a visual colour change to coincide with the playing of the audio notes, I decided it would be easier if I first made a few small changes to my existing code.

The new colour mode of the sketch.

To achieve what I had in mind for the colouring of the notes, I first changed the colour mode of the project to HSB. This stands for hue/saturation/brightness, as opposed to the red/green/blue of the previous RGB colour mode. This is a change I made so that I could easily change the saturation and brightness levels of the colour in correspondence with several other factors of the project.

A random colour.

Changing the colour with the size value.

In the note class I had previously defined, I changed the colour variable to be assigned the value of a new global variable I created called segColor, which is given a random colour value in the setup of the sketch. Then, this colour is modified upon the initialisation of a ‘note’ object, so that the colour varies with the amplitude of the sound it produces and the size of the circle drawn to the screen. This is achieved through altering the saturation value of the colour to a mapping of the z parameter (which also defines size and amplitude) between 0 and 100. This causes faster motions by the user to correspond with bigger circles, more vibrant colours and louder sounds. I think this is a good implementation as it forms a strong link between the elements of the project.
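
In rough outline, the colour handling now looks something like this. This is a minimal sketch rather than my exact code: pickNewColour is just an illustrative name, and the saturation/brightness values are approximate.

    color segColor;                              // base colour shared by the current drawing
    ArrayList<Note> notes = new ArrayList<Note>();

    void setup() {
      size(640, 480);
      colorMode(HSB, 360, 100, 100);             // hue 0-360, saturation and brightness 0-100
      pickNewColour();
      background(0, 0, 100);                     // a white canvas, expressed in HSB
    }

    void pickNewColour() {
      // called when the sketch starts and again after each playback finishes
      segColor = color(random(360), 80, 80);
    }

    class Note {
      float x, y, size, amp;
      color col;

      Note(float x, float y, float z) {
        this.x = x;
        this.y = y;
        size = z;
        amp  = constrain(z, 0, 100);             // faster motion = louder note
        // the same z value drives the saturation, so speed, size, volume and
        // vibrancy are all views of one piece of information
        col = color(hue(segColor), constrain(z, 0, 100), 80);
      }
    }

    void draw() {
      // light tracking, drawing and playback handled elsewhere
    }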

The ‘flash’ function.

It is important to my vision of the project that the appearance of a note visibly changes in some way while its sound is being played, so that it is clear that the two things are related. Previously I achieved this by having the notes turn red to indicate playback; however, this no longer makes sense with the variable colours of the project. Instead, the brightness of the note’s random colour (which is initially defined with a brightness value of 80) is now increased to 100 for a short period of time, before gradually fading back to its original value. This creates a ‘flash’ of brightness in the note, and very effectively lets the user know which note is being played back. To achieve this, I added a ‘flash’ function to the note class, which simply increases the brightness of the note’s colour and sets a variable to record that the note has flashed.

The code that causes the brightness of the note to fade.

Then, the drawCircle method of the class checks this value to see if the note has increased in brightness. If it has, the brightness is reduced by 2 and the colour updated each frame, until the brightness is once more at its starting value and the note has returned to its initial colour. This creates a gradual fade back to the starting colour, which makes the transition seem more organic and less jarring for the user.
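
Put together, the flash-and-fade behaviour inside the note class looks roughly like this. flash and drawCircle are the method names mentioned above; the field names (col, flashed) are my own shorthand here.

    // inside the note class
    boolean flashed = false;

    void flash() {
      col = color(hue(col), saturation(col), 100);         // jump to full brightness on playback
      flashed = true;
    }

    void drawCircle() {
      if (flashed) {
        float b = brightness(col);
        if (b > 80) {
          col = color(hue(col), saturation(col), b - 2);   // step back down a little each frame
        } else {
          flashed = false;                                  // resting brightness reached
        }
      }
      noStroke();
      fill(col);
      ellipse(x, y, size, size);
    }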

An additional issue, which I did not point out in the user testing post but which has since become apparent, is that the sketch was found to slow down once the notes ArrayList held too many values. This is because every frame all of the circles need to be drawn, and at each playback the entire list of notes needs to be played. To counter this issue, I implemented a piece of code so that the ArrayList is reset and the canvas cleared if the list holds more than a set number of notes.

The code to clear the list and the screen.
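
Roughly, the check amounts to the following, run from the main draw loop. NOTE_LIMIT is a placeholder name and value.

    int NOTE_LIMIT = 200;              // placeholder value

    // in the main draw loop:
    if (notes.size() > NOTE_LIMIT) {
      notes.clear();                   // throw away the stored notes
      background(0, 0, 100);           // wipe the canvas back to white (HSB values)
    }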

This is useful, as previously there was no way to clear these things without restarting the sketch, and I needed to find a method of doing this before finalisation of the project anyway. One concern I had with doing it this way is that some of the collaborative aims of the project would be lost, as multiple users would be less likely to be able to work together to produce one audiovisual design without it being very short as the previous work would be cleared. However, the limit being set where it is still permits multiple drawings to merge together in this way, as long as they are kept to a reasonable length. This unfortunately is a necessity if the project is to be kept usable, without noticeable slowdowns which reduce the quality of the interaction with the work. Ultimately I do not think the work is negatively impacted overall by implementing this function.

Return of the testers

I asked the testers from the previous testing session to have another look at the project with these changes made to it, to see what their opinions were on (what I hoped would be) the final version of the work.

Their feedback was massively positive, which pleased me to hear. The new colour scheme was approved of, and they agreed that setting the size limit on the list of notes improves the overall usability of the project. The changes seemed to go down so well, in fact, that at the end of the testing when I tried to close the project, one of the testers asked me to leave it for a minute because he was enjoying interacting with it.

This positive feedback is great to hear and I’m confident that, with the issues that were found in my rounds of user testing resolved, I have produced a project which I’m happy to call a final version and take into the Weymouth House foyer for display on the public screens.

Processing – User Testing

When I last left off, I was working on finalising the visual design of the project. I changed the black and red dots I had been using in the development of my code, and found more ways to visually suggest a relationship between the movements made by the user, the graphics appearing on screen, and the audio playback which was produced. I tried to achieve this by linking together various properties of the sound and geometric shapes, for example the size of a circle and the amplitude of a sound.

From my perspective the project was moving quite nicely towards completion, with just a few changes left that I’d like to make before public screening of the work. In any project, though, it is important to gather feedback from others about your work so that others’ opinions and viewpoints can be heard. Since the target demographic of my project is primarily students, as the work is intended for display in a university building, I decided to turn to my house-mates for input on the current version of the piece.

I had three main areas I wanted to find out their opinions of. These were the visuals and audio of the project, the media concepts behind it, and the effectiveness of the work overall.

Visuals & Audio

I wanted to find out what they thought about the visuals and audio that make up the project. Having just overhauled the graphics, I personally thought they were moving in the right direction, however I was still unsure of the overall effect. The testers, having seen some previous incarnations of the project, agreed that the circle-based graphics of the latest iteration worked better than some of the old visuals such as lines and dots. They noticed and praised the fact that the speed of their movements with the light changed aspects of what was being played on the screen, and quickly grasped exactly how the different aspects of the feedback generated from their actions were linked. This is something I had tried to reinforce in order to make the link to data visualisation – the different aspects of the feedback being the same data represented in different forms – so I’m glad that it seems to have worked. They didn’t bring up any problems with the audio aspect of the work, whether the volume, the ‘instrument’ used or any other facet, so for now at least it appears that the work I need to do on this element of the project is complete.

Something they did bring up, however, in relation to the visual design, is the colour scheme. While I too was unsure about this, I wanted to wait and see what the testers thought before changing it. Currently the work is in greyscale, with shades of red upon playback. The testers suggested that this perhaps isn’t the most exciting colour scheme and may not do the project any favours in terms of keeping people’s interest. I agree with this line of thought, and will take the advice into account and change up the colouring of the project.

Media Concepts

Since the project is supposed to evoke elements of a particular media concept, I wanted to find out from the testers whether my work to include the ideas of collaboration/open-source culture and different forms of data visualisation within the piece has been successful. I asked what they thought the concept behind the work was, and they did not mention these specific media ideas. When I told them about these ideas, however, they agreed that the work contains elements of them and could see where they were incorporated into the project. In addition, the concepts they did give, such as freedom of expression and creativity, are not necessarily too far away in terms of ideas. ‘Open-source culture’, in the context in which I aimed to represent it, focuses on the ability of people to produce media of their own, or collaboratively, independently of large corporations or structures, and in this way the idea of individual expression can easily be seen to be closely related. I’m satisfied that I’ve represented the concepts I set out to the best of my abilities with this project, and the fact that the exact concepts are not always identified runs parallel to the abstract nature of the work as a whole, in which personal interpretation of the meaning plays a large role.

Overall

Asking the testers to give their opinion of the overall effectiveness of the project, I received generally very positive replies. The consensus seems to be that, other than a few graphical changes which I intend to make, the work I have produced is a success, in that it is enjoyable to interact with and (after the changes) can be aesthetically appealing. The piece, and my prompts as to its concepts, got the testers to engage with media concepts which, although not always exactly the ones I had intended, incorporate aspects of them and all relate to each other. Given the abstract nature of my project’s approach to communication, I feel this is a success in terms of communicating media concepts.

This testing session helped me greatly to see the strengths and weaknesses of my project in its current iteration. I will take the advice and opinions I gained on board and, over the next few days, work to incorporate this feedback into my project, hopefully improving upon it and producing a final piece of Processing work which I can showcase in the university space.

A Visual Overhaul

I’ve been writing for a while that I wanted to overhaul the visual design of the project. The black dots turning red, while a good way to clearly see if the functionality I was implementing was working during the development process, are not anywhere near exciting enough to be the final visual representation of the project. More than this, I wanted to add an additional layer or two onto the graphical side of things, so that the connection between the audio and visual elements of the piece could be emphasized in a greater number of ways, and in ways which are more evident. I talked to Paul, the Processing workshop leader, about what I had so far, and while he seemed to approve of the overall project he agreed that visually it needed an upgrade. He suggested that each note, rather than a single point, should be represented by a geometric shape. This shape would then have more properties which could be changed according to the motion of the light, and this could then afford more connections with the properties of the accompanying sounds.

Having previously experimented with different visual styles, such as connecting the points together with lines (in an attempt to have the ‘drawing’ produced by the light’s motion more closely resemble an actual line drawing), I decided to follow this advice and experiment with geometric shapes instead. I tested different shapes such as triangles and various quadrilaterals for the points, however this produced an effect that I did not like. I wanted the overall appearance of the graphic elements of the project to be quite organic and free-flowing, so that it would not be too far removed from the physical movements made by the user with the light source. To this end, I settled on circles as my geometric shape of choice. To change the aesthetic from what I had previously made, though, the circles needed to be of differing sizes. This would give me a property of the points which I could pair with the amplitude of the sound, which up until this point had depended on the x-coordinate of the point.

The newly altered lightDraw function, complete with size changing.

A link having been made between the properties of graphics and sound, I needed a way to tie both of these in with the physical motions made by the user. The best fit seemed to me to be the speed of these motions. To start implementing this, I began changing the lightDraw function, as an extra variable would need to be stored in the note objects. Unsure at first how to find out how fast the light source was moving, I experimented with the previous version of the project, changing the speed at which I moved the light. I found that, due to the fixed frame-rate of the sketch, the faster the light moved, the further apart each successive ‘note’ object would be on the display. Using this information, I realised I could keep track of the distances between points and use this data to generate a size for each point. I declared a new variable called ‘last’, which stores the location of the light until the next frame so it can be compared with the new location. To generate a size for the point, I then took the absolute (always positive) difference between the current x value and the last x value, and added it to the equivalent difference for the y coordinate. I accounted for the case where a point is the first drawn to the screen by making the default ‘last’ x value -10, and giving the size a base value of 20 if this value is found in the last position (as -10 cannot occur naturally for the x position during the course of operation of the sketch). This value is then passed into the new note instance which is created, along with the x and y values which were already being passed.
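
A minimal sketch of the updated lightDraw function as described. Beyond ‘last’, lightDraw and the notes list, the details follow my description above rather than the exact screenshot.

    PVector last = new PVector(-10, 0);          // -10 in x means "no previous point yet"

    void lightDraw(float x, float y) {
      float z;
      if (last.x == -10) {
        z = 20;                                  // first point gets a base size
      } else {
        // the distance moved between frames stands in for the speed of the light
        z = abs(x - last.x) + abs(y - last.y);
      }
      notes.add(new Note(x, y, z));              // pass the size along with the coordinates
      notes.get(notes.size() - 1).drawCircle();  // draw straight away so the display keeps up
      last.set(x, y);
      doPlayBack = true;
    }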

The changes to the ‘note’ class.

A number of changes were made inside the ‘note’ class to account for the new visual style. For starters, the constructor now accepts a third argument, named z, which receives the size value calculated above. The value of this argument is stored in a new ‘size’ variable. To make the visuals of the notes more cohesive, their colour is also changed according to the size. As before, a base colour of black is set, however now the z parameter is added to all three RGB values of the colour, essentially shifting the colour closer to white as the speed of the light source and hence the size of the note increases. The amplitude of the sound, via a newly declared ‘amp’ variable, is also given the value of z, although constrained between 0 and 100 so as not to cause the volume to get too loud.

The drawCircle method is much the same as before, although now instead of drawing a point, an ellipse is drawn at the (x, y) coordinates with the width and height set to the value of the size field, creating a circle. Upon playback of the note via the play method, the audio note is now played using the amplitude variable rather than the x coordinate. The note also, instead of simply turning red, is now essentially ‘tinted’ slightly red by either increasing its red value or, if this is already too close to 255, by reducing the blue and green values of the colour. This change was made to preserve the differences in the colouring of different notes after playback. I am unsure whether to keep this ‘turning red’ behaviour of the playback functionality, but at the moment it serves its purpose for testing. I intend to gather some user feedback in the near future, so I will ask for input on possible colour schemes.
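
The reworked note class, then, looks something along these lines. The SoundCipher instance name (sc), the note duration, the pitch mapping and the exact ‘tint’ amounts are assumptions for illustration.

    class Note {
      float x, y, size, amp;
      color col;

      Note(float x, float y, float z) {
        this.x = x;
        this.y = y;
        size = z;
        col = color(z, z, z);                    // base black shifted towards white with speed
        amp = constrain(z, 0, 100);              // keep the volume within a sensible range
      }

      void drawCircle() {
        noStroke();
        fill(col);
        ellipse(x, y, size, size);               // a circle whose diameter is the size value
      }

      void play() {
        // pitch still comes from the (inverted) y position; volume now comes from amp
        sc.playNote(map(y, 0, height, 100, 0), amp, 0.5);
        // tint the note towards red rather than replacing its colour outright
        float r = red(col);
        if (r < 205) {                           // illustrative amounts and threshold
          col = color(r + 50, green(col), blue(col));
        } else {
          col = color(r, green(col) - 50, blue(col) - 50);
        }
      }
    }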

The updated visuals of the piece can be seen here:

As evidenced by this video, some other small changes have been made to the appearance of the project. I have removed the camera output from the display, as this was only ever there to make testing the code easier. I have also made the sketch full-screen, as this is the way it will be showcased in its final form on the screens in Weymouth House. I can see some issues with the way the project is now, such as the notes created when the light source is moving quickly becoming increasingly hard to see as the colour of the circles approaches white and blends in with the background of the canvas. These circles are also the largest, due to the way the properties of the notes are linked, and so have the potential to hide other, smaller notes that come before them if they are drawn over them. I will continue to refine the project in its current form, and I intend to get a few other people to test my work soon so that feedback can be incorporated and any additional issues can be spotted and resolved before the work is displayed in the Weymouth House foyer space.

Processing – Fixes and OOP-ifying

Jumping right back in where I left off, I’ve managed to fix the issues I was having with my previous attempt at implementing playback functionality. More than that, though, I realised a bigger problem with the code as it was: a serious lack of object-oriented programming. Inspired by my earlier ventures into incorporating these principles and techniques into my work, I decided it was about time to raise the standards of my code and OOP-ify it (no, that’s not a word).

To start down this path, I thought of how my code would best translate to a more object-oriented approach. I would need to define a class, so I mulled upon what my class would be, and how I would use it. I thought about things such as defining a drawing as a class, and giving it functions such as adding a point and playing through its audiovisuals. In this instance the fields of the class would be things like number of ‘notes’, and I would essentially be transforming my entire project into one big class with different methods. While this may have worked, I decided to opt for a somewhat simpler route.

The ‘note’ class which I created.

I decided to define a ‘note’ as a class. A note, in the context of my project, is both the visual point on the canvas created by the presence of a bright enough light source, and the audio note representation which is generated from this. Thinking about what fields the class would require, I declared variables for the x and y coordinates, amplitude and colour. I also included a size field, however in the screenshot above I have hard-coded the value 10 as the stroke weight of the drawn point, something which I have since changed to use this variable. The constructor of this class is quite simple, simply accepting parameters for the x and y coordinates of the note and setting the corresponding fields to these values. The colour is currently still set to black for testing, however this is likely to change. As before, the amplitude of the sound is set to the x coordinate of the drawn point, however I’m still looking at ways to change this so the sound is more intuitively and evidently linked with the appearance of the visuals.

In terms of functionality, the note class needed to do two things – draw a point to the screen while the user is drawing, and play a note during playback, along with altering the drawn point in some way. To accomplish this I created two class methods named drawCircle and play. The first of these takes functionality that was previously found in the main section of code, setting the stroke colour and weight before plotting a point at the x and y coordinates of the note object. The play method holds some of the code for the playback functionality, and simultaneously plays a note using the SoundCipher library and changes the colour of the note object to red.
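
In rough outline, this first version of the class looks like the following. The pitch mapping and the note duration passed to SoundCipher are assumptions here, and sc is the SoundCipher instance.

    class Note {
      float x, y, amp;
      color col;
      float size = 10;                         // currently hard-coded as the stroke weight

      Note(float x, float y) {
        this.x = x;
        this.y = y;
        col = color(0);                        // black while drawing, for testing
        amp = x;                               // amplitude tied to the x coordinate for now
      }

      void drawCircle() {
        stroke(col);
        strokeWeight(size);
        point(x, y);                           // plot the note as a single point
      }

      void play() {
        sc.playNote(map(y, 0, height, 100, 0), amp, 0.5);   // play the note via SoundCipher
        col = color(255, 0, 0);                              // turn the point red for playback
      }
    }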

The main loop calling the drawCircle method.

In the main loop of the sketch, I have now included a for loop which, every frame, will iterate through the ArrayList of notes and call the drawCircle method of each note object in the list. This is to ensure that each note gets drawn to the screen every frame, and prevents overlapping problems I was having when I was manually re-drawing the circles in the play method, where notes in the background would get briefly redrawn over those in the front when they were played. This could be seen as a valid way for the notes to work, as it would ensure the currently played note is always seen, however it often produced a jarring visual effect which was aesthetically displeasing.
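
In other words, something along these lines now sits at the top of draw():

    void draw() {
      // redraw every stored note each frame, in the order they were created,
      // so nothing is lost and nothing is drawn out of order
      for (int i = 0; i < notes.size(); i++) {
        notes.get(i).drawCircle();
      }
      // light tracking and playback handling follow below
    }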

The newly updated lightDraw function.

The lightDraw function has been accordingly updated. Since the actual drawing of the points has been moved to a method in the note class, the lightDraw function now adds a new note to the notes list (rather than a vector as it did before) and then calls the drawCircle method of the last note in the list (the one that was just added). This method is called here as well as in the main draw loop, because I found when just leaving the notes to be drawn at the next frame, sometimes the drawing on the screen would noticeably lag behind the movement of the light source, which led to a less intuitive feeling interaction.
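
The updated function is short – roughly the following, keeping the doPlayBack flag from the earlier version:

    void lightDraw(float x, float y) {
      notes.add(new Note(x, y));                   // store the point as a note object
      notes.get(notes.size() - 1).drawCircle();    // draw it immediately so the display
                                                   // keeps up with the light source
      doPlayBack = true;
    }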

Finally, working playback functionality!

That brings me to the newly-revamped and functional playback code. While not massively different from the way it was before in what it does, it differs slightly in the way that it goes about looping through the notes. Firstly, I have declared a new integer variable (not seen here) called index. The function checks whether this index value is greater than the size of the notes ArrayList. If so, the index is returned to its default value of 0 and the playback is stopped. Then the function checks whether the notes list has any data in it (better safe than sorry), and if it does a temporary note object is created and assigned the data of the note object located in the ArrayList at the position defined by the index variable. The play method of this note object is then called, which causes the sound to be played and the changing of colour of the point on the screen. Whereas before this series of events (without the inclusion of the note class) happened in a for-loop, I found that this was causing the playback to occur all at once, rather than at a reasonable tempo. I therefore have now created a timing function to space out the playback of each note. This is called each frame that the playback function is running, and checks using another new variable (again not shown here) simply named ‘time’ whether a specific amount of time has passed since the last note was played. If it has, the time variable is set to the current time, essentially setting this timer back to zero, and the index is incremented to allow the next note in the list to be played by the playback function. This new technique of cycling through the list of notes has solved the issue I was having, and the playback now happens as I intended, note-by-note.
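
A simplified sketch of that playback logic – the gap between notes and the exact ordering of the checks are assumptions on my part:

    int index = 0;         // which note to play next
    int time  = 0;         // when the last note was played
    int noteGap = 250;     // milliseconds between notes - illustrative value

    void playBack() {
      if (index >= notes.size()) {
        index = 0;                      // reached the end: reset ready for next time
        doPlayBack = false;             // and leave playback mode
        return;
      }
      // only move on to the next note once enough time has passed since the last one
      if (millis() - time > noteGap) {
        notes.get(index).play();        // sound plus the visual colour change
        time = millis();
        index++;
      }
    }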

The outcome of all this hard work – it looks pretty much the same.

I am pleased I have been able to fix the problems with the playback functionality, and although moving some of the functions of the project into their own class hasn’t, to the outside observer, visually changed how the sketch works, I am confident that it will enable me to proceed with the project more easily and make the important changes I would like to make. I intend to revamp the visual design of the work before looking to gather some user feedback for the project, in the hope of getting closer to a final version.

 

Initial Playback Attempts and Code Cleanup

Further developing the Processing project and my goals for it, I have started work on a playback function. My intended outcome is that a user will draw to the screen using the light, and then, when they are done drawing, the software will loop through what they have drawn and play the audio representation of the pattern, while showcasing exactly where the sounds are being generated from. This should provide a sense of ‘completeness’ to the activity, by which I mean that the audience will be able to draw what they intend to draw, and then afterwards hear, from start to finish, the generated sounds they have produced through their actions. I think this is superior to my previous implementation, where the sounds would play as the user was drawing, as this way the audio is played as a whole experience and has the potential to form a cohesive piece of music (we’re probably not talking top-40, but music all the same). By reinforcing the link to the visuals while the audio playback is happening, the connection can still be seen between the actions of the user and the audio feedback. It is important to find a way to reinforce this, then, as if the audio simply plays back over a static drawing it may not be clear that the visual and audio patterns are related.

The new variables for use in the playback.

To start trying to piece together playback functionality into the project, I realised I’d have to store the data gathered from the light tracking, rather than just using it to draw to the screen and then getting rid of it. To accomplish this, I added a new ArrayList to store the points, which I have called ‘notes’ due to the musical nature of the points. You can also see I added a boolean variable to control whether the sketch is in ‘playback mode’ or not.

The cleaned up main loop section.

I used adding this new functionality to the program as an opportunity to clean up the code a little before it got out of hand. The new core of the main loop checks, as it did before, whether the brightest point is higher than the threshold brightness level. If it is, a new function, lightDraw, is called. This function is intended to contain the code that handles the drawing of visuals and the gathering of data from the movement of the light source. I also thought about how to prompt the sketch to go into playback mode. Initially, for testing purposes, a mouse click was required, however to make the project work entirely through the light-tracking method of interaction, I have changed playback mode to trigger when no light source above the threshold brightness is detected (and the doPlayBack variable is set to true, which happens after a point has been drawn, so that playback will not occur without any data to play). I think this is an acceptable solution to the problem, although it has the potential to trigger playback if the user stops drawing for a second without having finished. I will monitor this situation moving forwards and try to determine whether this solution causes any problems.
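
The shape of the main loop after this cleanup is roughly as follows, assuming the OpenCV for Processing brightest-point lookup (opencv.max()) and the video capture object are set up as in my earlier posts; the exact pixel lookup is illustrative.

    float threshold = 240;            // brightness above which a point counts as the user's light
    boolean doPlayBack = false;

    void draw() {
      opencv.loadImage(video);
      PVector bright = opencv.max();  // location of the brightest pixel in the frame
      float level = brightness(video.get((int) bright.x, (int) bright.y));

      if (level > threshold) {
        lightDraw(bright.x, bright.y);   // the user is drawing with the light
      } else if (doPlayBack) {
        playBack();                      // no light in view: play back what was drawn
      }
    }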

The light drawing function.

The lightDraw function, in its current incarnation, takes the input x and y parameters from the brightest point on the image, and as before plots a point to the screen at these coordinates. However, additionally these values are now also added to the notes ArrayList, in a vector format. This allows the data to be reused in the future for playback purposes. The function also sets the doPlayBack boolean to true, since data has been drawn to the screen meaning the playback function can be called into effect.

Current iteration of the playback function.

If the brightest point is under the threshold, and the doPlayBack variable is set to true, the playBack function is run. This function is currently incomplete, as it has a few problems, though it is a good start towards the eventual implementation I am aiming for. In its current form, the function iterates through the notes ArrayList, at each index retrieving the vector which is stored, playing a note which corresponds to the x and y values and plotting a point to the screen at the relevant coordinate. This point is plotted, this time in red, to show which point in the pattern the user drew is generating the sound being played. This is an early attempt, and while helpful in the testing stages it will need to be changed for the final project as it is not very aesthetically appealing. I also intend to further emphasise the link between audio and visual in some way, although I am not yet sure what method I will use to accomplish this. As it stands, the only link is the physical location of the dots – higher dots produce higher sounds, and the volume of the sound varies with the x-location of the points. I do not feel this second link is visually strong or intuitive enough to comprehend immediately, so it is here that I will probably try to improve.

The code featured here gives the following outcome.

Although not the easiest thing to see on video, this showcases the current problem I’m having with the playback functionality. The nature of the for-loop iteration method I have used in the playback function means that as soon as playback mode is entered, the entire array of notes plays essentially back to back, far faster than I would like. The turning of the points on the screen from black to red also happens all at once, after the sound has played. This is a huge issue, as the intention of the colour change is to show, note by note, which point is producing each sound, so it is important that each point changes colour as its respective note plays. Another problem with what I have so far is the need to clear the array after every playback. This is necessary because, due to the aforementioned problem with how the playback happens, if the amount of data is too large the program will come practically to a standstill while it tries to play it all at once. In the final build of the project I hope to remove the need to clear the array every time, so I can preserve the possibility of collaboration between people in creating one set of data, and therefore one audiovisual production. I will continue to work on the features I have implemented and strive to fix these issues over the coming days.

Getting Started with Audio

Previously I did more work on the visual side of things, refining the basic camera interaction which will be the core of the project. Now I needed to get going on the audio side. I decided to incorporate an audio element into the project to increase its complexity, and also because of my interest in fields such as graphical sound and the ability to represent the same data in different forms.

I was, however, unaware of exactly how this would be achieved in a practical sense. I set out looking for a library which would help me achieve my desired effect – forming a link between the motions of a light and therefore the user, and the visual and audio representations of that motion. What I found was SoundCipher. SoundCipher is a library for Processing which is specifically designed to allow the creation of music in the environment. It allows the playing of ‘notes’ via a MIDI format, and facilitates basic interactive sound design – which is exactly what I need.

I installed SoundCipher, and am now in the process of attempting to integrate its functionality into my work. The first thing I did with the software was plug it into the foundation for the project I produced previously, and attempted to play back sounds with the points I was drawing on the screen. To do this I first went through the usual steps of importing the new library and setting it up so that I could use it.

SoundCipher added to the growing list of imports.

After this, I thought about what I wanted to actually achieve for an initial test. I looked at how to play a note with SoundCipher, and saw that it took two main arguments to do this. These are the pitch of the sound and its amplitude or volume. Thinking about how best to form a connection between the visuals I’ll be producing and the sounds, these two arguments need to be in some way related to the motion of the light. There are several ways to do this, including taking information from the speed of motion, specific colours the camera can see perhaps, or any number of other factors. For these initial tests, I decided to link the sound to the coordinates of the point being displayed on the screen, taking the y coordinate as an input to the pitch of the produced sound, and the x coordinate as its amplitude.

The code to play a note using SoundCipher.

I played around with the specifics of this implementation for a while – simply using the y coordinate for the pitch didn’t work, as Processing’s coordinate system uses higher y-values going lower down the canvas, so the pitches would appear inverted from the height of the visual points being displayed. I also needed to restrict the range of the pitch, as I found that the majority of the points on the screen ended up resulting in pitches which were ear-splittingly high. Not, perhaps, something I want to inflict on people. Finally I settled upon mapping the y-coordinate between 0 and 100 relative to its (inverse) height on the screen, and using this for the audio pitch. This seems to give an acceptable range of pitches, although I may alter this again in the future. The volume of the sound is currently simply taken from the x-coordinate of the point, although this may also be subject to change.
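
Pieced together, the test currently amounts to something like this. The playPoint helper name and the 0.5-beat duration are mine, for illustration.

    import arb.soundcipher.*;

    SoundCipher sc;

    void setup() {
      size(640, 480);
      sc = new SoundCipher(this);
    }

    void draw() { }

    void playPoint(float x, float y) {
      // invert y so points higher on the canvas give higher pitches,
      // then map the result into a 0-100 pitch range
      float pitch = map(y, 0, height, 100, 0);
      float amp   = x;                     // volume taken straight from the x position for now
      sc.playNote(pitch, amp, 0.5);        // pitch, dynamic, duration (in beats)
    }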

Here’s a test run of what I’ve got so far.

This has been a useful look into the methods of achieving my goals with SoundCipher, and I will continue to refine my implementation and experiment with the capabilities of the software as I continue with the project.

Processing – Working on my Project

After getting started with the basic brightness tracking code for my project, I’ve been continuing with the coding of my Processing creation. As I pointed out last time, the first thing I needed to sort out was some kind of threshold light level at which the system would start drawing from the input brightest point. The theory behind this is that if I can control, to some extent, the environment the sketch is positioned in, I can aim to ensure that the only light source above a certain level will be the light controlled by the user as the interaction method.

To figure out a good level to start working with, I made a few small changes to the code I had already written. Firstly, after using OpenCV to find the brightest point of the image each frame, I simply printed this value to the console so that I had an idea of what the brightest ambient light level generally was in the environment I was developing the project in (my room).

The line of code to print the brightest light level.

This code simply takes the vector location OpenCV gives as the brightest point in the image, and then finds the brightness of the pixel at that location in the camera feed. I then ran the code with this line added to it, and monitored the values it was returning. Then, to test the scenario of an input light being in use, I held the LED light on my mobile phone in front of the camera and looked at the brightest point values again.
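
That line, give or take the exact pixel lookup, amounts to:

    // after opencv.loadImage(video) each frame
    PVector bright = opencv.max();     // location of the brightest pixel OpenCV can see
    println(brightness(video.get((int) bright.x, (int) bright.y)));   // log its brightness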

The values I was given for the brightest point, before and after holding a light in front of the camera.

I found the brightness of the brightest pixel under normal conditions (just ambient levels) to be generally under 200, with the value tending to stay between about 160 and 190. When I held the light in front of the camera to simulate a user being present, the value increased to over 250. With this in mind I chose a starting threshold of 240 to test with, allowing for slight changes in the brightness of the light being used, although I realise this level may need to change when using the sketch in different spaces – unfortunately one of the issues with light tracking in the way I am going about it.

To help test the project, I then overlaid the output of the camera onto the screen so I could see whether the outcome of the input actions using the light source lined up correctly with the visuals produced. After making these changes, this is what I was left with.

The result of the changes.

As can be seen, these changes worked to ensure that currently the only light bright enough to be recognised as the input for a drawing is the one I intend the user to be holding. One thing I realised, however, is that the input shows up on the screen in reverse from the perspective of the user, due to the nature of the camera’s perspective. This is something I can fix by simply inverting the x-coordinates of the brightest point before drawing to the screen. For testing purposes, I will also invert the camera output, however I don’t intend this to stay as a feature of the final project.
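
A small sketch of the fix I have in mind – mirroring the x coordinate before drawing, and (for testing only) drawing the camera preview flipped:

    float mirroredX = width - bright.x;      // flip left/right so the drawing matches the user's view
    // ...use mirroredX in place of bright.x when drawing

    // mirroring the camera preview as well, for testing only:
    pushMatrix();
    translate(width, 0);
    scale(-1, 1);                            // negative x scale flips the image horizontally
    image(video, 0, 0);
    popMatrix();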
