A digital media design blog.


Processing Project – Ramping Up the Complexity

So, I’ve been working on this brightness tracking idea I had for the project, and it’s not going too badly. I’ve got the basics of OpenCV down and I’m working towards implementing the techniques I’ve learnt from this and other exercises in a cohesive interactive product. I don’t think it’s enough though. Looking over what I plan to do, I’m not sure there is enough substance to the project for my liking. That’s not to say I’m displeased with the project thus far, but that it would benefit from an additional layer, both in terms of its theoretical approach and its technical goals and implementation.

I was drawing a blank as to what form this could take, but I think I’ve come up with a second layer to the project which fits well with what I have already set out to produce.

Audio visualisation is a common feature found in media players and other such audio software. This is a system which generates visual imagery, often animated, based on a particular audio file or piece of music. This imagery plays in time with the audio and forms a visual representation of it. The reverse of audio visualisation is also possible. Graphical sound is an area which has been explored in the past, with experiments in which graphical markings were drawn directly onto the optical soundtracks of film reels in order to produce audio. I am interested in these techniques, as they showcase how data can be translated between different forms and represented and consumed in different ways. This can open up new ways of thinking about the data and prompt a deeper look into its meaning or purpose.

This exploration of the relationship between visual and audio forms has inspired me to expand on my previously laid-out plan for the project. With my work, I wish to provoke thought about the connection between different forms of data, specifically visual forms and sound. I aim to achieve this in a practical manner by, much as before, having the audience draw visual patterns via motion and light tracking. Where I will expand upon this, however, is by providing not only visual feedback to the user’s actions, but audio feedback as well. I intend to, in some way, use the input of the user and the associated visuals they produce to generate an audio representation of the patterns. If the intended effect is achieved, this will form a strong connection in the mind of the user between their input actions and the different forms of content they generate.

Brightness Tracking with OpenCV

OpenCV, short for Open Source Computer Vision, is a programming library intended for real-time computer vision, a field which revolves around capturing, processing, analysing and understanding images with the aim of producing information. OpenCV can be used within the Processing environment to allow a greater level of camera interaction than with the standard Processing video library alone.

One of the capabilities of the OpenCV library is brightness tracking. Since this is the technology I’m going with for my project, I downloaded OpenCV and started reading up on how to implement it. After getting to grips with the basics, I started writing a piece of code to keep track of the brightest point the camera could see, and simply plot a point at the relevant position on the screen. This would help me learn about the basic functions required to utilise brightness tracking.

The set-up of the brightness tracking code.
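
A minimal sketch along the lines of this set-up code might look like the following. This is a reconstruction rather than the exact code shown above – it assumes Greg Borenstein’s OpenCV for Processing library, and the window size and fallback image filename are placeholders.

    import gab.opencv.*;
    import processing.video.*;

    OpenCV opencv;
    Capture cam;
    PImage img;

    void setup() {
      size(640, 480);
      background(0);
      img = loadImage("fallback.jpg");   // placeholder name for the fallback image
      cam = new Capture(this, 160, 120); // a low resolution is plenty for this purpose
      img.resize(160, 120);              // match the fallback image to the camera input
      cam.start();
      opencv = new OpenCV(this, 160, 120);
    }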

As you can see from the code above, I started this by importing the necessary libraries – OpenCV and the Processing video library. I then declared variables for the OpenCV object, a capture device (the webcam on my laptop), and an image which would be used in the absence of a camera input. The setup function then sets the size and background colour of the window, before loading the fallback image and creating a new capture object. I chose a camera resolution of just 160×120, as a high-resolution camera is not required for the intended purpose of the sketch. The image is also resized so it’s the same size as this camera input, leaving just the camera to be started and the OpenCV object to be initialised.

The main loop of the brightness tracking.
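
The draw loop of the same sketch, again as a rough reconstruction, relies on the library’s max() method, which returns the location of the brightest pixel as a PVector:

    void draw() {
      if (cam.available()) {
        cam.read();
        img = cam;                        // replace the fallback image with the camera frame
      }
      opencv.loadImage(img);
      PVector brightest = opencv.max();   // coordinates of the brightest pixel
      // map from the 160x120 camera space to the canvas and plot a white dot there
      float x = map(brightest.x, 0, 160, 0, width);
      float y = map(brightest.y, 0, 120, 0, height);
      noStroke();
      fill(255);
      ellipse(x, y, 5, 5);
    }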

To actually go about finding the brightest point seen by the camera, then, I first replaced the image with the output of the camera. Then, using OpenCV’s inbuilt max() method, I was able to set a vector variable as the coordinates of the point in the image with the brightest colour value. To complete this test sketch I simply took this point, mapped it to the size of the canvas, and drew a white dot at the resultant location.

The result of the brightness tracking.

The result was, as I’d expected, a black canvas with white dots being regularly drawn at the location of the brightest point. However, since the point was simply the brightest in view of the camera at any given time, the dots were basically being drawn around light sources and other bright objects nearby. In order to use this technique for my actual project I will need to somehow constrain the program so that the point is only drawn to the screen when I want it to be, i.e. when a light is specifically being shone to activate the drawing of an image. I think this first test helped me get to grips with OpenCV and some of its capabilities, and I will continue to test methods of achieving my goals within the project.

Idea Development and Media Concepts

As I’ve previously discussed, my current plan is to base my interactive Processing project around drawing to a display by using physical motions and brightness tracking to manipulate the digital image. Since the last post on the subject, I have developed this concept further and tried to incorporate a relevant media concept to base the project around.

My idea, as it currently stands, is intended to broaden the interaction from simply between the user and the display, to a further interaction between multiple users. I aim to achieve this by allowing not only for people to draw their own patterns or images to the display, but by having the audience collaborate with each other over a period of time to create one ‘community-made’ visual outcome.

This alteration to the idea is intended both to further the complexity of the project and to tie the work into a concept which I perceive as key to the 21st century media landscape. Specifically, I have been looking at the ideas of collaboration and open-source culture.

Open-source is a term which gained popularity along with the rise of the internet (Weber 2004), and is generally used in relation to software development. Software that is open-source has its source code made available to the public under a licence which allows anyone to inspect, alter and redistribute the code, for any purpose (St. Laurent 2008). This often results in open-source software being developed collaboratively, by multiple contributors, and gives the end user a greater level of control and transparency.

The principle of open-source has spread beyond the collaborative development of computer applications, and elements of this way of producing content can be found in many aspects of modern media. Wikipedia, for example, is a mainstay of the digital media landscape and provides an immensely useful resource in educating people around the globe. The site acts as an encyclopaedia of much of the world’s knowledge, and can be edited and contributed to by the public. Without this open-source approach to knowledge curation, the site would undoubtedly contain much less information than it does today.

Many other features of the modern World Wide Web can also be seen to have been influenced and helped by open-source principles and ideals. The most-used websites of the 21st century – social media, blogs, video communities and forums – all rely on user-generated content. This allows communities to form around these services, producing and consuming content for and from each other and collaborating to form a bigger media picture. YouTube is a good example of this, with video crazes and ‘viral’ video phenomena happening all the time. Digital media ‘events’ occur, such as the Harlem Shake series of videos, which thrive on user submissions and collaborations. In these examples, what starts as a single video soon becomes a much larger popular culture entity, as the public collaborates and contributes more and more videos until the result is far bigger than any single clip.

This concept could be summed up as ‘open-source culture’, or simply as collaborative media production, but either way its impact on the way media – and perhaps especially digital media – is consumed in the 21st century is clear to see.

The project I currently have planned will make use of these principles by allowing collaboration between audience members to produce a final result. As per the brief, I am choosing to follow a more abstract route in exploring media concepts, as I feel this is an area that lends itself more to practical demonstration than to explicitly delivering a message to the audience.

I will continue to develop my ideas and aims for the project, and likely start initial development and testing of the core functionality shortly.

References:

St. Laurent, A., 2008. Understanding Open Source and Free Software Licensing. Sebastopol: O’Reilly Media, Inc.

Weber, S., 2004. The Success of Open Source. Cambridge, MA: Harvard University Press.

Iterative Design Project – An Idea is Born

In an attempt to come up with an idea for the project, I recently looked back for inspiration at all the Processing tasks and mini-projects I’ve worked on since we got the brief. While most of the things I’ve done have been to practise a specific programming concept or technique, like this sketch which dealt with image loading and handling, some have been more general experimentation with the environment or with an idea that I’ve had.

One of the first sketches I wrote in Processing was a simple drawing tool, which would take a mouse input from the user and draw lines or shapes following the mouse location.

A drawing sketch with the ability to change between two colours.
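
A minimal version of that sketch might look like this – here the two specific colours and the key-press toggle are assumptions about the details:

    color ink;                    // the colour currently being drawn with
    boolean firstColour = true;

    void setup() {
      size(400, 400);
      background(255);
      ink = color(255, 0, 0);
    }

    void draw() {
      if (mousePressed) {
        stroke(ink);
        strokeWeight(4);
        line(pmouseX, pmouseY, mouseX, mouseY);  // draw following the mouse location
      }
    }

    void keyPressed() {
      firstColour = !firstColour;                // switch between the two colours
      ink = firstColour ? color(255, 0, 0) : color(0, 0, 255);
    }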

This is an idea which I think would work well when applied in a project with a brightness-tracking based implementation. The act of physically using a light source to draw digital graphics onto the screen provides a strong sense of interactivity and allows the audience to clearly see their contribution to and impact on the work. This is something I believe to be important in an interactive work such as this. This also appeals to the personal, individual interaction methodology I have in mind for the work.

While I think this ‘light drawing’ idea could form a strong core to the project, I do not think it is enough to simply allow drawing to a canvas. This doesn’t really have enough substance to it and doesn’t pertain to a particular media theory or concept. I’m going to keep the idea in mind then, and aim to expand upon it with possible additional functionality and concepts which will bring me closer to a fully-formed idea for a project which will relate to one of these theories and fit with the given brief.

Interactive Project – Interaction Technologies

Before moving on to more specific details for this project, I thought it necessary to think through the fundamentals of what I want to create. I have set out to define these basic characteristics for my work – things such as the visual style I’m aiming for, the media concepts involved, and in what general way I envision the audience interacting with the work.

Firstly, it’s important to think about the technologies involved. The camera-based brief, and where and how I choose to incorporate this layer of interaction with the audience, will at least partially define the outcome of the project, as well as the scope of possible media concepts which could be looked at. As I see it, there are really three main options which could be explored in terms of camera interaction methods.

One of these is face tracking, whereby algorithms and libraries are used to allow the camera to recognise the form of the human face. This allows for the locations of the faces of the audience to be tracked, and for this information to somehow control or alter a visual element on the display. I see this technique as having both advantages and disadvantages when it comes to this project, and audience interaction in general. The nature of the technique means that those wishing to interact with the work don’t need to have any access to additional tools, resources or skills. The use of this system revolves around the fact that everyone has a face, and therefore the means to participate in the interaction. However, a side effect of this is that the possible breadth of interactions is somewhat reduced. What I mean by this is that, if the interaction taking place is simply based on following a face, the interaction from the audience is reduced to standing in front of the display and moving around slightly, or walking past it.

The second option is brightness or colour tracking. This, similar to face tracking, also utilises algorithms to track a specific point or area in the camera’s field of view. In this case, however, rather than faces, the software tracks the brightest point in view, or a certain colour or range of colours. This method could be argued to lend itself more to interacting with the general environment rather than with individuals, as the brightest point the camera can see in the whole room would be the thing being tracked and influencing the interaction. However, it may be possible to set the work up in such a way that the environment is controlled, and the audience is provided with a bright light or coloured marker of some sort, with which they could control the interaction. This would give the work a more personal, one-on-one approach, as one person would be interacting with the display at a time, as opposed to every face being recognised. While possibly making the interactive element less intuitive or harder to approach, I think that this would ultimately afford a greater freedom of interaction possibilities.

The third approach would be more along the lines of hand or full-skeleton tracking. This would be perhaps the most interesting, and would allow a number of possibilities for the method of interaction with the audience. On a more technical note, however, it would necessitate the use of several things in the production of the work. These include a more sophisticated Kinect camera, rather than a standard webcam, as well as more complicated and advanced libraries and programming. While a certain number of Kinect cameras are available for use, I’m unsure as to the feasibility of this approach for a number of reasons, including the necessity to do a large amount of the work over the Christmas holidays without access to university resources and facilities. I will continue looking into this method; however, I do not foresee a scenario where I would greatly prefer to use this technology over one of the other options. Hand tracking, for example, could be emulated with brightness tracking and a light held in the hand.

Ultimately, then, in choosing between the camera-interaction methods available, a choice must be made as to what kind of project I want to create. Face tracking would lend itself to a more easily accessible level of interaction, with the ability to be effective with a more passive audience – those who are simply walking past or standing in view of the camera. For brightness tracking to be used to interact with an audience, however, the audience would need to be given, or otherwise have access to, an additional tool (a light) to interact with the work. This targets a more active audience, who are purposefully interacting rather than passing by. It also allows for a more involved, deliberate-feeling level of interaction.

Thinking about the space the work will be displayed in, and looking back at my analysis of it here, I have come to a conclusion. While it may seem that the prevalence of people passing through the space would point to a more passive interaction method being appropriate, the observations we made of the space showed that these passers-by typically don’t look at the screens, so the work would likely go unnoticed. I therefore think the best approach is to create a piece of work which is intended to be purposefully interacted with. This way, the people who are in a rush to get where they’re going are left to do their thing, while those who are actively occupying the space, and might be more inclined to interact with a display, can be targeted and receive a deeper level of interaction as a result.

For this reason, I will be utilising brightness/light tracking within my project as the primary method of audience interaction. I think all the options are valid and could lead to quality pieces of work; however, this approach fits best with the type of work I wish to produce.

Now, time to develop the concepts and design of the work.

An object-oriented take on a previous Processing task.

In order to gain additional experience with Processing, I recently looked back over some of the past work I have done in the environment. I decided to modify the image manipulation code I showcased here and improve it with more advanced functionality and object-oriented principles.

In order to stick with the same ‘blocky’ aesthetic from the original sketch, I decided to again split the apple image into squares; however, this time I applied a simple form of random motion to the resulting squares so that they would move around the canvas.

In order to achieve this effect, I first declared a class for the squares of the image, named simply ‘Agent’. I thought about the properties the squares would need to have, as well as the functions they would need to carry out.


Fields and constructor.

The necessary fields for the class were an x and y coordinate, as well as a colour value for the square. I defined these as variables and created a constructor which would initialise the ‘Agent’ objects created and allow these values to be passed to the object.
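
Reconstructed from that description, the start of the class looks something like this:

    class Agent {
      float x, y;  // position of the square on the canvas
      color c;     // colour value sampled from the source image

      Agent(float x, float y, color c) {
        this.x = x;
        this.y = y;
        this.c = c;
      }
    }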


The methods of the Agent class.

The functions the class required were pretty simple. The squares generated from the image would need to do two things – first they would need to move in a random manner, and then they would need to be drawn to the canvas. To achieve this I created two methods for the class – update and draw. The update method simply assigns new values to the x and y variables, one greater or smaller than the current value. The draw method sets the fill colour to the colour value passed in to the object in the constructor, and then draws a rectangle at the location defined by the x and y values of the object. This rectangle is currently always 10 pixels in width and height, although this could also be controlled by a variable. That would be the better way to do it; however, since this is simply a small practice exercise, I chose to hard-code the value.
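
And the two methods, which sit inside the Agent class alongside the fields and constructor above – again a sketch from the description rather than the exact code:

    void update() {
      // shift the square one pixel in a random direction on each axis
      x += (random(1) < 0.5) ? -1 : 1;
      y += (random(1) < 0.5) ? -1 : 1;
    }

    void draw() {
      fill(c);             // the colour passed in via the constructor
      rect(x, y, 10, 10);  // square size hard-coded at 10 pixels
    }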

With the class and its methods and fields defined, I moved on to the main functions of the program that would make it work. This is split into three main sections: setup, which includes setting the size of the canvas and creating a list for the agents to occupy; a function to load the image and populate the list with its pixels; and the main draw loop, which reads the list and calls the update and draw methods of the agent objects it contains.

The set-up code.

The set-up code of the application is mostly simple – the size of the canvas is declared and the stroke is set to 0. A global ArrayList variable called agents is also declared. ArrayLists are useful constructs since they do not need to have a fixed size. This is useful in this situation as I do not necessarily know beforehand how many Agent objects will be produced from the image. This ArrayList is then given the value returned by the imageToAgents function.
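
Sketched out (the canvas size here is an assumption):

    ArrayList<Agent> agents;  // no fixed size needed – the list grows with the image

    void setup() {
      size(400, 400);
      stroke(0);
      agents = imageToAgents();  // populate the list from the image
    }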

The imageToAgents function.

This function creates a new temporary ArrayList named ‘p’. It then declares an image variable and loads in the ‘apple.jpg’ image which I used before for the previous image manipulation example. The pixels of this image are then loaded. What follows is a nested for-loop, which simply iterates through the array of pixels loaded from the image in increments of 10 on both x and y axes. The colour value of each pixel is found, and a new instance of the Agent class is created with the x and y coordinates and colour set to match the pixel from the image. This object is then added to the ArrayList ‘p’. After this is complete, this ArrayList is returned, so that the ‘agents’ ArrayList receives its values.
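
A reconstruction of the function as described:

    ArrayList<Agent> imageToAgents() {
      ArrayList<Agent> p = new ArrayList<Agent>();
      PImage img = loadImage("apple.jpg");
      img.loadPixels();
      // iterate through the pixel grid in increments of 10 on both axes
      for (int y = 0; y < img.height; y += 10) {
        for (int x = 0; x < img.width; x += 10) {
          color c = img.pixels[y * img.width + x];  // colour of this pixel
          p.add(new Agent(x, y, c));
        }
      }
      return p;
    }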

The main draw loop.

The main draw loop simply iterates through the ‘agents’ ArrayList, takes a temporary reference to the Agent object at each position, and calls the update and draw methods defined within the Agent class for each one. This causes the squares, with their colours obtained from the loaded image, to be drawn to the canvas and to move around randomly.
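
And the loop itself, in minimal form:

    void draw() {
      for (int i = 0; i < agents.size(); i++) {
        Agent a = agents.get(i);  // temporary reference to each agent in turn
        a.update();               // random movement
        a.draw();                 // draw the coloured square
      }
    }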


The final effect.

I’m pleased with the final result I produced here, and this task has afforded some useful practice with classes and objects, and the ways in which they can be used to create a visual project in Processing. These principles and techniques are likely to be very useful not just in the creation of my final interactive project, but also in moving forwards with my programming abilities in general.


Processing and Object-oriented Programming

The previous Processing work showcased on this blog has so far been of a somewhat introductory level, programming-wise. This is not a bad thing, as the main goal of the work I have been doing has been to introduce myself to the Processing environment, and the capabilities and functions it provides.

Now, in an attempt to begin tackling more ambitious tasks, with the goal of preparing for the final Processing piece, we’re looking at more advanced programming ideas and techniques.

One of the things we’ve been delving into is object-oriented programming (OOP). This is not a new concept for me, having previously learnt the fundamentals of OOP during a computing course at A-Level. However, it’s been a while since then, and a refresher is never a bad thing.

Object-oriented programming is a paradigm of programming which is based on the use of ‘objects’. In this context, objects are structures which hold data and functions in a self-contained manner. Objects typically have procedures which can modify the information inside that specific object, as well as often methods to perform whatever task the object is designed for. Object-oriented programming is widely supported to some degree or other by a large number of programming languages and environments. Notable examples include C++, C#, Java, Python and PHP.

The idea of this method of programming is to create a piece of software from modular components. Using encapsulation and inheritance, this way of working is intended to make it easier to re-use existing code and extend the functionality of these objects and the overall software. For example, if a piece of code were being written to simulate a flock of birds flying around the sky, instead of writing an individual block of code for each separate bird with its own properties and functions, code can be written to define a bird and the processes the bird will go through, and then objects can be created which refer to this bird construct. In this way, the code to determine the birds’ patterns and mannerisms only needs to be written once, and can then be used to create the entire flock.

In this example, the original piece of code to define a bird would be written as a class. In object-oriented programming, a class is like a template for the creation of objects. The class will have within it default initial values for characteristics of the objects in the form of variables. Classes also contain the necessary functions (methods) to define the behaviour of the objects created from them. These objects are then referred to as instances of the class.
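
To put this into Processing terms, a minimal, hypothetical version of the bird example might look something like this – a single Bird class acting as the template, with the whole flock created as instances of it:

    ArrayList<Bird> flock = new ArrayList<Bird>();

    class Bird {
      float x, y;  // position in the sky

      Bird(float x, float y) {
        this.x = x;
        this.y = y;
      }

      void fly() {
        // placeholder behaviour: drift randomly around the canvas
        x += random(-2, 2);
        y += random(-2, 2);
      }

      void draw() {
        ellipse(x, y, 6, 6);
      }
    }

    void setup() {
      size(400, 400);
      // the flock: fifty birds, all instances of the single class
      for (int i = 0; i < 50; i++) {
        flock.add(new Bird(random(width), random(height)));
      }
    }

    void draw() {
      background(200);
      for (Bird b : flock) {
        b.fly();
        b.draw();
      }
    }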

OOP offers many benefits and advantages. One of these is the parallel that can be found with the physical world. Objects often encapsulate things and processes found in real life – for example, a shopping system will use objects for things like the shopping cart, customers, products and orders. Booch (1983) stated the greatest strength of object-oriented programming to be its ability to portray a model of real life.

I aim to incorporate object-oriented programming principles into my project. This will provide a good opportunity to become more accustomed to the programming techniques involved, as well as helping to ensure that the piece of work I produce is up to a high technical standard.

To this end I will no doubt be practising in the Processing environment some more in the near future, and will post my experiments with the object-oriented approach so that I can analyse and improve my skills.

References:

Booch, G., 1983. Software Engineering with Ada. Calif.: Benjamin/Cummings Pub. Co.

The Brief and Initial Thoughts

I started thinking about ideas for my project by taking a closer look at the brief we have been given.
The Design Iterations unit calls for the production of:

A piece of interactive information design for a shared public space, which is intended to elucidate/explain some idea or concept you perceive as key to our 21st century media experience.

The brief also points out that the work produced can be a literal piece of direct information design; however, we can also choose to create a more abstract or artistic piece. We are to use camera-based interaction for the work. At first, I looked at this and began thinking about areas and topics which would make interesting informative graphics. Concepts like the ever-increasing capacity of storage media and the spread of the internet came to mind, as these are ideas which affect our everyday lives.

Upon further thought about how these topics would translate to a camera-based interactive piece, I realised that these directly informational subjects perhaps don’t leave as much room for visual creativity as I would like. In my mind, trying to explicitly tell the audience a message or fact about something lends itself to a somewhat rigid visual style – certainly there needs to be some way of saying what the piece is trying to say, be it text-based messages on the screen or some other communication method. The camera-based interaction here would likely be more along the lines of navigating or discovering the message of the piece. While this is certainly not without its merits, and poses an interesting challenge, the camera-based nature of the task, to me, fits better with a more abstract project.

To this end, I will continue to look for media topics to base my project around, but I will keep in mind the nature of the piece I aim to create, and will aim to choose an area which fits well with a strong abstract visual design.

Posters, Processing and the Space

I have been thinking over the recent poster brief I undertook for the Design Iterations unit, and how it relates to the interactive display assignment for the final assessment. I believe that having done the task, and having discovered the things I did while completing it, will prove useful in producing my display piece.

Firstly, the process of analysis and requirement gathering that we went through before displaying our posters is a crucial part of the iterative design process as a whole, and one which will need to be carried out again for this next piece of work. Taking a closer and more thorough look at the Weymouth House foyer – a space which, while I have been in it many times before, I have always taken somewhat for granted – was interesting in that it pointed out the areas which need to be taken into account when designing for a specific space. More so now than before designing the posters, I am aware of elements in the physical environment of a design project which have a potential impact on the way the work is seen by the audience. Things such as the general flow of people in the space, the lighting in the room and other such factors need to be taken into account at both the initial design phase and in any subsequent alterations made to the design, in order to maximise the potential of the project in its intended environment.

From the poster brief, I now know several key pieces of information about the Weymouth House foyer. For a start, I know the occupants of the space to be primarily students. This much I could have easily guessed without any formal analysis, but it’s always good to actually find these things out. It means that when I’m designing my work I can aim it at this young target audience, and not have to worry too much about designing for a group with interests too drastically dissimilar from my own. I also, importantly, have a better idea of the habits of these people in the space, and of the locations in the room where people are more likely to congregate.

Unlike the posters, however, which could be placed anywhere in the room within reason, there are only a certain number of screens which can be used for this project. This slightly limits the options available when considering the best location for the project, but I’m sure one will be suitable.

Mini Processing project – digital clock

In the spirit of taking simple methods and processes and applying them in different ways, I have set about creating a quick real-time digital clock in Processing. I did this in a very similar way to how I implemented the screenshot file-naming scheme of a Processing project a while ago, with the inbuilt date/time functions and simple string concatenation.

The digital clock sketch.
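
Reconstructed as a minimal sketch (the window size and text styling are assumptions), the idea is simply to concatenate the inbuilt hour(), minute() and second() values into a string, with nf() zero-padding each part to two digits:

    void setup() {
      size(300, 100);
      textAlign(CENTER, CENTER);
      textSize(32);
    }

    void draw() {
      background(0);
      // build the time string via concatenation, zero-padding each value
      String time = nf(hour(), 2) + ":" + nf(minute(), 2) + ":" + nf(second(), 2);
      fill(255);
      text(time, width/2, height/2);
    }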

This just goes to show that these fundamental techniques of programming can be applied to many different scenarios – something which I will keep in mind when coming up with plans for the interactive assignment, as I will no doubt be able to incorporate much of what I have covered already.

