Aaron Baker

A digital media design blog.

Month: November 2014

Interactive Project – Interaction Technologies

Before moving on to more specific details for this project, I thought it necessary to think through the fundamentals of what I want to create. I have set out to define these basic characteristics for my work – things such as the visual style I’m aiming for, the media concepts involved, and in what general way I envision the audience interacting with the work.

Firstly, it’s important to think about the technologies involved. The camera-based brief, and where and how I choose to incorporate this layer of interaction with the audience, will at least partially define the outcome of the project, as well as the scope of possible media concepts which could be looked at. As I see it, there are three main options to explore in terms of camera interaction methods.

One of these is face tracking, whereby algorithms and libraries allow the camera to recognise the form of the human face. This allows the locations of audience members’ faces to be tracked, and for that information to control or alter a visual element on the display. I see this technique as having both advantages and disadvantages for this project, and for audience interaction in general. On the one hand, those wishing to interact with the work don’t need access to any additional tools, resources or skills – everyone has a face, and therefore the means to participate. On the other hand, the possible breadth of interactions is somewhat reduced: if the interaction taking place is simply based on following a face, the audience’s input is limited to standing in front of the display and moving around slightly, or walking past it.
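To give a sense of what this involves in Processing, here is a minimal sketch of the idea. I’m assuming the OpenCV for Processing library here, along with the standard video library; the exact calls may differ between library versions, so treat this as a rough outline rather than final code.

```processing
// Minimal face-tracking sketch (assumes the OpenCV for Processing
// library by Greg Borenstein and the processing.video library).
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // Haar cascade for faces
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  opencv.loadImage(cam);
  image(cam, 0, 0);
  noFill();
  stroke(0, 255, 0);
  // Outline each detected face; the rectangle's position could
  // instead drive some visual element of the work.
  for (Rectangle face : opencv.detect()) {
    rect(face.x, face.y, face.width, face.height);
  }
}
```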

The second option is brightness or colour tracking. Similar to face tracking, this also utilises algorithms to track a specific point or area in the camera’s field of view. In this case, however, rather than faces, the software tracks the brightest point in view, a certain colour, or a range of colours. This method could be argued to lend itself more to interacting with the general environment than with individuals, as the brightest point the camera can see in the whole room would be the thing being tracked and influencing the interaction. However, it may be possible to set the work up in such a way that the environment is controlled, and the audience is provided with a bright light or coloured marker of some sort with which they could control the interaction. This would give the work a more personal, one-on-one approach, as one person would be interacting with the display at a time, as opposed to every face being recognised. While possibly making the interactive element less intuitive or harder to approach, I think this would ultimately afford a greater freedom of interaction possibilities.
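The brightest-point version of this is simple enough to sketch directly: scan each webcam frame and treat the brightest pixel as the tracked point. This is my own rough outline, assuming the standard processing.video library.

```processing
// Rough brightness-tracking sketch: find the brightest pixel in each
// webcam frame and mark its location.
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();
  float brightest = -1;
  int bx = 0, by = 0;
  // Scan every pixel for the highest brightness value.
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      float b = brightness(cam.pixels[y * cam.width + x]);
      if (b > brightest) {
        brightest = b;
        bx = x;
        by = y;
      }
    }
  }
  // The tracked point (e.g. a torch held by a participant) could now
  // control whatever visual element the work uses.
  fill(255, 0, 0);
  noStroke();
  ellipse(bx, by, 20, 20);
}
```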

The third approach would be more along the lines of hand or full-skeleton tracking. This would be perhaps the most interesting, and would allow a number of possibilities for the method of interaction with the audience. On a more technical note, however, it would necessitate the use of several things in the production of the work: a more sophisticated Kinect camera rather than a standard webcam, as well as more complicated and advanced libraries and programming. While a certain number of Kinect cameras are available for use, I’m unsure as to the feasibility of this approach for a number of reasons, including the necessity to do a large amount of the work over the Christmas holidays without access to university resources and facilities. I will continue looking into this method; however, I do not foresee a scenario where I would greatly prefer this technology over one of the other options. Hand tracking, for example, could be emulated with brightness tracking and a light held in the hand.

Ultimately, then, in choosing between the available camera-interaction methods, a choice must be made as to what kind of project I want to create. Face tracking would lend itself to a more easily accessible level of interaction, with the ability to be effective with a more passive audience – those who are simply walking past or standing in view of the camera. For brightness tracking to be used, however, the audience would need to be given, or otherwise have access to, an additional tool (a light) with which to interact with the work. This targets a more active audience, who are purposefully interacting rather than passing by. It also allows for a more involved, deliberate-feeling level of interaction.

Thinking about the space the work will be displayed in, and looking back at my analysis of it here, I have come to a conclusion. While the prevalence of people passing through the space might seem to point to a more passive interaction method, the observations we made showed that these passers-by typically don’t look at the screens, so the work would likely go unnoticed. I therefore think the best approach is to create a piece of work which is intended to be purposefully interacted with. This way, the people in a rush to get where they’re going are left to do their thing, while those actively occupying the space, who might be more inclined to interact with a display, can be targeted and can receive a deeper level of interaction as a result.

For this reason, I will be utilising brightness/light tracking within my project as the primary method of audience interaction. I think all the choices are valid and could lead to quality pieces of work; however, this approach fits better with the type of work I wish to produce.

Now, time to develop the concepts and design of the work.

An object-oriented take on a previous Processing task.

In order to gain additional experience with Processing, I recently looked back over some of the past work I have done in the environment. I decided to modify the image manipulation code I showcased here and improve it with more advanced functionality and object-oriented principles.

In order to stick with the same ‘blocky’ aesthetic from the original sketch, I decided to again split the apple image into squares; this time, however, I applied a simple form of random motion to the resulting squares so that they would move around the canvas.

In order to achieve this effect, I first declared a class for the squares of the image, named simply ‘Agent’. I thought about the properties the squares would need to have, as well as the functions they would need to carry out.


Fields and constructor.

The necessary fields for the class were an x and y coordinate, as well as a colour value for the square. I defined these as variables and created a constructor to initialise each ‘Agent’ object, allowing these values to be passed in when the object is created.


The methods of the Agent class.

The functions the class required were pretty simple. The squares generated from the image would need to do two things: move in a random manner, and be drawn to the canvas. To achieve this I created two methods for the class – update and draw. The update method assigns new values to the x and y variables, each one greater or smaller than the current value. The draw method sets the fill colour to the colour value passed into the object in the constructor, and then draws a rectangle at the location defined by the object’s x and y values. This rectangle is currently always 10 pixels in width and height, although this could also be controlled by a variable. That would be the better approach, but since this is a small practice exercise I chose to hard-code the value.
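Since the code appeared as screenshots, here is a reconstruction of roughly what the class looks like. Details such as the exact form of the random step are my own assumptions and may differ from the original.

```processing
// The Agent class as described above (my reconstruction).
class Agent {
  float x, y;  // position of the square on the canvas
  color c;     // colour sampled from the image

  Agent(float x, float y, color c) {
    this.x = x;
    this.y = y;
    this.c = c;
  }

  // Random walk: nudge the position by one pixel on each axis.
  void update() {
    x += random(1) < 0.5 ? -1 : 1;
    y += random(1) < 0.5 ? -1 : 1;
  }

  // Draw the square at its current position in its sampled colour.
  void draw() {
    fill(c);
    rect(x, y, 10, 10); // size hard-coded, as noted above
  }
}
```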

With the class and its methods and fields defined, I moved on to the main functions of the program. These split into three main sections: the set-up, which includes setting the size of the canvas and creating a list for the agents to occupy; a function to load the image and populate the list with its pixels; and the main draw loop, which reads the list and calls the update and draw methods of the agent objects it contains.

The set-up code.

The set-up code of the application is mostly simple – the size of the canvas is declared and the stroke is set to 0. A global ArrayList variable called agents is also declared. ArrayLists are useful constructs since they do not need to have a fixed size, which suits this situation, as I do not necessarily know beforehand how many Agent objects will be produced from the image. This ArrayList is then assigned the value returned by the imageToAgents function.
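Reconstructed from the description above (the canvas size is my own placeholder), the set-up looks roughly like this:

```processing
ArrayList<Agent> agents; // global list, size not fixed in advance

void setup() {
  size(400, 400); // canvas dimensions assumed for this sketch
  stroke(0);      // black outline on each square
  agents = imageToAgents();
}
```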

The imageToAgents function.

This function creates a new temporary ArrayList named ‘p’. It then declares an image variable and loads in the ‘apple.jpg’ image which I used for the previous image manipulation example. The pixels of this image are then loaded. What follows is a nested for-loop, which iterates through the array of pixels loaded from the image in increments of 10 on both the x and y axes. The colour value of each pixel is found, and a new instance of the Agent class is created with its x and y coordinates and colour set to match the pixel from the image. This object is then added to the ArrayList ‘p’. Once the loop is complete, the ArrayList is returned, so that the ‘agents’ ArrayList receives its values.
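A reconstruction of that function, following the steps just described:

```processing
// Sample the image every 10 pixels and build an Agent for each sample.
ArrayList<Agent> imageToAgents() {
  ArrayList<Agent> p = new ArrayList<Agent>();
  PImage img = loadImage("apple.jpg");
  img.loadPixels();
  for (int y = 0; y < img.height; y += 10) {
    for (int x = 0; x < img.width; x += 10) {
      // pixels[] is a flat array, so index as y * width + x.
      color c = img.pixels[y * img.width + x];
      p.add(new Agent(x, y, c));
    }
  }
  return p; // the caller assigns this to the global 'agents' list
}
```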

The main draw loop.

The main draw loop simply iterates through the ‘agents’ ArrayList, retrieves the Agent object at each position, and calls the update and draw methods defined within the Agent class on each one. This causes the squares, coloured from the loaded image, to be drawn to the canvas and move around randomly.
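Reconstructed, the loop is short; whether the canvas is cleared each frame is my own assumption (without the background call, the squares would leave trails instead).

```processing
void draw() {
  background(255); // clear the canvas each frame (assumption)
  for (int i = 0; i < agents.size(); i++) {
    Agent a = agents.get(i); // reference to the stored object
    a.update();
    a.draw();
  }
}
```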


The final effect.

I’m pleased with the final result I produced here, and this task has afforded some useful practice with classes and objects, and the ways in which they can be used to create a visual project in Processing. These principles and techniques are likely to be very useful not just in the creation of my final interactive project, but also in developing my programming abilities in general.


Processing and Object-oriented Programming

The previous Processing work showcased on this blog has so far been of a somewhat introductory level, programming-wise. This is not a bad thing, as the main goal of the work I have been doing has been to introduce myself to the Processing environment, and the capabilities and functions it provides.

Now, with the goal of preparing for the final Processing piece, we’re beginning to tackle more ambitious tasks and looking at more advanced programming ideas and techniques.

One of the things we’ve been delving into is object-oriented programming (OOP). This is not a new concept for me, having previously learnt the fundamentals of OOP during a computing course at A-Level. However, it’s been a while since then, and a refresher is never a bad thing.

Object-oriented programming is a programming paradigm based on the use of ‘objects’. In this context, objects are structures which hold data and functions in a self-contained manner. Objects typically have methods which can modify the data inside that specific object, as well as perform whatever tasks the object is designed for. Object-oriented programming is supported to some degree by a large number of programming languages and environments; notable examples include C++, C#, Java, Python and PHP.

The idea of this method of programming is to create a piece of software from modular components. Using encapsulation and inheritance, this way of working is intended to make it easier to re-use existing code and extend the functionality of these objects and the overall software. For example, if a piece of code was being written to simulate a flock of birds flying around the sky, instead of writing an individual block of code for each separate bird with its own properties and functions, code can be written to define a bird and the processes the bird will go through, and then objects can be created to refer to this bird construct. In this way, the code to determine the birds’ patterns and mannerisms only needs to be written once, and can then be used to create the entire flock.

In this example, the original piece of code to define a bird would be written as a class. In object-oriented programming, a class is like a template for the creation of objects. The class will have within it default initial values for characteristics of the objects in the form of variables. Classes also contain the necessary functions (methods) to define the behaviour of the objects created from them. These objects are then referred to as instances of the class.
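As a quick illustration of this in Processing (names and numbers here are just mine for the example), the bird class and the flock built from it might look like:

```processing
// One class defines a bird's data and behaviour...
class Bird {
  float x, y;   // position in the sky
  float vx, vy; // velocity

  Bird(float x, float y, float vx, float vy) {
    this.x = x;
    this.y = y;
    this.vx = vx;
    this.vy = vy;
  }

  // Behaviour written once, shared by every instance.
  void fly() {
    x += vx;
    y += vy;
  }
}

// ...and many instances of it form the flock.
ArrayList<Bird> flock = new ArrayList<Bird>();

void setup() {
  size(640, 480);
  for (int i = 0; i < 50; i++) {
    flock.add(new Bird(random(width), random(height),
                       random(-2, 2), random(-2, 2)));
  }
}

void draw() {
  background(200);
  for (Bird b : flock) {
    b.fly();
    ellipse(b.x, b.y, 6, 6); // stand-in for drawing a bird
  }
}
```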

OOP offers many benefits and advantages. One of these is the parallel that can be drawn to the physical world. Objects often encapsulate things and processes found in real life – for example, a shopping system might use objects for things like the shopping cart, customers, products and orders. Booch (1983) stated the greatest strength of object-oriented programming to be its ability to portray a model of real life.

I aim to incorporate object-oriented programming principles into my project. This will provide a good opportunity to become more accustomed to the programming techniques involved, as well as helping to ensure that the piece of work I produce is up to a high technical standard.

To this end I will no doubt be practising in the Processing environment some more in the near future, and will post my experiments with the object-oriented approach so that I can analyse and improve my skills.

References:

Booch, G., 1983. Software Engineering with Ada. Calif.: Benjamin/Cummings Pub. Co.

The Brief and Initial Thoughts

I started thinking about ideas for my project by taking a closer look at the brief we have been given.
The Design Iterations unit calls for the production of:

A piece of interactive information design for a shared public space, which is intended to elucidate/explain an idea or concept you perceive as key to our 21st century media experience.

The brief also points out that while the work produced can be a literal piece of direct information design, we can also choose to create a more abstract or artistic piece. We are to use camera-based interaction for the work. At first, I began thinking about areas and topics which would make interesting informative graphics. Concepts like the ever-increasing capacity of storage media and the spread of the internet came to mind, as these are ideas which affect our everyday lives.

Upon further thought about how these topics would translate to a camera-based interactive piece, I realised that these directly informational subjects perhaps don’t leave as much room for visual creativity as I would like. In my mind, trying to explicitly tell the audience a message or fact lends itself to a somewhat rigid visual style – there needs to be some way of communicating what the piece is trying to say, be it text-based messages on the screen or some other method. The camera-based interaction here would likely be more along the lines of navigating or discovering the message of the piece. While this is certainly not without its merits, and poses an interesting challenge, the camera-based nature of the task, to me, fits better with a more abstract project.

To this end, I will continue to look for media topics to base my project around, but I will keep in mind the nature of the piece I aim to create, and will aim to choose an area which fits well with a strong abstract visual design.

Posters, Processing and the Space

I have been thinking over the recent poster brief I undertook for the Design Iterations unit, and how it relates to the interactive display assignment for the final assessment. I believe that the task, and the things I discovered while completing it, will prove useful in producing my display piece.

Firstly, the process of analysis and requirement gathering that we went through before displaying our posters is a crucial part of the iterative design process as a whole, and one which will need to be carried out again for this next piece of work. Taking a closer and more thorough look at the Weymouth House foyer, a space which, while I have been in it many times before, I had always somewhat taken for granted, was interesting in that it highlighted the areas which need to be considered when designing for a specific space. More than before designing the posters, I am now aware of the elements in the physical environment of a design project which can affect the way the work is seen by the audience. Factors such as the general flow of people in the space and the lighting in the room may need to be taken into account at both the initial design phase and in any subsequent alterations, in order to maximise the potential of the project in its intended environment.

From the poster brief, I now know several key pieces of information about the Weymouth House foyer. For a start, I know the primary demographic of the occupants of the space to be students. This much I could have easily guessed without any formal analysis, but it’s always good to actually find these things out. It means that when designing my work I can aim it at this young target audience, and not have to worry too much about designing for a group with interests drastically dissimilar from my own. I also, importantly, have a better idea of the habits of these people in the space, and of the locations in the room where people are more likely to congregate.

Unlike the posters, however, which could be placed anywhere in the room within reason, there are only a certain number of screens which can be used for this project. This slightly limits the options available when considering the best location for the project, but I’m sure one will be suitable.
