What did I learn?
Using a laptop to run Processing meant starting up the laptop, launching Processing, setting the co-ordinates and running the sketch at the start of each day, then shutting down and securing the laptop at the end. This proved time-consuming, so my recent experiments with the Raspberry Pi are an attempt to work around it. The Pi can be left running 24 hours a day due to its low power usage, so hypothetically I would just need to use the video projector’s remote each day to start/stop the installation.
The experience of setting up the show itself was exciting, frustrating and disappointing in equal measure. It made me determined to have control over whom I show work with (outside of the exhibiting requirements for the Masters degree).
What was successful?
I observed lots of viewers interacting with the installation: picking up the objects, watching the triggered videos and generally engaging with the piece. Every viewer who engaged with the work picked up all of the objects and watched all of the videos. I was surprised to see each person carefully replace each object in exactly the same position (the grey outlines on the print served as a sufficient prompt) – even so, it was amazing that no-one just tossed an object back onto the table.
What was unsuccessful?
The narrative part of the project didn’t succeed. I think the combination of picking up the object, watching the video and then having to read the subtitles on each video was too demanding on the viewer. I also observed viewers picking up an object and becoming fascinated by how the installation worked – and hence missing the triggered video. One viewer thought that the printed surface I’d placed on the table was interactive, and no-one even spotted the webcam mounted on the rail (I thought it was terribly intrusive – clearly not). I deliberately gave no information on where the objects came from (found on a beach), nor on the relationship between the videos (found on archive.org) and the narratives I’d imposed on them. When I chatted to a few people later in the show and explained the relationship between the objects and the videos, they said that they understood the work much more clearly and wished they’d known that while interacting with the piece. I have no wish to put up paragraphs of text explaining the reasoning behind my work, but I do want to work harder at imparting that meaning within the work itself. Perhaps if the triggered videos had been more overtly ‘found’ (if I had somehow illustrated that they came from a repository of out-of-copyright or never-copyrighted work, for instance) then this relationship would have been more discoverable.
Initially I was going to carry on making all 50-odd objects interactive, inviting the public to contribute via Twitter / Instagram – but I think the purpose of this work was to a) test an idea so I can b) expand it further (as opposed to carrying on and making this piece bigger). I think the whole piece was just too oblique, and that the relationship between the physical and the digital should have been more overt. I may embark on the social media side as a small experiment, though.
So – my plan is to try and get this script working on the Pi; failing that, I’ll buy a cheap PC laptop and deal with the logistical problems outlined above. I have an idea for a v2 piece using the same script within a different installation.
Video of installation:
I’ve been setting up the Raspberry Pi and have just got Processing 2.1.2 running on it. The next step is to get the script (written by Ed Kelly) for ‘The Mystery Beach’ running on it.
Why a Pi?
I have an idea around using a motion sensor to control video across several screens. I’m going to work through a bunch of tutorials, learn to code in Python, then test my idea out.
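Before touching any sensor hardware, the routing logic for that idea can be prototyped in plain Python. This is only a sketch under my own assumptions – the zone numbers, screen names and cooldown value are all hypothetical placeholders, and a real version would read a PIR sensor via GPIO and drive a video player instead of taking a list of events:

```python
# Sketch of motion -> screen routing, independent of any sensor hardware.
# A real version would read a PIR sensor via GPIO and start a video on the
# chosen screen; here motion events are just (timestamp, zone) tuples.

COOLDOWN = 2.0  # assumed: seconds to ignore repeat triggers from one zone

# hypothetical mapping of sensor zones to screens
ZONE_TO_SCREEN = {0: "screen_left", 1: "screen_centre", 2: "screen_right"}

def route_motion(events):
    """Return (timestamp, screen) triggers, debouncing each zone."""
    last_fired = {}  # zone -> timestamp of last accepted trigger
    triggers = []
    for t, zone in events:
        if zone not in ZONE_TO_SCREEN:
            continue  # ignore zones with no screen attached
        if t - last_fired.get(zone, -COOLDOWN) >= COOLDOWN:
            last_fired[zone] = t
            triggers.append((t, ZONE_TO_SCREEN[zone]))
    return triggers

# Rapid repeat motion in zone 0 collapses into a single trigger:
events = [(0.0, 0), (0.5, 0), (1.0, 1), (3.0, 0)]
print(route_motion(events))
# → [(0.0, 'screen_left'), (1.0, 'screen_centre'), (3.0, 'screen_left')]
```

Keeping the debounce in software like this means the same logic works whatever sensor ends up being used.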
I also want to further the concept I tested at the Brixton show. I have the script running, but with some errors – I think they relate to OpenGL, but I need to do some further debugging.
During the Brixton East show I learnt that having to deal with a laptop in a public space is a massive pain: visiting the gallery twice a day, unlocking/locking up the laptop, etc.
This site helped me set up Processing on the Pi.
Thinking of how the video/content appear next to the objects. Could it appear next to them, projected from the ceiling?
Could I project all of the plastic items onto the table from above, and have a few of them placed as objects in amongst the projection?
Version 2 – webcam mounted above the table / surface – this means I don’t have to use a transparent surface / cut holes out of the table
And a very rough idea of what the mixture of projected and real items would look like
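The version-2 setup above (webcam looking straight down at the objects) could work by comparing each object’s patch of the frame against a baseline captured with everything in place: when a patch changes a lot, that object has been lifted and its video should play. A minimal sketch of that check in Python – the threshold and the region values are assumptions for illustration, and real frames would come from a camera library rather than hand-written lists:

```python
# Sketch of the overhead-webcam idea: each object sits in a known region
# of the frame; if a region's mean brightness moves far from the baseline
# (captured with all objects in place), that object has been picked up.

THRESHOLD = 40  # assumed brightness change that counts as "object gone"

def mean(region):
    return sum(region) / len(region)

def lifted_objects(baseline_regions, current_regions, threshold=THRESHOLD):
    """Return indices of objects whose region changed beyond threshold."""
    lifted = []
    for i, (base, cur) in enumerate(zip(baseline_regions, current_regions)):
        if abs(mean(cur) - mean(base)) > threshold:
            lifted.append(i)
    return lifted

# Object 1 removed: its region now shows the pale table surface.
baseline = [[30, 32, 31], [35, 33, 34], [29, 30, 31]]
current  = [[30, 31, 32], [200, 198, 201], [30, 30, 30]]
print(lifted_objects(baseline, current))  # → [1]
```

One nice property of mounting the camera above is that this comparison never has to see through the table, so any opaque surface and print will do.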
I felt that the work wasn’t as immediate as it could be, due to the viewer having to use a mouse and keyboard to interact with the piece.
I feel unsure as to what is gained by displaying the piece in the context of a gallery, as opposed to it living on the web.
I stumbled on this interactive installation in the foyer of the Royal Festival Hall. Your left hand controls the pan and volume of the music, while your right is meant to follow the timing line (close-up in the 3rd pic). A very nice idea, but… it just didn’t work properly. The lag between moving your hand and the installation reacting was a good 3–5 seconds. I’ll try to find out what controller they used…
“German movie, Last Call, is the first ever interactive horror movie. When you go into the theatre, you text your phone number to a speed-dial database. During the movie, the protagonist makes a phone call to a random audience member and asks their advice. “Should I go up or down?” “Left or right?” “Should I help the creepy man wrapped in bandages, rocking back and forth on the floor, or should I look out for myself?” Voice recognition software means the character identifies what the audience member wants them to do and follows his or her instructions.”