What did I learn?
Using a laptop to run Processing meant starting up the laptop, launching Processing, setting the co-ordinates and running the sketch at the start of each day, then shutting down and securing the laptop at the end of each day. This proved time-consuming, so my recent experiments with the Raspberry Pi are an attempt to work around it. The Pi can be left running 24 hours a day due to its low power usage, so hypothetically I would only need to use the remote on the video projector each day to start/stop the installation.
The experience of setting up the show itself was exciting, frustrating and disappointing in equal measure. It made me determined to have control over who I show work with (outside of the exhibiting requirements for the Masters degree).
What was successful?
I observed many viewers interacting with the installation: picking up the objects, watching the triggered videos and generally engaging with the piece. Each viewer that engaged with the work picked up all of the objects and watched all of the videos. I was surprised to see each person carefully replace each object in exactly the same position (the grey outlines on the print served as a sufficient prompt) – even so, it was remarkable that no-one simply tossed an object back onto the table.
What was unsuccessful?
The narrative part of the project didn’t succeed. I think the combination of picking up the object, watching the video and then having to read the subtitles on each video was too demanding on the viewer. I also observed viewers picking up an object and becoming fascinated by how the installation worked – and hence missing the triggered video. One viewer thought that the printed surface I’d placed on the table was interactive, and no-one even spotted the webcam mounted on the rail (I thought it was terribly intrusive – clearly not). I deliberately gave no information about where the objects came from (found on a beach) nor about the relationship between the videos (found on archive.org) and the narratives I’d imposed on them. When I chatted to a few people later in the show and explained the relationship between the objects and the videos, they said that they understood the work much more clearly and wished they’d known that while interacting with the piece. I have no wish to put up paragraphs of text explaining the reasoning behind my work, but I do want to work harder at imparting that meaning within the work itself. Perhaps if the triggered videos had been more overtly ‘found’ (if I had somehow illustrated that they came from a repository of out-of-copyright, or never-copyrighted, work, for instance) then this relationship would have been more discoverable.
Initially I was going to carry on making all 50-odd objects interactive, inviting the public to contribute via Twitter / Instagram – but I think the purpose of this work was to a) test an idea so I can b) expand it further (as opposed to carrying on and making this piece bigger). I think the whole piece was just too oblique, and that the relationship between the physical and the digital should have been more overt. I may embark on the social media side as a small experiment though.
So – my plan is to try to get this script working on the Pi; failing that, I’ll buy a cheap PC laptop and deal with the logistical problems outlined above. I have an idea for a v2 piece using the same script within a different installation.
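If the script does run on the Pi, the daily start-up routine could disappear entirely by launching the sketch automatically at boot. This is only a sketch of the idea – the sketch folder path is an assumption, and it presumes Processing’s `processing-java` command-line runner is installed on the Pi:

```shell
#!/bin/sh
# Hypothetical boot-time launcher for the installation.
# The sketch path below is an assumption, not the actual project layout.

# Target the Pi's primary display (the one the projector is attached to).
export DISPLAY=:0

# processing-java is Processing's command-line runner;
# --present runs the sketch fullscreen.
processing-java --sketch="$HOME/sketches/installation" --present
```

Pointing the desktop’s autostart mechanism (on Raspbian’s LXDE, a line in `~/.config/lxsession/LXDE-pi/autostart`) at a script like this would start the installation whenever the Pi powers on, leaving only the projector remote to deal with each day.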
Video of installation: