June


Back in the studio after a week on the PGCert course that Istituto Marangoni have enrolled me on. The course is highly relevant to my Masters (reflective learning, teaching and learning theory) but also increases my workload considerably (the qualification equates to a third of a full Masters).

Prior to this week I’d been struggling with the Pi and Processing. After several frustrating days (and with the help of Ed Kelly) I’ve discovered that the Pi isn’t capable of running Processing at the speed necessary to display video. I’ve now purchased a small desktop PC which I’m going to use exclusively for Processing-based installation work. I still have the ‘stealability’ issue to deal with, but figure a refurbished desktop machine is far less covetable than a laptop. Nevertheless, I’ve lost a couple of weeks trying to bend the Pi to my will, so I now have rather a large mountain to climb if I’m going to show anything in July.

With this in mind I thought I’d spec out the rest of June. I’m pursuing 3 projects, so the next 2 weeks are going to be crucial. I have a Plan A, Plan B and Plan C with regard to both the Lumen Prize and the interim show.

The 3 ideas that I’m working on are:

——————————————————————————————————————————

Plan A – Interactive video installation

Using Processing, a sonar distance sensor, a video projector, and sound and image assets. The viewer controls the speed at which a set of images appears: the closer they get to the projection, the slower the image rate, halting entirely when the viewer is approx. 1cm from the image.

or

a ‘Noir’ version of The Mystery Beach, re-using the technology from the interim show at Brixton East
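Before building the first option, I sketched the behaviour I’m after in plain Python. The thresholds here (a 1cm halt point, a notional 200cm sensor range, 25fps maximum) are placeholders rather than measured values – the real version will live in Processing once the sensor is wired up:

```python
def frames_per_second(distance_cm, max_range_cm=200.0, max_fps=25.0):
    """Map viewer distance to an image rate: far = fast, close = slow,
    freezing entirely within 1cm of the projection."""
    if distance_cm <= 1.0:
        return 0.0  # viewer is effectively touching the image: halt
    # clamp to the sensor's usable range
    d = min(distance_cm, max_range_cm)
    # linear ramp from 0 fps at 1cm up to max_fps at the sensor's max range
    return max_fps * (d - 1.0) / (max_range_cm - 1.0)
```

Whether the ramp should be linear or something steeper (so the slowdown is only noticeable close-up) is one of the things to test in the space.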

——————————————————————————————————————————

Plan B – Video installation

Either:

Appropriated footage from John Boorman’s 1967 revenge thriller Point Blank

or

Appropriated footage from Peter Godfrey’s 1956 film noir Please Murder Me

——————————————————————————————————————————

Plan C – Redacted Comics

Display a series of redacted comics, perhaps in conjunction with a moving image piece.

——————————————————————————————————————————

W/C 16th June:

Get the sonar distance sensor, Arduino and Processing to talk to each other.

Use distance measuring to control the image rate.

Test with imagery created by me (collage, Greek statue vs superhero, digital collage)

Test with imagery downloaded from the internet
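For the ‘talk to each other’ task, my working assumption is that the Arduino will print one distance reading per line over USB serial, and the Processing/PC side just has to parse each line and throw away garbage. A rough Python sketch of that parsing step (the 2–400cm validity range is a guess based on typical hobbyist sonar sensors, and may not match whatever sensor I end up with):

```python
def parse_reading(raw_line):
    """Turn one raw serial line (e.g. '142\n') into a distance in cm,
    or None if the line is garbled."""
    try:
        value = int(raw_line.strip())
    except ValueError:
        return None  # partial or corrupted line: skip it
    # readings outside the sensor's plausible range are treated as noise
    return value if 2 <= value <= 400 else None
```

The real serial port handling (via Processing’s serial library, or pyserial on the PC) is assumed here; the point is just that every reading gets sanity-checked before it drives the image rate.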

W/C 23rd June:

Finalise imagery to be used in installation

Work on display options for redacted comics

Test Point Blank / Please Murder Me footage

W/C 30th June:

Enter work for Lumen

——————————————————————————————————————————

Final Piece – Interim Show

The final piece was made using the technology described above, with the viewer controlling the pace of the actors, using appropriated footage from John Boorman’s revenge thriller ‘Point Blank’.


The Mystery Beach. Interim Show. Reflection.


What did I learn?

Using a laptop to run Processing meant starting up, running Processing, setting co-ordinates and running the sketch at the start of each day, then shutting down and securing the laptop at the end of each day. This proved time-consuming, so recent experiments with the Raspberry Pi are an attempt to work around it. The Pi can be left running 24 hours a day due to its low power usage, so hypothetically I would just need to use the remote with the video projector each day to start/stop the installation.

The experience of setting up the show itself was at once exciting, frustrating and disappointing. It made me determined to have control over who I show work with (outside of the exhibiting requirements for the Masters degree).

What was successful?

I observed lots of viewers interacting with the installation, picking up the objects, watching the triggered videos and generally engaging with the piece. Each viewer that engaged with the work picked up all the objects and watched all of the videos. I was surprised to see each person carefully replace each object in exactly the same position (the grey outlines on the print served as a sufficient prompt) – even so, it’s amazing that no-one just tossed an object back onto the table.

What was unsuccessful?

The narrative part of the project didn’t succeed. I think the combination of picking up the object, watching the video and then having to read the subtitles on each video was too demanding on the viewer. I also observed viewers picking up an object and being fascinated by how the installation worked – hence missing the triggered video. One viewer thought that the printed surface I’d placed on the table was interactive, and no-one even spotted the webcam mounted on the rail (I thought it was terribly intrusive – clearly not). I deliberately gave no information on where the objects came from (found on a beach), nor on the relationship between the videos (found on archive.org) and the narratives I’d imposed on them. When I chatted to a few people later on in the show and explained the relationship between the objects and the videos, they said they understood the work much more clearly and wished they’d known that when interacting with the piece. I have no wish to put up paragraphs of text explaining the reasoning behind my work, but I do want to work harder at imparting that meaning within the work itself. Perhaps if the triggered videos had been more overtly ‘found’ (if I’d somehow illustrated that they came from a repository of out-of-copyright, or never-copyrighted, work, for instance) then this relationship would have been more discoverable.

Next steps

Initially I was going to carry on making all 50-odd objects interactive, inviting the public to contribute via Twitter / Instagram – but I think the purpose of this work was to a) test an idea so I can b) expand it further (as opposed to carrying on and making this piece bigger). I think the whole piece was just too oblique, and that the relationship between the physical and the digital should have been more overt. I may embark on the social media side as a small experiment though.

So – my plan is to try and get this script working on the Pi; failing that, I’ll buy a cheap PC laptop and deal with the logistical problems outlined above. I have an idea for a v2 piece using the same script within a different installation.

Video of installation:

Triggered videos:

Raspberry Pi + Processing


I’ve been setting up the Raspberry Pi, and have just got Processing 2.1.2 running on it. The next step is to get the script (written by Ed Kelly) for ‘The Mystery Beach’ running on it.

Why a Pi?

I have an idea around using a motion sensor to control video across several screens. I’m going to work through a bunch of tutorials, learn to code using Python and then test my idea out.

I also want to further the concept I tested at the Brixton show. I have the script running but with some errors – I think they refer to OpenGL, but I need to do some further debugging.

During the Brixton East show I learnt that having to deal with a laptop in a public space is a massive pain: visiting the gallery twice a day, unlocking/locking up the laptop, etc.

This site helped me set up Processing on the Pi.

Multiscreen / Single frame



I’ve been creating some works based on multiscreen / single frame video / imagery.

The V1 idea is to create a multiscreen installation that allows the viewer to pause each screen individually. Sort of like a ‘create your own comic strip’ thing.

Related to this are some works I’ve started making – ‘Redacted Comic Books’ – painting over / obscuring elements within each single frame of a comic book to alter the narrative. I’ll put some pics up of these soon.

So this video is a crude draft of how the thing could look, with each video playing on an individual monitor. In this example you watch all the videos until the end, and the final frames invite you to invent your own narrative.

*this is a very rough draft of an idea*


Exploring The Abandoned. Midpoint review.

Reflection on the midpoint review.

Feedback from my colleagues on my work was negligible. The usual people participated and the usual people didn’t. This is frustrating for someone of my age – I have the confidence to give constructive criticism / feedback, but this can’t be said for the majority of the students in my year. So…

One thing I picked up on was that nobody talked about the subtitled narratives that I’d imposed onto the archive.org video clips. There was talk of the decontextualisation of the imagery, of the viewer being forced to impose their own narrative, but no comments about the mini-scripts that I’d written. This seems to back up the reflective observations I made about this part of The Mystery Beach.

I feel that the amount / diversity of the work I’m producing is perhaps going against me a little – I’m not working on The Big Idea, nor am I spending months working on one piece. I know this experimental idyll will have to come to an end once we move into year 2, but I will strive to maintain a reckless approach to making work.


Plastics Installation – Interim Show

Thinking about how the video/content appears next to the objects. Could it appear next to them, projected from the ceiling?


Could I project all of the plastic items onto the table from above, and have a few of them placed as objects in amongst the projection?


Version 2 – webcam mounted above the table / surface. This means I don’t have to use a transparent surface or cut holes out of the table.


And a very rough idea of what the mixture of projected and real items would look like


Collecting Coloured Plastic Washed Up By The Sea – Interactive sketch idea


I’m going to create a test piece for the interim show at Brixton East.

It’ll involve one or more of the plastic pieces, probably on a table, with a webcam mounted underneath. Using Processing (with a huge amount of help from Ed Kelly), the webcam will detect if one (or more?) of the objects is moved/picked up. This will then trigger audio, video, still images or text – projected (on the table?) or on a screen. It’s an ambitious sketch to work out in 3 weeks, but I’d like to prove the theory – and then expand the work.
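The detection itself could be as simple as comparing each webcam frame against a reference frame of the undisturbed table. This toy Python version works on flat lists of greyscale values rather than real webcam pixels, and both thresholds are guesses to be tuned against real lighting – the actual Processing sketch is Ed’s – but it shows the principle:

```python
def object_moved(reference, current, pixel_threshold=30, changed_fraction=0.1):
    """Return True when the current frame differs enough from the reference
    frame of the undisturbed table to suggest an object has been moved.

    Frames are equal-length flat lists of greyscale values (0-255)."""
    assert len(reference) == len(current)
    # count pixels whose brightness shifted by more than the noise threshold
    changed = sum(
        1 for r, c in zip(reference, current) if abs(r - c) > pixel_threshold
    )
    # trigger only when a meaningful fraction of the frame has changed
    return changed / len(reference) >= changed_fraction
```

In the gallery the reference frame would need re-capturing whenever the light changes – one of the practical problems to test during the three weeks.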