Recife: Aquatic Pathways Installation

Earlier this year I was lucky enough to go to Brazil as part of the 2014 Recife: Playable City project. Our project, Aquatic Pathways, sought to raise awareness of the numerous river taxis that populate Recife’s waterways in a playful way.

The project took three forms: an online map, a river happening and a temporary installation. This post will focus on the installation, which demonstrated how our project would operate in the real world.

What was the installation?

The installation consisted of a table, a 5m RGB LED strip, an Enttec DMX Pro unit, a hacked PlayStation 3 Eye camera, a custom-built computer vision application and an infrared LED attached to a small wooden boat.
When the boat moved into a hotspot area, the LED strip changed colour; on exiting the hotspot, the strip slowly faded back to a neutral colour.

Aquatic Pathways

How did it work?

An openFrameworks application passed images from the PS3 Eye through a background subtraction algorithm. When the boat was placed on the table, the application checked for differences between the captured background image and the current camera image: any difference between the two was packaged into a blob object. The blob data was then used to detect whether the boat had entered a hotspot. These hotspots existed as virtual areas of pixels: if the blob centroid (the centre point of a blob) entered the pixel area, then the software pushed colour values to the DMX controller.

Using code extracts, the following section will explain how the installation reacted when participants moved the boat into the hotspots.

In the setup of the application we create numerous hotspots, which are custom objects with the following attributes:

Boxes one;                      // create a new hotspot
one.x = 366;                    // top-left position on screen, in pixels
one.y = 112;
one.width = 40;                 // size of the hotspot area
one.height = 40;
one.col = ofColor(255,13,0);    // colour pushed to the LED strip on entry
one.img = pinIcon;              // icon drawn at the hotspot position
vBoxes.push_back(one);          // store it with the other hotspots

These are stored in a vector (vBoxes) for use later in the program.
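The Boxes type itself isn't shown in this post; here is a minimal sketch of what it might look like, based on the attributes used above (the member names come from the extract, the struct layout is an assumption):

struct Boxes {
    int x;          // top-left x position of the hotspot, in pixels
    int y;          // top-left y position, in pixels
    int width;      // width of the hotspot area
    int height;     // height of the hotspot area
    ofColor col;    // colour pushed to the LED strip on entry
    ofImage img;    // icon drawn at the hotspot position
};

vector<Boxes> vBoxes; // every hotspot, checked each frame in update()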

After some other processes we begin the tracking. In the update method we check whether a new frame has come in from the camera, then push the pixels from that frame into a grayscale image. After passing the grayscale image through a blur filter, we take the absolute difference between the background image (which is captured on a click of a GUI element) and the current image. The result is passed into a contour finder, which looks for distinct changes between dark and light pixels and packages them up into blob objects. The minBlobSize, maxBlobSize and maxNumBlobs variables tailor the tracker so we can focus on very specific elements.

bool bNewFrame = false;
cameraIn.update();                  // grab the latest frame
bNewFrame = cameraIn.isFrameNew();

if(bNewFrame)
{
    colorImg.setFromPixels(cameraIn.getPixels(), CAM_WIDTH, CAM_HEIGHT);
    colorImg.mirror(bMirrorH, bMirrorW);     // flip to match the table orientation
    grayImage = colorImg;                    // convert to grayscale

    if (bLearnBakground == true)
    {
        grayBg = grayImage;                  // capture the background image
        bLearnBakground = false;
    }

    grayImage.blurGaussian(blur);            // smooth out camera noise
    grayDiff.absDiff(grayBg, grayImage);     // difference against the background
    grayDiff.threshold(threshold);           // keep only strong differences
    contourFinder.findContours(grayDiff, minBlobSize, maxBlobSize, maxNumBlobs, holes);
}
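As mentioned, the background is captured on a click of a GUI element; the GUI code itself isn't shown here. A hypothetical sketch of how that could be wired up with ofxGui (learnButton and learnPressed are names invented for this example):

// in setup(): register a listener on an ofxButton
gui.setup();
learnButton.addListener(this, &ofApp::learnPressed);
gui.add(learnButton.setup("learn background"));

// the listener raises the flag checked in update()
void ofApp::learnPressed(){
    bLearnBakground = true; // grayBg is re-captured on the next frame
}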

At the same time, another process checks whether any blobs have been found in the image; if they have, the centroid coordinates are copied over to integer variables. In this project we had to scale the coordinates to fit the projection window, so we multiplied them by scaling values from the GUI.

int x = 0;
int y = 0;

if (contourFinder.nBlobs > 0)
{
    // take the first blob and scale its centroid to the projection window
    x = contourFinder.blobs[0].centroid.x * scaleX;
    y = contourFinder.blobs[0].centroid.y * scaleY;
}

// is the blob centroid inside the first hotspot?
if ((x > vBoxes[0].x) && (x < vBoxes[0].x + vBoxes[0].width) &&
    (y > vBoxes[0].y) && (y < vBoxes[0].y + vBoxes[0].height))
{
    // push the hotspot colour out to the LED strip
    dmx.setLevel(channelR, vBoxes[0].col.r);
    dmx.setLevel(channelG, vBoxes[0].col.g);
    dmx.setLevel(channelB, vBoxes[0].col.b);

    // remember the colour so we can fade from it later
    fadeValue[0] = vBoxes[0].col.r;
    fadeValue[1] = vBoxes[0].col.g;
    fadeValue[2] = vBoxes[0].col.b;

    dmx.update();
}

As the extract above shows, the program compares the x and y variables against the position and size of the hotspot objects.

If the x and y coordinates entered a hotspot, the colour values assigned to that specific hotspot were pushed to the DMX unit. For the hotspot created earlier, channelR would receive a value of 255, channelG a value of 13 and channelB a value of 0.
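For clarity the extract only tests the first hotspot, vBoxes[0]; in the full application every hotspot needs the same test. A sketch of that loop (an assumption, not a verbatim extract from the installation code):

for (unsigned int i = 0; i < vBoxes.size(); i++)
{
    if ((x > vBoxes[i].x) && (x < vBoxes[i].x + vBoxes[i].width) &&
        (y > vBoxes[i].y) && (y < vBoxes[i].y + vBoxes[i].height))
    {
        dmx.setLevel(channelR, vBoxes[i].col.r);
        dmx.setLevel(channelG, vBoxes[i].col.g);
        dmx.setLevel(channelB, vBoxes[i].col.b);
        fadeValue[0] = vBoxes[i].col.r;
        fadeValue[1] = vBoxes[i].col.g;
        fadeValue[2] = vBoxes[i].col.b;
        dmx.update();
        break; // the boat can only sit in one hotspot at a time
    }
}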

If x and y exited a hotspot, the values would crossfade from the hotspot colour to our neutral colour, as demonstrated below.

else
{
    // fade red down towards its neutral value of 0
    if (fadeValue[0] != 0)
    {
        fadeAmountR = 5;
        fadeValue[0] -= fadeAmountR;
        dmx.setLevel(channelR, fadeValue[0]);
        dmx.update();
    }

    // once red has dropped below the threshold, bring blue and green
    // up towards their neutral values
    if (fadeValue[0] < FADE_THRESHOLD)
    {
        if (fadeValue[2] != 255)
        {
            fadeAmountB = 5;
            fadeValue[2] += fadeAmountB;
            dmx.setLevel(channelB, fadeValue[2]);
            dmx.update();
        }
        if (fadeValue[1] != 156)
        {
            fadeAmountG = 5;
            fadeValue[1] += fadeAmountG;
            dmx.setLevel(channelG, fadeValue[1]);
            dmx.update();
        }
    }

    // clamp each channel once it reaches the neutral colour (0, 156, 255)
    if (fadeValue[0] < 1)
    {
        fadeValue[0] = 0;
        fadeAmountR = 0;
    }
    if (fadeValue[1] > 155)
    {
        fadeValue[1] = 156;
        fadeAmountG = 0;
    }
    if (fadeValue[2] > 254)
    {
        fadeValue[2] = 255;
        fadeAmountB = 0;
    }
}
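The stepped fade above could also be expressed with openFrameworks' ofLerp, easing each channel towards the neutral colour every frame. This is a sketch of an alternative approach rather than the code that ran in Recife, and it assumes fadeValue holds floats rather than ints:

// ease each channel towards the neutral colour (0, 156, 255)
float fadeSpeed = 0.1f; // fraction of the remaining distance covered per frame
fadeValue[0] = ofLerp(fadeValue[0], 0,   fadeSpeed);
fadeValue[1] = ofLerp(fadeValue[1], 156, fadeSpeed);
fadeValue[2] = ofLerp(fadeValue[2], 255, fadeSpeed);
dmx.setLevel(channelR, (int) fadeValue[0]);
dmx.setLevel(channelG, (int) fadeValue[1]);
dmx.setLevel(channelB, (int) fadeValue[2]);
dmx.update();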

And here it is in action!