Playable City Award 2014: Shadowing – Part 2 Computer Vision


Overview

In the previous post, we explored the hardware elements that made up Shadowing. In this post we will explore the computer vision system that allowed us to analyse, abstract and record events occurring beneath the Shadowing lamp posts. It will also document how we managed to remotely repair disconnected or unresponsive cameras.

First, what is computer vision? OpenCV (Open Source Computer Vision) is a library of functions focused on image analysis, processing and evaluation. In layman's terms, it allows computers to process and understand visual data. OpenCV is used in a range of products and computing systems, such as robotics, Augmented Reality and ANPR (Automatic Number Plate Recognition).

Camera

We needed a fairly robust camera that could operate efficiently in low-light conditions and still return usable images; the PS3 Eye camera fulfilled all these criteria. We removed the infrared filter and added a customised wide-angle lens that blocked visible light, which meant the camera would not see any projected imagery. This was important, as any interference with the OpenCV processes would throw the tracking and recording of people's shadows off completely.

Why bother? Well, one of the OpenCV processes we used is called Background Subtraction. This is where you capture a default model/image of an environment, then subtract live images from that model image. Any differences between the images, once processed and tracked, are known as blobs. In standard Background Subtraction scenarios you only have to pay attention to lighting conditions, which can affect the reliability of your image. However, when you combine projected images with Background Subtraction, the projected image gets captured along with the thing you actually want to capture. By removing the IR filter and replacing it with one that passes only infrared, the projected light is removed from the equation.

It was then a simple case of mounting the camera and plugging the USB cable into the NUC. For more details on how to remove the infrared filter, check out this guide. You can also buy ready-hacked cameras from Peau Productions.

How the CV system worked

Our CV loop was a combination of processes that allowed us to accurately analyse whether participants were underneath the lamp post and return useful recording images. The system used two CV processes, Background Subtraction and Pixel Differencing, plus a custom process for constructing the recorded image. The image below denotes the process, and the code snippets explain what happened at each point.

(Diagram of the CV loop)

Code Section 1

This was the main loop; it allowed us to pass variables into the CV processes to customise each lamp post to its specific location.

For example, in Champions Square the post stands at 8 metres. At this height the infrared canon (which illuminated the scene) was not strong enough on its own to pick out participants from the background, so for this specific post we boosted the brightness and contrast as well as lowering the threshold so the camera could see more of the subtracted image.

void CV::subtractionLoop(bool bLearnBackground, bool mirrorH, bool mirrorV,
                         int threshold, int blur, int minBlobSize, int maxBlobSize,
                         int maxBlobNum, bool fillHoles, bool useApproximation,
                         float brightness, float contrast)
{

    // declare new boolean  
    bool bNewFrame = false;

    // First the system grabbed an image from the camera.
    vidGrabber.update();
    bNewFrame = vidGrabber.isFrameNew();

    if (bNewFrame)
    {
        // Copy the pixels from the grabber
        colorImg.setFromPixels(vidGrabber.getPixels(), _width,_height);

        // Grayscale the image
        grayImage = colorImg;

        if (bLearnBackground == true)
        {
            // Store a comparison image (background image) from the current frame
            grayBg = grayImage;
            bLearnBackground = false;
        }

        // Copy Colour Image to the difference image then absDifference against the lastFrame 
        frameDiff = colorImg;
        frameDiff.absDiff(lastFrame);
        
        // Abs difference the background image
        grayImage.absDiff(grayBg);

        // Copy the Gray Image to the Difference image
        grayDiff = grayImage;
        
        // Copy gray difference to the thresh image
        threshImage = grayDiff;
        threshImage.threshold(threshold);
        threshImage.blur(blur);

        // Apply some image processing, boosting the brightness to make the blobs more apparent
        threshImage.brightnessContrast(brightness, contrast);

        grayDiff += frameDiff;
        // Isolate any pixels that do not conform to the threshold value
        frameDiff.threshold(threshold);
        
        // Pass the difference image into the contour finder
        contourFinder.findContours(frameDiff, minBlobSize, maxBlobSize, maxBlobNum,fillHoles,useApproximation);
        
        // Copy the Pixels
        unsigned char * diffpix = grayDiff.getPixels();
        unsigned char * threshpix = threshImage.getPixels();
        // Loop through the pixels
        for (int i = 0; i < (_width*_height); i ++)
        {
            int r = i * 4 + 0;
            int g = i * 4 + 1;
            int b = i * 4 + 2;
            int a = i * 4 + 3;
            
            if( threshpix[i] > 1)
            {
                outpix[r] = ofClamp(diffpix[i]*10, 0, 255);
                outpix[g] = ofClamp(diffpix[i]*10, 0, 255);
                outpix[b] = ofClamp(diffpix[i]*10, 0, 255);
                outpix[a] = threshpix[i];
            }
            else
            {
                outpix[r] = 0;
                outpix[g] = 0;
                outpix[b] = 0;
                outpix[a] = 0;
            }
        }
        // Set the Last frame to current loop frame
        lastFrame = colorImg;
    }
    // Set pix object to outpix
    pix.setFromPixels(outpix, _width, _height,4);
    
    // If no-one is there grab background after timer 
    if (contourFinder.nBlobs == 0 && ofGetElapsedTimeMillis() - backgroundTimer > 4000)
    {
        // Could just add to running average here;
        cout << "New Background" << endl;
        grayBg = colorImg;
        present = false;
    }

    if(contourFinder.nBlobs > 0)
    {
        backgroundTimer = ofGetElapsedTimeMillis();
        present = true;
    }
}

Code Section 2

To record the shadows we constructed a recording 'frame', or FBO (frame buffer object). This allowed us to populate pixels with specific data from the CV processes. openFrameworks has a function for reading pixels from FBOs, ofFbo::readToPixels(); however, for some reason this did not work with the Linux distro we used, so we replaced it with glReadPixels(), which behaves much like the oF code. For our purposes it simply copies pixels from one object to another.

//--------------------------------------------------------------
void CV::readAndWriteBlobData(ofColor backgroundColor,ofColor shadowColor)
{
    // Begin drawing into the FBO
    recordFbo.begin();
    
    // Fill the background with the background colour
    ofSetColor(backgroundColor);
    ofFill();
    ofRect(0, 0, _width, _height);
    
    // For however many people there are underneath the post
    for (int i = 0; i < contourFinder.nBlobs; i++)
    {
        // Make a shape
        ofSetColor(shadowColor);
        ofBeginShape();
        for (int k = 0 ; k < contourFinder.blobs[i].nPts; k++)
        {
            // Loop through and pull the coordinates from the contour finder and reconstruct it into a vertex polyline
            ofVertex(contourFinder.blobs[i].pts[k].x, contourFinder.blobs[i].pts[k].y);
        }
        ofEndShape(true);
    }
    // Grab pixels in the coordinates range and copy them to the pixels object
    glReadPixels(0, 0, 320, 240, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    recordFbo.end();
    
    // Set pix object to Pixels
    pix.setFromPixels(pixels, 320, 240, 4);
}

When it came to recording these 'frames', we called a custom function which returned the pixels from the FBO.

//--------------------------------------------------------------
ofPixels CV::getRecordPixels()
{
    return pix;
}

Disconnecting Camera

During the development process we came across an unusual camera issue that presented itself with the following error message.

[warning] ofGstVideoUtils: update(): ofGstVideoUtils not loaded

This indicated that no images were returning from the camera. After listing the active USB devices on the NUC (simply by opening a terminal window and typing lsusb), we found that the camera had disappeared from the USB interface list.
(Screenshots: USB device list and USB error)

The only thing we could attribute this error to was the power-saving mechanism on the NUC (it would disengage any inactive USB device to conserve power). One solution was to unplug and replug the camera each time the error occurred; however, this was not a viable option with the cables 5 m in the air!

We resolved this issue by remotely disengaging and re-engaging the camera in the launch procedure of the Shadowing software.

I had previously written an addon that allowed us to control the projector through a USB-to-RS-232 cable. When the software launched, the addon would pulse a serial command to the projector telling it to turn on; likewise, on exit, another pulse would turn the projector off. We altered this system to solve our problem.

First, we split a male-to-female USB extender, placing a reed relay on the 5V line between both ends of the cable. We then attached the DTR pin (4) and Ground pin (5) from the RS-232 cable to the relay.
(Photo: the camera power-switching hack)

The relay would essentially behave as an on/off switch for the camera.

(Diagram: DB9 pinout)

When the software launched, it would send an OFF (0) signal to the DTR pin, which opened the relay, cutting the power line to the camera.

//--------------------------------------------------------------
void ofxProjectorControl::turnOffCamera()
{
    unsigned int state = ~TIOCM_DTR;
    ioctl(fd, TIOCMSET, &state);
    cout << "Turned camera off" << endl;
}

The program would then wait a second and send the ON signal to the DTR pin, closing the relay and reinstating power to the camera.

//--------------------------------------------------------------
void ofxProjectorControl::turnOnCamera()
{
    unsigned int state = TIOCM_DTR;
    ioctl(fd, TIOCMSET, &state);
    cout << "Turned camera on" << endl;
}

The program would stall for a further three seconds before progressing; this gave the NUC enough time to establish a link to the camera before launching the video grabber setup routine.

To see how we recorded these images, see the next post.

For more information on camera types go to Blair Neal's fantastic guide on the Creative Applications website.