Playable City Award 2014: Shadowing – Part 3: Shadows and ofxVideoBuffer

Overview

This post demonstrates how we captured ‘shadows’ from the images generated in the previous post and how we stored them for later use. We will also introduce the openFrameworks addon that was born out of achieving this goal – ofxVideoBuffer.

For those who might not have experienced Shadowing or seen any of the online documentation: Shadowing was intended to give streetlights memory. As people walked beneath the lampheads, the lights would record their shadows and play them back for the next person who walked through.


The video above shows a very early example of the sort of effect and interaction Jonathan and Matthew wanted to explore. The main technical question we asked ourselves was …

How do you record video, prepare it and then play it back almost instantly?

We tried a number of different methods, such as recording videos with the ffmpeg encoder: we piped processed images into the encoder, reloaded a directory populated with video files and generated ofVideoPlayer objects based on the number of videos in the directory. However, the reloading process did not allow for encoding time, and more often than not the videos were corrupted.
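For illustration, the pipe approach looked roughly like this; the ffmpeg flags, frame size, output path and framePixels source here are illustrative stand-ins, not our production code:

    #include <cstdio>

    // One grayscale frame from the tracker (illustrative placeholder)
    unsigned char framePixels[320 * 240];

    // Open a pipe into ffmpeg and stream raw frames at it
    FILE * ff = popen("ffmpeg -f rawvideo -pixel_format gray "
                      "-video_size 320x240 -framerate 30 -i - "
                      "-y recordings/shadow.mov", "w");
    if (ff != nullptr)
    {
        // In the app this write happened once per processed frame
        fwrite(framePixels, 1, 320 * 240, ff);

        // ffmpeg only finishes writing the file some time after the pipe
        // closes, which is why reloading the directory straight away
        // found incomplete, corrupt videos
        pclose(ff);
    }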

Solution

Instead of attempting to physically save videos on to the storage device, we tried to create a video recording system that saved images into a live memory bank that could be accessed quickly, without having to load files and generate new objects.

Our solution was to create a system that populates a series of buffers with images.
This was the basis for our ofxVideoBuffer addon.

How it works

The addon works by sequentially grabbing images specified by the user and pushing them into a buffer: a mutable container of image/pixel objects.

When the sequence is played, the addon cycles through the buffer frame by frame, like a video clip, displaying the images in order.
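Conceptually, the core of such a buffer looks something like this. This is a simplified sketch of the idea, not the addon's actual source:

    // Simplified sketch of the idea behind ofxVideoBuffer,
    // not the addon's actual source
    class ImageBuffer
    {
        public:
            // Push a copy of the incoming frame onto the end of the buffer
            void getNewImage(const ofPixels & pixels)
            {
                ofImage img;
                img.setFromPixels(pixels);
                frames.push_back(img);
            }
            // Advance the playhead one frame, looping at the end
            void update()
            {
                if (!frames.empty())
                {
                    playhead = (playhead + 1) % frames.size();
                }
            }
            // Draw the frame under the playhead
            void draw(float x, float y, float w, float h)
            {
                if (!frames.empty())
                {
                    frames[playhead].draw(x, y, w, h);
                }
            }
        private:
            std::vector<ofImage> frames; // the mutable container of images
            std::size_t playhead = 0;    // current playback position
    };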

How we used it

We split the addon’s processes into three blocks:

  • Capture
  • Forget Me
  • Playback

This section may not make complete sense because of the arbitrary variable names and unfamiliar functions, but bear with me and I'll explain what we did at each stage.

Capture

Capturing raised some interesting technical and artistic questions: at what point do you begin capturing images? How long do you need to capture for? What happens during the capture, and what happens if someone leaves? A lot of the subsequent processes depended on the smooth, successful capture of shadows.

This is how we approached it.

For the purpose of this post, we created a global instance of ofxVideoBuffer called b, and a std::deque of ofxVideoBuffer objects called buffers. These will be used in all of the examples below.
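In code, that looks something like the sketch below; the buffer-size limits here are illustrative values, not the ones we tuned on site:

    // The single live recording buffer and the memory bank of past recordings
    ofxVideoBuffer b;
    std::deque<ofxVideoBuffer> buffers;

    // Illustrative limits, not the production values
    #define MAX_BUFFER_SIZE 400 // longest recording we will hold
    #define MIN_BUFFER_SIZE 50  // discard anything shorter than this

    int imageCounter = 0;          // frames captured into b so far
    bool startRecording = false;   // are we currently recording?
    bool hasBeenPushedFlag = true; // has b been moved into the bank?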

First we checked whether our openCV tracking had detected someone underneath the lamp; if so, we flipped the startRecording flag.

    // If a blob is detected and the number of images in the buffer is less than the 
    // MAX buffer size flip the Start Recording bool
    if(openCV.isSomeoneThere() && imageCounter < MAX_BUFFER_SIZE)
    {
        startRecording = true; // Enable recording
        hasBeenPushedFlag = false; // Recording has not been added to the memory bank
    }

Once the startRecording flag had been flipped to true, we checked whether the camera had a new image. If so, b (the single ofxVideoBuffer) grabbed a processed openCV image every frame and incremented imageCounter by one, which let us keep track of the size of the buffer being created.

if(startRecording == true)
{
    // If new frame from the Camera
    if (openCV.newFrame())
    {
        // Capture a frame whenever the frame count divides evenly:
        // % 1 captures every frame; a larger divisor would subsample
        if (ofGetFrameNum() % 1 == 0)
        {
            // Capture the CV image 
            b.getNewImage(openCV.getRecordPixels());
            imageCounter++;
        }
    }
}

If someone left, the program flipped the startRecording flag to false and checked whether the buffer was larger than the predefined minimum. If not, the recording was discarded; if so, b (the single video buffer) was pushed to the front of the buffers container. b was then cleared of data, hasBeenPushedFlag set to true and imageCounter reset to 0. This essentially transferred the image buffer into the memory bank.

if(!openCV.isSomeoneThere())
{
    startRecording = false;
    if (hasBeenPushedFlag == false)
    {
        if (imageCounter >= MIN_BUFFER_SIZE)
        {
            // Long enough: transfer the recording into the memory bank
            buffers.push_front(b);
        }
        // Either way, clear the live buffer ready for the next person
        b.clear();
        hasBeenPushedFlag = true;
        imageCounter = 0;
    }
}

This was our first attempt at integrating the addon with the computer vision system, capturing and then playing back instantly, which Jonathan happily volunteered to demonstrate.

[Instagram video by @davidhaylock]

Forget Me

As we could quite happily have continued storing recordings indefinitely, we needed to ensure the system was not taxed by holding all the image data, so we limited the number of recordings it could hold in memory. This was perhaps the easiest part of the buffers to manage, as it could be done in one line of code: we checked the size of the buffers container and, if it grew bigger than howManyBuffersToStore (14 safely, 20 at a push), popped the last (oldest) buffer off the back of the container.

//--------------------------------------------------------------
    // If the memory bank holds more than howManyBuffersToStore recordings, release the oldest
    if (buffers.size() > howManyBuffersToStore)
    {
        buffers.pop_back();
    }

Playback

Once we had the recordings/buffers we had to figure out what to do with them. Jonathan and Matthew had previously drawn up a number of interaction loops and settled on two separate playback routines.

Awake

The Active or Awake state is triggered by someone (we'll call them shadow 1) walking beneath the lamphead. In this state, the lamppost would wake up and play the previous shadow in the memory bank (shadow 2) whilst capturing the live action. If the live action continued beyond the end of shadow 2's recording, the program would begin working sequentially through the memory bank (shadows 3, 4 and so on). However, if shadow 1 left, the program would play their own shadow back to them.

void ofApp::ShadowingProductionMode()
{
    // Someone has just arrived: restart playback of the most recent shadow
    if(openCV.isSomeoneThere() && openCV.isSomeoneThere() != lastPresentState && buffers.size() > 0 && buffers[0].isNearlyFinished())
    {
        playBackLatch = false;
        buffers[0].reset();
        buffers[0].start();
    }
    // Someone has just left: play their own shadow back to them
    else if(!openCV.isSomeoneThere() && dream == false && playBackLatch == false && !buffers.empty())
    {
        bSwitch = true;
        buffers[0].start();
        playBackLatch = true; // latch so this branch only fires once
    }
    else if(dream == true)
    {
        // Dream sequentially through the memory bank
        ShadowingDreamState();
    }
    lastPresentState = openCV.isSomeoneThere();
}

This is the lamppost waking up, capturing Jonathan and Matthew then playing them back to themselves.

[Instagram video by @davidhaylock]

Dreaming

Dreaming was the dormant state of the lamppost, triggered once the lamppost had been inactive for more than 1 minute.
After this it began to 'dream' about the last 14 shadows that had passed beneath it.

//--------------------------------------------------------------
//* Dream state - Play the buffers sequentially
//--------------------------------------------------------------
void ofApp::ShadowingDreamState()
{
    // Nothing to dream about yet
    if (buffers.empty())
    {
        return;
    }

    buffers[whichBufferAreWePlaying].start();

    if (buffers.size() > 2)
    {
        // When the current buffer finishes, wait a random 1-4 seconds
        // before moving on to the next one
        if (buffers[whichBufferAreWePlaying].isFinished() && randomWaitLatch == false)
        {
            randomWaitTimer = ofGetElapsedTimeMillis() + ofRandom(1000, 4000);
            randomWaitLatch = true;
        }
        if (randomWaitLatch && ofGetElapsedTimeMillis() > randomWaitTimer)
        {
            // Advance to the next buffer, wrapping back to the start
            // of the memory bank at the end
            whichBufferAreWePlaying = (whichBufferAreWePlaying + 1) % buffers.size();
            buffers[whichBufferAreWePlaying].reset();
            buffers[whichBufferAreWePlaying].start();
            randomWaitLatch = false;
        }
    }

    // Safety net: if the bank has shrunk, go back to the first buffer
    if (whichBufferAreWePlaying >= buffers.size())
    {
        whichBufferAreWePlaying = 0;
        buffers[0].reset();
        buffers[whichBufferAreWePlaying].start();
    }
}

And here is the sequential dreaming.

In the development process, we created a secondary dreaming state purely as an experiment. This state dreamt about everybody who had recently passed through, all at the same time. This is what the results looked like.

[Instagram video by @davidhaylock]

Draw

To draw the buffers, we simply rendered them through a blur shader and into a frame buffer object (FBO). This allowed us to smooth the edges of the shadows, eliminate any holes in the captured images and scale the image up without taxing the GPU and CPU.

One thing we found was that, to ensure smooth transitions between all the states, we needed to draw all the buffers constantly, no matter whether the lamppost was dreaming or awake. The buffers would fade out and stop on their own; if we stopped them ourselves, it looked disjointed and ugly. So the program updates and draws the buffers constantly, and each state only triggers them using the start and reset commands.
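In practice that meant updating every buffer in update(), whatever state the lamppost was in. This is a sketch of the pattern rather than our exact code:

    //--------------------------------------------------------------
    void ofApp::update()
    {
        // Update every stored buffer each frame, regardless of state,
        // so fade-outs run to completion and transitions stay smooth
        for (std::size_t i = 0; i < buffers.size(); i++)
        {
            if (!buffers[i].isEmpty())
            {
                buffers[i].update();
            }
        }
    }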

//--------------------------------------------------------------
void ofApp::draw()
{
    ofBackground(backColor);
    mainOut.begin();
    ofClear(backColor);
    
    if (useShader)
    {
        shader.begin();
        ofSetColor(255, 255);
        ofRect(0, 0, 320,240);
    }
    ofEnableBlendMode(OF_BLENDMODE_MULTIPLY); 
    if (!buffers.empty())
    {
        for (int i = 0; i < buffers.size(); i++)
        {
            buffers[i].draw(255);
        }
    }
    ofDisableBlendMode();
    if (useShader)
    {
        shader.end();
        shader.draw();
    }
    mainOut.end();

    ofSetColor(255, 255, 255);
    mainOut.draw(0,0,ofGetWidth(),ofGetHeight());

    // Alpha-blend the mask over the output to soften the edges of the projection
    if (drawMask)
    {
        ofEnableAlphaBlending();
        ofSetColor(255, 255);
        masks[whichMask].draw(0,0,ofGetWidth(),ofGetHeight());
        ofDisableAlphaBlending();
    }
}

The only other important process in playback was to pass the buffers through a mask, which smoothed the edges of the projected image and created an interaction zone. We settled on an elliptical mask as it was in keeping with the streetlight image.
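For anyone wanting to reproduce the effect, a soft elliptical mask can also be generated at runtime. This makeEllipseMask helper is a hypothetical sketch; Shadowing's masks came from the masks container seen in draw() above:

    // Hypothetical helper: builds a soft-edged elliptical mask that is
    // transparent in the middle and fades to opaque black at the rim
    ofImage makeEllipseMask(int w, int h)
    {
        ofPixels px;
        px.allocate(w, h, OF_PIXELS_RGBA);
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // Normalised distance from the centre of the ellipse
                float dx = (x - w * 0.5f) / (w * 0.5f);
                float dy = (y - h * 0.5f) / (h * 0.5f);
                float d  = sqrtf(dx * dx + dy * dy);
                // Fully clear inside 80% of the radius, opaque beyond it
                float a  = ofMap(d, 0.8f, 1.0f, 0, 255, true);
                px.setColor(x, y, ofColor(0, 0, 0, a));
            }
        }
        ofImage mask;
        mask.setFromPixels(px);
        return mask;
    }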

As Jonathan says, Shadowing ...

"In its most poetic form, creates pools of memory on the street, essentially compressing time in a single space."

If you want to see how we maintained and monitored the lampposts, or how we created the other parts of Shadowing, see the other posts in this series.

If you want a quick and simple guide to using ofxVideoBuffer, read on.

  • First download the addon from https://github.com/DHaylock/ofxVideoBuffer
  • Place the addon in your openFrameworks addons directory
  • Create a new project using the project generator, adding the ofxVideoBuffer addon
  • In the .h file add the following
    // Include the addon
    #include "ofxVideoBuffers.h"
    
    // Specify some width and height parameters
    #define WIDTH 320
    #define HEIGHT 240
    
    ofFbo fbo; // This is the recording frame
    ofxVideoBuffers buffer; // The ofxVideoBuffer object
    ofImage pixs; // Output pixels from the recording frame
    bool record;
    bool playback;
    
  • In your app setup you need to allocate the FBO size and set some variables
    //--------------------------------------------------------------
    void ofApp::setup()
    {
        ofSetFrameRate(60);
        fbo.allocate(WIDTH,HEIGHT,GL_RGBA);
        
        fbo.begin();
        ofClear(0, 0, 0);
        fbo.end();
        
        record = false;
        playback = false;
    }
  • Next we need to draw or create the images for our buffer to grab. In this case we'll use a circle that changes colour over time.
    //--------------------------------------------------------------
    void ofApp::update()
    {
        fbo.begin();
        ofClear(0); 
        float hue = fmodf(ofGetElapsedTimef()*100,255);
        ofColor c = ofColor::fromHsb(hue, 255, 255);
        ofSetColor(c);
        ofCircle(mouseX, mouseY, 10);
        fbo.end();
        fbo.readToPixels(pixs);
        
        // If record is true, grab the frame from the FBO
        if (record)
        {
            buffer.getNewImage(pixs);
        }
    
        if (buffer.isFinished())
        {
            playback = false;
        }
        
        // If the buffer has no images in it don't update. This is important! 
        // Other wise the app will try and update an empty buffer and crash
        if (!buffer.isEmpty())
        {
            buffer.update();
        }
    } 
  • Then it's a simple case of drawing the buffer.
     
    //--------------------------------------------------------------
    void ofApp::draw()
    {
        ofBackground(50);  
        ofSetColor(255);
        fbo.draw(0,0);
        
        // Draw the Buffer
        if (!buffer.isEmpty())
        {
            buffer.draw(WIDTH, 0, WIDTH, HEIGHT);
        }
        
        if(record)
        {
            ofSetColor(255, 0, 0);
            ofCircle(10, 12, 5);
        }
        
        if (playback)
        {
            ofSetColor(0, 255, 0);
            ofCircle(WIDTH+10, 12, 5);
        }
    }
  • Finally, you can add your control statements.
    //--------------------------------------------------------------
    void ofApp::keyPressed(int key)
    {
        switch (key) {
            case 'r':
                record = !record;
                break;
            case 'p':
                playback = true;
                buffer.reset();
                buffer.start();
                break;
            case 'c':
                buffer.clear();
                break;
            case 'f':
                buffer.setFade(false);
                break;
            case 'F':
                buffer.setFade(true);
                break;
            default:
                break;
        }
    } 

Here's the recording:

[Image: recording]

And here's the playback:

[Image: playback]

That's a rather basic example, but the principles are the same throughout the more complex examples found in the repo.