Wednesday, January 24, 2007

Trials In Motion Segmentation

I'm working on segmenting the objects out of a video feed.

Most of the research papers I have read on stationary traffic surveillance use simple background subtraction: the background is either the previous frame or a reference image captured when no traffic is present. ["Vision-based Detection of Activity for Traffic Control"] ["Real Time Detection of Crossing Pedestrians For Traffic Adaptive Signal Control"] Erosion and dilation can be applied afterwards to reduce noise. During our last class, Serge suggested taking the last N frames and computing their average to use as the background image. Oddly, I can't reproduce the near-perfect segmentations shown in those papers.
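
For reference, here is roughly what the basic previous-frame differencing plus erosion/dilation pass looks like. This is just a minimal sketch in Python with OpenCV; the filename, the threshold of 25, and the 3x3 kernel are placeholder choices of mine, not values from the papers.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")      # placeholder filename for the camera feed

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

kernel = np.ones((3, 3), np.uint8)         # structuring element for erosion/dilation

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame differencing: absolute difference against the previous frame, then threshold.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Erode to remove speckle noise, then dilate to grow the surviving blobs back out.
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=2)

    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:       # Esc quits
        break

    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```

Eroding first and then dilating (a morphological opening) removes single-pixel noise before growing the surviving blobs back to roughly their original size.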

Here are a few runs, in both day and night, using different methods:

Bike at night:

Subtracting the previous frame, bike at night:

Subtracting the average across all frames, bike at night (not technically realistic, since we can't see frames ahead of the current one; I'm working on using just the previous N frames, sketched below):

Car at 5pm:

Subtracting the previous frame, car at 5pm:

Subtracting the average, car at 5pm:
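
Here is the direction I'm heading for Serge's previous-N-frames suggestion: keep a sliding window of the last N grayscale frames, use their mean as the background, and only add the current frame to the window after segmenting it, so no frame ever contributes to its own background. Again just a sketch; N, the threshold, and the filename are guesses I still need to tune.

```python
from collections import deque

import cv2
import numpy as np

N = 50        # how many past frames go into the background model (a guess)
THRESH = 25   # per-pixel difference threshold (a guess)

cap = cv2.VideoCapture("traffic_5pm.avi")  # placeholder filename
history = deque(maxlen=N)                  # sliding window of the last N frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    if len(history) == N:
        background = np.mean(history, axis=0)   # mean of the previous N frames only
        diff = np.abs(gray - background)
        mask = (diff > THRESH).astype(np.uint8) * 255
        cv2.imshow("foreground", mask)
        if cv2.waitKey(30) & 0xFF == 27:        # Esc quits
            break

    # Append after segmenting, so the current frame never leaks into its own background.
    history.append(gray)

cap.release()
cv2.destroyAllWindows()
```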

I was also looking at other segmentation methods and found an interesting one, "Better Foreground Segmentation Through Graph Cuts" by Nicholas Howe and Alexandra Deschamps. They make their Matlab code available, but it seems to work only with the demo movie file they include.
Here's how their sample video turned out. It looks good for blob detection (the next step), so I'll take a more detailed look at it.
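
I haven't dug into Howe and Deschamps' Matlab code yet, so the snippet below is not their method, only a toy version of the general graph-cut idea as I understand it: each pixel gets one terminal link whose capacity is the cost of labeling it background (its difference from the background image) and another whose capacity is the cost of labeling it foreground (a constant tau), neighboring pixels are linked with a smoothness penalty, and the minimum s-t cut gives the foreground mask. It uses networkx, which is far too slow for full frames but fine for a small patch; tau and smooth are made-up weights.

```python
import networkx as nx
import numpy as np

def graphcut_foreground(diff, tau=30.0, smooth=10.0):
    """Label pixels foreground/background with a min s-t cut on a 4-connected grid.

    diff   -- 2D array of |frame - background| values
    tau    -- constant cost of labeling a pixel foreground (data term)
    smooth -- penalty for neighboring pixels taking different labels
    """
    h, w = diff.shape
    G = nx.DiGraph()
    src, snk = "src", "snk"
    nid = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            p = nid(r, c)
            # t-links: src->p is paid if p is labeled background (cut off from the source),
            # p->snk is paid if p is labeled foreground (stays on the source side).
            G.add_edge(src, p, capacity=float(diff[r, c]))
            G.add_edge(p, snk, capacity=float(tau))
            # n-links: discourage isolated label flips between 4-neighbors.
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    q = nid(rr, cc)
                    G.add_edge(p, q, capacity=float(smooth))
                    G.add_edge(q, p, capacity=float(smooth))
    # Pixels left on the source side of the minimum cut are foreground.
    _, (src_side, _) = nx.minimum_cut(G, src, snk)
    mask = np.zeros((h, w), dtype=np.uint8)
    for node in src_side:
        if node not in (src, snk):
            mask[node // w, node % w] = 255
    return mask

# Tiny usage example: a synthetic 8x8 difference image with one bright 3x3 block.
if __name__ == "__main__":
    diff = np.zeros((8, 8), dtype=np.float32)
    diff[2:5, 2:5] = 80.0
    print(graphcut_foreground(diff))
```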
