Wednesday, March 14, 2007

A Look Back at the Project

Vision Based Traffic Light Triggering for Motorbikes Research Paper

Vision Based Traffic Light Triggering for Motorbikes Report

The goal of this project was to create a computer vision algorithm to detect incoming motorbike traffic for traffic light triggering.

I intended to track the motorbike at least partially, enough to distinguish it from cross traffic and noise.

Problems I ran into and steps I took to correct them:
1. I chose to use video processing rather than single-image processing. Video footage allows motion segmentation by background subtraction.

2. Defining the background image for subtraction was also a problem. A fixed image is not very robust: a slight movement of the camera will produce noise, and just using the previous frame will not give enough difference. I elected to use a sliding average to determine the background image.


3. Labeling the foreground blobs for tracking. Accomplished by computing the distance from the previous blobs and attaching the new blob to the nearest one within a threshold; if none is within the threshold, a new label is created.


4. Lighting conditions adversely affect motion segmentation. I mitigated this by using different colorspaces (RGB and B/Y opponent) to reduce the influence of lighting.

5. Simple tracking methods are easily affected by various factors. Cross traffic can easily steal the intended track (or vice versa). I determined that even if the tracking does not entirely work, the partial tracking may be enough to do the job. In my case, I was able to use RANSAC on the partial trajectories.



Steps I did not have time for:
- An actual classification system. The traffic detection problem is now reduced to a line fitting/classification problem. Ideas include feeding in annotated correct trajectories and testing by measuring the difference on the test data.

- Extensive training/testing set.

- More research into other methods at the various steps. Instead of tracking by area-based labeling, I perhaps should have tried interest point detection (as used by the other groups) and followed a region of interest points moving in the same direction.

RANSAC Lines

To solve last week's problem of fitting partial lines, Serge suggested RANSAC. I implemented RANSAC for line fitting and the results are as good as expected. RANSAC basically takes n random points, creates a model based on these points, calculates the error of all the data points against this model, and adds the model to a list of candidates if enough points fall within a threshold. More details can be found on Wikipedia.
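
As a rough sketch of the idea (not the exact code I used; the iteration count and inlier threshold below are arbitrary), a minimal RANSAC line fit in Matlab might look like this, with pts an N-by-2 matrix of (x, y) trajectory points:

% Minimal RANSAC line fit (sketch). pts is an N-by-2 matrix of [x y] points.
num_iters = 100;          % assumed number of random trials
inlier_thresh = 2;        % assumed max point-to-line distance (pixels)
best_inliers = [];
for i = 1:num_iters
    % pick 2 random points and fit a line a*x + b*y + c = 0 through them
    idx = randperm(size(pts, 1));
    p1 = pts(idx(1), :);  p2 = pts(idx(2), :);
    a = p2(2) - p1(2);    b = p1(1) - p2(1);
    c = -(a*p1(1) + b*p1(2));
    % distance of every point to this line
    d = abs(pts*[a; b] + c) / sqrt(a^2 + b^2);
    inliers = find(d < inlier_thresh);
    % keep the model supported by the most points
    if length(inliers) > length(best_inliers)
        best_inliers = inliers;
    end
end
% refit the line to the inliers with least squares
coeffs = polyfit(pts(best_inliers, 1), pts(best_inliers, 2), 1);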

Here is a line fitting to the motorcycle frame from last week:


This is one from a car in the background:


As seen in the first image, we now have a more robust model for detecting motion towards a traffic stop.

Wednesday, March 7, 2007

Partial Line Fittings for Classification

My problem now is how to fit partial lines. Originally, I wanted the trajectory creation to produce a line like Fig. 1. All incoming traffic will have a path similar to this one, given that it is tracked correctly.

Fig. 1 ideal case

The tracking system I used is still vulnerable to mislabeling, even after using minimum distance and area difference to retain a previous label across frames.

In Fig. 1a,b, we see a trajectory stolen by opposing traffic, but it still possesses the last portion, which can be used to determine whether it is incoming traffic. That portion still has the same slope and direction of movement as Fig. 1. So how can we use just the last part of the line instead of the whole trajectory path?

Both min. distance and area difference used for labeling:


Fig.1a,b Stolen trajectory from opposing traffic.


Fig. 2 Opposing traffic.


Harris Corner Detection
Curious to see how Harris interest points would work out, I used N. True's method from his parking example: do a Harris corner detection over the region of interest and sum up the points. The sum should differ from empty traffic because pavement produces no corners. I've tried this with N. True's OpenCV implementation and it is able to pick up interest points on the bike.
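
As a sketch of the same idea in Matlab rather than OpenCV (the ROI coordinates and count threshold are made up, and the corner function here is the Image Processing Toolbox one, not N. True's code):

% Count Harris corners inside a region of interest (ROI).
I = rgb2gray(imread('frame.png'));   % hypothetical frame from the footage
roi = I(200:400, 100:300);           % assumed ROI covering the target lane
pts = corner(roi, 'Harris');         % N-by-2 list of corner locations
corner_count = size(pts, 1);
% pavement alone should give a count near zero; a bike or car raises it
if corner_count > 20                 % assumed threshold
    disp('possible vehicle in ROI');
end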

The biggest problem I see is that occlusion by cross traffic can greatly affect it, which is why motion can be used to help separate out the correct incoming traffic.

Monday, March 5, 2007

Thoughts on Classification

In regard to the classification, I was thinking of the possible problems I might run into with my idea. My idea is to have a human label the correct incoming-traffic object in the training set. The system will save these objects' trajectories. For testing, when given video footage the system has not seen before (but from the same location), it will do the blob detection and check whether the current trajectory closely matches the ones in the training set.

The problems I see are:
- occlusion will sometimes happen, so the incoming traffic might only have half its trajectory.
- a vehicle traveling north in the opposing lane will have close to the same points as the incoming traffic --> need to use time/frame numbers to help.



On a side note, check out the power of human low-resolution recognition

Monday, February 26, 2007

Opponent Color Subtraction Result

I've implemented image subtraction using the blue/yellow channel as opposed to just RGB.
I also had to dilate the image before doing the connected component labeling for the daytime bike to be more apparent.
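
For reference, a rough Matlab sketch of this pipeline, assuming a blue/yellow opponent channel of the form B - (R+G)/2 (the exact channel definition, threshold, and structuring element I used may differ):

% Blue/yellow opponent channel subtraction, dilation, then labeling.
by = @(im) double(im(:,:,3)) - (double(im(:,:,1)) + double(im(:,:,2))) / 2;
diff_img = abs(by(frame) - by(background));   % frame/background are RGB images
fg = diff_img > 15;                           % assumed threshold
fg = imdilate(fg, strel('disk', 3));          % dilate so the bike forms one blob
[labels, num_blobs] = bwlabel(fg);            % connected component labeling
stats = regionprops(labels, 'Centroid', 'Area');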

This is a video clip of the RGB subtraction; notice that no bike is spotted in the image subtraction.


This is a video clip of the blue/yellow channel subtraction with dilation.


This is the same capture as above with the footage shown.


Using this blue/yellow channel image subtraction seems to hurt nighttime bike tracking. The bike is first labeled as 2, and then when the car drives by, it becomes a new blob.


Maybe use RGB at night and B/Y during the day?

--
For the classification rules, I'm thinking of something that trains on a set of trajectories known to be bike traffic. Then each labeled blob is compared against this set to check whether the distance between the two trajectories is within a certain bound.
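
Nothing is implemented yet, but the comparison I have in mind would look roughly like this sketch, assuming each trajectory is stored as an M-by-2 list of centroids resampled to the same length (the distance measure and bound are placeholders):

% Compare a test trajectory against trajectories known to be bike traffic.
% traj and each train_trajs{k} are M-by-2 [x y] centroid lists of equal length.
is_bike = false;
bound = 30;                                   % assumed distance bound in pixels
for k = 1:length(train_trajs)
    % mean point-to-point distance between the two trajectories
    d = mean(sqrt(sum((traj - train_trajs{k}).^2, 2)));
    if d < bound
        is_bike = true;
        break;
    end
end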

Wednesday, February 21, 2007

Opponent Colors Image Subtraction

I did some more experimenting with image subtraction in different colorspaces, using two frames from the daytime footage. Serge suggested using opponent colors rather than LAB for easier debugging, since the opponent color channels have a simpler model.
Below is plain RGB subtraction.


These two show the subtraction using the opponent color channels: the first one is green/red and the second one is blue/yellow. The blue/yellow channel seems to bring out the difference more, but is noisier than RGB.


Wednesday, February 14, 2007

Daytime Clip and Blob Position Plots

I recorded a clip with a bike at 1 pm. With the same settings as used on the Gilman car at 5 pm and the bike at night, it was not able to recognize the bike correctly. Perhaps the image is too saturated with light.

I found an L*a*b* colorspace function for Matlab. Here are the different components for a snapshot of the above video.
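
For reference, the conversion can also be done with the Image Processing Toolbox (a generic snippet, not necessarily the function I found):

% Split an RGB frame into L*, a*, b* components.
cform = makecform('srgb2lab');
lab = applycform(im2double(frame), cform);   % frame is an RGB image
L = lab(:,:,1);  a = lab(:,:,2);  b = lab(:,:,3);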


----
I plotted the positions of different objects in a 3D plot with x, y, and frame number as the axes.

This one is of the car at 5 pm at Gilman. Only objects with more than 10 positions are plotted.

This one is of the motorbike at night at Gilman. Only objects with more than 5 positions are plotted.



TODO Next:
-Fix daytime issue.
-Come up with classification rules.

Wednesday, February 7, 2007

Tracking Blobs

Since last week's discussion, I have decided to change my detection algorithm by adding a tracking component. After image subtraction to get the moving blobs and thresholding, I track the moving blobs to determine which direction each is moving in. Since the camera-based traffic light triggering is intended for only one traffic stop, we can then determine whether a blob is moving in the direction of the targeted traffic light.

My algorithm for tracking is simple:

For all the frames in the video:
    Compute the background by a sliding average.
    Subtract the background image from the current image.
    Convert the resulting image to a binary image and apply connected component labeling.
    Threshold on area to reduce noise.
    For all the blobs in the current frame:
        Compute position (centroid), area, and bounding box.
        Compare the current blob's position and area against the global blob set:
            If the positions are the nearest and within a certain threshold, add the position to the matching global blob.
            If the positions are not near, add the blob to the global blob set as a new entry.
    If a blob's trajectory has followed the intended direction path in the specified area, turn on the boolean flag for triggering.
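
As a condensed Matlab sketch of the blob matching step above (variable names and the distance threshold are illustrative; the real version also compares area):

% Associate current-frame blobs with tracked blobs by nearest centroid.
% stats is the regionprops output for the current frame; tracks is a struct
% array with a 'positions' field holding each blob's K-by-2 centroid history.
dist_thresh = 40;                             % assumed matching threshold
for b = 1:length(stats)
    c = stats(b).Centroid;
    best = 0;  best_d = inf;
    for t = 1:length(tracks)
        d = norm(c - tracks(t).positions(end, :));
        if d < best_d
            best_d = d;  best = t;
        end
    end
    if best > 0 && best_d < dist_thresh
        tracks(best).positions(end+1, :) = c;   % extend the nearest track
    else
        tracks(end+1).positions = c;            % no match: start a new track
    end
end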

Results:
night time bike:

car at 5pm:


The cross traffic does not track as well because it is moving faster, whereas the incoming traffic is moving slowly towards a stop sign and can be tracked.

TODO:
-Experiment and grab more training data in the daytime.
--> Expect to run into issues in the day with more cross traffic, people, and lighting conditions.

Sunday, January 28, 2007

Headlight (or Blob) Detection

I tried experimenting with Nicholas Howe's Segmentation through Graph Cuts but was largely unsuccessful on night images. I lowered the threshold as Nicholas Howe suggested but it still could not catch anything in the test video clips.

I implemented the sliding average (currently set at n = 15 frames) to compute the background image for the motion segmentation. It produces much better results than just using the previous frame.
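
As a rough sketch of the sliding average (buffer handling and variable names are illustrative; frames here is assumed to be the cell array of frames pulled from the video):

% Background as the mean of the previous n frames.
n = 15;                                   % window size
[h, w, c] = size(frames{1});              % frames: cell array of RGB frames
buffer = zeros(h, w, n);
for f = 1:length(frames)
    gray = double(rgb2gray(frames{f}));
    if f > n
        background = mean(buffer, 3);          % average of the last n frames
        diff_img = abs(gray - background);     % motion = |frame - background|
    end
    buffer(:, :, mod(f-1, n) + 1) = gray;      % overwrite the oldest slot
end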

Additionally, I worked on blob detection a little. After I get my image difference, I convert the frame to binary and run connected component labeling on it, thresholded by area. As you can see in the sample videos, the connected component labeling finds the headlights, but it also catches a lot of other false positives.

night time bike:




car at 5pm:



One idea I have is to somehow incorporate directional motion (like the gradient with respect to y) as a feature. Since each camera will only be responsible for one traffic stop, we know the direction the motorbike will be traveling. In this case, we can check whether the blob is heading south. If this can be implemented successfully, we can get rid of the cross-traffic detections.
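
One simple way to check this, sketched below, is to look at the sign of the blob's centroid displacement in y over its recorded positions (assuming image rows increase toward the south side of the frame; the 80% figure is arbitrary):

% Test whether a tracked blob is moving south (downward in the image).
% positions is a K-by-2 [x y] list of the blob's centroids over time.
dy = diff(positions(:, 2));                        % frame-to-frame change in y
heading_south = sum(dy > 0) > 0.8 * length(dy);    % most steps move downward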

Wednesday, January 24, 2007

Trials In Motion Segmentation

I'm working on segmenting the objects out of a video feed.

Most of the research papers I have read regarding stationary traffic surveillance use simple background subtraction, with the background being either the previous frame or an image taken when no traffic is present. ["Vision-based Detection of Activity for Traffic Control"] ["Real Time Detection of Crossing Pedestrians For Traffic Adaptive Signal Control"] Erosion and dilation can be applied afterward to reduce noise. During our last class, Serge suggested taking the last N frames and computing an average to use as the background image. Oddly, I can't get the perfect segmentation pictures found in the various research papers.

Here's a few runs in both day and night using different methods:
bike at night:

Subtracting from the previous frame, bike at night:

Subtracting from the average across all frames (not technically realistic since we can't see frames ahead of the current one). Working on using just the previous N frames.
bike at night:


car 5pm:


subtract previous frame car 5pm:


subtract avg car 5pm:

I was also looking at other segmentation methods. I found an interesting one, "Better Foreground Segmentation Through Graph Cuts" by Nicholas Howe and Alexandra Deschamps. They had Matlab code available, but it seems to only work with the demo movie file they included.
Here's how the sample video turned out. It looks good for blob detection (the next step); I'll take a more detailed look at it.

Monday, January 15, 2007

Improved method of capturing sample data

So it seems taking still pictures is a poor way of capturing data. A better approach is just to take video footage and cut the frames out of it. I figured out the simple Matlab commands for it:

FILENAME = 'MVI_1619.avi';
file_info = aviinfo(FILENAME);
num_frames = file_info.NumFrames;
for current_frame = 1:num_frames
    % grab the movie frame
    movieframe = aviread(FILENAME, current_frame);
    % convert to an image and save into a cell array
    imageframe{current_frame} = frame2im(movieframe);
end


I read the research paper from the previous post in more detail. To answer my own question about why motion detection is needed rather than just working with a single image: motion detection is used for background detection. By subtracting two frames separated by a time interval, we can set the foreground to the objects that moved.

Wednesday, January 10, 2007

Research Papers

I found an interesting research paper that covers many of the ideas for segmentation that Serge suggested. It is titled "Vision-based Detection of Activity for Traffic Control."
The paper discusses how a single algorithm to cover all cases of traffic detection is inefficient. I tend to agree. There are variations between daylight and night that can be exploited to achieve better results.

The approach in the paper suggests two detection modes based on the environmental conditions. The first stage applies motion detection by background differentiation, ghost removal, segment filtering, and an adaptive background update. Then, a dual processing scheme is applied that uses different methods to refine the motion parameters based on the contrast in the current image. A selection algorithm that uses the HSV color space is used to determine the different conditions (weather, day, night). In high contrast, shadows of moving objects need to be detected to avoid being merged with the foreground objects. At night and in low contrast, only the headlights are needed.

My question is: for detection, why is motion detection utilized? Why not just analyze a static image? We could apply background subtraction, then look at the window of interest (the lanes) for key feature points such as the headlights.

As for testing images, I found a good spot at the intersection of Villa La Jolla and the Gilman Parking Structure. Here are some pics:


At first I took the image from the ground floor; then I took it from the 2nd floor of the structure. I think it's a more realistic approach to take it from higher ground, since that's where cameras are usually mounted.

Also, I found this one that might be of interest to the group: Vision-Based Human Tracking and Activity Recognition.

Saturday, January 6, 2007

Sample Images

I found this site, http://www.metrokc.gov/kcdot/mycommute/allcams.cfm, that has webcams setup near Seattle, WA.


Assuming I base my algorithm on detecting headlights, we see two immediate problems given the images below: glare and reflections. Detecting a motorcycle at night is, I think, the most vital case because that is when cars are less likely to be on the road to set off the lights.



When I take test images on campus, I will try to mimic the perspective. I will also try to work with low resolutions to mimic a poor government-bought camera.


Traffic camera: 100th Ave NE @ Juanita-Woodinville Way
100 Ave NE @ Juan-Wood Way



Traffic camera: Novelty Hill @ 208th Ave NE - SE cor
Novelty Hill @ 208 Ave NE (SE)





Traffic camera: 180th SE @ W Valley Hwy
W Valley Hwy @ S 180th St


Traffic camera: 180th SE @ W Valley Hwy - West
W Valley Hwy @ S 180th St (W)




Traffic camera: Talbot Road @ S 43rd St/Carr Rd
Talbot Rd @ S 43rd St/Carr Rd



Traffic camera: SR 515 @ Petro Rd
SE 176th St/Carr Rd @ SR515



Traffic camera: 116th SE @ Petro Rd
116th SE @ Petro Rd



Traffic camera: Iss Hob Rd @ SE May Vly Rd

Iss Hob Rd @ SE May Vly Rd