Wednesday, March 14, 2007

A Look Back at the Project

Vision Based Traffic Light Triggering for Motorbikes Research Paper

Vision Based Traffic Light Triggering for Motorbikes Report

The goal of this project was to create a computer vision algorithm to detect incoming motorbike traffic for traffic light triggering.

I intended to track the motorbike just enough to distinguish it from cross traffic and noise.

Problems I ran into and steps I took to correct them:
1. I chose to use video processing rather than single-image processing. Video footage allows motion segmentation by background subtraction.

2. Defining the background image for subtraction was also a problem. A fixed image is not very robust: a slight movement of the camera will produce noise, while just using the previous frame will not give enough difference. I elected to use a sliding average to determine the background image.


3. Labeling the foreground blobs for tracking. Accomplished by computing each blob's distance from the blobs in the previous frame and attaching it to the nearest one within a threshold; if none is close enough, a new label is created.


4. Lighting conditions adversely affect motion segmentation. Solved by using different colorspaces to reduce the influence of lighting:
RGB


B/Y Opponent

5. Simple tracking methods are easily affected by various factors. Cross traffic can easily steal the intended track (or vice versa). I determined that even if the tracking does not entirely work, the partial tracks may be enough to do the job. In my case, I was able to use RANSAC to fit the partial trajectories.
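The first two pieces of the pipeline above can be sketched roughly like this. This is a minimal sketch, not my actual Matlab code: the function names (`update_background`, `foreground_mask`, `match_labels`) and the constants (`alpha`, thresholds) are all illustrative choices.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Sliding (exponential) average: the background slowly absorbs each frame."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, thresh=25):
    """Pixels that differ enough from the background are foreground."""
    return np.abs(frame - background) > thresh

def match_labels(prev_blobs, curr_centroids, max_dist=30.0, next_label=0):
    """Attach each new centroid to the nearest previous blob within a
    distance threshold; otherwise start a new label.
    prev_blobs: {label: (x, y)} from the previous frame."""
    labels = {}
    for c in curr_centroids:
        c = np.asarray(c, dtype=float)
        best, best_d = None, max_dist
        for lab, p in prev_blobs.items():
            d = np.linalg.norm(c - np.asarray(p, dtype=float))
            if d < best_d:
                best, best_d = lab, d
        if best is None:
            best = next_label
            next_label += 1
        labels[best] = tuple(c)
    return labels, next_label
```

With a small `alpha`, a camera bump averages out of the background after a few seconds, while a moving bike still differs enough from it to segment.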



Steps I did not have time for:
- Actual classification system. The traffic detection problem is now reduced to a line fitting/classification problem. Ideas include training on annotated correct trajectories and testing by measuring the difference on the test data.

- Extensive training/testing set.

- Should have researched other methods more at the various steps. Instead of tracking blobs by labeled area, perhaps I should have tried interest point detection (as used by the other groups) and following a region of interest points moving in the same direction.

RANSAC Lines

To solve last week's problem of fitting partial lines, Serge suggested RANSAC. I implemented RANSAC for line fitting and the results are as good as expected. RANSAC basically takes n random points, creates a model based on these points, calculates the error of all the data points against this model, and keeps the model as a candidate if enough points fall within a threshold. More details can be found at Wikipedia.
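For line fitting the sketch is especially simple, since the minimal model is just two sampled points. The following is an illustrative version, not my implementation; the names and defaults (`n_iters`, `inlier_thresh`) are assumptions.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_thresh=1.0, seed=0):
    """Minimal RANSAC line fit: sample 2 points, score the line they define
    by how many points lie within inlier_thresh of it, keep the best.
    Returns ((p0, p1), inlier_mask)."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_pair, best_inliers = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p0, p1 = pts[i], pts[j]
        d = p1 - p0
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        # perpendicular distance of every point to the line through p0 and p1
        dist = np.abs(d[0] * (pts[:, 1] - p0[1]) - d[1] * (pts[:, 0] - p0[0])) / norm
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_pair, best_inliers = (p0, p1), inliers
    return best_pair, best_inliers
```

The key property for my problem: the outlier positions (the stolen part of a trajectory) simply never become inliers, so the partial line still gets fit.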

Here is a line fitting to the motorcycle frame from last week:


This is one from a car in the background:


As seen in the first image, we now have a more robust model for detecting motion towards a traffic stop.

Wednesday, March 7, 2007

Partial Line Fittings for Classification

My problem now is to see how to fit partial lines. Originally, I wanted the trajectory creation to produce a line like Fig. 1. All incoming traffic will have a path similar to this one, given that it is tracked correctly.

Fig. 1 ideal case

The tracking system I used is still vulnerable to mis-labeling, even after using minimum distance and area difference to retain a previous label across frames.

In Fig. 1a, we see a trajectory stolen by opposing traffic, but it still possesses the last portion, which can be used to determine whether it is incoming traffic. It still has the same slope and direction of movement as Fig. 1. So how can we use just the last part of the line instead of the whole trajectory path?

Both minimum distance and area difference used for labeling:


Fig.1a,b Stolen trajectory from opposing traffic.


Fig. 2 Opposing traffic.


Harris Corner Detection
Curious to see how Harris interest points would work out, I used N. True's method from his parking example: do a Harris corner detection over the region of interest and sum up the points. The sum should differ from empty traffic because pavement produces no corners. I've tried this with N. True's OpenCV implementation and it is able to pick up interest points on the bike.
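The idea can be sketched without OpenCV in a few lines of NumPy. This is a bare-bones Harris response (a 3x3 box window instead of the usual Gaussian, and made-up thresholds), just to show the count-corners-in-a-region trick, not N. True's actual code.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor of the image gradients summed over a 3x3 window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # gradients along rows and columns
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # sum each pixel's 3x3 neighborhood (edge-padded)
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

def corner_count(img, roi, thresh):
    """Count Harris responses above a threshold inside a region of interest.
    roi = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    R = harris_response(img)[r0:r1, c0:c1]
    return int((R > thresh).sum())
```

Empty pavement is nearly flat, so its count stays near zero; a bike entering the ROI adds textured structure and the count jumps.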

The biggest problem I see is that occlusion from cross traffic can greatly affect it, which is why motion can be used to help separate out the correct incoming traffic.

Monday, March 5, 2007

Thoughts on Classification

In regards to the classification, I was thinking about the possible problems I might run into with my idea. My idea is to have a human label the correct incoming traffic object in the training set. The system will save these objects' trajectories. For testing, when given video footage the system has not seen before (but from the same location), it will do the blob detection and check whether the current trajectory nearly matches the ones in the training set.

The problems I see are:
- occlusion will happen sometimes, so the incoming traffic might have only half the trajectory.
- a vehicle traveling north in the opposing lane will have close to the same points as the incoming traffic. --> need to use time/frame numbers to help.
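A rough sketch of the matching idea, with the (x, y, frame) triple as the trajectory sample so that the time axis separates north-bound from south-bound paths. Everything here (resampling to a fixed length, mean point distance, the `bound` parameter) is an assumed design, not something I've built yet.

```python
import numpy as np

def trajectory_distance(traj_a, traj_b):
    """Mean point-to-point distance between two trajectories of (x, y, frame)
    samples, after linearly resampling both to a common length."""
    def resample(t, n=20):
        t = np.asarray(t, dtype=float)
        idx = np.linspace(0, len(t) - 1, n)
        lo = np.floor(idx).astype(int)
        hi = np.ceil(idx).astype(int)
        frac = (idx - lo)[:, None]
        return (1 - frac) * t[lo] + frac * t[hi]
    a, b = resample(traj_a), resample(traj_b)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def classify(traj, training_trajs, bound):
    """Incoming traffic if the trajectory is within `bound` of any
    human-labeled training trajectory."""
    return any(trajectory_distance(traj, t) <= bound for t in training_trajs)
```

Because frame number is one of the coordinates, an opposing-lane vehicle that covers the same (x, y) points in reverse order ends up far from every training trajectory.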



On a side note, check out the power of human low-resolution recognition

Monday, February 26, 2007

Opponent Color Subtraction Result

I've implemented image subtraction using the blue/yellow channel as opposed to just RGB.
I also had to dilate the image before doing the connected component labeling for the daytime bike to be more apparent.
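The dilate-then-label step can be sketched in plain NumPy (my actual code uses Matlab's built-ins; this flood-fill version is just illustrative). Dilation grows each foreground pixel into its 8-neighborhood, so the fragments of a faint daytime bike merge into one connected component instead of several.

```python
import numpy as np

def dilate(mask, iters=1):
    """Binary dilation with a 3x3 structuring element."""
    m = mask.astype(bool)
    for _ in range(iters):
        p = np.pad(m, 1)
        grown = np.zeros_like(m)
        for di in range(3):
            for dj in range(3):
                grown |= p[di:di + m.shape[0], dj:dj + m.shape[1]]
        m = grown
    return m

def label_components(mask):
    """8-connected component labeling by flood fill. Returns (labels, count)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                current += 1
                labels[r, c] = current
                stack = [(r, c)]
                while stack:
                    i, j = stack.pop()
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                                    and mask[ni, nj] and labels[ni, nj] == 0):
                                labels[ni, nj] = current
                                stack.append((ni, nj))
    return labels, current
```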

This is a video clip of the RGB subtraction; notice no bike is spotted in the image subtraction.


This is a video clip of the blue/yellow channel subtraction with dilation.


This is the same capture as above with the footage shown.


Using the blue/yellow channel image subtraction seems to hurt nighttime bike tracking. The bike is first labeled as 2, and then when the car drives by, it becomes a new blob.


Maybe use RGB for night time and B/Y on daytime?

--
For the classification rules, I'm thinking of something that trains on a set of trajectories known to be bike traffic. Then each labeled blob is compared to this set, checking whether the distance between the two trajectories is within certain bounds.

Wednesday, February 21, 2007

Opponent Colors Image Subtraction

I did some more experimenting with image subtraction in different colorspaces. I used two frames from the daytime footage. Serge suggested using opponent colors rather than LAB for easier debugging, since the opponent color channels have a simpler model.
Below is plain RGB subtraction.


These two show the subtraction using opponent color channels. The first one is green/red and the second one is blue/yellow. The blue/yellow seems to bring out the difference more, but is noisier than RGB.
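For reference, one common opponent-color formulation (which I'm assuming here; there are several variants) is RG = R - G and BY = B - (R + G)/2, so a uniform brightness change cancels out of both channels:

```python
import numpy as np

def opponent_channels(img_rgb):
    """Split an RGB float image (H x W x 3) into opponent channels.
    Assumed formulation: RG = R - G, BY = B - (R + G) / 2."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return r - g, b - (r + g) / 2.0

def by_subtract(frame_a, frame_b):
    """Frame differencing on the blue/yellow channel only."""
    _, by_a = opponent_channels(frame_a)
    _, by_b = opponent_channels(frame_b)
    return np.abs(by_a - by_b)
```

A pixel that changes equally in all three channels (pure lighting change) has BY = b - (r + g)/2 unchanged, which is exactly the property that should help with the saturated daytime footage.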


Wednesday, February 14, 2007

Daytime Clip and Blob Position Plots

I recorded a clip with a bike at 1 pm. With the same settings as used for the Gilman car at 5 pm and the bike at night, it was not able to recognize the bike correctly. Perhaps the image is too saturated with light.

I found an L*A*B* colorspace function for Matlab. Here are the different components for a snapshot of the above video.


----
I plotted the positions of different objects in a 3d plot with x,y and frame number as the axes.

This one is of the car at 5 pm on Gilman. Only objects with more than 10 positions are plotted.

This one is of the motorbike at night on Gilman. Only objects with more than 5 positions are plotted.



TODO Next:
-Fix daytime issue.
-Come up with classification rules.