Wednesday, January 10, 2007

Research Papers

I found an interesting research paper that covers many of the ideas for segmentation that Serge suggested. It is titled "Vision-based Detection of Activity for Traffic Control."
The paper discusses how a single algorithm that covers all cases of traffic detection is inefficient. I tend to agree: there are variations between daylight and night that can be exploited to achieve better results.

The approach in the paper suggests two detection modes based on the environmental conditions. The first stage applies motion detection by background differencing, ghost removal, segment filtering, and an adaptive background update. Then a dual processing scheme refines the motion parameters using different methods depending on the contrast in the current image. A selection algorithm based on the HSV color space determines the conditions (weather, day, night). In high contrast, shadows attached to moving objects need to be detected so they do not get merged with the foreground objects. At night and in low contrast, only the headlights need to be detected.
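To get a feel for how this dual-mode idea might look in code, here is a rough OpenCV sketch. This is only my reading of the pipeline, not the paper's implementation: the MOG2 background subtractor and all of the thresholds are my own placeholders.

```python
import cv2
import numpy as np

# Rough sketch of the dual-mode idea (my reading of the paper, not its code).
# Threshold values are placeholders, not taken from the paper.

bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def classify_conditions(frame_bgr, value_thresh=60, sat_thresh=40):
    """Guess day / night / low-contrast from mean HSV brightness and saturation."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mean_s = hsv[:, :, 1].mean()
    mean_v = hsv[:, :, 2].mean()
    if mean_v < value_thresh:
        return "night"          # low overall brightness -> headlight mode
    if mean_s < sat_thresh:
        return "low_contrast"   # washed-out image -> fog/rain-like conditions
    return "day"

def detect_motion(frame_bgr):
    """Foreground mask from background differencing, refined by condition."""
    fg = bg_subtractor.apply(frame_bgr)   # adaptive background update
    fg = cv2.medianBlur(fg, 5)            # crude segment filtering
    mode = classify_conditions(frame_bgr)
    if mode == "day":
        # MOG2 marks shadow pixels as 127; drop them so they are not
        # merged with the moving vehicles they are attached to.
        fg = np.where(fg == 255, 255, 0).astype(np.uint8)
    else:
        # At night / low contrast, keep only bright blobs (headlights).
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        fg = cv2.bitwise_and(fg, cv2.inRange(gray, 200, 255))
    return mode, fg
```

The point is just that the same foreground mask gets refined differently depending on which condition the HSV statistics suggest.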

My question is: for detection, why is motion detection utilized? Why not just analyze a static image? We could apply background subtraction, then look at the window of interest (the lanes) for key feature points such as headlights.
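To make the question concrete, here is a quick sketch of what I mean by the static approach: difference the frame against a stored empty-road image, mask everything outside the lanes, and then look for bright headlight-like blobs. The empty-road reference, the lane polygon, and the thresholds are all made-up values for illustration.

```python
import cv2
import numpy as np

def detect_in_still(frame_path, empty_road_path, lane_polygon):
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    empty = cv2.imread(empty_road_path, cv2.IMREAD_GRAYSCALE)

    # Background subtraction against a single empty-road reference image.
    diff = cv2.absdiff(frame, empty)
    _, changed = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Restrict attention to the lanes (the window of interest).
    mask = np.zeros_like(frame)
    cv2.fillPoly(mask, [np.array(lane_polygon, dtype=np.int32)], 255)
    changed = cv2.bitwise_and(changed, mask)

    # Candidate headlights: bright spots inside the changed region.
    bright = cv2.inRange(frame, 220, 255)
    headlights = cv2.bitwise_and(changed, bright)

    # Count connected bright blobs as a rough vehicle-presence cue.
    n_labels, _ = cv2.connectedComponents(headlights)
    return n_labels - 1  # subtract the background label
```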

As for testing images, I found a good spot at the intersection of Villa La Jolla and the Gilman Parking structure. Here are some pics:


At first I took the image from the ground floor, then I took it from the 2nd floor of the structure. I think it's a more realistic approach to take it from higher ground, since that's where cameras are usually mounted.

Also, I found this one that might be of interest to the group: Vision-Based Human Tracking and Activity Recognition.
