
Firm Updates and Announcements

Video requirements for your first video conflict analysis project

Video data is a critical tool for evaluating transportation networks in road safety engineering, operations, and traffic monitoring. As computer vision algorithms and automated video processing techniques become more precise, the quality and consistency of the collected video become increasingly important.

Video data is the foundation of any computer-aided video conflict analysis tool and arguably its most crucial component. The following list presents the requirements for video data collection (a simple sketch for checking these parameters follows the list):

Video camera height: The camera must be installed at least 7 m (23 ft) above the road facility.

Video camera angle: The camera should be angled at least 30 degrees below the horizontal.

Video camera resolution: The camera should have the capability to record at 720p or higher.

Video camera stabilization: The camera should be stabilized using physical constraints; otherwise, video stabilization software may be necessary.
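For teams scripting their data-collection checklists, the following is a minimal sketch of how these requirements could be checked programmatically in Python. The function name, parameters, and example values are illustrative assumptions; only the thresholds (7 m, 30 degrees below horizontal, 720p) come from the list above.

# Hypothetical helper for sanity-checking a proposed camera setup against the
# collection requirements listed above. Names and example values are illustrative.

def check_camera_setup(height_m: float,
                       angle_below_horizontal_deg: float,
                       vertical_resolution_px: int,
                       physically_stabilized: bool) -> list:
    """Return a list of warnings for any requirement the setup does not meet."""
    warnings = []
    if height_m < 7.0:                      # at least 7 m (23 ft) above the road
        warnings.append("Mount the camera at least 7 m (23 ft) above the road facility.")
    if angle_below_horizontal_deg < 30.0:   # at least 30 degrees below horizontal
        warnings.append("Angle the camera at least 30 degrees below the horizontal.")
    if vertical_resolution_px < 720:        # 720p or higher
        warnings.append("Record at 720p or higher.")
    if not physically_stabilized:
        warnings.append("No physical stabilization; plan for software stabilization.")
    return warnings


if __name__ == "__main__":
    for issue in check_camera_setup(6.0, 25.0, 1080, False):
        print("WARNING:", issue)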

The video camera can be integrated into either a static (pole-mounted) or dynamic (Unmanned Aerial Vehicle) recording system:

Pole-mounted: The camera is mounted to a fixed object adjacent to the road facility and angled towards the area of interest.

Unmanned Aerial Vehicle (UAV): The camera is mounted to a UAV and flown adjacent to the road facility.

Sample image from video data collected using an Unmanned Aerial Vehicle (UAV) from approximately 200 ft above ground level.

Issues that may affect the quality of video data include poor weather conditions, lack of camera stabilization, and a lack of suitable camera locations. Following video data collection guidelines can help limit these problems.

Once video data has been collected, it undergoes a number of transformation processes before it can be used to evaluate traffic characteristics. These transformation processes include homography calibration, masking, feature tracking, and feature grouping.

Homography calibration: In order to evaluate the interactions between road users, pixel coordinates in the two-dimensional video frame need to be mapped onto the real-world ground plane using a homography matrix, so that positions, distances, and speeds can be measured in real-world units rather than pixels.
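As an illustration, the following is a minimal homography calibration sketch using OpenCV. The pixel and ground-plane point pairs are made-up placeholders; in practice they would come from matching identifiable points in a video frame (for example, lane markings or crosswalk corners) to measured real-world coordinates.

# Minimal homography calibration sketch (illustrative point values only).
import cv2
import numpy as np

# Four or more corresponding points: (x, y) locations in the video frame ...
image_points = np.array([[410, 220], [860, 235], [905, 610], [380, 590]],
                        dtype=np.float32)
# ... and the same points expressed in ground-plane coordinates (metres).
world_points = np.array([[0.0, 0.0], [15.0, 0.0], [15.0, 12.0], [0.0, 12.0]],
                        dtype=np.float32)

# Estimate the 3x3 homography matrix mapping image coordinates to the ground plane.
H, _ = cv2.findHomography(image_points, world_points)

# Project an arbitrary pixel location (e.g. a tracked feature) into world coordinates.
pixel = np.array([[[640.0, 400.0]]], dtype=np.float32)
world = cv2.perspectiveTransform(pixel, H)
print("Ground-plane position (m):", world.ravel())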

Masking: A mask can be applied to the video data in order to focus the subsequent processing steps on a particular region of the video frame.
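As a hedged sketch of this step, the snippet below applies a polygon region of interest with OpenCV; the file name and polygon vertices are placeholders.

# Apply a polygon mask so later processing steps only consider the region of interest.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                 # one frame of the collected video
mask = np.zeros(frame.shape[:2], dtype=np.uint8)

# Region of interest defined as a polygon in pixel coordinates (placeholder values).
roi = np.array([[300, 150], [980, 160], [1020, 650], [260, 640]], dtype=np.int32)
cv2.fillPoly(mask, [roi], 255)

# Pixels outside the polygon are zeroed out before tracking.
masked_frame = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite("masked_frame.jpg", masked_frame)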

Feature Tracking: A feature is a small group of pixels (smaller than a vehicle) that is moving and exhibits some contrast to the surrounding environment. They are identified using frame-by-frame pixel comparison, as shown below.
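One common way to implement this step, sketched below, is Shi-Tomasi corner detection combined with pyramidal Lucas-Kanade optical flow in OpenCV. This is an illustrative approach rather than the exact method used by any particular conflict analysis tool, and the file name and parameter values are assumptions.

# Detect small, high-contrast features and track them frame by frame.
import cv2

cap = cv2.VideoCapture("intersection.mp4")      # placeholder file name
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Features: small pixel groups with good contrast to their surroundings.
features = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or features is None or len(features) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track each feature from the previous frame to the current one.
    new_features, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                       features, None)
    features = new_features[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray

cap.release()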

Feature Grouping: A heuristic algorithm groups features into objects based on cues such as speed, proximity, and shared geometric edges. Each resulting object may represent a single vehicle, pedestrian, or cyclist.
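To make the heuristic concrete, the sketch below groups features whose positions stay close together and whose velocities are similar. The thresholds, data layout, and merge strategy are assumptions made for illustration; a production tool would also use cues such as shared geometric edges and consistency over time.

# Illustrative grouping heuristic: merge features that are close and move alike.
import numpy as np

def group_features(positions, velocities, max_gap_m=3.0, max_speed_diff=1.0):
    """positions and velocities: (N, 2) arrays in ground-plane metres (one frame)."""
    n = len(positions)
    groups = [{i} for i in range(n)]            # start with one group per feature
    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(positions[i] - positions[j]) < max_gap_m
            similar = np.linalg.norm(velocities[i] - velocities[j]) < max_speed_diff
            if close and similar:
                gi = next(g for g in groups if i in g)
                gj = next(g for g in groups if j in g)
                if gi is not gj:                # merge the two groups
                    gi |= gj
                    groups.remove(gj)
    return groups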

Once features have been grouped into objects, these objects are stored in an object trajectory database, which includes a record of each object's class, trajectory, and position for each frame of the video. The date and time may also be stored based on the frame number, the frame rate, and the start time of the video. The object trajectory database is then ready for safety analysis (e.g. safety indicator/conflict type, frequency, and severity). Although the collection and preparation of video data requires some manual intervention, the majority of the processing work is done using an automated system. Furthermore, once the video data is properly calibrated, video conflict analysis tools can automatically process very large quantities of video data.
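As an illustration of how such a record and its timestamps might be represented, the sketch below uses a simple Python data class. The field names are assumptions; the timestamp arithmetic follows the description above (start time plus frame number divided by frame rate).

# A hypothetical object-trajectory record and frame-to-timestamp conversion.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ObjectTrajectory:
    object_id: int
    object_class: str                               # e.g. "vehicle", "pedestrian", "cyclist"
    positions: dict = field(default_factory=dict)   # frame number -> (x, y) in metres

def frame_timestamp(start_time, frame_number, frame_rate):
    """Recover the wall-clock time of a frame from the video start time and frame rate."""
    return start_time + timedelta(seconds=frame_number / frame_rate)

# Example: the time of frame 450 in a 30 fps video that started at 08:00:00.
print(frame_timestamp(datetime(2020, 6, 1, 8, 0, 0), 450, 30.0))    # 08:00:15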

For more information on video data collection, please contact Paul Anderson-Trocmé or Craig Milligan.
