Drone-based Computer Vision-Enabled Vehicle Dynamic Mobility and Safety Performance Monitoring
dc.contributor.author | Zhang, Guohui | |
dc.contributor.author | Yuan, Runze | |
dc.contributor.author | Prevedouros, Panos | |
dc.contributor.author | Ma, Tianwei | |
dc.date.accessioned | 2023-04-13T18:36:45Z | |
dc.date.available | 2023-04-13T18:36:45Z | |
dc.date.issued | 2023-01-30 | |
dc.identifier.uri | http://hdl.handle.net/11122/13171 | |
dc.description.abstract | This report documents research activities to develop a drone-based, computer vision-enabled system for monitoring vehicle dynamic mobility and safety performance in Rural, Isolated, Tribal, or Indigenous (RITI) communities. Acquiring traffic system information, especially vehicle speed and trajectory data, is of great significance for studying the characteristics and management of traffic systems in RITI communities. The traditional approach of extracting vehicle counts and trajectories from video has its uses, but the typical video source is a camera fixed to a roadside device. In videos obtained this way, vehicles frequently occlude one another, which seriously degrades the accuracy of vehicle detection and speed estimation. High-view road video can be obtained from aircraft or satellites, but at correspondingly high cost. Because drones capture high-definition video from an elevated viewpoint at relatively low cost, we decided to use drones to obtain road videos for vehicle detection. To overcome the shortcomings of traditional object detection methods when facing large numbers of targets and the complex scenes of RITI communities, our proposed method uses convolutional neural network (CNN) technology. We modified the YOLO v3 network structure and applied transfer learning with a vehicle data set captured by drones, ultimately training a network that can detect and classify vehicles in drone-captured videos. A self-calibrated road boundary extraction method based on image sequences was used to extract road boundaries and filter vehicles, improving the detection accuracy of cars on the road. Using the neural network detection results as input, we applied video-based object tracking to extract vehicle trajectory information for traffic safety improvements. 
Finally, vehicle counts, speeds, and trajectories were calculated, and the average speed and density of the traffic flow were estimated on this basis. By analyzing the acquired data, we can assess the traffic condition of the monitored area and predict possible crashes on the highways. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Drone | en_US |
dc.subject | Convolutional Neural Network | en_US |
dc.subject | Transfer Learning | en_US |
dc.subject | Speed Estimation | en_US |
dc.subject | Crash Prediction | en_US |
dc.subject | Traffic Flow Parameter Estimation | en_US |
dc.subject | Safety | en_US |
dc.title | Drone-based Computer Vision-Enabled Vehicle Dynamic Mobility and Safety Performance Monitoring | en_US |
dc.type | Technical Report | en_US |
refterms.dateFOA | 2023-04-13T18:36:45Z |