
Target Geolocation Method Based on Monocular Vision
Beijing Institute of Technology, School of Information and

SAE Technical Papers (1906-current)
Format:
Book
Conference/Event
Author/Creator:
Zhang, Nijia, author.
Contributor:
Chen, Ziyi
Fu, Xiongjun
Hu, Weidong
Lu, Mingfeng
Tao, Ran
Zhang, Feng
Conference Name:
2025 5th International Conference on Smart City Engineering and Public Transportation (SCEPT2025) (2025-03-28 : Beijing, China)
Language:
English
Physical Description:
1 online resource
Place of Publication:
Warrendale, PA : SAE International, 2025
Summary:
In intelligent transportation systems (ITS), roadside sensing can capture the movement status of objects in a traffic scene in real time from a global perspective, which is of great significance for traffic flow optimization, accident early warning, and post-accident rescue. Accurate target positioning is one of the key links in realizing these functions: it not only helps traffic management departments grasp traffic conditions in time, but also gives rescue personnel a basis for responding quickly when an accident occurs, minimizing the damage it causes. Therefore, this paper proposes a method for acquiring the Global Positioning System (GPS) coordinates of objects using a monocular surveillance camera installed on the roadside. By combining a target detection algorithm with coordinate transformation, and taking into account information such as the camera's installation state and intrinsic parameters, the pixel positions of objects of interest are converted to GPS coordinates under the Global Navigation Satellite System (GNSS) by two different methods, depending on what is known in each situation. To evaluate the accuracy and stability of the method in practical applications, several sets of experiments in real scenes are conducted. The experimental results show that our method can estimate the latitude and longitude of objects in the camera-monitored scene at different distances from the camera. Meanwhile, comparative analysis with other localization methods demonstrates the higher accuracy, feasibility, and superiority of our method.
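The abstract describes converting pixel positions to GNSS coordinates using the camera's installation state and intrinsic parameters. The paper's two specific methods are not reproduced in this record, but the general idea can be illustrated with a minimal ground-plane projection sketch. Everything here is an assumption for illustration, not the authors' implementation: a pinhole camera at known height and downward pitch, targets on flat ground, and a flat-earth conversion from metre offsets to latitude/longitude.

```python
import math

def pixel_to_gps(u, v, fx, fy, cx, cy,
                 cam_height, pitch_deg, heading_deg,
                 cam_lat, cam_lon):
    """Illustrative sketch: project pixel (u, v) onto a flat ground plane
    and convert the resulting metre offset to latitude/longitude.

    Assumptions (not from the paper): pinhole camera at cam_height metres,
    pitched pitch_deg below the horizon, facing compass heading heading_deg;
    fx, fy, cx, cy are the intrinsics in pixels.
    """
    # Back-project the pixel to a ray in camera coordinates
    # (x right, y down, z forward).
    x = (u - cx) / fx
    y = (v - cy) / fy
    z = 1.0

    # Rotate the ray by the downward pitch into a level frame
    # where y points straight down.
    p = math.radians(pitch_deg)
    y_lvl = math.cos(p) * y + math.sin(p) * z
    z_lvl = -math.sin(p) * y + math.cos(p) * z

    if y_lvl <= 0:
        raise ValueError("ray does not intersect the ground plane")

    # Scale the ray so it descends cam_height metres to the ground.
    t = cam_height / y_lvl
    forward = t * z_lvl   # metres ahead of the camera
    right = t * x         # metres to the camera's right

    # Rotate by the compass heading into east/north offsets.
    h = math.radians(heading_deg)
    east = forward * math.sin(h) + right * math.cos(h)
    north = forward * math.cos(h) - right * math.sin(h)

    # Flat-earth approximation: metres -> degrees near cam_lat.
    lat = cam_lat + north / 111_320.0
    lon = cam_lon + east / (111_320.0 * math.cos(math.radians(cam_lat)))
    return lat, lon
```

For example, a camera 10 m high pitched 45° down and facing north maps its principal point to the ground 10 m due north of its base, so the returned latitude increases by roughly 10 / 111320 degrees. In practice the second step (metres to degrees) would use a proper geodesic library rather than this flat-earth shortcut.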
Notes:
Vendor supplied data
Publisher Number:
2025-99-0021
Access Restriction:
Restricted for use by site license

