Development of a vision system for an intelligent ground vehicle | Conference Paper
abstract

The development of a vision system for an autonomous ground vehicle designed and constructed for the Intelligent Ground Vehicle Competition (IGVC) is discussed. The requirements for the vision system of the autonomous vehicle are explored via functional analysis, considering the flows (materials, energies, and signals) into the vehicle and the changes required of each flow within the vehicle system. Functional analysis leads to a vision system based on a laser range finder (LIDAR) and a camera. Input from the vision system is processed via a ray-casting algorithm, whereby the camera data and the LIDAR data are analyzed as a single array of points representing obstacle locations, which, for the IGVC, consist of white lines on the horizontal plane and construction markers on the vertical plane. Functional analysis also leads to a multithreaded application in which the ray-casting algorithm is a single thread of the vehicle's software, which consists of multiple threads controlling motion, providing feedback, and processing the data from the camera and LIDAR. LIDAR data is collected as distances and angles from the front of the vehicle to obstacles. Camera data is processed using an adaptive threshold algorithm to identify color changes within the collected image; the image is also corrected for camera-angle distortion, transformed into the global coordinate system, and processed using a least-squares method to identify white boundary lines. Our IGVC robot, MAX, serves as the running example for all methods discussed in the paper, and all testing and results presented are based on this vehicle. © 2009 SPIE-IS&T.
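The record does not include code; as a rough illustration of the least-squares boundary-line step mentioned above, the sketch below fits a line to ground-plane points that are assumed to have already been thresholded, corrected for camera angle, and transformed into the global frame. The function name, data, and Python/NumPy choice are assumptions for illustration only, not the authors' implementation.

    import numpy as np

    def fit_boundary_line(points):
        """Fit a line y = m*x + b to ground-plane points by least squares.

        `points` is an (N, 2) array of (x, y) obstacle coordinates, assumed to
        have already been corrected for camera-angle distortion and transformed
        into the vehicle's global coordinate frame (hypothetical preprocessing).
        """
        x, y = points[:, 0], points[:, 1]
        # Solve the linear least-squares problem for slope m and intercept b.
        A = np.column_stack([x, np.ones_like(x)])
        (m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        return m, b

    if __name__ == "__main__":
        # Synthetic "white line" points on the ground plane (illustrative data only).
        rng = np.random.default_rng(0)
        xs = np.linspace(0.0, 5.0, 50)
        ys = 0.3 * xs + 1.2 + rng.normal(0.0, 0.02, xs.size)
        m, b = fit_boundary_line(np.column_stack([xs, ys]))
        print(f"estimated boundary line: y = {m:.2f} x + {b:.2f}")

The fitted slope and intercept give a compact description of a course boundary line that could then be merged with LIDAR obstacle points for the ray-casting step described in the abstract.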

author list (cited authors)
Nagel, R. L., Perry, K., Stone, R. B., & McAdams, D. A.
publication date
2009
publisher
SPIE
identifier
71933SE
Digital Object Identifier (DOI)
volume
7252