Sensor Fusion. Top: Two-dimensional, video-only view. Bottom: Three-dimensional view of video fused with LADAR and satellite imagery.
Distinguishing important classes of objects. Top: Camera image. Bottom: Detecting a mannequin (red), crops (green), and ground (cross-hatch).
Detecting the edges of an unimproved road.
We specialize in developing software that enables a robot or its remote operator to perceive its environment. Our algorithms process sensor input to detect features in the robot's environment, such as task targets, the positions of stationary and moving objects, and obstacles or terrain that may be hazardous to the robot.
Our algorithms are at the heart of image processing applications as diverse as finding dump truck locations for loading applications, classifying the behavior of test subjects, spotting failures in conveyor belt splices, identifying vehicles on the highway, and inspecting harvested strawberry plants. We have also developed sophisticated classifiers for very complex outdoor terrain that enable unmanned ground vehicles to safely detect and avoid hazards such as trees, steep slopes, ditches, washouts, rocks, and fence posts.
Our capabilities go beyond processing two-dimensional images. We can fuse different types of sensor data – for instance, video images and laser range data – to create complete, three-dimensional models of the environment. These sophisticated models serve as input to vehicle autonomy systems, assist operators of both manned and unmanned vehicles, and can be used to plan missions and train operators for new tasks.
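As a rough illustration of the kind of fusion described above — not our actual pipeline — the sketch below projects laser range (LADAR) points into a camera image with a standard pinhole model and attaches pixel colors to each visible 3-D point. The function name and the calibration inputs `K` (camera intrinsics) and `T` (LADAR-to-camera extrinsics) are hypothetical placeholders:

```python
import numpy as np

def fuse_camera_lidar(points, image, K, T):
    """Colorize 3-D LADAR points with pixel values from a camera image.

    points : (N, 3) array of LADAR returns in the sensor frame
    image  : (H, W, 3) RGB camera image
    K      : (3, 3) camera intrinsic matrix
    T      : (4, 4) transform from the LADAR frame to the camera frame
    Returns an (M, 6) array of [x, y, z, r, g, b] rows, one per point
    that projects inside the image.
    """
    h, w = image.shape[:2]
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    in_front = cam[:, 2] > 0
    cam, kept = cam[in_front], points[in_front]
    # Project onto the image plane with the pinhole model.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    # Discard projections that fall outside the image bounds.
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image[v[ok], u[ok]]
    return np.hstack([kept[ok], colors])
```

A real system would also handle lens distortion, occlusion, and time synchronization between sensors, but the projection step above is the core of combining a range sensor's geometry with a camera's appearance data.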
• Terrain and object classification
• Vegetation detection
• Thermal image processing
• Medical image analysis and registration
• Dynamic object detection and tracking
• Fusion with range sensor data and overhead imagery
• Behavior identification
Projects featuring Image Processing:
• Cargo UGV
• Strawberry Sorter
• Tartan Rescue Team
• Specialty Crop Automation