Patent classifications
G06V20/58
Systems and methods for providing warnings of imminent hazards
A system and method for alerting a driver of a motor vehicle, or a person walking along a road or hiking on a trail, to potentially dangerous hazards in their path. Hazards may include deep water, ice, and oil slicks. In the case of a motor vehicle, the system uses cameras mounted on or within the vehicle to detect potential hazards, then analyzes the images together with the known topography of the location to evaluate the vehicle's ability to safely traverse the hazard. In the case of a person walking or hiking, the person may use the camera on a personal mobile device to capture images of the hazard, which are combined with the known topography at the location to evaluate the danger the hazard presents.
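The traversability evaluation described above can be sketched in a few lines. This is an illustrative assumption about how image-derived measurements and topography might be combined, not the patented method; the function names, elevation inputs, and thresholds are all hypothetical.

```python
# Hypothetical sketch: estimate standing-water depth from an image-derived
# water-surface elevation plus known topography, then compare against the
# vehicle's safe wading depth. All names and thresholds are illustrative.

def estimate_water_depth(surface_elev_m: float, terrain_floor_elev_m: float) -> float:
    """Depth of standing water = water-surface elevation minus terrain floor elevation."""
    return max(0.0, surface_elev_m - terrain_floor_elev_m)

def should_warn(depth_m: float, vehicle_wading_depth_m: float,
                safety_margin_m: float = 0.05) -> bool:
    """Warn when the hazard depth approaches the vehicle's safe wading depth."""
    return depth_m + safety_margin_m >= vehicle_wading_depth_m

depth = estimate_water_depth(surface_elev_m=5.0, terrain_floor_elev_m=4.5)  # 0.5 m
alert = should_warn(depth, vehicle_wading_depth_m=0.5)                      # True
```

For a pedestrian, the same comparison could be made against a fixed walking threshold instead of a vehicle parameter.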
Real-time perception system for small objects at long range for autonomous vehicles
A small-object perception system, for use in a vehicle, includes a stereo vision system that captures stereo images and outputs information identifying an object having a dimension in a range of approximately 20 cm to approximately 100 cm within a perception range of approximately 3 meters to approximately 150 meters from the vehicle, and a system controller configured to receive output signals from the stereo vision system and to provide control signals to control a path of movement of the vehicle. The stereo vision system includes cameras separated by a baseline of approximately 1 meter to approximately 4 meters. The stereo vision system includes a stereo matching module configured to perform stereo matching on left and right initial images and to output a final disparity map based on a plurality of preliminary disparity maps generated from the left and right initial images, with the preliminary disparity maps having different resolutions from each other.
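The unusually wide baseline is what makes long-range small-object detection feasible, since stereo depth follows Z = f·B/d. A minimal sketch, assuming a 2000 px focal length (an illustrative value, not from the abstract), shows why:

```python
# Stereo depth geometry: Z = f * B / d, where f is focal length in pixels,
# B the baseline in meters, and d the disparity in pixels. A wider baseline
# keeps disparity measurable at long range. Values here are assumed examples.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

def disparity_at_depth(focal_px: float, baseline_m: float, depth_m: float) -> float:
    return focal_px * baseline_m / depth_m

# With f = 2000 px, a 2 m baseline gives ~26.7 px of disparity at 150 m,
# versus only 1.6 px for a typical 0.12 m automotive stereo rig.
wide = disparity_at_depth(2000.0, 2.0, 150.0)
narrow = disparity_at_depth(2000.0, 0.12, 150.0)   # 1.6 px
```

The multi-resolution preliminary disparity maps would then trade matching robustness (coarse) against localization precision (fine); the sketch above covers only the underlying geometry.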
Method and apparatus for de-biasing the detection and labeling of objects of interest in an environment
Described herein are methods of generating learning data to facilitate de-biasing the labeled location of an object of interest within an image. Methods may include: receiving sensor data, where the sensor data is a first image; determining reference corner locations of an object in the first image using image processing; generating observed corner locations of the object in the first image from the determined reference corner locations; generating a bias transformation based, at least in part, on a difference between the reference corner locations and the observed corner locations of the object in the first image; receiving sensor data from another image sensor of a second image; receiving observed corner locations of an object in the second image from a user; and applying the bias transformation to the observed corner locations of the object in the second image to generate de-biased corners for the object in the second image.
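The bias transformation above could, in the simplest case, be a constant per-corner offset between where annotators place corners and where they truly are. The following is a minimal sketch under that assumption; the patent's transformation may well be more general, and all names are illustrative.

```python
# Minimal de-biasing sketch, assuming annotator bias is a constant (dx, dy)
# offset learned from reference vs. observed corners in a first image, then
# inverted on observed corners in a second image. Illustrative only.

def estimate_bias(reference, observed):
    """Mean (dx, dy) displacement of observed corners from reference corners."""
    n = len(reference)
    dx = sum(o[0] - r[0] for r, o in zip(reference, observed)) / n
    dy = sum(o[1] - r[1] for r, o in zip(reference, observed)) / n
    return dx, dy

def debias(observed, bias):
    """Subtract the learned bias from user-labeled corners."""
    dx, dy = bias
    return [(x - dx, y - dy) for x, y in observed]

ref = [(10.0, 10.0), (50.0, 10.0), (50.0, 40.0), (10.0, 40.0)]
obs = [(12.0, 11.0), (52.0, 11.0), (52.0, 41.0), (12.0, 41.0)]  # shifted +2, +1
bias = estimate_bias(ref, obs)   # (2.0, 1.0)
corrected = debias(obs, bias)    # recovers the reference corners exactly
```

A fuller implementation might fit an affine or per-annotator model rather than a single global offset.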
Associating three-dimensional coordinates with two-dimensional feature points
An example method includes causing a light projecting system of a distance sensor to project a three-dimensional pattern of light onto an object, wherein the three-dimensional pattern of light comprises a plurality of points of light that collectively forms the pattern, causing a light receiving system of the distance sensor to acquire an image of the three-dimensional pattern of light projected onto the object, causing the light receiving system to acquire a two-dimensional image of the object, detecting a feature point in the two-dimensional image of the object, identifying an interpolation area for the feature point, and computing three-dimensional coordinates for the feature point by interpolating using three-dimensional coordinates of two points of the plurality of points that are within the interpolation area.
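The interpolation step can be sketched as a distance-weighted blend of the two pattern points' known 3D coordinates. Linear inverse-distance weighting is an assumption here, not necessarily the patent's scheme; names and values are illustrative.

```python
# Hedged sketch: estimate 3D coordinates for a 2D feature point from two
# projected pattern points whose 3D coordinates are known, weighting each
# by its 2D image-plane proximity to the feature point.

import math

def interpolate_3d(feature_2d, p1_2d, p1_3d, p2_2d, p2_3d):
    d1 = math.dist(feature_2d, p1_2d)
    d2 = math.dist(feature_2d, p2_2d)
    w1 = d2 / (d1 + d2)   # the closer pattern point gets the higher weight
    w2 = d1 / (d1 + d2)
    return tuple(w1 * a + w2 * b for a, b in zip(p1_3d, p2_3d))

# Feature point midway between two pattern points -> equal weights.
pt = interpolate_3d((5.0, 0.0),
                    (0.0, 0.0), (0.0, 0.0, 1.0),
                    (10.0, 0.0), (0.1, 0.0, 1.2))
```

With more than two points in the interpolation area, the same weighting generalizes to a normalized sum over all of them.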
Remote monitoring system, remote monitoring method, and remote monitoring server
The on-board information processing apparatus executes: detecting an object ahead of the vehicle from an image acquired by an on-board camera; compressing the image to generate a compressed image; transmitting first data including the compressed image and an image acquisition time of the image before compression; and transmitting second data including an object detection result and an image acquisition time of the image used for object detection. The remote monitoring server executes: receiving the first data and storing it in a memory; receiving the second data and storing it in the memory; extracting from the memory, in time series, the compressed image and the object detection result whose image acquisition times match; and superimposing the extracted object detection result on a restored image obtained by restoring the extracted compressed image, to display the superimposed result on a monitoring screen.
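The server-side pairing step amounts to buffering the two data streams keyed by acquisition time and emitting a pair only once both halves of a timestamp have arrived. A minimal sketch, with class and field names assumed for illustration:

```python
# Illustrative pairing buffer: "first data" (compressed image) and "second
# data" (detection result) arrive independently and are matched by their
# shared image acquisition time. Structure and names are assumptions.

from collections import defaultdict

class PairingBuffer:
    def __init__(self):
        self._by_time = defaultdict(dict)

    def put_image(self, acq_time, compressed_image):
        self._by_time[acq_time]["image"] = compressed_image
        return self._pop_if_complete(acq_time)

    def put_detection(self, acq_time, detections):
        self._by_time[acq_time]["det"] = detections
        return self._pop_if_complete(acq_time)

    def _pop_if_complete(self, acq_time):
        entry = self._by_time[acq_time]
        if "image" in entry and "det" in entry:
            del self._by_time[acq_time]
            return entry["image"], entry["det"]
        return None   # other half not yet received

buf = PairingBuffer()
buf.put_image(100, b"jpeg-bytes")             # returns None: detection pending
pair = buf.put_detection(100, [("car", 0.9)]) # returns the completed pair
```

A production server would also need eviction for timestamps whose counterpart never arrives, which this sketch omits.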
Temporal information prediction in autonomous machine applications
In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
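Here the TTC is learned by the DNN rather than computed analytically, but as a reference point for what such a network approximates, a classical image-only TTC estimate uses the scale change of an object's bounding box between frames: TTC ≈ dt / (s − 1), where s is the height ratio. A hedged sketch:

```python
# Classical scale-change TTC estimate (not the patent's DNN): if an object's
# bounding-box height grows by ratio s over dt seconds, TTC ~= dt / (s - 1).
# Function name and example values are illustrative.

def ttc_from_scale(h_prev_px: float, h_curr_px: float, dt_s: float) -> float:
    s = h_curr_px / h_prev_px
    if s <= 1.0:
        return float("inf")   # object is not closing on the camera
    return dt_s / (s - 1.0)

# Box height doubles over 0.5 s -> collision in roughly 0.5 s.
ttc = ttc_from_scale(100.0, 200.0, 0.5)   # 0.5
```

A learned sequential model can improve on this by smoothing over noisy detections and by predicting world-space velocity, which a single scale ratio cannot provide.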