Patent classifications
G06T2207/30261
APPARATUS FOR OBSTACLE DETECTION BASED ON RELATIVE IMAGE MOTION
A vehicular vision system includes a camera that captures image data. The vehicular vision system, responsive to processing by an image processor of image data captured by the camera, detects a plurality of objects present within the field of view of the camera. The vehicular vision system, responsive to detecting the objects, generates a plurality of intra-hypotheses. The vehicular vision system, responsive to generating the plurality of intra-hypotheses, generates a plurality of inter-hypotheses, with each inter-hypothesis (i) based on at least one of the intra-hypotheses and (ii) representing a respective detected object different from each other inter-hypothesis. The vehicular vision system, responsive to generating the plurality of inter-hypotheses, tracks the detected objects. The vehicular vision system, responsive to tracking the objects, controls a driver assistance system of the vehicle.
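As an illustrative sketch only (the patent does not disclose this code, and the grouping rule, box format, and threshold are assumptions), merging per-detection intra-hypotheses into one inter-hypothesis per distinct object could be done by greedy overlap grouping:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def group_hypotheses(intra, thr=0.3):
    """Greedily merge overlapping intra-hypotheses so that each surviving
    box stands in for one inter-hypothesis, i.e. one distinct object."""
    inter_h = []
    for box in intra:
        if all(iou(box, kept) < thr for kept in inter_h):
            inter_h.append(box)
    return inter_h

# Two overlapping candidate boxes for one object, plus one separate object.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
objects = group_hypotheses(boxes)  # two inter-hypotheses remain
```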
LiDAR point selection using image segmentation
The subject disclosure relates to techniques for selecting points of an image for processing with LiDAR data. A process of the disclosed technology can include steps for receiving an image comprising a first image object and a second image object, processing the image to place a bounding box around the first image object and the second image object, and processing an image area within the bounding box to identify a first image mask corresponding with a first pixel region of the first image object and a second image mask corresponding with a second pixel region of the second image object. Systems and machine-readable media are also provided.
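A minimal sketch (not the patented implementation; function names, array shapes, and the projection step are assumed) of the final selection step, keeping only LiDAR points whose image projection falls inside a per-object pixel mask:

```python
import numpy as np

def select_lidar_points(points_uv, mask):
    """Keep LiDAR points whose projected (u, v) pixel falls inside the mask.

    points_uv : (N, 2) integer array of projected (u, v) pixel coordinates.
    mask      : (H, W) boolean array, True on the image object's pixel region.
    """
    u, v = points_uv[:, 0], points_uv[:, 1]
    h, w = mask.shape
    in_bounds = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points_uv), dtype=bool)
    keep[in_bounds] = mask[v[in_bounds], u[in_bounds]]
    return points_uv[keep]

# Toy example: a 4x4 mask with the object occupying the top-left 2x2 block.
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
pts = np.array([[0, 0], [1, 1], [3, 3], [5, 0]])  # last point is out of bounds
selected = select_lidar_points(pts, mask)
```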
Apparatus for acquiring 3-dimensional maps of a scene
An active sensor for performing active measurements of a scene is presented. The active sensor includes at least one transmitter configured to emit light pulses toward at least one target object in the scene, wherein the at least one target object is recognized in an image acquired by a passive sensor; at least one receiver configured to detect light pulses reflected from the at least one target object; a controller configured to control an energy level, a direction, and a timing of each light pulse emitted by the transmitter, wherein the controller is further configured to control at least the direction for detecting each of the reflected light pulses; and a distance measurement circuit configured to measure a distance to each of the at least one target object based on the emitted light pulses and the detected light pulses.
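The distance measurement circuit described above is, at its core, a time-of-flight computation. A hedged sketch of that relationship (standard physics, not the patent's circuit design):

```python
C = 299_792_458.0  # speed of light in m/s (vacuum)

def tof_distance(t_emit_s, t_detect_s):
    """One-way distance to a target from the round-trip time of flight
    between pulse emission and detection: d = c * dt / 2."""
    return C * (t_detect_s - t_emit_s) / 2.0

# A pulse whose light path totals 20 m returns after 20/C seconds,
# corresponding to a target 10 m away.
d = tof_distance(0.0, 20.0 / C)
```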
Vehicle hatch clearance determining system
A vehicle hatch clearance determining system includes a plurality of cameras disposed at the vehicle and a control having a processor for processing image data captured by the cameras. During a parking event, and responsive to processing of captured image data, the system detects objects, gathers information pertaining to detected objects exterior of the vehicle and generates an environmental model based at least in part on the gathered information. With the vehicle parked, the system determines the location of the vehicle relative to at least one detected object of the environmental model and determines if the model includes an object in a region that would be swept by a hatch of the parked vehicle when the hatch is being opened or closed. Responsive to determination that the object is in the region that would be swept by the hatch, the system stops movement of the hatch to avoid impact with the object.
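The swept-region test can be pictured as a planar arc check: does a detected object point lie within the hatch's reach and within its opening-angle range? A simplified 2-D sketch (geometry, names, and angle convention are assumptions, not the patent's model):

```python
import math

def in_sweep_region(obj_xy, hinge_xy, hatch_len, ang_min, ang_max):
    """True if an object point lies inside the planar arc swept by the hatch:
    hinge at hinge_xy, reach hatch_len, opening angles ang_min..ang_max (rad)."""
    dx = obj_xy[0] - hinge_xy[0]
    dy = obj_xy[1] - hinge_xy[1]
    r = math.hypot(dx, dy)
    ang = math.atan2(dy, dx)
    return r <= hatch_len and ang_min <= ang <= ang_max

# A low garage ceiling point 0.7 m behind and 0.9 m above the hinge blocks a
# 1.2 m hatch sweeping from horizontal to 100 degrees; a distant point does not.
blocked = in_sweep_region((0.7, 0.9), (0.0, 0.0), 1.2, 0.0, math.radians(100))
clear = in_sweep_region((2.0, 0.1), (0.0, 0.0), 1.2, 0.0, math.radians(100))
```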
Three-Dimensional Object Detection
Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle. For example, multi-sensor fusion can be implemented via continuous convolutions to fuse image data samples and LIDAR feature maps at different levels of resolution.
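The continuous fusion described above depends on sampling camera feature maps at the continuous (sub-pixel) image coordinates of projected LiDAR points. A minimal sketch of that sampling step via bilinear interpolation (shapes and names assumed; the actual system uses learned continuous convolutions):

```python
import numpy as np

def sample_image_features(feat_map, uv):
    """Bilinearly sample an (H, W, C) camera feature map at (N, 2) sub-pixel
    (u, v) locations of projected LiDAR points, returning (N, C) features."""
    h, w, _ = feat_map.shape
    u = np.clip(uv[:, 0], 0, w - 1.0)
    v = np.clip(uv[:, 1], 0, h - 1.0)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, w - 1), np.minimum(v0 + 1, h - 1)
    fu, fv = (u - u0)[:, None], (v - v0)[:, None]
    top = feat_map[v0, u0] * (1 - fu) + feat_map[v0, u1] * fu
    bot = feat_map[v1, u0] * (1 - fu) + feat_map[v1, u1] * fu
    return top * (1 - fv) + bot * fv

# 2x2 single-channel feature map; sampling its centre averages all four values.
fm = np.array([[[0.0], [1.0]],
               [[2.0], [3.0]]])
feats = sample_image_features(fm, np.array([[0.5, 0.5]]))
```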
OBJECT TRAJECTORY FORECASTING
A plurality of agent locations can be determined at a plurality of time steps by inputting a plurality of images to a perception algorithm that inputs the plurality of images and outputs agent labels and the agent locations. A plurality of first uncertainties corresponding to the agent locations can be determined at the plurality of time steps by inputting the plurality of agent locations to a filter algorithm that inputs the agent locations and outputs the plurality of first uncertainties corresponding to the plurality of agent locations. A plurality of predicted agent trajectories and potential trajectories corresponding to the predicted agent trajectories can be determined by inputting the plurality of agent locations at the plurality of time steps and the first uncertainties corresponding to the agent locations at the plurality of time steps to a variational autoencoder. The plurality of predicted agent trajectories and the potential trajectories corresponding to the predicted agent trajectories can be output.
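The "filter algorithm that outputs the plurality of first uncertainties" can be illustrated with the simplest possible case: a 1-D Kalman filter whose posterior variance at each time step plays the role of the first uncertainty. This is a stand-in sketch (the patent does not specify the filter; all parameters here are assumed):

```python
def filter_uncertainties(locations, meas_var=1.0, proc_var=0.5):
    """Minimal 1-D Kalman filter over agent locations, returning the
    posterior variance (location uncertainty) at each time step."""
    x, p = locations[0], meas_var
    variances = [p]
    for z in locations[1:]:
        p += proc_var            # predict: process noise grows uncertainty
        k = p / (p + meas_var)   # Kalman gain
        x += k * (z - x)         # update state toward the measurement
        p *= (1.0 - k)           # update: measurement shrinks uncertainty
        variances.append(p)
    return variances

variances = filter_uncertainties([0.0, 1.0, 2.0])
```

With repeated measurements the variance shrinks, so later agent locations carry tighter uncertainties into the trajectory forecaster.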
ANOMALY MONITORING FOR TECHNICAL SYSTEMS BASED ON THE COMPATIBILITY OF IMAGE DATA WITH A PREDEFINED DISTRIBUTION
A method for detecting anomalies in input image data, in particular from a camera, by detecting to what extent the input image data match at least one predefined distribution of image data or deviate from this predefined distribution. In the method: at least one transformation is provided, which maps input image data to data that have been information-reduced with regard to at least one aspect; at least one neural reconstruction network is provided, which is trained to reconstruct original image data from information-reduced data, which were obtained by applying the transformation to original image data from the predefined distribution; the input image data are mapped to information-reduced data by applying the transformation; the information-reduced data are mapped to reconstructed image data using the neural reconstruction network; the reconstructed image data are used to assess to what extent the input image data match or deviate from the predefined distribution.
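A toy sketch of the reduce-then-reconstruct idea, with average pooling as the information-reducing transformation and nearest-neighbour upsampling standing in for the trained reconstruction network (both stand-ins are assumptions; the method uses a learned network): images from the predefined distribution reconstruct well, anomalous ones do not.

```python
import numpy as np

def reduce_info(img, factor=2):
    """Information-reducing transform: average-pool the image by `factor`."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def reconstruct(small, factor=2):
    """Stand-in for the trained reconstruction network: NN upsampling."""
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def anomaly_score(img):
    """Mean squared reconstruction error: small for images matching the
    distribution the reconstructor models, large for deviating images."""
    return float(np.mean((img - reconstruct(reduce_info(img))) ** 2))

smooth = np.ones((4, 4))                             # in-distribution
noisy = np.array([[0., 1.] * 2, [1., 0.] * 2] * 2)   # checkerboard anomaly
```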
ENHANCED TARGET DETECTION
Image data are input to a machine learning program. The machine learning program is trained with a virtual boundary model based on a distance between a host vehicle and a target object and a loss function based on a real-world physical model. An identification of a threat object is output from the machine learning program. A subsystem of the host vehicle is actuated based on the identification of the threat object.
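One plausible reading of a "virtual boundary model based on a distance" grounded in a "real-world physical model" is a speed-dependent boundary derived from braking physics. A hedged sketch under that assumption (the patent does not disclose this formula; reaction time and deceleration values are illustrative):

```python
def virtual_boundary(speed_mps, reaction_s=1.0, decel_mps2=6.0):
    """Illustrative virtual boundary distance: reaction distance plus
    the physical stopping distance v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def is_threat(distance_m, speed_mps):
    """A target inside the host's virtual boundary is flagged as a threat."""
    return distance_m < virtual_boundary(speed_mps)

near = is_threat(20.0, 12.0)  # at 12 m/s the boundary is 12 + 144/12 = 24 m
far = is_threat(30.0, 12.0)
```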
OBSTACLE DETECTION APPARATUS, OBSTACLE DETECTION METHOD, AND OBSTACLE DETECTION PROGRAM
In an obstacle detection apparatus, the captured image of a vicinity of a vehicle from an imaging apparatus is acquired. A three-dimensional estimation image showing a three-dimensional position of a feature point in the captured image is generated, and a three-dimensional position of an object is estimated. An attribute image in which an object in the captured image is classified into one or more classes that include at least a road-surface class is generated. The three-dimensional estimation image and the attribute image are fused, the feature points and the classes are associated, and road-surface points associated with the road-surface class are extracted. Based on the road-surface points, a road-surface height in the vicinity of the vehicle is estimated. Based on the estimated road-surface height, the three-dimensional position of the object is corrected. Based on the three-dimensional position of the object, an obstacle in the vicinity of the vehicle is detected.
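The road-surface correction step can be sketched numerically: take the feature points labelled with the road-surface class, estimate the road height from them, and re-reference every point's height to that estimate. A minimal sketch (the median estimator and flat-road assumption are illustrative choices, not the patent's method):

```python
import numpy as np

def correct_object_heights(points_xyz, classes, road_class=0):
    """Estimate road-surface height from road-class points and subtract it
    from every point's height, re-referencing objects to the road surface.

    points_xyz : (N, 3) estimated 3-D positions of image feature points.
    classes    : (N,) per-point class labels fused from the attribute image.
    """
    road_z = float(np.median(points_xyz[classes == road_class, 2]))
    corrected = points_xyz.copy()
    corrected[:, 2] -= road_z
    return corrected, road_z

pts = np.array([[0., 0., 0.1], [1., 0., 0.1], [2., 0., 0.1],  # road points
                [1., 2., 1.1]])                                # obstacle point
labels = np.array([0, 0, 0, 5])
corrected, road_z = correct_object_heights(pts, labels)
```

After correction the obstacle's height is measured from the estimated road surface rather than from the camera frame.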
RUT DETECTION FOR ROAD INFRASTRUCTURE
A computer-implemented method for rut detection is provided. The method includes detecting, by a rut detection system, areas in a road-scene image that include ruts with pixel-wise probability values, wherein a higher value indicates a higher likelihood of being a rut. The method further includes performing at least one of rut repair and vehicle rut avoidance responsive to the pixel-wise probability values. The detecting step includes performing neural network-based, pixel-wise semantic segmentation with context information on the road-scene image to distinguish rut pixels from non-rut pixels on a road depicted in the road-scene image.
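The response to the pixel-wise probability values can be sketched as a simple thresholding decision (the thresholds and the area-fraction rule are assumptions for illustration; the segmentation network itself is not shown):

```python
import numpy as np

def needs_avoidance(prob_map, pix_thresh=0.5, area_frac=0.1):
    """Flag rut repair / vehicle rut avoidance when the fraction of pixels
    classified as rut exceeds area_frac of the road-scene image."""
    mask = prob_map > pix_thresh          # pixel-wise rut / non-rut decision
    return bool(mask.mean() > area_frac), mask

# A 2x2 toy probability map with two high-probability rut pixels.
prob = np.array([[0.1, 0.9],
                 [0.6, 0.2]])
avoid, mask = needs_avoidance(prob)
```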