Patent classifications
G06T2207/30261
Navigation based on free space determination
Systems and methods navigate a vehicle by determining a free space region in which the vehicle can travel. In one implementation, a system may include at least one processor programmed to receive, from an image capture device, a plurality of images associated with the environment of a vehicle, and analyze at least one of the plurality of images to identify a first free space boundary on a driver side of the vehicle and extending forward of the vehicle, a second free space boundary on a passenger side of the vehicle and extending forward of the vehicle, and a forward free space boundary forward of the vehicle and extending between the first free space boundary and the second free space boundary. The first free space boundary, the second free space boundary, and the forward free space boundary may define a free space region forward of the vehicle. The at least one processor may be further programmed to determine a navigational path for the vehicle through the free space region and cause the vehicle to travel on at least a portion of the determined navigational path within the free space region forward of the vehicle.
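The path-planning step can be sketched as follows (an illustrative simplification, not the patented implementation: boundaries are represented as per-row lateral positions, and the midpoint heuristic is an assumption):

```python
# Sketch: given left/right free space boundaries detected in the image
# (one lateral x position per image row, nearest row first) and a forward
# free space limit, derive a navigational path through the free space region.
def plan_path(left_boundary, right_boundary, forward_limit):
    path = []
    for row in range(min(forward_limit, len(left_boundary), len(right_boundary))):
        lx, rx = left_boundary[row], right_boundary[row]
        if lx >= rx:           # boundaries crossed: no free space at this depth
            break
        path.append((row, (lx + rx) / 2.0))  # steer toward the lateral midpoint
    return path

# Example: a corridor that narrows toward the forward boundary
left = [10, 12, 14, 16, 18]
right = [90, 88, 86, 84, 82]
print(plan_path(left, right, forward_limit=4))
# → [(0, 50.0), (1, 50.0), (2, 50.0), (3, 50.0)]
```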
Rendering operations using sparse volumetric data
A ray is cast into a volume described by a volumetric data structure, which describes the volume at a plurality of levels of detail. A first entry in the volumetric data structure includes a first set of bits representing voxels at a first, lowest one of the plurality of levels of detail, and the values of the first set of bits indicate whether a corresponding one of the voxels is at least partially occupied by respective geometry. A set of second entries in the volumetric data structure describes voxels at a second level of detail, which represent subvolumes of the voxels at the first level of detail. The ray is determined to pass through a particular subset of the voxels at the first level of detail, and at least a particular one of that subset is determined to be occupied by geometry.
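The bit-level occupancy test can be sketched as follows (a minimal illustration only: the 64-bit entry size, the 4×4×4 block, and the bit ordering are assumptions, not the patented layout):

```python
# Sketch: a 64-bit entry encodes a 4x4x4 block of voxels, one bit per voxel.
# A set bit means the corresponding voxel is at least partially occupied.
def voxel_bit(x, y, z):
    return x + 4 * y + 16 * z    # assumed bit layout for illustration

def is_occupied(entry_bits, x, y, z):
    return (entry_bits >> voxel_bit(x, y, z)) & 1 == 1

def ray_hits(entry_bits, ray_voxels):
    """ray_voxels: the (x, y, z) voxels the ray passes through, in order.
    Returns the first occupied voxel, or None if the ray hits nothing."""
    for v in ray_voxels:
        if is_occupied(entry_bits, *v):
            return v
    return None

# Mark voxel (2, 1, 0) occupied, then cast a ray along y = 1, z = 0
entry = 1 << voxel_bit(2, 1, 0)
print(ray_hits(entry, [(0, 1, 0), (1, 1, 0), (2, 1, 0), (3, 1, 0)]))
# → (2, 1, 0)
```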
Navigation based on partially occluded pedestrians
Systems and methods are provided for navigating a host vehicle. In an embodiment, a processing device may be configured to receive a captured image acquired by a camera onboard the host vehicle; provide the captured image to an analysis module configured to generate an output including an indicator of a contact position of a partially occluded pedestrian with the ground surface, the analysis module including a trained model trained based on a plurality of training images having been modified to occlude a region where a training pedestrian contacts a training ground surface; receive from the analysis module the generated output, including the indicator of the contact position of the occluded pedestrian with the ground surface; and cause at least one navigational action by the host vehicle based on the indicator of the contact position of the occluded pedestrian with the ground surface.
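The training-image modification can be sketched as follows (a hedged illustration: the mask size, fill value, and in-place layout are assumptions; the point is that the ground-contact region is hidden so the model must infer the contact position from visible context):

```python
# Sketch: occlude the region where the training pedestrian contacts the
# ground, as a training-data modification.
def occlude_contact_region(image, contact_row, contact_col, half=1, fill=0):
    """image: 2-D list of pixel values, modified in place.
    Zeroes a (2*half+1)-wide square around the contact point."""
    rows, cols = len(image), len(image[0])
    for r in range(max(0, contact_row - half), min(rows, contact_row + half + 1)):
        for c in range(max(0, contact_col - half), min(cols, contact_col + half + 1)):
            image[r][c] = fill
    return image

img = [[1] * 5 for _ in range(5)]
occlude_contact_region(img, contact_row=4, contact_col=2)
print(img[4])
# → [1, 0, 0, 0, 1]
```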
SYSTEM AND METHOD FOR OCCLUDING CONTOUR DETECTION
A system and method for occluding contour detection using a fully convolutional neural network is disclosed. A particular embodiment includes: receiving an input image; producing a feature map from the input image by semantic segmentation; learning an array of upscaling filters to upscale the feature map into a final dense feature map of a desired size; applying the array of upscaling filters to the feature map to produce contour information of objects and object instances detected in the input image; and applying the contour information onto the input image.
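The upscaling step can be sketched as follows (an illustration only: a fixed 2x nearest-neighbour expansion stands in for the learned array of upscaling filters, which the patent learns rather than fixes):

```python
# Sketch: expand a coarse feature map to a denser size. In the disclosed
# method the upscaling weights are learned; here fixed 2x replication is
# used purely to show the shape transformation.
def upscale_2x(feature_map):
    out = []
    for row in feature_map:
        expanded = [v for v in row for _ in range(2)]  # duplicate each column
        out.append(expanded)
        out.append(list(expanded))                     # duplicate each row
    return out

print(upscale_2x([[1, 2]]))
# → [[1, 1, 2, 2], [1, 1, 2, 2]]
```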
THREE-DIMENSIONAL OBJECT DETECTION
An image can be input to a deep neural network to determine a point in the image based on a center of a Gaussian heatmap corresponding to an object included in the image. The deep neural network can determine an object descriptor corresponding to the object and include the object descriptor in an object vector attached to the point. The deep neural network can determine object parameters including a three-dimensional location of the object in global coordinates and predicted pixel offsets of the object. The object parameters can be included in the object vector, and the deep neural network can predict a future location of the object in global coordinates based on the point and the object vector.
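The heatmap-peak step can be sketched as follows (a minimal illustration: the real network predicts the descriptor, 3-D location, and pixel offsets; the values attached below are placeholders):

```python
# Sketch: find the point at the center of a Gaussian heatmap (its maximum)
# and attach an object vector to that point.
def heatmap_peak(heatmap):
    """heatmap: 2-D list of scores. Returns (row, col) of the maximum."""
    best, best_rc = float("-inf"), None
    for r, row in enumerate(heatmap):
        for c, v in enumerate(row):
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc

hm = [[0.1, 0.2, 0.1],
      [0.2, 0.9, 0.3],
      [0.1, 0.3, 0.2]]
point = heatmap_peak(hm)
object_vector = {"point": point,
                 "descriptor": [0.5, -0.2],       # placeholder object descriptor
                 "xyz_global": (12.0, 3.5, 0.0),  # placeholder 3-D location
                 "pixel_offset": (0.4, -0.1)}     # placeholder predicted offsets
print(point)
# → (1, 1)
```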
Vehicle collision warning prevention method using optical flow analysis
A vehicle collision warning prevention method includes the steps of: (a) extracting a forward video of a vehicle and video recognition information from a video recognition module mounted in the vehicle, and detecting a size change rate of a forward object included in the video recognition information at each frame of the forward video; (b) calculating an average optical flow change rate (OFCR) over a predetermined frame section; (c) determining whether a value obtained by subtracting the average OFCR from a current OFCR is less than a predetermined threshold value; (d) determining that a brake operation signal is generated when it is determined that the value is less than the threshold value; (e) determining whether a collision warning signal is generated within a predetermined time after the step (d); and (f) preventing an output of the collision warning signal when the collision warning signal is generated at the step (e).
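Steps (b) through (d) can be sketched as follows (a hedged illustration: treating the per-frame size change rate of the forward object as the OFCR, with an assumed negative threshold sign convention):

```python
# Sketch: infer a brake operation from a sudden drop in the optical flow
# change rate (OFCR) relative to its recent average.
def brake_inferred(ofcr_history, current_ofcr, threshold):
    """ofcr_history: OFCR values over the predetermined frame section."""
    avg = sum(ofcr_history) / len(ofcr_history)    # step (b)
    return (current_ofcr - avg) < threshold        # steps (c)-(d)

# Forward object suddenly growing much more slowly in the image
# (consistent with hard braking), so the warning would be suppressed:
print(brake_inferred([1.02, 1.03, 1.02, 1.03], 0.85, threshold=-0.1))
# → True
```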
METHOD AND DEVICE FOR THE ESTIMATION OF CAR EGO-MOTION FROM SURROUND VIEW IMAGES
A method and device for determining an ego-motion of a vehicle are disclosed. Respective sequences of consecutive images are obtained from a front view camera, a left side view camera, a right side view camera and a rear view camera and merged. A virtual projection of the images to a ground plane is provided using an affine projection. An optical flow is determined from the sequence of projected images, an ego-motion of the vehicle is determined from the optical flow and the ego-motion is used to predict a kinematic state of the car.
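The flow-to-ego-motion step can be sketched as follows (a simplified illustration: only the translational component is estimated, from the assumption that static ground points appear to move opposite to the vehicle in the ground-plane projection; rotation is omitted):

```python
# Sketch: approximate translational ego-motion as the negated mean
# optical-flow vector of ground-plane points between consecutive frames.
def ego_translation(flow_vectors):
    """flow_vectors: (dx, dy) ground-plane flow per tracked point, in metres."""
    n = len(flow_vectors)
    mean_dx = sum(v[0] for v in flow_vectors) / n
    mean_dy = sum(v[1] for v in flow_vectors) / n
    return (-mean_dx, -mean_dy)   # vehicle moves opposite to apparent ground flow

# Ground points all flowing backwards by ~0.5 m between frames,
# so the vehicle moved ~0.5 m forwards:
print(ego_translation([(-0.5, 0.0), (-0.52, 0.01), (-0.48, -0.01)]))
```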
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND VEHICLE
An information processing device includes processing circuitry. The processing circuitry obtains target information that indicates at least one of a distance to a target object or a position of the target object. The processing circuitry generates, based on the target information, map information of a space including a plurality of areas, the map information indicating presence or absence of the target object in a first area included in the plurality of areas, and a detailed position of the target object in the first area.
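The two-resolution map information can be sketched as follows (an illustrative layout only: the grid cell size and the index/offset representation are assumptions):

```python
# Sketch: a coarse grid records which areas contain a target (presence),
# plus a detailed position of the target within each occupied area.
CELL = 5.0  # metres per grid area (assumed)

def make_map_info(targets):
    """targets: (x, y) target positions. Returns {area index: detailed
    position}, where the value is the offset of the target inside the area."""
    grid = {}
    for x, y in targets:
        area = (int(x // CELL), int(y // CELL))
        grid[area] = (x % CELL, y % CELL)   # detailed position inside the area
    return grid

print(make_map_info([(12.5, 3.0)]))
# → {(2, 0): (2.5, 3.0)}
```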
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
According to an embodiment, an information processing apparatus includes an attribute determiner and a setter. Each of a plurality of acquired first sets indicates a combination of observation information, indicating a result of observation of an area surrounding a moving body, and position information. The attribute determiner is configured to determine, based on the observation information, an attribute of each of the areas into which the area surrounding the moving body is divided, and to generate second sets, each indicating a combination of attribute information indicating the attribute of each area and the position information. The setter is configured to set, based on the second sets, a reliability of the attribute of an area in a target second set, from the correlation between that attribute and the attributes of the corresponding areas in the other second sets.
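The reliability-setting step can be sketched as follows (a hedged illustration: a simple agreement rate across the other sets stands in for the correlation measure, which the patent does not fully specify in this abstract):

```python
# Sketch: reliability of an area's attribute in the target set, taken as the
# fraction of other sets whose corresponding area carries the same attribute.
def attribute_reliability(target_attr, other_attrs):
    """other_attrs: attribute of the corresponding area in each other set."""
    if not other_attrs:
        return 0.0
    agree = sum(1 for a in other_attrs if a == target_attr)
    return agree / len(other_attrs)

# Target set labels the area "road"; three of four other sets agree:
print(attribute_reliability("road", ["road", "road", "obstacle", "road"]))
# → 0.75
```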
VEHICLE CONTROL SYSTEM BASED ON USER INPUT AND METHOD THEREOF
A vehicle control system based on a user input includes: a user information input device receiving a region of interest from a user; a vehicle movement information calculator calculating movement information of a vehicle; and a vehicle position relationship tracker performing vehicle control based on the region of interest and the movement information of the vehicle.