G06T2207/30261

Point cloud feature-based obstacle filter system

A method, apparatus, and system for filtering obstacle candidates determined based on outputs of a LIDAR device in an autonomous vehicle is disclosed. A point cloud comprising a plurality of points is generated based on outputs of the LIDAR device. One or more obstacle candidates are determined based on the point cloud. The one or more obstacle candidates are filtered to remove a first set of obstacle candidates in the one or more obstacle candidates that correspond to noise based at least in part on characteristics associated with points that correspond to each of the one or more obstacle candidates. One or more recognized obstacles comprising the obstacle candidates that have not been removed are determined. Operations of an autonomous vehicle are controlled based on the recognized obstacles.
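The noise-filtering step described above can be sketched as follows. This is a minimal illustration, not the patented method: the data layout (a "points" list of (x, y, z, intensity) tuples per candidate) and the two thresholds are hypothetical, since the abstract only says filtering uses "characteristics associated with points".

```python
def filter_obstacle_candidates(candidates, min_points=5, min_mean_intensity=0.1):
    """Drop obstacle candidates whose supporting points look like noise.

    Each candidate is a dict with a "points" list of (x, y, z, intensity)
    tuples (a hypothetical layout; the patent does not fix a data format).
    Returns the recognized obstacles, i.e. the candidates that survive.
    """
    recognized = []
    for cand in candidates:
        pts = cand["points"]
        # Very small clusters are typically spurious returns (dust, rain).
        if len(pts) < min_points:
            continue
        # Uniformly weak reflectivity across the cluster is another noise cue.
        if sum(p[3] for p in pts) / len(pts) < min_mean_intensity:
            continue
        recognized.append(cand)
    return recognized
```

A real system would combine several such point-level cues; the structure (per-candidate tests over member points, survivors become recognized obstacles) is what the abstract describes.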

ON-FLOOR OBSTACLE DETECTION METHOD AND MOBILE MACHINE USING THE SAME
20220343530 · 2022-10-27

On-floor obstacle detection using an RGB-D camera is disclosed. An obstacle on a floor is detected by receiving an image including depth channel data and RGB channel data through the RGB-D camera, estimating a ground plane corresponding to the floor based on the depth channel data, obtaining a foreground of the image corresponding to the ground plane based on the depth channel data, performing a distribution modeling on the foreground of the image based on the RGB channel data to obtain a 2D location of the obstacle, and transforming the 2D location of the obstacle into a 3D location of the obstacle based on the depth channel data.
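The final step, transforming the obstacle's 2D image location into a 3D location using the depth channel, is typically done with a pinhole back-projection. A sketch under that assumption (the abstract does not name a camera model; fx, fy, cx, cy are calibration intrinsics):

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) into camera-frame 3D.

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    Standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

For example, a pixel at the principal point maps straight onto the optical axis, while a pixel fx pixels to the right of it maps one depth-unit to the right in 3D.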

THREE-DIMENSIONAL MAP ESTIMATION APPARATUS AND OBSTACLE DETECTION APPARATUS
20220343536 · 2022-10-27

According to one embodiment, a three-dimensional map estimation apparatus includes a processor that selects an imaging apparatus from a plurality of imaging apparatuses and then estimates a position and orientation for a moving object on which the selected imaging apparatus is mounted based on images captured by the selected imaging apparatus. The processor outputs a first position and orientation estimation result for the moving object based on images from the selected imaging apparatus. The processor calculates a second position and orientation estimation result indicating an estimated position and orientation for the moving object using the first position and orientation estimation result. The processor estimates a three-dimensional map for the surroundings of the moving object based on the second position and orientation estimation result.

VEHICLE USING SPATIAL INFORMATION ACQUIRED USING SENSOR, SENSING DEVICE USING SPATIAL INFORMATION ACQUIRED USING SENSOR, AND SERVER
20230077393 · 2023-03-16

A method of sensing a three-dimensional (3D) space using at least one sensor is proposed. The method can include acquiring spatial information over time for the sensed 3D space, applying a neural network based object classification model to the acquired spatial information over time to identify at least one object in the sensed 3D space. The method can also include tracking the sensed 3D space including the identified at least one object, and using information related to the tracked 3D space.

SYSTEM AND METHOD FOR FUTURE FORECASTING USING ACTION PRIORS
20230081247 · 2023-03-16

A system and method for future forecasting using action priors that include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data to classify dynamic objects as agents and to detect and annotate actions that are completed by the agents that are located within the surrounding environment of the ego vehicle and analyzing the dynamic data to process an ego motion history that is associated with the ego vehicle that includes vehicle dynamic parameters during a predetermined period of time. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within the surrounding environment of the ego vehicle based on the annotated actions.

OBSTACLE DETECTION DEVICE
20230079994 · 2023-03-16

An obstacle detecting device includes: an image converting portion for converting, into a circular cylindrical image, an image captured by a camera installed on a vehicle; a detection subject candidate image detecting portion for detecting a detection subject candidate image through pattern matching; an optical flow calculating portion for calculating an optical flow; an outlier removing portion for removing an optical flow that is not a detection subject; a TTC calculating portion for calculating a TTC (TTCX, TTCY); a tracking portion for generating a region of the detection subject on the circular cylindrical image by tracking the detection subject candidate; and a collision evaluating portion for evaluating whether or not there is the risk of a collision, wherein the optical flow calculating portion calculates the optical flow based on the detection subject candidate image and the region.
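The per-axis TTC pair (TTCX, TTCY) mentioned above can be illustrated with the standard optical-flow relation: for a point at image coordinate x relative to the focus of expansion, expanding at u pixels per frame, TTC ≈ x · Δt / u. This is a generic sketch of that relation, not the patent's specific formulas:

```python
def ttc_axis(coord, flow_per_frame, frame_dt):
    """TTC along one image axis, in seconds.

    coord: coordinate relative to the focus of expansion (pixels).
    flow_per_frame: optical-flow component along the same axis (pixels/frame).
    frame_dt: time between frames (seconds).
    """
    if flow_per_frame == 0:
        return float("inf")  # no expansion along this axis, no imminent collision
    return coord * frame_dt / flow_per_frame

def ttc(coords, flows, frame_dt=1 / 30):
    """Return (TTCX, TTCY): one time-to-collision estimate per image axis."""
    (x, y), (u, v) = coords, flows
    return ttc_axis(x, u, frame_dt), ttc_axis(y, v, frame_dt)
```

A collision evaluator like the one in the abstract would then flag risk when both axis estimates fall below a threshold.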

DETECTION DEVICE, TRACKING DEVICE, DETECTION PROGRAM, AND TRACKING PROGRAM

A tracking device includes full-spherical cameras arranged on the right and left. The tracking device pastes a left full-spherical camera image, captured with the left full-spherical camera, onto a spherical object, and a virtual camera is installed inside the spherical object. The virtual camera may freely rotate in a virtual image-capturing space formed inside the spherical object and acquire an external left camera image. Similarly, the tracking device installs a virtual camera that acquires a right camera image, and forms a convergence stereo camera by means of the two virtual cameras. The tracking device tracks the location of a subject with a particle filter by using the convergence stereo camera formed in this way. In a second embodiment, the full-spherical cameras are vertically arranged and the virtual cameras are vertically installed.
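The particle-filter tracking mentioned above follows the usual predict/weight/resample cycle. A deliberately minimal 1-D sketch (the stereo geometry is abstracted away into a single measured coordinate; all noise parameters are hypothetical):

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=1.0, meas_std=2.0):
    """One predict/update/resample cycle of a 1-D particle filter.

    particles: list of scalar position hypotheses for the subject.
    measurement: observed subject position (here it would come from the
    virtual convergence stereo pair; this sketch takes it as given).
    """
    # Predict: diffuse each particle with motion noise.
    moved = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: weight each particle by the Gaussian likelihood of the measurement.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * meas_std**2)) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Iterating this step concentrates the particle cloud around the subject; the tracked location is then a weighted mean over the particles.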

SYSTEMS AND METHODS FOR UTILIZING MODELS TO DETECT DANGEROUS TRACKS FOR VEHICLES

A device may receive accelerometer data and video data for a vehicle and may identify bounding boxes and object classes for objects near the vehicle. The device may identify tracks for the objects and may filter out tracks that are not associated with vehicles or vulnerable road users to generate one or more tracks or an indication of no tracks. The device may generate a collision cone identifying a drivable area of the vehicle to identify objects more likely to be involved in a collision and may filter out tracks from the one or more tracks, based on the bounding boxes, to generate a subset of tracks or another indication of no tracks. The device may determine scores for the subset of tracks and may identify the track of the subset with the highest score. The device may perform actions based on the identified track.
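The collision-cone filtering and highest-score selection can be sketched as below. The data layout (tracks as dicts with a bounding-box centre and a score), the ego frame (vehicle at the origin, heading along +y), and the cone half-angle are all assumptions for illustration; the abstract does not specify them.

```python
import math

def filter_tracks_by_cone(tracks, half_angle_deg=30.0):
    """Keep tracks whose bounding-box centre lies inside the collision cone.

    Each track is {"id": ..., "bbox_centre": (x, y), "score": float}, with
    the ego vehicle at the origin heading along +y (hypothetical layout).
    """
    kept = []
    for t in tracks:
        x, y = t["bbox_centre"]
        if y <= 0:
            continue  # behind the vehicle: outside the drivable area ahead
        bearing = math.degrees(math.atan2(abs(x), y))
        if bearing <= half_angle_deg:
            kept.append(t)
    return kept

def highest_scoring(tracks):
    """Pick the track with the highest score, or None if no tracks remain."""
    return max(tracks, key=lambda t: t["score"]) if tracks else None
```

The "indication of no tracks" in the abstract corresponds to the empty-list / None path here.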

OPTICAL SYSTEM, IMAGE PICKUP APPARATUS, IN-VEHICLE SYSTEM, AND MOVING APPARATUS
20230080794 · 2023-03-16

A system includes an aperture diaphragm, a first lens configured to be disposed next to the aperture diaphragm on an object side, and to include one or more positive lenses and one or more negative lenses, and a second lens configured to be disposed next to the aperture diaphragm on an image side, and to include one or more positive lenses and one or more negative lenses. Predetermined conditions are satisfied.

Obstacle Detection and Avoidance System for Autonomous Aircraft and Other Autonomous Vehicles
20230082486 · 2023-03-16

A method of providing a collision-avoiding travel path for an autonomous vehicle. A sensor system obtains stereo image data of a scene in the environment ahead of the normal travel path. This image data is used to generate a disparity image. The disparity image is processed to generate an occupancy map that assigns values to areas of the scene based on levels of visual clutter. The occupancy map is then converted to a potential field, which assigns each pixel in the scene a force value that corresponds to its proximity to one or more obstacles. These force values are summed and used to modify the vehicle's path if a collision is likely.
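The potential-field summation described above can be illustrated with a classic repulsive-force sketch: each occupied cell pushes the vehicle away with a magnitude that falls off with distance, and the contributions are summed into one steering force. The grid layout, inverse-square falloff, and influence radius are generic potential-field conventions, not the patent's specific formulation.

```python
import math

def repulsive_force(occupancy, pos, influence=10.0):
    """Sum repulsive forces from occupied cells onto the vehicle cell.

    occupancy: 2-D list of 0/1 clutter values (the occupancy map derived
    from the disparity image). pos: vehicle cell (row, col). Cells farther
    than `influence` are ignored. Returns the net (row, col) force vector.
    """
    fr = fc = 0.0
    pr, pc = pos
    for r, row in enumerate(occupancy):
        for c, occ in enumerate(row):
            if not occ:
                continue
            dr, dc = pr - r, pc - c
            d = math.hypot(dr, dc)
            if d == 0 or d > influence:
                continue
            mag = 1.0 / (d * d)  # force falls off with distance squared
            fr += mag * dr / d   # unit vector away from the obstacle
            fc += mag * dc / d
    return fr, fc
```

A path planner would then nudge the vehicle along the net force vector whenever its magnitude indicates a likely collision.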