
DRIVING CONTROL SYSTEM AND METHOD OF CONTROLLING THE SAME USING SENSOR FUSION BETWEEN VEHICLES
20230154199 · 2023-05-18 ·

The present disclosure relates to a driving control system, and a method of controlling the same, using sensor fusion between vehicles. The driving control system allows vehicles to share sensor data, fuses the sensor data, matches an adjacent vehicle to its sensor data, improves recognition performance with respect to the periphery, and controls driving in accordance with changes in the peripheral environment and the traveling state of another vehicle. By receiving sensor data from another vehicle, converting the coordinates of the matched vehicle's sensor data, and fusing the result with the host vehicle's sensor information, the system improves recognition performance and accuracy with respect to the surrounding environment and objects, enables stable autonomous driving, and improves the stability of the vehicle.
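The coordinate-conversion step described above can be sketched as a planar rigid-body transform: detections reported in a matched vehicle's frame are rotated and translated into the host frame before fusion. The function names, the 2D simplification, and the naive union-style fusion are illustrative assumptions, not details from the disclosure.

```python
import math

def transform_to_host_frame(points, dx, dy, dtheta):
    """Convert 2-D sensor points from a matched vehicle's frame into the
    host vehicle's frame, given the relative pose (dx, dy, dtheta)."""
    cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
    return [(dx + cos_t * x - sin_t * y,
             dy + sin_t * x + cos_t * y) for x, y in points]

def fuse(host_points, remote_points):
    """Naive fusion: union of host detections and transformed remote ones."""
    return host_points + remote_points

# A detection at (1, 0) reported by a vehicle 10 m ahead, facing the same
# way, appears at (11, 0) in the host frame.
remote = transform_to_host_frame([(1.0, 0.0)], 10.0, 0.0, 0.0)
fused = fuse([(2.0, 1.0)], remote)
```

In practice the relative pose would come from the matching step (e.g. V2V position reports), and the fusion would involve association and filtering rather than a plain union.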

SYSTEMS AND METHODS FOR ESTIMATING CUBOID HEADINGS BASED ON HEADING ESTIMATIONS GENERATED USING DIFFERENT CUBOID DEFINING TECHNIQUES
20230150543 · 2023-05-18 ·

Disclosed herein are systems, methods, and computer program products for operating a robotic system. For example, the method includes: obtaining a first cuboid generated based on an image, a second cuboid generated based on a lidar dataset and/or a third cuboid generated by a heuristic algorithm using the lidar dataset; using a machine learning model to generate a heading for an object in proximity to the robotic system based on the first cuboid, second cuboid and/or third cuboid; generating a bounding box geometry and a bounding box location based on the second cuboid or third cuboid; and generating a fourth cuboid using the bounding box geometry, the bounding box location, and the heading generated using the machine learning model.
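The final combination step can be pictured as taking the geometry and location from a lidar-derived cuboid while overriding its heading with the machine-learned estimate. The dict layout and function name below are hypothetical stand-ins for whatever cuboid representation the robotic system uses.

```python
def build_fused_cuboid(ml_heading, lidar_cuboid):
    """Combine the ML-estimated heading with the geometry and location of a
    lidar-derived cuboid (hypothetical dict layout) into a fourth cuboid."""
    return {
        "center": lidar_cuboid["center"],   # bounding box location
        "size": lidar_cuboid["size"],       # bounding box geometry (l, w, h)
        "heading": ml_heading,              # heading from the learned model
    }

cuboid = build_fused_cuboid(1.57, {"center": (5.0, 2.0, 0.5),
                                   "size": (4.5, 1.8, 1.5)})
```

The point of the split is that lidar gives reliable extents and position, while the heading, which is ambiguous from geometry alone, benefits from a model trained on all three cuboid sources.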

System and method for object and obstacle detection and classification in collision avoidance of railway applications

A system for detection and identification of objects and obstacles near, between, or on railway tracks comprises several forward-looking imagers, each adapted to cover a different forward range and preferably sensitive to a different wavelength band of radiation, including visible light, LWIR, and SWIR. The substantially homogeneous temperature along the rail, whose image is included in an imager frame, assists in identifying and distinguishing the rail from the background. Image processing is applied to detect a living creature in the image frame and to distinguish it from a man-made object based on body temperature. Electro-optic sensors (e.g., a thermal infrared imaging sensor and a visible-band imaging sensor) are used to survey and monitor railway scenes in real time.
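The temperature-based distinction between living creatures and man-made objects can be sketched as a band test on the mean radiometric temperature of a detected region. The specific thresholds below are illustrative assumptions; the patent does not state numeric values.

```python
def classify_warm_object(mean_temp_c, ambient_temp_c):
    """Toy classifier: an object in the mammalian body-temperature band and
    well above ambient is taken as a living creature; anything else
    (e.g. a hot engine or a cold crate) as a man-made object.
    The 30-42 degC band and +5 degC margin are illustrative assumptions."""
    if 30.0 <= mean_temp_c <= 42.0 and mean_temp_c > ambient_temp_c + 5.0:
        return "living creature"
    return "man-made object"

label = classify_warm_object(36.5, 15.0)
```

A real system would combine this thermal cue with shape and motion evidence from the visible-band imager rather than rely on temperature alone.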

Detecting angles of objects

A LIDAR system for use in a vehicle is provided. The LIDAR system may include at least one processor configured to control at least one light source for illuminating a field of view, and to scan the field of view by controlling movement of at least one deflector at which the at least one light source is directed. The at least one processor may also be configured to receive, from at least one sensor, reflection signals indicative of light reflected from an object in the field of view. The at least one processor may further be configured to detect at least one temporal distortion in the reflection signals, and to determine from the at least one temporal distortion an angular orientation of at least a portion of the object.
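One way to read the temporal-distortion idea: as the beam sweeps across a tilted surface, the round-trip time of adjacent returns changes, and the tilt can be recovered from the range difference over the lateral separation. This is a hypothetical interpretation for illustration, not the claimed algorithm.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def surface_tilt_from_returns(t1, t2, lateral_separation):
    """Estimate the tilt of a surface from the round-trip times of two
    adjacent scan returns separated laterally by `lateral_separation`
    metres. A later second return means the surface recedes in the
    sweep direction."""
    r1 = C * t1 / 2.0  # round-trip time to range
    r2 = C * t2 / 2.0
    return math.atan2(r2 - r1, lateral_separation)

# Ranges of 10.0 m and 10.1 m across 0.1 m of lateral sweep imply a
# surface tilted about 45 degrees away from the sensor's broadside.
tilt = surface_tilt_from_returns(20.0 / C, 20.2 / C, 0.1)
```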

LANE LINE DETECTION METHOD AND RELATED DEVICE
20230144209 · 2023-05-11 ·

This disclosure describes lane line detection methods and related devices. In an implementation, features extracted by different layers of a neural network are fused to obtain a fused second feature map, so that the second feature map obtained through fusion carries features from a plurality of layers: a related feature of a low-layer receptive field and a related feature of a high-layer receptive field. Afterwards, the output predicted lane line set is divided into groups, where each predicted lane line in each group has an optimal prediction interval.
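The fusion of low-layer and high-layer features can be sketched with a toy 1-D example: the coarse high-layer map is upsampled to the low-layer resolution and the two are concatenated channel-wise at each position. Real implementations operate on 2-D tensors with learned upsampling; this pure-Python version only illustrates the shape bookkeeping.

```python
def upsample_nearest(features, factor):
    """Nearest-neighbour upsampling of a 1-D feature sequence (toy stand-in
    for resizing a high-layer feature map to the low-layer resolution)."""
    return [f for f in features for _ in range(factor)]

def fuse_features(low_layer, high_layer):
    """Fuse a fine low-layer map with a coarse high-layer map by upsampling
    the coarse one and concatenating channels position-wise."""
    factor = len(low_layer) // len(high_layer)
    up = upsample_nearest(high_layer, factor)
    return [lo + hi for lo, hi in zip(low_layer, up)]

# Each fused position now carries both a low-layer feature (fine detail,
# small receptive field) and a high-layer feature (context, large field).
fused = fuse_features([(1,), (2,), (3,), (4,)], [(9,), (8,)])
```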

Guided batching

The present invention provides a method of generating a robust global map using a plurality of limited field-of-view cameras to capture an environment. Provided is a method for generating a three-dimensional map, comprising: receiving a plurality of sequential image data, wherein each of the plurality of sequential image data comprises a plurality of sequential images obtained by a plurality of limited field-of-view image sensors; determining a pose of each of the plurality of sequential images of each of the plurality of sequential image data; determining one or more overlapping poses using the determined poses of the sequential image data; selecting at least one set of images from the plurality of sequential images, wherein each set of images is determined to have overlapping poses; and constructing one or more map portions derived from each of the at least one set of images.
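The overlapping-pose selection can be sketched as grouping images whose camera poses are close in position and viewing direction, so each group sees the same part of the scene. The (x, y, yaw) pose model, the thresholds, and the greedy grouping are simplifying assumptions for illustration.

```python
import math

def poses_overlap(p1, p2, max_dist=2.0, max_angle=math.radians(30)):
    """Heuristic overlap test for two (x, y, yaw) camera poses: nearby and
    looking in roughly the same direction (thresholds are assumptions)."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    dyaw = abs((p1[2] - p2[2] + math.pi) % (2 * math.pi) - math.pi)
    return math.hypot(dx, dy) <= max_dist and dyaw <= max_angle

def overlapping_sets(poses):
    """Greedily group pose indices into sets whose members overlap with the
    first element of the set (a simple stand-in for the selection step)."""
    sets, used = [], set()
    for i, p in enumerate(poses):
        if i in used:
            continue
        group = [i] + [j for j in range(i + 1, len(poses))
                       if j not in used and poses_overlap(p, poses[j])]
        used.update(group)
        sets.append(group)
    return sets

# Two nearby, similarly oriented views form one set; the distant view
# becomes its own map portion.
groups = overlapping_sets([(0, 0, 0.0), (1, 0, 0.1), (50, 0, 0.0)])
```

Each resulting set would then feed a structure-from-motion or stereo step that builds one map portion.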

ADVERSE ENVIRONMENT DETERMINATION DEVICE AND ADVERSE ENVIRONMENT DETERMINATION METHOD
20230148097 · 2023-05-11 ·

An adverse environment determination device includes: an environment determination unit that is configured to determine, based on the recognition result of the target planimetric feature acquired by the image recognition information acquisition unit, whether a surrounding environment of the vehicle is an adverse environment for a device that is configured to perform object recognition using the image captured by the imaging device; and a recognition distance evaluation unit that is configured to determine, based on the image recognition information, an actual recognition distance that is an actual distance within which the target planimetric feature is actually recognized.
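The recognition-distance evaluation lends itself to a simple sketch: if a known landmark (the target planimetric feature) is only recognized at a fraction of the distance at which it is normally recognizable, the environment is flagged as adverse. The ratio threshold below is an illustrative assumption, not a value from the disclosure.

```python
def is_adverse_environment(actual_distance, expected_distance, ratio=0.5):
    """Flag an adverse environment when the actual recognition distance of a
    target planimetric feature falls below a fraction of its expected
    recognition distance (the 0.5 ratio is an assumption)."""
    return actual_distance < ratio * expected_distance

# A lane marking normally recognizable at 80 m but only recognized from
# 30 m suggests fog, rain, or glare is degrading the camera.
flag = is_adverse_environment(30.0, 80.0)
```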

Lane curvature determination

A computer includes a processor and a memory storing instructions executable by the processor to receive a series of sample coordinate points of a projected path of travel of a vehicle, generate interpolated coordinate points along the projected path between the sample coordinate points, fit a curve to the sample coordinate points and interpolated coordinate points, and output a curvature of a lane at a reported coordinate point along the projected path based on the curve.
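The interpolate-then-fit pipeline can be sketched in pure Python. Instead of a full curve fit, this simplified version reports the three-point (Menger) curvature, which equals the reciprocal radius of the circle through three points on the path; the helper names are illustrative.

```python
import math

def interpolate(points, n_between=1):
    """Insert n_between evenly spaced points between consecutive samples."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        out.append((x0, y0))
        for k in range(1, n_between + 1):
            t = k / (n_between + 1)
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    out.append(points[-1])
    return out

def menger_curvature(p, q, r):
    """Curvature of the circle through three points: 4*area / (a*b*c)."""
    cross = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    a, b, c = math.dist(p, q), math.dist(q, r), math.dist(p, r)
    return 2.0 * cross / (a * b * c) if a * b * c else 0.0  # cross = 2*area

# Sample points on a unit circle: the reported curvature should be ~1.
samples = [(math.cos(t), math.sin(t)) for t in (0.0, 0.4, 0.8)]
dense = interpolate(samples, n_between=1)   # 5 points along the path
kappa = menger_curvature(dense[0], dense[2], dense[4])
```

A production system would fit a smooth curve (e.g. a polynomial or spline) through all points and evaluate its analytic curvature at the reported coordinate, which is more robust to sample noise than any single three-point estimate.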

Self-localization estimation device

A self-localization estimation unit of a self-localization estimation device determines, based on mutual relationships between the in-lane position and the absolute position including its error, whether there is lane-relevant candidate information, the lane-relevant candidate information representing, for each of one or more in-vehicle positions, which of the lanes identified by the lane information that position is estimated to be in; and estimates, based on a result of that determination, a localization of the own vehicle corresponding to the map information.
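The lane-candidate idea can be sketched as an interval test: an absolute lateral position with an error bound may be consistent with one lane or straddle several, and only in the unambiguous case can the lane assignment anchor the map-relative localization. The lane layout and parameters below are illustrative assumptions.

```python
def lane_candidates(lateral_position, error, lane_width=3.5, n_lanes=3):
    """Return indices of lanes the vehicle could occupy, given a lateral
    position (metres from the left road edge) and its error bound.
    Uniform lane width and left-edge origin are assumptions."""
    lo, hi = lateral_position - error, lateral_position + error
    return [i for i in range(n_lanes)
            if lo < (i + 1) * lane_width and hi > i * lane_width]

# Near a lane boundary, a 0.5 m error leaves two candidate lanes; a
# tighter estimate mid-lane yields a single, usable candidate.
ambiguous = lane_candidates(3.4, 0.5)
certain = lane_candidates(1.0, 0.2)
```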

Surface profile estimation and bump detection for autonomous machine applications

In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered to the portion of the environment that includes drivable free-space, and to within a threshold height, to factor out objects and obstacles other than the driving surface and protuberances thereon. The 3D point cloud may be analyzed, e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction, to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle, e.g., an autonomous or semi-autonomous vehicle, to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, together with their location, orientation, and geometry.
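The sliding-window reduction to a 1-D signal can be sketched as binning the (already filtered) point cloud along the heading direction and keeping the maximum height per window; windows that rise above a threshold are reported as bumps. Window size, road length, and the height threshold are illustrative assumptions.

```python
def height_profile(points, window=1.0, length=10.0):
    """Collapse a filtered 3-D point cloud into a 1-D height profile by
    taking the max z inside sliding longitudinal windows (simplified:
    non-overlapping 1 m bins along x)."""
    n_bins = int(length / window)
    profile = [0.0] * n_bins
    for x, y, z in points:
        b = int(x / window)
        if 0 <= b < n_bins:
            profile[b] = max(profile[b], z)
    return profile

def detect_bumps(profile, threshold=0.05):
    """Report window indices where the surface rises above the threshold."""
    return [i for i, h in enumerate(profile) if h > threshold]

# A 12 cm protuberance around x = 3 m stands out from the flat road.
cloud = [(0.5, 0.0, 0.01), (3.2, 0.1, 0.12), (3.4, -0.2, 0.10), (7.8, 0.0, 0.02)]
profile = height_profile(cloud)
bumps = detect_bumps(profile)
```

A contiguous run of flagged windows would then give the bump's longitudinal extent, from which orientation and geometry can be estimated.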