
VEHICLE PERIPHERY MONITORING DEVICE, VEHICLE PERIPHERY MONITORING METHOD AND NON-TRANSITORY STORAGE MEDIUM

A vehicle periphery monitoring device includes a plurality of sensors including a rear camera, a rear right-side camera, and a rear left-side camera; a display unit provided inside a vehicle cabin; memory; and a processor coupled to the memory. The processor is configured to acquire a rear image that includes a first image of a vehicle rear side acquired by the rear camera, a second image of a vehicle rear right-side acquired by the rear right-side camera, and a third image of a vehicle rear left-side acquired by the rear left-side camera. The processor further acquires, from the plurality of sensors, relative positions, relative to a host vehicle, of a target present in areas including the vehicle rear side, the vehicle rear right-side, and the vehicle rear left-side, and determines whether or not the relative positions of the target acquired by the plurality of sensors are mutually consistent with each other. In a case in which the relative positions of the target are determined to be mutually consistent, a single composite image created by combining the first image, the second image, and the third image is displayed at the display unit; in a case in which they are determined not to be mutually consistent, the first image, the second image, and the third image are displayed individually and adjacently to each other on the display unit.
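
The consistency check and display-mode switch described in this abstract can be sketched as follows; the tolerance value, coordinate representation, and function names are illustrative assumptions, not taken from the patent.

```python
import math

def positions_consistent(positions, tol=1.0):
    """Check whether the relative target positions (x, y) reported by
    several sensors agree within a tolerance in metres."""
    # Pairwise distance between every two sensor estimates.
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) > tol:
                return False
    return True

def choose_display_mode(positions, tol=1.0):
    # Composite image when the sensors agree, individual views otherwise.
    return "composite" if positions_consistent(positions, tol) else "individual"
```

A disagreement between sensors here stands in for the patent's inconsistency determination; showing the three views separately lets the driver see each sensor's raw evidence.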

AUTONOMOUS DRIVING CRASH PREVENTION
20210403050 · 2021-12-30 ·

Autonomous vehicles must accommodate various road configurations such as straight roads, curved roads, controlled intersections, uncontrolled intersections, and many others. Autonomous driving systems must make decisions about the speed and distance of traffic and about obstacles, including obstacles that obstruct the view of the autonomous vehicle's sensors. For example, at intersections, the autonomous driving system must identify vehicles that are in the path of the autonomous vehicle, or that are potentially in that path based on a planned path, estimate the distance to those vehicles, and estimate their speeds. Then, based on those estimates together with the road configuration and environmental conditions, the autonomous driving system must decide whether, and when, it is safe to proceed along the planned path.
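
A minimal gap-acceptance sketch of the decision just described, assuming straight-line kinematics at the conflict point; the safety margin and all names are illustrative assumptions, not the patent's method.

```python
def safe_to_proceed(gap_m, closing_speed_mps, crossing_time_s, margin_s=2.0):
    """Decide whether the time until a conflicting vehicle reaches the
    conflict point exceeds the time the ego vehicle needs to clear it,
    plus a safety margin."""
    if closing_speed_mps <= 0:
        return True  # the conflicting vehicle is not approaching
    time_to_conflict = gap_m / closing_speed_mps
    return time_to_conflict > crossing_time_s + margin_s
```

A real planner would also account for sensor occlusion and acceleration uncertainty; this shows only the core distance/speed trade-off.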

DETERMINATION DEVICE, DETERMINATION METHOD, AND STORAGE MEDIUM STORING PROGRAM

A determination device includes a processor. The processor is configured to detect an object in an image captured by an image capture section provided at a vehicle, generate a determination area in accordance with a direction of movement of the vehicle based on travel information of the vehicle and based on a position and a speed of the object, and determine danger to be present in a case in which the object is present in the determination area.
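
A sketch of the determination-area logic above: a rectangular zone is generated ahead of the vehicle along its direction of movement, lengthened with the object's speed, and danger is flagged when the object lies inside it. The zone shape, dimensions, and time horizon are illustrative assumptions.

```python
import math

def determination_area_contains(obj_xy, obj_speed, heading_deg,
                                base_length=10.0, half_width=2.0,
                                time_horizon=2.0):
    """Rectangular determination area extending ahead of the vehicle
    along its heading; its length grows with the object's speed
    (base_length + speed * time_horizon)."""
    length = base_length + obj_speed * time_horizon
    # Rotate the object's position into the vehicle frame (vehicle at origin).
    h = math.radians(heading_deg)
    fwd = obj_xy[0] * math.cos(h) + obj_xy[1] * math.sin(h)
    lat = -obj_xy[0] * math.sin(h) + obj_xy[1] * math.cos(h)
    return 0.0 <= fwd <= length and abs(lat) <= half_width

def danger_present(obj_xy, obj_speed, heading_deg):
    return determination_area_contains(obj_xy, obj_speed, heading_deg)
```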

SENSOR FUSION FOR AUTONOMOUS MACHINE APPLICATIONS USING MACHINE LEARNING

In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
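
The patent fuses with a learned DNN; purely to illustrate the duplicate-suppression goal across overlapping fields of view, the following sketch uses a classical greedy nearest-neighbour association instead. The match radius and all names are assumptions.

```python
def fuse_detections(dets_a, dets_b, match_radius=1.5):
    """Greedy nearest-neighbour fusion: detections (x, y) from two
    overlapping fields of view closer than match_radius are treated as
    the same object and averaged; the rest pass through unchanged."""
    fused, used = [], set()
    for xa, ya in dets_a:
        best, best_d = None, match_radius
        for j, (xb, yb) in enumerate(dets_b):
            if j in used:
                continue
            d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            xb, yb = dets_b[best]
            fused.append(((xa + xb) / 2, (ya + yb) / 2))
            used.add(best)
        else:
            fused.append((xa, ya))
    fused.extend(d for j, d in enumerate(dets_b) if j not in used)
    return fused
```

The learned fusion network generalises this idea: rather than a fixed radius, it learns the associations in the boundary regions from data.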

Map Generation Using Two Sources of Sensor Data
20210404814 · 2021-12-30 ·

Examples disclosed herein may involve a computing system that is operable to (i) receive first data of one or more geographical environments from a first type of localization sensor, (ii) receive second data of the one or more geographical environments from a second type of localization sensor, (iii) determine constraints from the first data and the second data, (iv) determine shared pose data associated with both of the first data and the second data using the constraints determined from both the first data and the second data by determining one or more sequences of common poses between respective poses generated from each of the first and second data, wherein the shared pose data provides a common coordinate frame for the first data and the second data, and (v) generate a map of the one or more geographical environments using the determined shared pose data.
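
A deliberately simplified sketch of step (iv), expressing one sensor's poses in the other's coordinate frame from matched pose pairs. A real mapper would solve a full pose-graph optimisation over all constraints, including rotation; here rotation is assumed negligible and only a translation is estimated, and all names are illustrative.

```python
def align_to_common_frame(poses_a, poses_b, matches):
    """Estimate the translation mapping sensor-B poses (x, y) onto
    sensor-A's coordinate frame from matched pose index pairs, then
    return B's poses expressed in the shared (A) frame."""
    dx = sum(poses_a[i][0] - poses_b[j][0] for i, j in matches) / len(matches)
    dy = sum(poses_a[i][1] - poses_b[j][1] for i, j in matches) / len(matches)
    return [(x + dx, y + dy) for x, y in poses_b]
```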

DISPLAY DEVICE FOR VEHICLE, DISPLAY METHOD FOR VEHICLE, AND STORAGE MEDIUM
20210407299 · 2021-12-30 ·

A display device for a vehicle, including: a rear imaging section that captures a rear image; a rear lateral imaging section that captures rear lateral images; and a control section that generates a combined image in which a rear processed image, obtained by processing the rear image with a first parameter, and rear lateral processed images, obtained by processing the rear lateral images with a second parameter, are combined. The control section further identifies an object existing in the rear image or the rear lateral images; acquires relative information, including the relative position and relative speed of the object with respect to the vehicle, and the blind spot regions of the combined image; and, based thereon, if the object would disappear from the combined image by entering the blind spot regions, changes the blind spot regions by adjusting the first and second parameters.
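
The parameter-adjustment behaviour can be sketched as follows, treating the blind spots as bearing-angle bands at the image seams and the second parameter as a lateral crop width to widen. The band representation, step size, and names are illustrative assumptions.

```python
def adjust_parameters(obj_bearing_deg, first_param, second_param,
                      blind_spots, step=5.0):
    """If the object's bearing falls inside a blind-spot band of the
    combined image, widen the rear-lateral crop (second parameter) so
    the seam moves and the object stays visible."""
    for lo, hi in blind_spots:
        if lo <= obj_bearing_deg <= hi:
            return first_param, second_param + step
    return first_param, second_param
```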

Method for position detection, device, and storage medium

Embodiments of the present disclosure disclose a method for position detection, a device, and a storage medium. The method includes: detecting a first lane line in a current image captured by a camera; optimizing an initial transformation matrix, which reflects a mapping relationship between a world coordinate system and a camera coordinate system, based on the detection result of the first lane line; obtaining a first 3D coordinate of a target object in the current image according to the optimized transformation matrix; and determining an ultimate 3D coordinate of the target object according to the first 3D coordinate.
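
One concrete way a detected lane line can refine the world-to-camera mapping is to rescale the assumed camera height so that the projected lane width matches the known physical lane width. This is only a sketch of that idea under a pinhole model and a flat, level road; the patent's actual optimization is not specified here, and all parameter names are assumptions.

```python
def refine_camera_height(u_left, u_right, v, fx, fy, cx, cy,
                         height_guess, lane_width=3.5):
    """Refine the camera-height term of the ground-plane mapping using a
    detected lane-line pair: back-project both lane edges at image row v
    with a guessed height, then rescale the height so the projected lane
    width matches the known physical lane width (metres)."""
    z = height_guess * fy / (v - cy)           # depth of that row on the road
    measured = abs(u_right - u_left) * z / fx  # lane width implied by the guess
    return height_guess * lane_width / measured
```

Because the projected width scales linearly with the assumed height, one detected lane pair is enough to recover the true height exactly in this idealised setting.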

METHODS FOR DETERMINING A PLANE, METHODS FOR DISPLAYING AUGMENTED REALITY DISPLAY INFORMATION AND CORRESPONDING DEVICES
20210398353 · 2021-12-23 ·

The present invention provides a method for determining a plane, a method for displaying Augmented Reality (AR) display information, and corresponding devices. The method comprises the steps of: performing region segmentation and depth estimation on multimedia information; determining, according to the results of the region segmentation and the depth estimation, 3D plane information of the multimedia information; and displaying AR display information according to the 3D plane information corresponding to the multimedia information. With the method for determining a plane, the method for displaying AR display information, and the corresponding devices provided by the present invention, virtual display information can be added onto a 3D plane, the realism of the AR display effect can be improved, and the user experience can be enhanced.
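
The plane-determination step can be illustrated by fitting a least-squares plane z = a·x + b·y + c to the 3D points that a segmented region's depth estimates produce. This is a generic textbook fit (normal equations solved by Cramer's rule), shown only as a stand-in for the patent's unspecified method.

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through the 3D points of
    one segmented region (normal equations + Cramer's rule)."""
    n = len(points)
    sx = sum(x for x, _, _ in points); sy = sum(y for _, y, _ in points)
    sz = sum(z for _, _, z in points)
    sxx = sum(x * x for x, _, _ in points); syy = sum(y * y for _, y, _ in points)
    sxy = sum(x * y for x, y, _ in points)
    sxz = sum(x * z for x, _, z in points); syz = sum(y * z for _, y, z in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]
    d = _det3(A)
    coeffs = []
    for k in range(3):          # Cramer's rule: swap column k for b
        m = [row[:] for row in A]
        for r in range(3):
            m[r][k] = b[r]
        coeffs.append(_det3(m) / d)
    return tuple(coeffs)        # (a, b, c)
```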

TESTING METHOD AND APPARATUS FOR VEHICLE PERCEPTION SYSTEM, DEVICE, AND STORAGE MEDIUM
20210398366 · 2021-12-23 ·

The present application discloses a testing method and apparatus for a vehicle perception system, a device, and a storage medium, and relates to data processing and, in particular, to the field of artificial intelligence such as automatic driving and intelligent transportation. A specific implementation scheme lies in: acquiring an actual speed and a perceptual speed of a test object, where the perceptual speed of the test object is the speed of the test object as perceived by the vehicle perception system; and determining, according to the actual speed and the perceptual speed of the test object, a speed reporting delay time of the vehicle perception system, where the speed reporting delay time reflects the sensitivity of the vehicle perception system in perceiving the speed of an obstacle.
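
One simple way to derive the speed reporting delay from the two speed traces is to find the sample lag that best aligns them; the error metric, sampling period, and names below are illustrative assumptions rather than the patent's scheme.

```python
def speed_reporting_delay(actual, perceived, dt=0.1, max_lag=10):
    """Estimate the perception stack's speed-reporting delay (seconds)
    by finding the lag that minimises the mean squared error between
    the actual and perceived speed traces, sampled every dt seconds."""
    best_lag, best_err = 0, float("inf")
    for lag in range(max_lag + 1):
        # Compare actual[t] against perceived[t + lag].
        pairs = list(zip(actual, perceived[lag:]))
        if not pairs:
            break
        err = sum((a - p) ** 2 for a, p in pairs) / len(pairs)
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag * dt
```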

DEPTH ESTIMATION IN IMAGES OBTAINED FROM AN AUTONOMOUS VEHICLE CAMERA
20210398310 · 2021-12-23 ·

Image processing techniques are described to receive bounding box information that describes a bounding box located around a detected object in an image, determine one or more positions of one or more reference points on the bounding box, determine, for each reference point, 3D world coordinates of the point of intersection of the reference point and the road surface, and assign the 3D world coordinates of the one or more reference points to a location of the detected object.
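
A sketch of the reference-point-to-road intersection for a pinhole camera over a flat, level road: the bottom edge of the bounding box is taken to touch the ground, and each bottom reference point is back-projected to world coordinates. The choice of reference points, the level-camera assumption, and all names are illustrative.

```python
def bbox_ground_points(bbox, fx, fy, cx, cy, cam_height):
    """Take the bottom-left, bottom-centre, and bottom-right of a
    detection's bounding box (left, top, right, bottom in pixels) as
    reference points and back-project each onto a flat road plane.
    Camera axes: x right, y down, z forward; cam_height in metres."""
    left, top, right, bottom = bbox
    refs = [(left, bottom), ((left + right) / 2, bottom), (right, bottom)]
    world = []
    for u, v in refs:
        if v <= cy:
            raise ValueError("reference point is at or above the horizon")
        z = cam_height * fy / (v - cy)   # depth where the ray meets the road
        x = (u - cx) * z / fx            # lateral offset at that depth
        world.append((x, cam_height, z))
    return world
```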