G06T2207/30256

Device and method for generating vehicle data, and system

A vehicle may include a camera for capturing a region around the vehicle, a positioning sensor for measuring a position of the vehicle, a database for storing a precise map, and a learning data generating apparatus for generating data for learning based on the captured region, the position of the vehicle, and the precise map.

POSITION DETERMINATION DEVICE
20220215674 · 2022-07-07

A position determination device comprises a camera to capture an image ahead of a vehicle. The device acquires a position of the vehicle; estimates a first position of the vehicle based on a position of a lane included in the image, the acquired position, and map information; estimates a second position of the vehicle based on the most recently determined position and a movement amount of the vehicle; and synthesizes the first position and the second position to determine the position of the vehicle. While the vehicle is traveling at or below a predetermined speed, the device weights the second position more heavily in the synthesis.
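The synthesis described above can be sketched as a simple weighted blend of the two estimates, with the odometry-based second position upweighted at low speeds. The function name, weights, and speed threshold below are illustrative assumptions, not taken from the patent.

```python
# Sketch: fuse a lane/map-based position estimate with an odometry-based
# estimate, weighting the odometry estimate more heavily at low speed.
def synthesize_position(first_pos, second_pos, speed_mps,
                        low_speed_threshold=2.0,   # assumed threshold (m/s)
                        low_speed_weight=0.8,      # assumed weight at low speed
                        normal_weight=0.3):        # assumed weight otherwise
    """Blend two (x, y) estimates.

    first_pos:  estimate from lane position + map matching
    second_pos: estimate from last determined position + movement amount
    """
    w2 = low_speed_weight if speed_mps <= low_speed_threshold else normal_weight
    w1 = 1.0 - w2
    return (w1 * first_pos[0] + w2 * second_pos[0],
            w1 * first_pos[1] + w2 * second_pos[1])
```

At walking pace the blend leans on dead reckoning, where lane observations are often degraded; at cruising speed it leans on the lane/map estimate.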

Systems and methods for representing objects using a six-point bounding box
11410356 · 2022-08-09

Systems, methods, and other embodiments described herein relate to improving the representation of objects in a surrounding environment. In one embodiment, a method includes, in response to receiving sensor data depicting the surrounding environment, including a corridor that defines a left boundary and a right boundary, identifying at least one object from the sensor data. The method includes transforming segmented data from the sensor data that represents the object into a bounding box defined by six points relative to the corridor. The method includes providing the six points of the bounding box as a reduced representation of the object.
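One plausible reading of a "six-point" box, sketched below, is six scalar extents in a corridor-aligned frame: lateral offsets from the left and right boundaries, longitudinal near/far positions, and vertical bottom/top. The field names and this interpretation are assumptions for illustration, not the patent's definition.

```python
from dataclasses import dataclass

@dataclass
class SixPointBox:
    left: float    # lateral distance from the left corridor boundary
    right: float   # lateral distance from the right corridor boundary
    near: float    # longitudinal start along the corridor
    far: float     # longitudinal end along the corridor
    bottom: float  # lowest point above the road surface
    top: float     # highest point above the road surface

def from_segmented_points(points, corridor_width):
    """Reduce segmented (lateral, longitudinal, height) points to six values.

    Lateral coordinates are assumed to be measured from the left boundary.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    return SixPointBox(left=min(xs),
                       right=corridor_width - max(xs),
                       near=min(ys), far=max(ys),
                       bottom=min(zs), top=max(zs))
```

Compared with an eight-corner 3D box, six corridor-relative scalars halve the payload while keeping the extents that matter for in-corridor planning.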

Detecting road edges by fusing aerial image and telemetry evidences

A method to detect a roadway edge includes calculating a first likelihood of the roadway edge from an aerial image of the roadway by shifting a centerline of the roadway perpendicular to itself and overlapping the shifted centerline with image gradients. A second likelihood of the roadway edge is determined from vehicle telemetry by fitting a probability distribution to telemetry points along the roadway. The first and second likelihoods are fused to identify a final likelihood of the roadway edge.
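The fusion step can be sketched as combining two per-offset likelihood curves (one from aerial-image gradients, one from the telemetry fit) and taking the most likely lateral offset. The product rule and the discrete offset grid below are illustrative assumptions; the patent does not specify the fusion operator.

```python
# Sketch: fuse two likelihoods of the road edge, evaluated on the same
# grid of lateral offsets from the centerline, via a normalized product.
def fuse_edge_likelihoods(offsets, image_likelihood, telemetry_likelihood):
    """Return (best_offset, fused) where fused[i] is proportional to
    image_likelihood[i] * telemetry_likelihood[i]."""
    fused = [a * b for a, b in zip(image_likelihood, telemetry_likelihood)]
    total = sum(fused)
    if total > 0:
        fused = [f / total for f in fused]
    best = max(range(len(offsets)), key=lambda i: fused[i])
    return offsets[best], fused
```

A product fusion rewards offsets that both evidence sources agree on, which is the usual motivation for combining independent likelihoods.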

COMBINING VISIBLE LIGHT CAMERA AND THERMAL CAMERA INFORMATION
20220230018 · 2022-07-21

In some examples, one or more processors may receive at least one first visible light image and a first thermal image. Further, the processor(s) may generate, from the at least one first visible light image, an edge image that identifies edge regions in the at least one first visible light image. At least one of a lane marker or road edge region may be determined based at least in part on information from the edge image. In addition, one or more first regions of interest in the first thermal image may be determined based on at least one of the lane marker or the road edge region. Furthermore, a gain of a thermal sensor may be adjusted based on the one or more first regions of interest in the first thermal image.
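The final gain-adjustment step might look like the sketch below: sample the thermal image inside the regions of interest derived from the lane-marker and road-edge detections, then nudge the sensor gain so the ROI mean approaches a target level. The proportional update rule, target level, and rate are assumptions for illustration.

```python
# Sketch: adjust thermal-sensor gain from ROI statistics.
def adjust_thermal_gain(thermal_image, rois, current_gain,
                        target_level=128.0,  # assumed target intensity
                        rate=0.5):           # assumed correction rate
    """thermal_image: 2D list of intensities; rois: list of (r0, r1, c0, c1)
    row/column slices. Returns the updated gain."""
    values = []
    for r0, r1, c0, c1 in rois:
        for row in thermal_image[r0:r1]:
            values.extend(row[c0:c1])
    if not values:
        return current_gain  # no ROI evidence; leave the gain alone
    mean = sum(values) / len(values)
    # Proportional correction toward the target level.
    return current_gain * (1.0 + rate * (target_level - mean) / target_level)
```

Restricting the statistics to lane/road-edge ROIs keeps hot clutter elsewhere in the scene (engines, sky) from driving the exposure.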

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM FOR ESTIMATING MOVEMENT AMOUNT OF MOVING OBJECT
20220301185 · 2022-09-22

An information processing apparatus, for estimating a movement amount of a moving object to which it is attached, captures images of the exterior view seen from the moving object. The apparatus estimates the movement amount based on a first image taken at a first time and a second image taken at a second time earlier than the first time. Specifically, it manipulates the second image according to mutually different predicted movement amounts, and estimates the movement amount from the difference between image information in a predetermined region of each manipulated image and image information in a predetermined region cut out from the first image.
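The search described above can be sketched as: shift the earlier image by each candidate movement amount, compare a fixed region against the same region cut from the later image, and keep the candidate with the smallest difference. Reducing the images to 1-D intensity rows and using sum-of-absolute-differences are simplifying assumptions.

```python
# Sketch: pick the predicted movement amount whose manipulated (shifted)
# earlier image best matches the later image over a fixed region.
def estimate_movement(first_img, second_img, candidates, region):
    """first_img/second_img: 1-D intensity lists; region: (start, stop) with
    start >= max(candidates) so every shifted slice stays in bounds."""
    start, stop = region
    target = first_img[start:stop]
    best, best_err = None, float("inf")
    for shift in candidates:
        # Manipulate the earlier image under this predicted movement amount.
        moved = second_img[start - shift:stop - shift]
        # Sum of absolute differences over the predetermined region.
        err = sum(abs(a - b) for a, b in zip(target, moved))
        if err < best_err:
            best, best_err = shift, err
    return best
```

Evaluating a small bank of predicted movements and keeping the best match is the classic template-matching form of visual odometry.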

SYSTEM AND METHOD FOR AUTOMATIC ASSESSMENT OF COMPARATIVE NEGLIGENCE FOR ONE OR MORE VEHICLES INVOLVED IN AN ACCIDENT

A system and a method for automatic assessment of comparative negligence of vehicle(s) involved in an accident. The system receives one or more of video input, accelerometer data, gyroscope data, magnetometer data, GPS data, lidar data, radar data, radio navigation data, and vehicle state data for the vehicle(s). The system automatically detects the occurrence of an accident and its timestamp. It then detects the accident type and the trajectory of the vehicle(s) based on the received data at the detected timestamp. A scenario of the accident is generated and compared with a parametrized accident guideline to produce a comparative negligence assessment for the vehicle(s) involved in the accident.

IMAGE COLLECTION SYSTEM AND IMAGE COLLECTION METHOD

An image collection system includes: a captured-image analysis unit that determines whether a captured image, captured by a camera mounted on a mobile object, of surroundings of the mobile object is a first captured image including an image portion of a specified monitoring target object; an image-capturing-condition recognition unit that recognizes a first image-capturing condition that is an image-capturing condition at the time when the camera captures the first captured image; and a monitoring-target-object-information providing unit that transmits monitoring-target-object image information in which the first captured image and the first image-capturing condition are associated with each other, to a specified provision destination.

SYSTEMS AND METHODS FOR ANALYZING THE IN-LANE DRIVING BEHAVIOR OF A ROAD AGENT EXTERNAL TO A VEHICLE

Systems and methods for analyzing the in-lane driving behavior of an external road agent are disclosed herein. One embodiment generates a sequence of sparse 3D point clouds based on a sequence of depth maps corresponding to a sequence of images of a scene; performs flow clustering based on the sequence of depth maps and a sequence of flow maps to identify points across the sequence of sparse 3D point clouds that belong to a detected road agent; generates a dense 3D point cloud by combining at least some of the points across the sequence of sparse 3D point clouds that belong to the detected road agent; detects one or more lane markings and projects them into the dense 3D point cloud to generate an annotated 3D point cloud; and analyzes the in-lane driving behavior of the detected road agent based, at least in part, on the annotated 3D point cloud.
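The final analysis step might reduce, per frame, to measuring the detected agent's lateral offset from the lane center recovered from the projected lane markings. The sketch below models the markings as constant lateral positions and flags a crude "weaving" behavior from the offset range; the geometry, names, and threshold are all simplifying assumptions.

```python
# Sketch: in-lane behavior analysis from per-frame agent centroids and
# two lane-marking lateral positions (meters, vehicle-frame x).
def in_lane_offsets(agent_centroids, left_lane_x, right_lane_x):
    """agent_centroids: list of (x, y, z) per frame; returns lateral offsets
    of the agent from the lane center, one per frame."""
    center = 0.5 * (left_lane_x + right_lane_x)
    return [c[0] - center for c in agent_centroids]

def is_weaving(offsets, threshold=0.5):
    """Crude behavior flag: lateral excursion range exceeds threshold (m)."""
    return (max(offsets) - min(offsets)) > threshold
```

The dense, annotated point cloud in the abstract serves exactly to make such per-frame centroid and lane-marking measurements stable enough to threshold.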

PREDICTIVE SHADOWS TO SUPPRESS FALSE POSITIVE LANE MARKING DETECTION
20220277163 · 2022-09-01

Systems and methods for detecting road markings affected by shadows are described. At least one object is identified from a database. A shadow position associated with the at least one object is determined; the shadow position estimates a shadow cast by the at least one object onto a road. Road marking detection data for the road may be modified in response to the determined shadow position. A map layer may be generated to indicate where the shadow impacts the road marking detection data.
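Predicting a shadow position from a mapped object can be sketched as projecting the object's top onto the road plane using the sun's elevation and azimuth, giving a ground segment where lane-marking detections could be suppressed. Flat-road geometry and the azimuth convention below (degrees clockwise from north, i.e. +y) are illustrative assumptions.

```python
import math

# Sketch: ground point where the shadow of a mapped object's top lands,
# assuming a flat road plane.
def shadow_tip(base_xy, height, sun_elevation_deg, sun_azimuth_deg):
    """base_xy: (x, y) of the object's base; height in meters.
    Returns the (x, y) tip of the predicted shadow."""
    # Shadow length from sun elevation: higher sun, shorter shadow.
    length = height / math.tan(math.radians(sun_elevation_deg))
    az = math.radians(sun_azimuth_deg)
    # The shadow extends away from the sun's azimuth direction.
    return (base_xy[0] - length * math.sin(az),
            base_xy[1] - length * math.cos(az))
```

The segment from the object's base to this tip marks where a "predictive shadow" map layer could flag lane-marking detections as potential false positives.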