Patent classifications
G06T2207/30256
SYSTEM AND METHOD FOR FREE SPACE ESTIMATION
A system and method for estimating free space, including: applying a machine learning model to camera images of a navigation area that is divided into cells; synchronizing point cloud data from the navigation area with the processed camera images; and associating with each cell, based on sensor data, sensor noise, and the machine learning model, a probability that the cell is occupied and classifications of objects that could occupy it.
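The per-cell fusion of camera and point-cloud evidence described above can be sketched as a log-odds occupancy update. This is a hypothetical illustration, not the patented method: the function names are invented, and it assumes each sensor contributes an independent per-cell occupancy probability.

```python
import math

def logodds(p):
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

def fuse_cell(prior_p, camera_p, lidar_p):
    """Fuse per-cell occupancy evidence from a camera-based model and a
    point-cloud sensor by summing log-odds, then convert back to a
    probability (independence assumption between sensors)."""
    l = logodds(prior_p) + logodds(camera_p) + logodds(lidar_p)
    return 1.0 / (1.0 + math.exp(-l))
```

With a neutral 0.5 prior, two sensors that each report high occupancy push the fused probability well above either individual reading, while two low readings push it toward free space.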
AUTOMATIC EXTRINSIC CALIBRATION USING SENSED DATA AS A TARGET
Provided are systems and methods for auto calibrating a vehicle using a calibration target that is generated from the vehicle's sensor data. In one example, the method may include receiving sensor data associated with a road captured by one or more sensors of a vehicle, identifying lane line data points within the sensor data, generating a representation which includes positions of a plurality of lane lines of the road based on the identified lane line data points, and adjusting a calibration parameter of a sensor from among the one or more sensors of the vehicle based on the representation of the plurality of lane lines.
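One way to picture adjusting a calibration parameter from sensed lane lines is a search over a yaw offset that makes a straight lane line's lateral coordinate constant in the vehicle frame. This is a minimal sketch under assumed conditions (a single straight lane line, a pure yaw miscalibration, a brute-force grid search); the real method and its parameterization are not specified by the abstract.

```python
import math

def rotate(points, yaw):
    """Rotate 2D points (x forward, y left) by a yaw angle in radians."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def lateral_spread(points):
    """Variance of the lateral (y) coordinates of the points."""
    ys = [y for _, y in points]
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys) / len(ys)

def calibrate_yaw(lane_points, search=0.2, step=0.001):
    """Grid-search the yaw correction that minimizes the lateral spread of a
    straight lane line's data points (hypothetical calibration criterion)."""
    best_yaw, best_err = 0.0, float("inf")
    yaw = -search
    while yaw <= search:
        err = lateral_spread(rotate(lane_points, yaw))
        if err < best_err:
            best_yaw, best_err = yaw, err
        yaw += step
    return best_yaw
```

If the sensor is miscalibrated by +0.05 rad, the search recovers roughly -0.05 rad as the correction that straightens the line back onto the vehicle's longitudinal axis.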
SYSTEMS AND METHODS FOR MONITORING TRAFFIC LANE CONGESTION
Systems and methods are provided for predicting blind spot incursions for a host vehicle. In one implementation, a navigation system for a host vehicle may comprise a processor. The processor may be programmed to receive, from an image capture device located on a rear of the host vehicle, at least one image representative of an environment of the host vehicle. The processor may be programmed to analyze the at least one image to identify an object in the environment of the host vehicle and to determine kinematic information associated with the object. The processor may further be programmed to predict, based on the kinematic information, that the object will travel in a region outside of a field of view of the image capture device and perform a control action based on the prediction.
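The prediction step the abstract describes, using kinematic information to anticipate that an object will leave the camera's field of view, can be sketched with a constant-velocity model and a wedge-shaped FOV. The geometry, angles, and constant-velocity assumption here are illustrative choices, not details from the patent.

```python
import math

def predict_incursion(pos, vel, half_angle_deg=30.0, horizon=3.0, dt=0.1):
    """Predict whether a tracked object will leave a rear camera's field of
    view within the horizon, assuming constant velocity. The camera looks
    along +x; the FOV is the wedge |y| <= x * tan(half_angle).
    Returns (True, time_of_exit) or (False, None)."""
    tan_h = math.tan(math.radians(half_angle_deg))
    x, y = pos
    vx, vy = vel
    t = 0.0
    while t <= horizon:
        if x <= 0 or abs(y) > x * tan_h:
            return True, t  # predicted outside the FOV at time t
        x += vx * dt
        y += vy * dt
        t += dt
    return False, None
```

An object drifting laterally at 3 m/s from directly behind the camera is predicted to exit the wedge within a couple of seconds, which would trigger the control action; an object receding straight back stays visible.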
XR DEVICE AND METHOD FOR CONTROLLING THE SAME
The present disclosure relates to an XR device and a method for controlling the same, and more particularly is applicable to the fields of 5G communication, robotics, autonomous driving, and artificial intelligence (AI). The method for controlling an XR device of a vehicle includes acquiring a camera view by capturing an image in front of the vehicle; acquiring position information of the vehicle by detecting a position of the vehicle; acquiring movement information of the vehicle by detecting movement of the vehicle; and providing navigation in an augmented reality (AR) mode that displays at least one virtual object for guiding a path by overlapping the at least one virtual object on the camera view, based on at least one of the position information or the movement information of the vehicle.
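Overlapping a virtual guidance object on the camera view, as above, amounts to projecting a 3D path point into image coordinates. A minimal pinhole-camera sketch, with invented parameter names and no lens distortion, assuming the waypoint is already expressed in the camera frame:

```python
def project_waypoint(wp, f, cx, cy):
    """Pinhole projection of a path waypoint (camera coordinates: x right,
    y down, z forward) to pixel coordinates for an AR overlay.
    f is the focal length in pixels; (cx, cy) is the principal point."""
    x, y, z = wp
    if z <= 0:
        return None  # behind the camera: not drawable on the view
    return (cx + f * x / z, cy + f * y / z)
```

A waypoint on the road 15 m ahead and 1.5 m below the camera projects to a pixel 100 rows below the principal point, which is where the AR arrow would be drawn.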
Spatial light modulator (SLM) controller for headlights
A controller is provided that includes a bit plane generation component and a processor configured to receive one or more headlight commands and to configure the bit plane generation component to generate bit planes of a headlight frame responsive to the one or more headlight commands, wherein the bit plane generation component includes bit generation pipelines configured to operate in parallel to generate respective bits of consecutive bits of a bit plane of the headlight frame.
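The bit planes such a controller generates are the per-bit slices of a frame, each displayed for a duration weighted by its bit's significance. A hypothetical software sketch of the decomposition (the hardware pipelines in the abstract generate these in parallel; this serial version only illustrates the data layout):

```python
def bit_planes(frame, bits=8):
    """Decompose a grayscale frame (list of rows of pixel values) into
    per-bit binary planes, least-significant bit first."""
    return [[[(px >> b) & 1 for px in row] for row in frame]
            for b in range(bits)]

def reconstruct(planes):
    """Rebuild the frame by weighting each plane by its bit significance."""
    rows, cols = len(planes[0]), len(planes[0][0])
    return [[sum(planes[b][r][c] << b for b in range(len(planes)))
             for c in range(cols)] for r in range(rows)]
```

Weighting plane b by 2^b and summing recovers the original pixel values, which is why time-weighted display of the planes reproduces the intended grayscale headlight pattern.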
Localization based on semantic objects
Techniques for determining a location of a vehicle in an environment using sensors and determining calibration information associated with the sensors are discussed herein. A vehicle can use map data to traverse an environment. The map data can include semantic map objects such as traffic lights, lane markings, etc. The vehicle can use a sensor, such as an image sensor, to capture sensor data. Semantic map objects can be projected into the sensor data and matched with object(s) in the sensor data. Such semantic objects can be represented as a center point and covariance data. A distance or likelihood associated with the projected semantic map object and the sensed object can be optimized to determine a location of the vehicle. Sensed objects can be determined to be the same based on matching with the semantic map object. Epipolar geometry can be used to determine if sensors are capturing consistent data.
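Representing a semantic object as a center point plus covariance, and matching it against a detection, suggests a Mahalanobis-distance gate. The helper names, the 2D simplification, and the chi-square gate value below are assumptions for illustration:

```python
def mahalanobis2_2d(point, center, cov):
    """Squared Mahalanobis distance from a detected center to a projected
    semantic map object modeled as (center, 2x2 covariance)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    a, b, c, d = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    det = a * d - b * c
    # closed-form inverse of the 2x2 covariance
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    return dx * (ia * dx + ib * dy) + dy * (ic * dx + id_ * dy)

def match(detection, map_objects, gate=9.0):
    """Associate a detection with the nearest map object inside the gate;
    map_objects is a list of (name, center, covariance) tuples."""
    best, best_d = None, gate
    for name, center, cov in map_objects:
        d2 = mahalanobis2_2d(detection, center, cov)
        if d2 < best_d:
            best, best_d = name, d2
    return best
```

A detection close to a traffic light's projected center (relative to its covariance) associates with it; a detection far from every map object matches nothing, which is the behavior a localization optimizer needs before minimizing the residuals.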
Infrastructure monitoring system on autonomous vehicles
Provided herein are platforms for determining a non-navigational quality of at least one infrastructure by a plurality of autonomous or semi-autonomous land vehicles through infrastructure recognition and assessment.
Bounding box estimation and object detection
Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
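One class-agnostic cue for lifting a 2D box toward 3D, in the spirit of the abstract, is back-projecting the box's bottom edge onto the ground plane: range then follows from camera geometry rather than from a class-specific size prior. This sketch assumes a flat road and known camera height, which are not claimed by the patent:

```python
def estimate_range_ground_plane(v_bottom, f, cy, cam_height):
    """Estimate distance to an object from the image row of its 2D box's
    bottom edge, assuming the box bottom touches a flat ground plane.
    f: focal length in pixels; cy: principal-point row; cam_height: meters."""
    dv = v_bottom - cy  # pixels below the principal point (horizon)
    if dv <= 0:
        return None     # bottom edge at or above the horizon: no ground hit
    return f * cam_height / dv
```

With f = 1000 px, a 1.5 m camera height, and a box bottom 100 rows below the horizon, the object is placed 15 m ahead; no classification of the target was needed to get there.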
SENSORS, AGRICULTURE HARVESTER WITH THE SENSORS AND METHODS FOR STEERING OR GUIDING AGRICULTURE HARVESTERS
The present invention relates to a system for steering or guiding an agriculture harvesting machine (agriculture harvester) with a high degree of precision, without the need for the harvester to have a guide stick or shoe in physical or mechanical contact with the crop to be harvested, and without reliance on remote navigation systems such as GPS. The system includes a sensor mounted at the front of the harvester and a processor that processes information from the sensor to determine a boundary line and steers the harvester along that boundary line.
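Steering along a sensed boundary line is often closed with a simple feedback law on lateral offset and heading error. The gains, limits, and proportional form below are hypothetical; the abstract does not specify the control law:

```python
def steering_command(lateral_offset_m, heading_error_rad,
                     k_y=0.5, k_psi=1.0, max_steer=0.35):
    """Proportional steering law that drives the harvester's lateral offset
    from the boundary line and its heading error to zero, saturated at a
    maximum steering angle (radians). Gains are illustrative."""
    cmd = -(k_y * lateral_offset_m + k_psi * heading_error_rad)
    return max(-max_steer, min(max_steer, cmd))
```

On the line with no heading error the command is zero; a 1 m offset to the left produces a right-steering command, clamped to the actuator limit.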
VEHICULAR IMAGING SYSTEM WITH MISALIGNMENT CORRECTION OF CAMERA
A vehicular imaging system includes a camera disposed behind a windshield of a vehicle and viewing through a portion of the windshield. Image data captured by the camera is provided to a control. The control receives, via a communication bus of the vehicle, at least one selected from the group consisting of (i) vehicle pitch information relating to pitch of the vehicle, (ii) vehicle yaw information relating to yaw of the vehicle and (iii) vehicle steering information relating to steering of the vehicle. The system automatically corrects for misalignment of the camera. Image data captured by the camera is processed at the control for a lane departure warning system of the vehicle and for at least one selected from the group consisting of (i) an automatic headlamp control system of the vehicle, (ii) a collision avoidance system of the vehicle and (iii) an adaptive front lighting system of the vehicle.
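Using vehicle pitch to correct camera misalignment can be pictured as shifting image rows by the pitch-induced horizon displacement. This is a small-angle sketch with assumed parameter names, not the patented correction:

```python
import math

def horizon_row(pitch_rad, f, cy):
    """Expected image row of the horizon for a forward camera: vehicle
    pitch moves it by roughly f * tan(pitch) pixels from the principal
    point row cy (f is the focal length in pixels)."""
    return cy + f * math.tan(pitch_rad)

def corrected_row(row, pitch_rad, f):
    """Compensate a detected feature's image row for vehicle pitch before
    downstream processing such as lane departure warning."""
    return row - f * math.tan(pitch_rad)
```

With zero pitch the horizon sits at the principal point; a 0.01 rad nose-up pitch shifts features about 10 rows in a 1000 px focal-length camera, and the correction removes that shift so lane geometry is measured consistently.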