Patent classifications
G01S13/865
Cross-validating sensors of an autonomous vehicle
Methods and systems are disclosed for cross-validating a second sensor with a first sensor. Cross-validating the second sensor may include obtaining sensor readings from the first sensor and comparing the sensor readings from the first sensor with sensor readings obtained from the second sensor. In particular, the comparison of the sensor readings may include comparing state information about a vehicle detected by the first sensor and the second sensor. In addition, comparing the sensor readings may include obtaining a first image from the first sensor, obtaining a second image from the second sensor, and then comparing various characteristics of the images. One characteristic that may be compared is the object labels applied to the vehicle detected by the first and second sensors. The first and second sensors may be different types of sensors.
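The comparison described in this abstract can be sketched as a simple agreement check between two sensors' readings. All names, fields, and tolerance values below are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: cross-validating a second sensor against a first by
# comparing detected state information (position, speed) and object labels.
from dataclasses import dataclass

@dataclass
class SensorReading:
    position: tuple  # (x, y) location of the detected vehicle, metres
    speed: float     # estimated speed of the detected vehicle, m/s
    label: str       # object label applied by the sensor's classifier

def cross_validate(first: SensorReading, second: SensorReading,
                   pos_tol: float = 1.0, speed_tol: float = 0.5) -> bool:
    """Return True if the second sensor's reading agrees with the first
    within the given (assumed) tolerances."""
    dx = first.position[0] - second.position[0]
    dy = first.position[1] - second.position[1]
    position_ok = (dx * dx + dy * dy) ** 0.5 <= pos_tol
    speed_ok = abs(first.speed - second.speed) <= speed_tol
    label_ok = first.label == second.label
    return position_ok and speed_ok and label_ok

# e.g. a lidar and a radar detection of the same vehicle
lidar = SensorReading(position=(10.2, 4.1), speed=12.9, label="car")
radar = SensorReading(position=(10.5, 4.3), speed=13.1, label="car")
print(cross_validate(lidar, radar))  # prints True
```

Comparing object labels in addition to state information catches classifier-level disagreement even when positions and speeds match.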
Methods for forming 3D image data and associated apparatuses
A method for forming 3D image data representative of the subsurface of infrastructure located in the vicinity of a moving vehicle. The method includes: rotating a directional antenna, mounted to the moving vehicle, about an antenna rotation axis; performing, using the directional antenna whilst it is rotated about the antenna rotation axis, a plurality of collection cycles in which the directional antenna emits RF energy and receives reflected RF energy; and collecting, during each of the plurality of collection cycles performed by the directional antenna, data representative of the received reflected RF energy.
Method and apparatus with vehicle radar control
A method and apparatus with vehicle radar control is disclosed. An apparatus with vehicle radar control includes a radio frequency (RF) transceiver including a transmitting antenna array and a receiving antenna array, and at least one processor configured to collect environmental information of the vehicle, determine a radar mode of the vehicle based on the collected environmental information, generate one or more control signals configured to control one or more of the transmitting antenna array and the receiving antenna array based on the determined radar mode, and provide the generated one or more control signals to the RF transceiver, wherein one or more of the transmitting antenna array and the receiving antenna array operate according to the one or more generated control signals.
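The mode-selection step can be sketched as a mapping from environmental information to antenna-array settings. The mode names, input fields, and array parameters below are all assumptions for illustration; the patent does not specify them:

```python
# Hypothetical sketch: determine a radar mode from environmental information,
# then derive control signals for the transmit/receive antenna arrays.

def determine_radar_mode(env: dict) -> str:
    """Pick a radar mode based on collected environmental information."""
    if env.get("speed_mps", 0.0) > 20.0:
        return "long_range"        # highway: narrow beam, long reach
    if env.get("dense_traffic"):
        return "short_range_wide"  # urban: wide field of view
    return "mid_range"

def control_signals(mode: str) -> dict:
    """Map the radar mode to array settings (illustrative values only)."""
    table = {
        "long_range":       {"tx_elements": 12, "rx_elements": 16, "beam_deg": 20},
        "short_range_wide": {"tx_elements": 4,  "rx_elements": 8,  "beam_deg": 120},
        "mid_range":        {"tx_elements": 8,  "rx_elements": 12, "beam_deg": 60},
    }
    return table[mode]

mode = determine_radar_mode({"speed_mps": 33.0, "dense_traffic": False})
print(mode)  # prints long_range
```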
DEVICE AND METHOD FOR DETECTING REAR COLLISION OF VEHICLE
A device for detecting a rear collision of a vehicle, the device including a first sensor unit that is disposed on one side of a back of the vehicle and detects a target vehicle positioned behind the vehicle to generate first sensing data, a second sensor unit that is disposed on the other side of the back of the vehicle and detects the target vehicle to generate second sensing data, an ultrasonic sensor that is mounted on the back of the vehicle and detects a proximity of the target vehicle to generate third sensing data, and a controller that determines a relative speed and a relative distance with respect to the target vehicle using the first sensing data and the second sensing data, determines the proximity of the target vehicle using the third sensing data, and determines whether to output a command to deploy an airbag outwardly mounted on the back of the vehicle and a command to control a vehicle headrest.
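The controller logic described here can be sketched as fusing the two side-sensor readings and gating the airbag and headrest commands on proximity and time to impact. The fusion by averaging and every threshold value are assumptions for illustration, not values from the patent:

```python
# Hypothetical sketch of the rear-collision controller's decision logic.

def relative_state(first: dict, second: dict):
    """Fuse the two rear-sensor measurements (illustrative: simple averaging)."""
    distance = (first["distance_m"] + second["distance_m"]) / 2
    speed = (first["closing_speed_mps"] + second["closing_speed_mps"]) / 2
    return distance, speed

def collision_commands(distance_m: float, closing_mps: float, proximity_m: float,
                       deploy_dist: float = 1.0, headrest_dist: float = 3.0) -> dict:
    """Decide airbag/headrest outputs; thresholds are assumed values."""
    time_to_impact = distance_m / closing_mps if closing_mps > 0 else float("inf")
    deploy_airbag = proximity_m <= deploy_dist and time_to_impact < 0.5
    move_headrest = distance_m <= headrest_dist and closing_mps > 0
    return {"deploy_airbag": deploy_airbag, "control_headrest": move_headrest}

d, v = relative_state({"distance_m": 2.0, "closing_speed_mps": 10.0},
                      {"distance_m": 2.0, "closing_speed_mps": 10.0})
print(collision_commands(d, v, proximity_m=0.8))
```

Using the ultrasonic proximity reading as an independent gate on airbag deployment mirrors the abstract's use of a third sensing modality to confirm an imminent impact.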
Systems and methods for estimating vehicle speed based on radar
Systems, methods, and other embodiments relate to determining the speed of a vehicle. In one embodiment, a method includes receiving a first frame of data generated by a first sensor of a vehicle, the first frame of data including a first set of angular positions associated with a first set of objects in the environment. The method includes receiving a second frame of data generated by a second sensor of the vehicle, the second frame of data including a second set of angular positions associated with a second set of objects in the environment. The method includes generating a speed estimate for the vehicle in relation to the first set of objects and the second set of objects based at least in part on the first set of angular positions of the first frame of data and the second set of angular positions of the second frame of data.
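The estimation described above can be sketched under the simplifying assumptions that the tracked objects are stationary and that each detection carries both a range and an angular position; the function names and the averaging across objects are illustrative, not from the patent:

```python
import math

def object_xy(range_m: float, angle_rad: float):
    """Convert a range/bearing detection into vehicle-frame coordinates."""
    return (range_m * math.cos(angle_rad), range_m * math.sin(angle_rad))

def speed_estimate(det_t0, det_t1, dt: float) -> float:
    """Estimate ego speed from how stationary objects' positions shift
    between two frames.

    det_t0/det_t1: lists of (range_m, angle_rad) for the same objects in
    consecutive frames. A stationary object's apparent displacement in the
    vehicle frame equals the vehicle's own motion, reversed.
    """
    speeds = []
    for (r0, a0), (r1, a1) in zip(det_t0, det_t1):
        x0, y0 = object_xy(r0, a0)
        x1, y1 = object_xy(r1, a1)
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return sum(speeds) / len(speeds)

# An object dead ahead at 20 m appears at 19 m one frame (0.1 s) later,
# so the ego vehicle covered 1 m in 0.1 s:
print(speed_estimate([(20.0, 0.0)], [(19.0, 0.0)], 0.1))  # prints 10.0
```

Averaging over multiple objects, as the two sets of angular positions in the abstract suggest, reduces the impact of any single noisy detection.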
Secure vehicle communications architecture for improved blind spot and driving distance detection
Disclosed are techniques for improving an advanced driver-assistance system (ADAS) using a secure channel area. In one embodiment, a method is disclosed comprising establishing a secure channel area extending from at least one side of a first vehicle; detecting a presence of a second vehicle in the secure channel area; establishing a secure connection with the second vehicle upon detecting the presence; exchanging messages between the first vehicle and the second vehicle, the messages including a position and speed of a sending vehicle; taking control of a position and speed of the first vehicle based on the contents of the messages; and releasing control of the position and speed of the first vehicle upon detecting that the secure connection was released.
Systems and methods for streaming processing for autonomous vehicles
Generally, the present disclosure is directed to systems and methods for streaming processing within one or more systems of an autonomy computing system. When an update for a particular object or region of interest is received by a given system, that system can control both the transmission of data associated with the update and the determination of other aspects. For example, the system can determine, based on a received update for a particular aspect and a priority classification and/or interaction classification determined for that aspect, whether data associated with the update should be transmitted to a subsequent system without waiting for other updates to arrive.
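The gating decision described above can be sketched as a small predicate over an update's classifications. The field names and the numeric threshold are assumptions for illustration:

```python
# Hypothetical sketch: decide whether an update should be forwarded to the
# next system immediately or held until other updates arrive.

def should_transmit_now(update: dict, priority_threshold: int = 2) -> bool:
    """update: dict with 'priority' (lower = more urgent) and an
    'interacts_with_ego' interaction classification."""
    if update.get("interacts_with_ego"):
        return True  # interacting objects are forwarded without waiting
    return update["priority"] <= priority_threshold

print(should_transmit_now({"priority": 5, "interacts_with_ego": True}))   # prints True
print(should_transmit_now({"priority": 5, "interacts_with_ego": False}))  # prints False
```

Forwarding high-priority or interacting updates immediately, rather than batching per frame, is what lets a downstream system react before the full set of updates is complete.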
Sensor-cluster apparatus
A sensor-cluster apparatus, in which a sensor configured to detect and collect external environment information is mounted in a case. The sensor-cluster apparatus includes a body member on which one or more kinds of sensors are mounted on one surface thereof, a case that provides an inner space and has one surface opened to define an opening, in which the body member is mounted so that each of the sensors is exposed through the opening, and a position control device mounted inside the case to adjust a mounting position or a mounting angle of the body member.
MACHINE LEARNING ARCHITECTURES FOR CAMERA-BASED DETECTION AND AVOIDANCE ON AIRCRAFT
A monitoring system for an aircraft uses sensors configured to sense objects around the aircraft to generate a recommendation that is ultimately used to determine a possible route that the aircraft can follow to avoid colliding with a sensed object. A first algorithm generates guidance to avoid encounters with sensed airborne aircraft. A second algorithm generates guidance to avoid encounters with sensed non-aircraft airborne obstacles and ground obstacles. The second algorithm sends inhibiting information to the first algorithm in a feedback loop based on the position of sensed non-aircraft objects. The first algorithm considers this inhibiting information when generating avoidance guidance regarding airborne aircraft.
IMAGE PROCESSING DEVICE, IMAGER, INFORMATION PROCESSING DEVICE, DETECTOR, ROADSIDE UNIT, IMAGE PROCESSING METHOD, AND CALIBRATION METHOD
An image processing device 10 includes an image interface 18, a memory 19, and a controller 20. The image interface 18 acquires a captured image. The memory 19 stores the positions of specific feature points in a world coordinate system and reference positions of those feature points. The controller 20 detects the specific feature points in the captured image. If a discrepancy between the position in the captured image and the reference position is found for a predetermined percentage or more of the specific feature points, the controller 20 recalculates a calibration parameter.
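The discrepancy test that triggers recalculation can be sketched as follows; the pixel tolerance and the percentage threshold are illustrative assumptions, since the abstract only says "a predetermined percentage or more":

```python
# Hypothetical sketch: flag recalibration when a predetermined fraction of
# detected feature points drift from their stored reference positions.

def needs_recalibration(detected, reference,
                        pixel_tol: float = 2.0,
                        percent_threshold: float = 0.3) -> bool:
    """detected/reference: matched lists of (u, v) image positions.

    Returns True when the fraction of feature points whose detected position
    deviates from its reference by more than pixel_tol reaches the threshold.
    """
    mismatched = sum(
        1 for (du, dv), (ru, rv) in zip(detected, reference)
        if ((du - ru) ** 2 + (dv - rv) ** 2) ** 0.5 > pixel_tol
    )
    return mismatched / len(reference) >= percent_threshold

# One of three points drifted by 5 px, so 33% >= 30% -> recalibrate:
print(needs_recalibration([(0, 0), (10, 10), (20, 20)],
                          [(0, 0), (10, 10), (25, 20)]))  # prints True
```

Gating on a percentage of points, rather than any single point, makes the check robust to isolated detection errors while still catching a genuinely shifted camera.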