G01S13/87

SENSOR AIMING DEVICE, DRIVING CONTROL SYSTEM, AND CORRECTION AMOUNT ESTIMATION METHOD

A sensor aiming device includes: a target positional relationship processing unit for outputting positional relationship information of first and second targets; a sensor observation information processing unit configured to convert the observation results of the first and second targets into a predetermined unified coordinate system according to a coordinate conversion parameter, perform time synchronization at a predetermined timing, and extract first target information indicating a position of the first target and second target information indicating a position of the second target; a position estimation unit configured to estimate a position of the second target using the first target information, the second target information, and the positional relationship information; and a sensor correction amount estimation unit configured to calculate a deviation amount of the second sensor using the second target information and the estimated position of the second target, and to estimate a correction amount.
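
The abstract does not disclose concrete equations; as a minimal sketch, assuming planar (2-D) geometry, a coordinate conversion parameter of the form (yaw, tx, ty), and a yaw-only deviation model, the unified-coordinate conversion and correction-amount estimate could look like this (all function names are hypothetical):

```python
import math

def to_unified(obs_xy, yaw, tx, ty):
    """Convert a sensor-frame observation (x, y) into the unified
    coordinate system via a planar rotation by `yaw` plus translation."""
    x, y = obs_xy
    return (math.cos(yaw) * x - math.sin(yaw) * y + tx,
            math.sin(yaw) * x + math.cos(yaw) * y + ty)

def yaw_correction(second_target_obs, second_target_est):
    """Estimate the second sensor's axis deviation as the angle between
    its observed and its estimated second-target position, both already
    expressed in the unified coordinate system."""
    ang_obs = math.atan2(second_target_obs[1], second_target_obs[0])
    ang_est = math.atan2(second_target_est[1], second_target_est[0])
    return ang_est - ang_obs  # correction amount (radians)
```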

Safety system and method using a safety system
20230037937 · 2023-02-09

A safety system for the localization of at least one spatially variable object comprises at least one control and evaluation unit, at least one radio location system, and at least one spatially resolving sensor for the detection of an object in a detection zone of the spatially resolving sensor. The radio location system has at least three radio stations, and at least one radio transponder is arranged at the object, so that position data of the radio transponder, and thus of the object, can be determined by means of the radio location system and transmitted from the radio stations to the control and evaluation unit. The control and evaluation unit is configured to cyclically detect the position data of the radio transponder, while information on the object in the detection zone can be determined by means of the spatially resolving sensor.
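
With at least three station-to-transponder ranges, the radio location step amounts to multilateration. A minimal least-squares sketch (hypothetical names; 2-D station coordinates assumed — real systems derive the ranges from radio time-of-flight and typically use more stations):

```python
import numpy as np

def trilaterate(stations, ranges):
    """Solve for the transponder position from >= 3 station positions
    and measured ranges by linearizing the circle equations against
    the first station and solving the result in a least-squares sense."""
    stations = np.asarray(stations, float)
    ranges = np.asarray(ranges, float)
    x0, r0 = stations[0], ranges[0]
    # |p - xi|^2 = ri^2 minus the first equation gives a linear system:
    # 2 (xi - x0) . p = r0^2 - ri^2 + |xi|^2 - |x0|^2
    A = 2.0 * (stations[1:] - x0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(stations[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```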

SYNTHETIC GEOREFERENCED WIDE-FIELD OF VIEW IMAGING SYSTEM
20230039414 · 2023-02-09 ·

An imaging system for an aircraft is disclosed. A plurality of image sensors are attached, affixed, or secured to the aircraft. Each image sensor is configured to generate sensor-generated pixels based on an environment surrounding the aircraft. Each of the sensor-generated pixels is associated with respective pixel data including position data, intensity data, time-of-acquisition data, sensor-type data, pointing angle data, latitude data, and longitude data. A controller generates a buffer image including synthetic-layer pixels, maps the sensor-generated pixels to the synthetic-layer pixels in the buffer image, fills a plurality of regions of the buffer image with the sensor-generated pixels, and presents the buffer image on a head-mounted display (HMD) to a user of the aircraft.
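
One plausible reading of the buffer-filling step is that each synthetic-layer location keeps the best available sensor-generated pixel; the sketch below assumes pixels already mapped to buffer row/column and prefers the most recent time-of-acquisition (the keying and preference rule are assumptions, not from the abstract):

```python
def fill_buffer(buffer, pixels):
    """Map sensor-generated pixels into the synthetic-layer buffer,
    keeping the most recently acquired pixel at each buffer location.

    `buffer` is a dict keyed by (row, col); each pixel is a dict
    carrying at least "row", "col", and time-of-acquisition "t"."""
    for p in pixels:
        key = (p["row"], p["col"])  # synthetic-layer coordinates
        cur = buffer.get(key)
        if cur is None or p["t"] > cur["t"]:
            buffer[key] = p
    return buffer
```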

Object recognition device and object recognition method
11555913 · 2023-01-17

In this object recognition device, an association between a first object detection result and a second object detection result is performed in a region that excludes an occlusion area. When the first object detection result and the second object detection result are determined to be detection results for an identical object, a recognition result of the surrounding object is calculated from the two detection results. Thus, occurrences of erroneous recognition of an object can be reduced as compared to a conventional vehicle object recognition device.
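
A common way to realize such an association is gated nearest-neighbour matching with occluded detections excluded; the sketch below assumes 2-D point detections, a hypothetical `occluded` predicate, and a simple midpoint as the fused recognition result (none of which is specified in the abstract):

```python
import math

def associate(first, second, occluded, gate=2.0):
    """Pair first/second detections by nearest neighbour within `gate`,
    skipping any detection that falls inside the occlusion area.
    Returns (first, second, fused-midpoint) triples."""
    pairs = []
    for a in first:
        if occluded(a):
            continue
        best, best_d = None, gate
        for b in second:
            if occluded(b):
                continue
            d = math.dist(a, b)
            if d < best_d:
                best, best_d = b, d
        if best is not None:
            fused = tuple((x + y) / 2 for x, y in zip(a, best))
            pairs.append((a, best, fused))
    return pairs
```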

Dual Pulsed Mode FMCW Radar Retrofit Conversion with Adaptive Sweep Configuration

A retrofit system is applied to existing FMCW radars to convert them into pulsed linear frequency-modulated radars with the ability to dynamically switch between two pulsed modes and an FMCW mode based on the estimated range of a target. The retrofit also includes provisions for adaptively configuring chirp and sweep parameters to optimize range resolution. The result is a retrofit system capable of converting an FMCW radar into a dual pulsed mode radar with adaptive sweep configuration.
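
The abstract gives no thresholds or waveform parameters; a toy sketch of range-based mode selection, together with the standard FMCW relation between chirp bandwidth B and range resolution dR = c / (2B), might look like this (all names and thresholds illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def select_mode(est_range_m):
    """Pick a waveform mode from the estimated target range.
    The thresholds are placeholders, not values from the abstract."""
    if est_range_m < 1_000:
        return "fmcw"
    elif est_range_m < 10_000:
        return "pulsed_short"
    return "pulsed_long"

def sweep_bandwidth(desired_res_m):
    """Chirp bandwidth (Hz) needed for a given range resolution,
    from dR = c / (2B)  =>  B = c / (2 dR)."""
    return C / (2.0 * desired_res_m)
```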

GENERATING A SUBTERRANEAN MAP WITH GROUND PENETRATING RADAR
20230007871 · 2023-01-12

A system and a method for generating a subterranean map with ground penetrating radar are described. The system includes multiple ground penetrating radar transmitters, multiple ground penetrating radar receivers, and a controller. A first subset of the transmitters radiate a first signal at a first frequency bandwidth, a second subset of the transmitters radiate a second signal at a second frequency bandwidth different than the first frequency bandwidth, and a third subset of the transmitters radiate a third signal at a third frequency bandwidth different than the first and second frequency bandwidths. The receivers receive a first return signal at the first frequency bandwidth, a second return signal at the second frequency bandwidth, and a third return signal at the third frequency bandwidth and transmit the return signals. The controller operates the ground penetrating radar transmitters, receives the return signals, and generates a subterranean map from the return signals.
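
The abstract does not state how the three band-specific returns are combined; as an illustrative sketch, transmitters could be partitioned round-robin across the three bands and the per-band return maps fused by a weighted sum (the partitioning rule and weights are arbitrary placeholders):

```python
import numpy as np

def band_plan(n_tx, bands=("low", "mid", "high")):
    """Assign each transmitter index to one of three frequency bands,
    round-robin, mirroring the three transmitter subsets."""
    return {i: bands[i % len(bands)] for i in range(n_tx)}

def fuse_maps(low, mid, high, weights=(0.5, 0.3, 0.2)):
    """Fuse the three band-specific return maps into one subterranean
    map as a normalized weighted sum; lower bands penetrate deeper,
    higher bands resolve finer detail."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return (w[0] * np.asarray(low)
            + w[1] * np.asarray(mid)
            + w[2] * np.asarray(high))
```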

ASSOCIATION OF CAMERA IMAGES AND RADAR DATA IN AUTONOMOUS VEHICLE APPLICATIONS
20230038842 · 2023-02-09

The described aspects and implementations enable fast and accurate object identification in autonomous vehicle (AV) applications by combining radar data with camera images. In one implementation, disclosed is a method and a system to perform the method that includes obtaining a radar image of a first hypothetical object in an environment of the AV, obtaining a camera image of a second hypothetical object in the environment of the AV, and processing the radar image and the camera image using one or more machine-learning models (MLMs) to obtain a prediction measure representing a likelihood that the first hypothetical object and the second hypothetical object correspond to a same object in the environment of the AV.
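
The actual MLMs are not specified; as a toy stand-in, the prediction measure can be pictured as a logistic score over a similarity between radar- and camera-derived feature vectors (hypothetical function, illustrative bias, not the disclosed models):

```python
import math

def prediction_measure(radar_feat, camera_feat, bias=-1.0):
    """Toy stand-in for the machine-learning models: a logistic score
    in (0, 1) over the dot-product similarity of radar and camera
    feature vectors; higher means more likely the same object."""
    sim = sum(r * c for r, c in zip(radar_feat, camera_feat))
    return 1.0 / (1.0 + math.exp(-(sim + bias)))
```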