
Driving control method and driving control apparatus

A driving control method is provided in which a processor configured to control driving of a vehicle acquires detection information around the vehicle on the basis of a detection condition that can be set for each point; extracts events which the vehicle encounters on the basis of the detection information; creates a driving plan in which a driving action is defined for each of the events on the basis of the detection information acquired in the events; executes a driving control instruction for the vehicle in accordance with the driving plan; and determines the detection condition on the basis of the content of the driving action defined for each of the events.
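As a rough illustration of the flow in this abstract, the hypothetical Python sketch below models a detection condition set per point, a driving action defined per event, and the detection condition being updated from that action; all class, field, and function names are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the per-event data flow described above; names and
# values are illustrative assumptions, not taken from the patent.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DetectionCondition:
    sensing_range_m: float      # how far to sense around this point
    scan_rate_hz: float         # how often to refresh detection information

@dataclass
class Event:
    point_id: str               # location where the vehicle encounters the event
    kind: str                   # e.g. "intersection", "merge", "crosswalk"

def plan_action(event: Event) -> str:
    """Define a driving action for each event (assumed simple rule table)."""
    return {"intersection": "decelerate", "merge": "yield", "crosswalk": "stop"}.get(event.kind, "cruise")

def update_condition(action: str, current: DetectionCondition) -> DetectionCondition:
    """Tighten the detection condition when the planned action is more cautious."""
    if action in ("stop", "yield"):
        return DetectionCondition(sensing_range_m=current.sensing_range_m * 1.5,
                                  scan_rate_hz=current.scan_rate_hz * 2.0)
    return current

# Example: events drive both the driving plan and the next detection condition per point.
conditions: Dict[str, DetectionCondition] = {"p1": DetectionCondition(80.0, 10.0)}
events: List[Event] = [Event("p1", "intersection")]
driving_plan = {e.point_id: plan_action(e) for e in events}
conditions = {pid: update_condition(driving_plan.get(pid, "cruise"), c)
              for pid, c in conditions.items()}
```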

Multi-radar coexistence using slow rate interference identification and suppression

Certain aspects provide a method for radar detection by an apparatus. The method generally includes transmitting a radar waveform in sets of transmission time intervals (TTIs), using a common set of radar transmission parameters in each set of TTIs, to perform detection of a target object, varying at least one of the common set of radar transmission parameters between sets of TTIs, and identifying interfering signals based on observed changes in monitored parameters of received signals across sets of TTIs due to the varying.
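The toy Python sketch below illustrates the underlying idea: a parameter that is common within each set of TTIs (here, an assumed FMCW chirp slope) is varied between sets, and a received peak is flagged as interference when its observed parameters do not track the change the way a genuine reflection would. The beat-frequency relation, numeric values, and threshold are assumptions for illustration only.

```python
# Toy illustration of varying a transmission parameter between sets of TTIs and
# flagging returns whose observed parameters do not track the change.
import numpy as np

C = 3e8  # speed of light (m/s)

def beat_to_range(f_beat_hz: float, slope_hz_per_s: float) -> float:
    """FMCW relation: range = c * f_beat / (2 * slope)."""
    return C * f_beat_hz / (2.0 * slope_hz_per_s)

# One chirp slope per set of TTIs (the varied common transmission parameter).
slopes = [20e12, 30e12, 40e12]                # Hz/s, hypothetical values

# Observed beat frequencies of one detected peak in each set of TTIs.
true_target_beats = [2.0e6, 3.0e6, 4.0e6]     # scales with slope -> real echo
interferer_beats  = [2.5e6, 2.5e6, 2.5e6]     # does not scale -> interference

def is_interference(beats, slopes, tol_m=2.0) -> bool:
    """A real reflection maps to the same range under every slope; an
    interfering transmitter does not, so its implied range varies."""
    ranges = [beat_to_range(b, s) for b, s in zip(beats, slopes)]
    return (max(ranges) - min(ranges)) > tol_m

print(is_interference(true_target_beats, slopes))  # False: consistent range
print(is_interference(interferer_beats, slopes))   # True: range varies with slope
```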

Methods for radar coexistence

A method and apparatus for selecting frequency modulated continuous wave waveform parameters for multiple radar coexistence by a user equipment is described. The user equipment may transmit a radar waveform consisting of a number of chirps, with each chirp having the same duration. The user equipment may vary waveform parameters of the radar waveform for at least a subset of the number of chirps, where the waveform parameters may be chosen from a codebook comprising at least one codeword of parameters. Reflected radar waveforms are received and processed, where the processing includes applying a fast time discrete Fourier transform to the reflected radar waveforms to produce a one-dimensional peak in a time delay dimension for each reflected waveform; and applying a slow time discrete Fourier transform to the reflected radar waveforms, where peaks for the reflected waveforms are added.
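A minimal range-Doppler sketch of the described processing, built on a synthetic beat signal: a fast-time DFT yields one peak in the time-delay dimension per chirp, and a slow-time DFT across chirps adds the peaks of the reflected waveforms. The sample rate, chirp slope, and target parameters are illustrative assumptions.

```python
# Minimal range-Doppler processing sketch consistent with the fast-time /
# slow-time DFT steps described above; all waveform parameters are assumed.
import numpy as np

num_chirps, samples_per_chirp = 64, 256
fs, slope = 10e6, 30e12            # sample rate (Hz) and chirp slope (Hz/s), assumed
target_range, target_velocity = 20.0, 5.0
c, fc = 3e8, 77e9

# Synthesize the beat signal of one reflected target across a frame of chirps.
t = np.arange(samples_per_chirp) / fs
chirp_duration = samples_per_chirp / fs
frame = np.zeros((num_chirps, samples_per_chirp), dtype=complex)
for m in range(num_chirps):
    r = target_range + target_velocity * m * chirp_duration
    f_beat = 2 * slope * r / c                  # fast-time beat frequency
    f_dopp = 2 * target_velocity * fc / c       # slow-time Doppler frequency
    frame[m] = np.exp(2j * np.pi * (f_beat * t + f_dopp * m * chirp_duration))

# Fast-time DFT: one peak per chirp in the time-delay (range) dimension.
range_profiles = np.fft.fft(frame, axis=1)

# Slow-time DFT across chirps: peaks from the reflected waveforms add coherently.
range_doppler = np.fft.fft(range_profiles, axis=0)

peak = np.unravel_index(np.argmax(np.abs(range_doppler)), range_doppler.shape)
print("Doppler bin, range bin:", peak)
```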

Method for operating a sensor unit of a vehicle

A method for operating a sensor unit of a vehicle. The method includes providing a time signal for the sensor unit of a first vehicle, where the time signal can also be provided for at least one sensor unit of a second vehicle; controlling sensor devices of the sensor unit using the time signal to detect an environment of the vehicle in a temporally defined manner; and providing the detected environment data.
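A minimal sketch, assuming the time signal is a shared absolute timestamp (e.g. GNSS time) against which sensor captures are aligned so that sensor units of different vehicles detect the environment at the same defined instants; the period and function names are hypothetical.

```python
# Illustrative sketch, assuming the "time signal" is a shared absolute timestamp
# used to trigger sensor devices at defined instants; names/values are assumed.
import time

SENSOR_PERIOD_S = 0.1   # capture the environment every 100 ms, hypothetical value

def next_trigger(shared_time_s: float, period_s: float = SENSOR_PERIOD_S) -> float:
    """Align the next capture to the common time grid so that sensor units of
    different vehicles sample the environment at the same defined instants."""
    return (int(shared_time_s / period_s) + 1) * period_s

shared_time = time.time()             # stand-in for the provided time signal
print("trigger sensors at t =", next_trigger(shared_time))
```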

Sensor system for vehicle
11584315 · 2023-02-21

A sensor system for a vehicle includes a central module and a plurality of sub modules mounted in a frame of the vehicle, the sub modules being independently removable. The sub modules include sensors configured to capture image data and distance data in a vicinity of the vehicle. The central module is connected to each of the plurality of sub modules through a first network including a switching hub. The sub modules are individually connected to an external processor through a second network. The central module is configured to synchronize the sub modules based on absolute time information through the first network, and the sub modules are configured to output the captured image data and distance data appended with synchronized time information to the external processor by communicating through the second network.
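The sketch below is an assumed rendering of this data path: the central module distributes absolute time over the first network, and each sub module appends the synchronized time to its captured image and distance data before sending it to the external processor over the second network. All class and method names are illustrative.

```python
# Rough sketch of the described data path; class and method names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubModule:
    name: str
    clock_offset_s: float = 0.0           # correction received over the first network

    def synchronize(self, absolute_time_s: float, local_time_s: float) -> None:
        self.clock_offset_s = absolute_time_s - local_time_s

    def capture(self, local_time_s: float) -> dict:
        # Payloads are placeholders; the timestamp is the synchronized (absolute)
        # time appended to the captured image and distance data.
        return {"source": self.name,
                "timestamp": local_time_s + self.clock_offset_s,
                "image": None, "distance": None}

@dataclass
class CentralModule:
    sub_modules: List[SubModule] = field(default_factory=list)

    def broadcast_time(self, absolute_time_s: float, local_times: dict) -> None:
        for sm in self.sub_modules:        # over the first network (switching hub)
            sm.synchronize(absolute_time_s, local_times[sm.name])

# Sub modules then push their stamped frames to the external processor over the second network.
```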

System of configuring active lighting to indicate directionality of an autonomous vehicle

Systems, apparatus and methods may be configured to implement actively-controlled light emission from a robotic vehicle. A light emitter(s) of the robotic vehicle may be configurable to indicate a direction of travel of the robotic vehicle and/or display information (e.g., a greeting, a notice, a message, a graphic, passenger/customer/client content, vehicle livery, customized livery) using one or more colors of emitted light (e.g., orange for a first direction and purple for a second direction), one or more sequences of emitted light (e.g., a moving image/graphic), or positions of light emitter(s) on the robotic vehicle (e.g., symmetrically positioned light emitters). The robotic vehicle may not have a front or a back (e.g., a trunk/a hood) and may be configured to travel bi-directionally, in a first direction or a second direction (e.g., opposite the first direction), with the direction of travel being indicated by one or more of the light emitters.
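As a small, assumed configuration sketch, the direction of travel could map to an emitted color and a set of emitter positions, mirroring the orange/purple example in the abstract; the emitter identifiers are hypothetical.

```python
# Hypothetical configuration: select emitted color and emitter positions to
# indicate the bi-directional vehicle's current direction of travel.
DIRECTION_LIGHTING = {
    "first_direction":  {"color": "orange", "emitters": ["end_A_left", "end_A_right"]},
    "second_direction": {"color": "purple", "emitters": ["end_B_left", "end_B_right"]},
}

def lighting_for(direction: str) -> dict:
    return DIRECTION_LIGHTING[direction]
```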

Super-resolution radar for autonomous vehicles
11587204 · 2023-02-21

Examples disclosed herein relate to an autonomous driving system in a vehicle. The autonomous driving system includes a radar system configured to detect a target in a path and a surrounding environment of the vehicle and to produce radar data with a first resolution that is gathered over a continuous field of view on the detected target. The system includes a super-resolution network configured to receive the radar data with the first resolution and produce radar data with a second resolution different from the first resolution using first neural networks. The system also includes a target identification module configured to receive the radar data with the second resolution and to identify the detected target from the radar data with the second resolution using second neural networks. Other examples disclosed herein include a method of operating the radar system in the autonomous driving system of the vehicle.
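A minimal PyTorch sketch of the two-stage pipeline, under the assumption that both stages are small convolutional networks: a super-resolution network maps radar data from the first resolution to the second, and a target identification network classifies the detected target from that output. Layer sizes, the upscale factor, and the class count are assumptions.

```python
# Assumed two-stage pipeline: super-resolution network followed by target identification.
import torch
import torch.nn as nn

class SuperResolutionNet(nn.Module):
    """Maps radar data at a first resolution to a second, higher resolution."""
    def __init__(self, upscale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="bilinear", align_corners=False),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TargetIdentificationNet(nn.Module):
    """Classifies the detected target from the higher-resolution radar data."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

low_res = torch.randn(1, 1, 32, 32)            # radar data at the first resolution
high_res = SuperResolutionNet()(low_res)       # radar data at the second resolution
target_logits = TargetIdentificationNet()(high_res)
```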

Method for determining the position of a vehicle

A method is described for determining the position of a vehicle equipped with a radar system that includes at least one radar sensor adapted to receive radar signals emitted from at least one radar emitter of the radar system and reflected back to the radar sensor. The method comprises: acquiring at least one radar scan comprising a plurality of radar detection points, wherein each radar detection point is evaluated from a radar signal received at the radar sensor and represents a location in the vicinity of the vehicle; determining, from a database, a predefined map, wherein the map comprises at least one element representing a static landmark in the vicinity of the vehicle; matching at least a subset of the plurality of radar detection points of the at least one scan with the at least one element of the map; and determining the position of the vehicle based on the matching.
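A simplified Python sketch of the matching step, assuming 2D detection points, point landmarks, and a translation-only position correction; the actual method may use a richer transform and association scheme.

```python
# Simplified landmark-matching sketch; 2D points and translation-only correction assumed.
import numpy as np

def match_and_localize(detections_xy: np.ndarray,
                       landmarks_xy: np.ndarray,
                       max_dist: float = 5.0) -> np.ndarray:
    """Associate each radar detection with its nearest map landmark and estimate
    the vehicle position offset as the mean residual of the matched pairs."""
    offsets = []
    for d in detections_xy:
        dists = np.linalg.norm(landmarks_xy - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:                 # keep only plausible matches
            offsets.append(landmarks_xy[j] - d)
    return np.mean(offsets, axis=0) if offsets else np.zeros(2)

landmarks = np.array([[10.0, 0.0], [0.0, 15.0], [-8.0, -3.0]])
detections = landmarks + np.array([1.0, -0.5])   # scan shifted by an unknown pose error
print(match_and_localize(detections, landmarks)) # ~[-1.0, 0.5]: correction to apply
```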

Geographically disparate sensor fusion for enhanced target detection and identification in autonomous vehicles
20230052240 · 2023-02-16

Examples disclosed herein relate to an autonomous driving system in an ego vehicle. The autonomous driving system includes a radar system configured to detect and identify a target in a path and a surrounding environment of the ego vehicle. The autonomous driving system also includes a sensor fusion module configured to receive radar data on the identified target from the radar system and compare the identified target with one or more targets identified by a plurality of perception sensors that are geographically disparate from the radar system. Other examples disclosed herein include a method of operating the radar system in the autonomous driving system of the ego vehicle.
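An illustrative sketch of the comparison performed by the sensor fusion module, assuming each target carries a label and a position in a common frame; the fields and the 2 m association gate are assumptions.

```python
# Illustrative target comparison for fusion with geographically disparate sensors.
from dataclasses import dataclass
from math import dist
from typing import List, Optional

@dataclass
class Target:
    label: str                 # e.g. "pedestrian", "vehicle"
    position: tuple            # (x, y) in a common/global frame
    source: str                # which sensor reported it

def confirm_target(radar_target: Target, remote_targets: List[Target],
                   gate_m: float = 2.0) -> Optional[Target]:
    """Return a remote detection that corroborates the radar-identified target."""
    for t in remote_targets:
        if t.label == radar_target.label and dist(t.position, radar_target.position) <= gate_m:
            return t
    return None

radar_t = Target("pedestrian", (12.0, 3.0), "ego_radar")
remote = [Target("pedestrian", (12.5, 3.4), "infrastructure_camera")]
print(confirm_target(radar_t, remote) is not None)   # True: the targets agree
```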

Deep learning for object detection using pillars
11500063 · 2022-11-15

Among other things, we describe techniques for detecting objects in the environment surrounding a vehicle. A computer system is configured to receive a set of measurements from a sensor of a vehicle. The set of measurements includes a plurality of data points that represent a plurality of objects in a 3D space surrounding the vehicle. The system divides the 3D space into a plurality of pillars. The system then assigns each data point of the plurality of data points to a pillar in the plurality of pillars. The system generates a pseudo-image based on the plurality of pillars. The pseudo-image includes, for each pillar of the plurality of pillars, a corresponding feature representation of data points assigned to the pillar. The system detects the plurality of objects based on an analysis of the pseudo-image. The system then operates the vehicle based upon the detection of the objects.
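A rough Python sketch of the pillar step, assuming (x, y, z) points and a simple hand-crafted per-pillar feature (mean height and point count) in place of a learned feature representation; it only illustrates how the 3D space is divided into pillars and turned into a pseudo-image for a downstream 2D detection network.

```python
# Assumed pseudo-image construction: divide the x-y plane into pillars, assign
# points to pillars, and emit a per-pillar feature map.
import numpy as np

def build_pseudo_image(points: np.ndarray,
                       x_range=(-40.0, 40.0), y_range=(-40.0, 40.0),
                       pillar_size: float = 0.5) -> np.ndarray:
    """Return an (H, W, 2) pseudo-image of [mean z, point count] per pillar."""
    w = int((x_range[1] - x_range[0]) / pillar_size)
    h = int((y_range[1] - y_range[0]) / pillar_size)
    pseudo = np.zeros((h, w, 2), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.int32)
    for x, y, z in points:
        i = int((y - y_range[0]) / pillar_size)
        j = int((x - x_range[0]) / pillar_size)
        if 0 <= i < h and 0 <= j < w:
            counts[i, j] += 1
            # running mean of point heights within this pillar
            pseudo[i, j, 0] += (z - pseudo[i, j, 0]) / counts[i, j]
    pseudo[..., 1] = counts
    return pseudo   # fed to a 2D detection network to find the objects

points = np.random.uniform(-40, 40, size=(1000, 3)) * np.array([1, 1, 0.05])
print(build_pseudo_image(points).shape)   # (160, 160, 2)
```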