G01S13/865

Motorized Mounting Device for Positioning an Optical Element Within a Field-of-View of an Optical Sensor and Method of Use
20230027882 · 2023-01-26

A mounting device for selectively positioning an optical element within a field-of-view of an optical sensor of a vehicle includes: a housing defining an opening sized to fit over an aperture of the optical sensor; a holder for the optical element connected to the housing and positioned such that, when the holder is in a first position, the optical element is at least partially within the field-of-view of the optical sensor; and a motorized actuator. The motorized actuator can be configured to move the holder to adjust the position of the optical element relative to the field-of-view of the optical sensor.

OPTIMIZED MULTICHANNEL OPTICAL SYSTEM FOR LIDAR SENSORS
20230023043 · 2023-01-26

The subject matter of this specification can be implemented in, among other things, systems and methods of optical sensing that utilize optimized processing of multiple sensing channels for efficient and reliable scanning of environments. The optical sensing system includes multiple optical communication lines with coupling portions configured to facilitate efficient collection of various received beams. The system further includes multiple light detectors configured to process the collected beams and produce data representative of the velocity of the object that reflected a received beam and/or the distance to that object.

INNOVATIVE METHOD FOR THE DETECTION OF DEFORMED OR DAMAGED STRUCTURES BASED ON THE USE OF SINGLE SAR IMAGES

The invention concerns a method (1) to detect deformations of, and/or damages to, structures permanently arranged on the earth's surface. In particular, said method (1) comprises: acquiring (11) georeferencing data indicative of geographical reference positions of predefined points of interest of a given structure to be monitored permanently arranged on the earth's surface, wherein said predefined points of interest are representative of a 3D geometry of the given structure without deformations and damages; acquiring (12) a SAR image of an area of the earth's surface where the given structure is arranged, wherein said SAR image is associated with a given reference coordinate system; transforming (13) the geographical reference positions of the predefined points of interest into corresponding expected positions in the given reference coordinate system associated with the acquired SAR image so as to carry out a reprojection of the 3D geometry of the given structure without deformations and damages on the acquired SAR image; identifying (14) in the acquired SAR image the predefined points of interest of the given structure; determining (15) actual positions in the given reference coordinate system associated with the acquired SAR image of the predefined points of interest identified in said SAR image; making a comparison (16) between the expected positions of the predefined points of interest and the corresponding actual positions in the acquired SAR image; and detecting (17) one or more deformations of, and/or one or more damages to, said given structure on the basis of the comparison made.
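The comparison steps (15)-(17) can be sketched as follows. This is a minimal illustration, not the patented method: the projection function `geo_to_image` and the pixel threshold are assumptions standing in for the SAR reprojection described in step (13).

```python
import numpy as np

def detect_deformation(ref_points_geo, geo_to_image, detected_points_px,
                       threshold_px=2.0):
    """Flag points of interest whose observed position in the SAR image
    deviates from the reprojected reference position by more than
    threshold_px.

    ref_points_geo     : (N, 3) georeferenced reference positions
    geo_to_image       : callable mapping (N, 3) geo coords -> (N, 2) pixels
    detected_points_px : (N, 2) positions actually identified in the image
    """
    expected_px = geo_to_image(np.asarray(ref_points_geo, dtype=float))
    detected_px = np.asarray(detected_points_px, dtype=float)
    # Per-point displacement between expected and actual image positions
    residuals = np.linalg.norm(detected_px - expected_px, axis=1)
    return residuals > threshold_px, residuals
```

A point whose residual exceeds the threshold is reported as a candidate deformation or damage site.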

METHOD AND APPARATUS FOR COORDINATING MULTIPLE COOPERATIVE VEHICLE TRAJECTORIES ON SHARED ROAD NETWORKS

A vehicle coordination system is provided for coordinating the trajectories of vehicles on a road network. The vehicle coordination system comprises a plurality of vehicles, each having a vehicle position tracking assembly in communication with a vehicle communication system for transmitting vehicle state messages that include the positions of the vehicles. A task assignment allocator is arranged to generate task assignments for each of the plurality of vehicles, including destinations in the road network for the vehicles. A vehicle coordination assembly is in communication with the vehicle communication systems via a data network for receiving the vehicle state messages. The vehicle coordination assembly is configured to determine a path for each vehicle to its destination and trajectory control commands for each vehicle to traverse its path while optimizing a predetermined objective and avoiding interactions between two or more vehicles in any shared areas of the paths. The vehicle coordination assembly transmits the trajectory control commands to each vehicle. The predetermined objective may be an aggregate traversal time for the vehicles.
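One simple way to avoid same-time interactions in shared areas is greedy priority scheduling. The sketch below is an assumption, not the patented optimizer: vehicles advance one node per time step along fixed paths, and a lower-priority vehicle delays its departure until its whole path is free of same-time, same-node conflicts with vehicles already scheduled.

```python
def schedule_departures(paths):
    """paths: dict vehicle_id -> list of nodes, one node per time step.
    Dict insertion order gives priority. Returns departure delays (in
    time steps) that remove all same-time, same-node conflicts."""
    occupied = set()                     # reserved (time_step, node) pairs
    starts = {}
    for vid, path in paths.items():
        delay = 0
        # Push the departure back until the whole timed path is free
        while any((delay + i, node) in occupied
                  for i, node in enumerate(path)):
            delay += 1
        occupied.update((delay + i, node) for i, node in enumerate(path))
        starts[vid] = delay
    return starts
```

A real coordinator would also handle swap conflicts on shared edges and minimize the aggregate traversal time rather than scheduling greedily; this only illustrates the reservation idea.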

VEHICLE AND CONTROL METHOD THEREOF

A vehicle includes a front camera, a front radar, a corner radar, a corner LiDAR, and a controller configured to operate in a first fusion mode that processes image data and radar data, or in a second fusion mode that processes radar data and LiDAR data. When the controller detects an abnormality of the front camera while performing avoidance control of the vehicle based on the first fusion mode, it switches from the first fusion mode to the second fusion mode and performs the avoidance control based on the second fusion mode for a predetermined time period.
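The fallback behavior amounts to a small state machine. A minimal sketch, assuming a fixed hold period and an injectable clock (the mode names and timing are illustrative, not from the patent):

```python
import time

FIRST_FUSION = "camera+radar"    # image data + radar data
SECOND_FUSION = "radar+lidar"    # radar data + LiDAR data

class AvoidanceController:
    """If the front camera reports an abnormality while avoidance control
    runs in the first fusion mode, switch to the second fusion mode and
    hold it for a fixed period before considering a return."""

    def __init__(self, hold_seconds=3.0, clock=time.monotonic):
        self.mode = FIRST_FUSION
        self.hold_seconds = hold_seconds
        self.clock = clock
        self._fallback_until = None

    def update(self, camera_ok: bool) -> str:
        now = self.clock()
        if self.mode == FIRST_FUSION and not camera_ok:
            # Camera abnormality: fall back to radar + LiDAR fusion
            self.mode = SECOND_FUSION
            self._fallback_until = now + self.hold_seconds
        elif self.mode == SECOND_FUSION and self._fallback_until is not None \
                and now >= self._fallback_until:
            # Hold period elapsed: return only if the camera has recovered
            if camera_ok:
                self.mode = FIRST_FUSION
                self._fallback_until = None
            else:
                self._fallback_until = now + self.hold_seconds
        return self.mode
```

The injectable clock makes the hold period testable without real delays.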

Multi-model switching on a collision mitigation system

Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes receiving data indicative of an operating mode of the vehicle, wherein the vehicle is configured to operate in a plurality of operating modes. The method includes determining one or more response characteristics of the vehicle based at least in part on the operating mode of the vehicle, each response characteristic indicating how the vehicle responds to a potential collision. The method includes controlling the vehicle based at least in part on the one or more response characteristics.
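The mode-dependent lookup can be sketched as a table from operating mode to response characteristics. The mode names and numeric values below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponseCharacteristics:
    """How the vehicle responds to a potential collision."""
    brake_decel_mps2: float      # commanded deceleration magnitude
    reaction_delay_s: float      # time before braking begins
    allow_swerve: bool           # whether lateral avoidance is permitted

# Hypothetical mapping; the patent does not name modes or values.
RESPONSE_BY_MODE = {
    "autonomous":   ResponseCharacteristics(6.0, 0.1, True),
    "teleoperated": ResponseCharacteristics(4.0, 0.3, False),
    "manual":       ResponseCharacteristics(3.0, 0.5, False),
}

def select_response(operating_mode: str) -> ResponseCharacteristics:
    """Look up the response characteristics for the current operating
    mode, defaulting to the hardest-braking (most conservative) entry
    for an unrecognized mode."""
    return RESPONSE_BY_MODE.get(
        operating_mode,
        max(RESPONSE_BY_MODE.values(), key=lambda r: r.brake_decel_mps2),
    )
```

The conservative default reflects a common safety-engineering choice: an unknown mode should never weaken the collision response.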

System and method of providing a multi-modal localization for an object
11561553 · 2023-01-24

An example method includes gathering, via a first module of a first type, first simultaneous localization and mapping data, and gathering, via a second module of a second type, second simultaneous localization and mapping data. The method includes generating, via a simultaneous localization and mapping module, a first map of a first map type based on the first and second simultaneous localization and mapping data, and generating, via the same module, a second map of a second map type based on the same data. The map of the first type is used by vehicles with modules of the first and/or second types, while the map of the second type is used exclusively by vehicles with a module of the second type.
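The access rule in the last sentence can be captured as a small lookup. The module-type names are hypothetical, since the patent does not name the types:

```python
# Hypothetical names for the two module types and two map types.
MAP_TYPE_ACCESS = {
    "first_map_type":  {"first_module", "second_module"},  # either type unlocks it
    "second_map_type": {"second_module"},                  # second type exclusively
}

def maps_usable_by(vehicle_module_types):
    """Return the map types a vehicle may use, given the set of SLAM
    module types it carries: a map is usable when the vehicle has at
    least one module of an allowed type."""
    return [map_type for map_type, allowed in MAP_TYPE_ACCESS.items()
            if vehicle_module_types & allowed]
```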

SITUATIONAL AWARENESS SYSTEM FOR AN AUTONOMOUS OR SEMI-AUTONOMOUS VEHICLE

A situational awareness system is provided for a vehicle comprising a cyber-physical system. The situational awareness system is configured to generate an imaging dataset for processing by the cyber-physical system, enabling a semi-autonomous or autonomous operational mode of the vehicle. The situational awareness system includes a sensory system with a first electro-optical unit for imaging the surroundings of the vehicle, a second electro-optical unit for imaging a ground area in the direct vicinity of the vehicle, a radar unit for detecting objects, and a third electro-optical unit for object identification. The situational awareness system further includes a data synchronization system configured to synchronize the imaging datasets obtained by means of each unit of the sensory system and to provide the synchronized imaging dataset to the cyber-physical system of the vehicle.
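One common way to synchronize frames from heterogeneous sensor units is nearest-timestamp matching against a reference stream. This is a sketch under that assumption, not the patented synchronization system; the stream names and tolerance are illustrative:

```python
import bisect

def synchronize(streams, tolerance_s=0.05):
    """streams: dict unit_name -> non-empty list of (timestamp, frame),
    sorted by timestamp. For each timestamp of the first stream (the
    reference), pick the nearest frame from every other stream; drop the
    set if any match is farther than tolerance_s. Returns a list of
    dicts, one per synchronized frame set."""
    names = list(streams)
    ref_name = names[0]
    out = []
    for t, frame in streams[ref_name]:
        group = {ref_name: frame}
        for name in names[1:]:
            ts = [s[0] for s in streams[name]]
            i = bisect.bisect_left(ts, t)
            # Candidates: the neighbors straddling t
            cands = [j for j in (i - 1, i) if 0 <= j < len(ts)]
            j = min(cands, key=lambda j: abs(ts[j] - t))
            if abs(ts[j] - t) > tolerance_s:
                group = None             # no close-enough frame: drop set
                break
            group[name] = streams[name][j][1]
        if group is not None:
            out.append(group)
    return out
```

A production system would index the streams instead of rescanning them, but the matching rule is the same.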

Semantic segmentation of radar data

Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment.
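The binning and channel-generation steps can be sketched as follows. The field-of-view bounds, bin count, and the keep-nearest-return rule are assumptions for illustration, not details from the patent:

```python
import numpy as np

def bin_returns(ranges, azimuths, fov_deg=(-60.0, 60.0), n_bins=64):
    """Assign each radar return to an azimuth bin across the sensor's
    field of view and build per-bin (range, azimuth) channels suitable
    as input to a segmentation model. Empty bins hold NaN."""
    lo, hi = fov_deg
    edges = np.linspace(lo, hi, n_bins + 1)
    # Bin index for each return, clipped into the valid range
    idx = np.clip(np.digitize(azimuths, edges) - 1, 0, n_bins - 1)
    channels = np.full((n_bins, 2), np.nan)
    for i, (r, a) in zip(idx, zip(ranges, azimuths)):
        # Keep only the nearest return per bin (illustrative choice)
        if np.isnan(channels[i, 0]) or r < channels[i, 0]:
            channels[i] = (r, a)
    return channels
```

The resulting (n_bins, 2) array is the kind of per-bin channel tensor the abstract describes feeding into a machine-learned segmentation model.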

Systems and methods to determine risk distribution based on sensor coverages of a sensor system for an autonomous driving vehicle
11702104 · 2023-07-18

Systems and methods of determining a risk distribution associated with a multiplicity of coverage zones covered by a multiplicity of sensors of an autonomous driving vehicle (ADV) are disclosed. For each coverage zone covered by at least one sensor of the ADV, the method obtains mean-time-between-failures (MTBF) data of the sensor(s) covering the coverage zone and determines an MTBF of the coverage zone based on that data. The method further computes a performance risk associated with the coverage zone based on the determined MTBF, and determines a risk distribution based on the computed performance risks across the coverage zones.
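A minimal sketch of the computation, under two stated assumptions that are not from the patent: a zone is treated as degraded as soon as any covering sensor fails (series model, so failure rates add), and the per-zone risk is taken as the zone failure rate normalized across zones:

```python
def zone_mtbf(sensor_mtbfs):
    """Series-model MTBF (assumption) for a zone covered by several
    sensors: failure rates (1/MTBF) add, so the combined MTBF is the
    harmonic combination of the sensor MTBFs."""
    return 1.0 / sum(1.0 / m for m in sensor_mtbfs)

def risk_distribution(zones):
    """zones: dict zone_id -> list of covering-sensor MTBFs (hours).
    Performance risk per zone taken as the zone failure rate, normalized
    so the risks across all zones sum to 1."""
    rates = {z: 1.0 / zone_mtbf(ms) for z, ms in zones.items()}
    total = sum(rates.values())
    return {z: r / total for z, r in rates.items()}
```

Under this model, a zone covered by two sensors of equal MTBF carries twice the failure rate of a zone covered by one such sensor, and therefore twice the normalized risk.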