G01S17/90

Transferring synthetic lidar system data to real world domain for autonomous vehicle training applications
11734935 · 2023-08-22

Methods and systems are disclosed for correlating synthetic LiDAR data to a real-world domain for use in training a model used by an autonomous vehicle when operating in an environment. To do this, the system will obtain a data set of synthetic LiDAR data, along with images of a real-world environment. The system will transfer the synthetic LiDAR data to a two-dimensional representation, then use the two-dimensional representation and the images to train a model that a vehicle can use to operate in a real-world environment.
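
The abstract does not specify which two-dimensional representation is used; a bird's-eye-view occupancy grid is one common choice for pairing LiDAR with camera imagery. A minimal sketch, assuming that representation (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def lidar_to_bev(points, x_range=(-40.0, 40.0), y_range=(-40.0, 40.0), cell=0.2):
    """Project an (N, 3) LiDAR point cloud (x, y, z in metres) onto a
    2-D bird's-eye-view occupancy grid with the given cell size."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.uint8)
    # Keep only points that fall inside the grid extent.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    xs = ((points[m, 0] - x_range[0]) / cell).astype(int)
    ys = ((points[m, 1] - y_range[0]) / cell).astype(int)
    grid[ys, xs] = 1  # mark occupied cells
    return grid

pts = np.array([[0.0, 0.0, 0.1], [10.0, -5.0, 0.3], [100.0, 0.0, 0.0]])
bev = lidar_to_bev(pts)
print(bev.shape, int(bev.sum()))  # (400, 400) 2 — the 100 m point falls outside
```

The resulting image-like grid can be fed to the same convolutional models that consume camera imagery, which is what makes this projection useful for joint training.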

DETECTOR WITH DEFLECTING ELEMENTS FOR COHERENT IMAGING
20210364813 · 2021-11-25

A detection device for a coherent imaging system includes a detector comprising a matrix array of pixels, each pixel comprising a photodetector component with a photosensitive surface. The detector is designed to be illuminated by a coherent beam, called the image beam, consisting of grains of light called speckle grains. A matrix array of transmissive deflecting elements, each associated with a group of one or more pixels, is configured to be individually orientable by means of an electrical signal, so as to deflect a fraction of the image beam incident on the group and thus modify the spatial distribution of the speckle grains in the plane of the photosensitive surface. Each group further comprises a feedback loop, comprising a feedback circuit, associated with the deflecting element and configured to actuate it so as to optimize the signal-to-noise ratio of the light detected by the group's photodetector components.
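
The feedback loop's job, abstracted away from the detector physics, is to steer a deflecting element toward the orientation that maximizes measured SNR. A hypothetical sketch of that control idea as a simple hill-climbing loop (the `snr` model below is an invented stand-in, not the patent's optics):

```python
def optimize_deflector(snr, angle=0.0, step=0.1, iters=50):
    """Nudge the deflector angle, keeping a move only if SNR improves."""
    best = snr(angle)
    for _ in range(iters):
        for delta in (step, -step):
            cand = angle + delta
            s = snr(cand)
            if s > best:          # keep the nudge only if SNR improved
                angle, best = cand, s
                break
        else:
            step *= 0.5           # no improvement either way: refine the step

    return angle, best

# Toy SNR landscape that peaks when a speckle grain is centred on the pixel.
peak = 0.42
angle, best = optimize_deflector(lambda a: 1.0 / (1.0 + (a - peak) ** 2))
print(round(angle, 2))  # 0.42
```

In the patented device this loop would run per group of pixels, so each deflecting element converges independently on its own local optimum.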

Vehicle localization using cameras

According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down LIDAR intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
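
The abstract leaves the comparison method unspecified; one plain way to realize it is zero-mean template matching, sliding the camera-derived top-down patch over the stored intensity map and taking the best-scoring offset as the vehicle position. A sketch under that assumption:

```python
import numpy as np

def locate(patch, grid_map):
    """Return the (x, y) offset in grid_map where patch matches best,
    scored by zero-mean cross-correlation."""
    ph, pw = patch.shape
    mh, mw = grid_map.shape
    best, best_xy = -np.inf, (0, 0)
    p = patch - patch.mean()
    for y in range(mh - ph + 1):
        for x in range(mw - pw + 1):
            w = grid_map[y:y+ph, x:x+pw]
            score = np.sum(p * (w - w.mean()))  # zero-mean correlation
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy

m = np.zeros((20, 20))
m[5:8, 12:15] = 1.0                 # a distinctive road marking in the map
print(locate(m[4:10, 11:17], m))    # (11, 4) — recovers the patch's true offset
```

A production matcher would use FFT-based correlation or feature matching for speed, but the structure — render top-down view, score against map, read off position — is the same.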

Processing of multispectral sensors for autonomous flight

A system and method are disclosed for design of a suite of multispectral (MS) sensors and processing of enhanced data streams produced by the sensors for autonomous aircraft flight. The suite of MS sensors is specifically configured to produce data streams for processing by an autonomous aircraft object identification and positioning system processor. Multiple, diverse MS sensors image naturally occurring or artificial features (towers, buildings, etc.) and produce data streams containing details which are routinely processed by the object identification and positioning system yet would be unrecognizable to a human pilot. The object identification and positioning system correlates MS sensor output with a priori information stored onboard to determine position and trajectory of the autonomous aircraft. Once position and trajectory are known, the object identification and positioning system sends the data to the autonomous aircraft flight management system for autopilot control of the autonomous aircraft.
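
The core correlation step — matching sensed landmarks against an onboard a priori database to fix position — can be illustrated with a deliberately simplified model (all names and numbers below are invented; the patent's actual processing is not specified in the abstract):

```python
import numpy as np

def estimate_position(landmarks, relative_obs):
    """landmarks: (N, 2) known world positions from the onboard database;
    relative_obs: (N, 2) the same landmarks as measured relative to the
    aircraft. Each row of (landmarks - relative_obs) is one estimate of
    the aircraft position; their mean is the least-squares solution."""
    return (landmarks - relative_obs).mean(axis=0)

towers = np.array([[100.0, 200.0], [300.0, 50.0], [250.0, 400.0]])
true_pos = np.array([120.0, 80.0])
rng = np.random.default_rng(0)
obs = towers - true_pos + rng.normal(0.0, 0.5, size=(3, 2))  # noisy sensing
pos = estimate_position(towers, obs)
print(np.round(pos, 1))  # close to the true position (120, 80)
```

Averaging over several landmarks suppresses per-sensor noise, which is one reason a suite of diverse MS sensors helps: more independent feature observations tighten the position fix.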

Encoding LiDAR scanned data for generating high definition maps for autonomous vehicles
11754716 · 2023-09-12

Embodiments relate to methods for efficiently encoding sensor data captured by an autonomous vehicle and building a high definition map using the encoded sensor data. The sensor data can be LiDAR data which is expressed as multiple image representations. Image representations that include important LiDAR data undergo a lossless compression, while image representations that include LiDAR data that is more error-tolerant undergo a lossy compression. The compressed sensor data can then be transmitted to an online system for building a high definition map. When building a high definition map, entities, such as road signs and road lines, are constructed such that when encoded and compressed, the high definition map consumes less storage space. The positions of entities are expressed in relation to a reference centerline in the high definition map. As a result, each position of an entity can be expressed in fewer numerical digits than in conventional methods.
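
The centerline-relative idea can be sketched concretely: an absolute map coordinate needs many digits, but the offset from a nearby centerline vertex is small and compresses well. A minimal sketch, with invented field names and toy coordinates (not the patent's actual encoding):

```python
def encode(entity_xy, centerline):
    """Return (index of nearest centerline vertex, small residual offset)."""
    i = min(range(len(centerline)),
            key=lambda k: (centerline[k][0] - entity_xy[0]) ** 2 +
                          (centerline[k][1] - entity_xy[1]) ** 2)
    cx, cy = centerline[i]
    return i, (round(entity_xy[0] - cx, 2), round(entity_xy[1] - cy, 2))

def decode(i, offset, centerline):
    cx, cy = centerline[i]
    return (cx + offset[0], cy + offset[1])

# UTM-scale coordinates: storing them absolutely costs ~7 digits per value.
line = [(628000.0 + 5 * k, 4136000.0) for k in range(100)]
sign = (628012.3, 4136001.8)               # a road sign's absolute position
i, off = encode(sign, line)
print(i, off)  # 2 (2.3, 1.8) — two small numbers instead of seven-digit ones
```

Because most entities cluster near the road, the residuals stay small across the whole map, which is what makes the subsequent compression effective.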

Apparatus and method for scanning and ranging with eye-safe pattern

An optical apparatus comprises a light source configured to emit light composed of a sequence of shots, and a steering device optically coupled to the light source. The steering device is configured to steer the shots emitted by the light source in accordance with a predefined scan pattern such that at least one intermediate shot is emitted by the light source between a first shot directed by the steering device within an aperture defined by an eye-safety regulation and a subsequent, second shot directed within the same aperture, each intermediate shot being directed by the steering device outside the aperture.
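
One simple way to realize such a pattern — an assumption for illustration, not the patent's specific scheme — is to visit scan angles in a strided order, so that any two shots landing in the same eye-safety aperture are separated in time by shots directed elsewhere:

```python
import math

def eye_safe_order(n_angles, stride):
    """Return a shot order covering all n_angles angles with a fixed stride."""
    assert math.gcd(n_angles, stride) == 1, "stride must visit every angle"
    return [(k * stride) % n_angles for k in range(n_angles)]

order = eye_safe_order(10, 3)
print(order)  # [0, 3, 6, 9, 2, 5, 8, 1, 4, 7]

# Adjacent angles (which would share a small aperture) are never shot
# back to back: there are always intermediate shots between them.
gaps = [abs(order.index(i) - order.index(i + 1)) for i in range(9)]
print(min(gaps))  # 3 — at least two intermediate shots between neighbours
```

The stride spreads the delivered energy per aperture over time, which is the mechanism by which the scan stays within the per-aperture exposure limits.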