G01C21/28

Audio Processing Apparatus
20230213349 · 2023-07-06

An apparatus configured to: determine, with a position sensor, position information; determine at least one keyword within at least one audio signal, wherein at least the at least one keyword is configured to be spatially processed; obtain at least one spatial processing parameter based at least partially on the position information, wherein the at least one spatial processing parameter is configured to be used to spatially process at least the at least one keyword to be perceived from a direction during rendering, wherein the direction indicates a navigation direction; generate at least one processed audio signal, comprising processing at least the at least one keyword based on the at least one spatial processing parameter; and provide the at least one processed audio signal, comprising the at least one processed keyword, for generation of a virtual audio image.
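The spatial processing the abstract describes could take many forms; a minimal sketch, assuming a simple interaural time/level difference model as a stand-in for full HRTF rendering, might pan a mono keyword segment toward the navigation direction like this (all names and the head-radius constant are illustrative, not from the patent):

```python
import numpy as np

def spatialize_keyword(keyword_samples, azimuth_deg, sample_rate=48000):
    """Render a mono keyword segment so it is perceived from azimuth_deg
    (0 = straight ahead, positive = right) using a crude interaural
    time/level difference model."""
    azimuth = np.deg2rad(azimuth_deg)
    # Interaural time difference (Woodworth model, ~0.09 m head radius)
    head_radius, speed_of_sound = 0.09, 343.0
    itd = head_radius / speed_of_sound * (azimuth + np.sin(azimuth))
    delay = int(round(abs(itd) * sample_rate))
    # Interaural level difference via constant-power panning
    pan = min(max((azimuth_deg / 90.0 + 1.0) / 2.0, 0.0), 1.0)
    left_gain = np.cos(pan * np.pi / 2)
    right_gain = np.sin(pan * np.pi / 2)
    # Delay the ear farther from the source
    left = np.concatenate([np.zeros(delay if itd > 0 else 0), keyword_samples])
    right = np.concatenate([np.zeros(delay if itd < 0 else 0), keyword_samples])
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left))) * left_gain
    right = np.pad(right, (0, n - len(right))) * right_gain
    return np.stack([left, right], axis=1)  # stereo (n, 2) signal
```

A keyword such as "left" could then be rendered at the azimuth of the upcoming turn while the rest of the audio signal is passed through unchanged.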

METHOD FOR OBTAINING CONFIDENCE OF MEASUREMENT VALUE BASED ON MULTI-SENSOR FUSION AND AUTONOMOUS VEHICLE
20230213343 · 2023-07-06

The present disclosure provides a method for obtaining confidence of a measurement value based on multi-sensor fusion and an autonomous vehicle, which includes that: a first measurement value position of a positioning component on a target vehicle is determined at a first moment, and a second measurement value position of the positioning component is determined at a second moment, where the first moment is earlier than the second moment; first distance information is acquired according to the first measurement value position and the second measurement value position; inertial measurement information and wheel speedometer information of the target vehicle from the first moment to the second moment are determined; second distance information is acquired based on the inertial measurement information and the wheel speedometer information; and confidence of a target measurement value corresponding to the second moment is acquired according to the first distance information and the second distance information.
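The core of the method is comparing two independently derived distances over the same interval. A minimal sketch, assuming the first distance comes from the two measurement-value positions and the second from integrating wheel-speedometer readings, and using an exponential agreement score as one plausible confidence mapping (the function names and the tolerance parameter are illustrative):

```python
import math

def dead_reckoned_distance(wheel_speeds, dt):
    """Second distance information: integrate wheel-speedometer readings
    (m/s) sampled every dt seconds between the two moments."""
    return sum(v * dt for v in wheel_speeds)

def measurement_confidence(pos1, pos2, dr_distance, tolerance=0.5):
    """Confidence of the target measurement value at the second moment.

    pos1, pos2  -- (x, y) measurement-value positions at the first and
                   second moments (first earlier than second)
    dr_distance -- distance dead-reckoned from IMU/wheel data over the
                   same interval
    Returns a value in (0, 1]: 1 when the two distances agree exactly,
    decaying toward 0 as they diverge."""
    first_distance = math.dist(pos1, pos2)   # from the two positions
    disagreement = abs(first_distance - dr_distance)
    return math.exp(-disagreement / tolerance)
```

In practice the IMU would also contribute heading/attitude corrections to the dead-reckoned distance; the sketch keeps only the wheel-speed integration for brevity.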

Technologies for providing guidance for autonomous vehicles in areas of low network connectivity

Techniques are disclosed herein for providing guidance for autonomous vehicles in areas of low network connectivity, such as rural areas. According to an embodiment, a guidance system receives a request to exchange data with a vehicle within a specified radius thereof over a wireless connection (e.g., a radio frequency protocol-based connection). The data is stored by the guidance system and is indicative of navigation information within the specified radius. The guidance system transmits the stored data to the vehicle. The guidance system also receives, from the vehicle, data indicative of navigation information for a path previously passed by the vehicle.
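The two-way exchange could be sketched as a roadside guidance node that serves stored navigation records within a requested radius and absorbs records reported by vehicles for paths they have passed. All class and field names below are illustrative assumptions, not from the disclosure:

```python
import math
from dataclasses import dataclass, field

@dataclass
class NavRecord:
    position: tuple   # (x, y) of the road segment the record describes
    info: str         # e.g. "gravel surface", "bridge out"

@dataclass
class GuidanceSystem:
    """Guidance node for low-connectivity areas: serves stored navigation
    records within a requested radius over a local RF link and ingests
    records reported by vehicles for previously passed paths."""
    records: list = field(default_factory=list)

    def handle_request(self, vehicle_pos, radius):
        # Transmit stored data indicative of navigation information
        # within the specified radius of the requesting vehicle.
        return [r for r in self.records
                if math.dist(r.position, vehicle_pos) <= radius]

    def ingest_report(self, passed_path_records):
        # Receive navigation information for a path the vehicle has passed.
        self.records.extend(passed_path_records)
```

A vehicle entering the node's range would call `handle_request` for the road ahead and `ingest_report` for the road behind it, so coverage accumulates without any wide-area network.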

Technologies for providing guidance for autonomous vehicles in areas of low network connectivity

Techniques are disclosed herein for providing guidance for autonomous vehicles in areas of low network connectivity, such as rural areas. According to an embodiment, a guidance system receives a request to exchange data with a vehicle within a specified radius thereof over a wireless connection (e.g., a radio frequency protocol-based connection). The data is stored by the guidance system and is indicative of navigation information within the specified radius. The guidance system transmits the stored data to the vehicle. The guidance system also receives, from the vehicle, data indicative of navigation information for a path previously passed by the vehicle.

SYSTEMS AND METHODS FOR RADIO FREQUENCY (RF) RANGING-AIDED LOCALIZATION AND MAP GENERATION
20230213664 · 2023-07-06

Systems, methods, and devices for radio frequency (RF) ranging-aided localization and crowdsourced mapping are provided. In one aspect, a method performed by a user equipment (UE) includes obtaining sensor data comprising first radio frequency (RF) ranging data and imaging data. The method further includes tagging the first RF ranging data with location information and semantic information, wherein the semantic information is based on the imaging data, and wherein the semantic information indicates a first portion of the RF ranging data is associated with a static object type and a second portion of the RF ranging data is associated with a temporary-static object type different from the static object type. The method further includes transmitting, to an RF ranging assistance server, the first RF ranging data tagged with the location information and the semantic information.
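The tagging step could be sketched as below, assuming each raw range measurement is paired with the UE location and an object type derived from the co-captured imaging data. The `classify` callable stands in for an image-based classifier and, like all names here, is a hypothetical illustration:

```python
from dataclasses import dataclass
from enum import Enum

class ObjectType(Enum):
    STATIC = "static"                      # e.g. a building facade
    TEMPORARY_STATIC = "temporary_static"  # e.g. a parked truck

@dataclass
class TaggedRanging:
    range_m: float
    location: tuple        # UE position when the measurement was taken
    object_type: ObjectType

def tag_ranging_data(ranges, location, classify):
    """Tag each raw RF range with location information and a semantic
    object type; classify(i) maps a measurement index to an ObjectType
    inferred from the imaging data."""
    return [TaggedRanging(r, location, classify(i))
            for i, r in enumerate(ranges)]
```

A crowdsourced-mapping server receiving such records could then keep static-object ranges in a long-lived map layer while ageing out the temporary-static ones.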

Systems and methods for self-supervised residual flow estimation

A method includes generating a first warped image based on a pose and a depth estimated from a current image and a previous image in a sequence of images captured by a camera of an agent. The method also includes estimating a motion of a dynamic object between the previous image and the current image. The method further includes generating a second warped image from the first warped image based on the estimated motion. The method still further includes controlling an action of the agent based on the second warped image.
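The two-stage warping can be illustrated with a toy sketch: first warp the previous image with the rigid (ego-motion) flow induced by the estimated pose and depth, then warp the result with the residual flow of the dynamic object. This uses nearest-neighbour sampling on dense 2-D flow fields purely for illustration; the function names and flow representation are assumptions, not the patented implementation:

```python
import numpy as np

def warp(image, flow):
    """Inverse-warp a (H, W) image by a per-pixel (H, W, 2) flow field
    using nearest-neighbour sampling; out-of-bounds pixels become 0."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.rint(xs + flow[..., 0]).astype(int)
    src_y = np.rint(ys + flow[..., 1]).astype(int)
    valid = (0 <= src_x) & (src_x < w) & (0 <= src_y) & (src_y < h)
    out = np.zeros_like(image)
    out[valid] = image[src_y[valid], src_x[valid]]
    return out

def residual_flow_warp(previous, rigid_flow, object_flow):
    """Two-stage warping: first warp with the ego-motion (pose + depth)
    flow, then warp the result with the estimated dynamic-object motion."""
    first_warped = warp(previous, rigid_flow)     # first warped image
    return warp(first_warped, object_flow)        # second warped image
```

In the self-supervised setting, the photometric error between the second warped image and the current image would supervise both the ego-motion and the residual-flow estimates.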
