G01S5/28

Audio distance estimation for spatial audio processing

A method for spatial audio signal processing including: obtaining, from a first capture device, at least one first audio signal and at least one first direction parameter for at least one frequency band; obtaining, from a second capture device, at least one second audio signal and at least one second direction parameter for the at least one frequency band; obtaining a first position associated with the first capture device; obtaining a second position associated with the second capture device; determining a distance parameter for the at least one frequency band in relation to the first position based, at least partially, on the at least one first direction parameter and the at least one second direction parameter; and enabling output and/or storage of the at least one first audio signal, the at least one first direction parameter and the distance parameter.
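The distance determination described above can be illustrated by plain triangulation: two known positions plus the direction each device observes define two rays whose intersection gives the source, and hence the distance from the first position. This is a minimal 2-D sketch (the claim's per-frequency-band processing and function names here are illustrative, not from the patent):

```python
import numpy as np

def triangulate_distance(p1, theta1, p2, theta2):
    """Estimate the distance from p1 to a source, given the 2-D positions of
    two capture devices and the direction (azimuth, radians) each observes.
    Solves p1 + t1*d1 = p2 + t2*d2 for the ray intersection."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve [d1 -d2] [t1 t2]^T = p2 - p1 in the least-squares sense
    A = np.column_stack([d1, -d2])
    t, _, _, _ = np.linalg.lstsq(A, np.array(p2) - np.array(p1), rcond=None)
    return t[0]  # distance along the first device's direction ray

# Source at (3, 4): device 1 at the origin, device 2 at (6, 0)
dist = triangulate_distance((0.0, 0.0), np.arctan2(4, 3), (6.0, 0.0), np.arctan2(4, -3))
print(round(dist, 3))  # → 5.0
```

In practice each frequency band would yield its own direction estimates, so this computation would run per band and per time frame.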

Sound source distance estimation

An apparatus for generating at least one distance estimate to at least one sound source within a sound scene comprising the at least one sound source, the apparatus configured to: receive at least two audio signals from a microphone array located within the sound scene; receive at least one further audio signal associated with the at least one sound source; determine at least one portion of the at least two audio signals from the microphone array corresponding to the at least one further audio signal associated with the at least one sound source; and determine a distance estimate to the at least one sound source based on the at least one portion of the at least two audio signals from the microphone array corresponding to the at least one further audio signal associated with the at least one sound source.
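One simple way the "further audio signal" (e.g. a close microphone on the source) can yield a distance is via the propagation delay between it and the array: cross-correlate the two signals, convert the lag to seconds, and multiply by the speed of sound. A hedged sketch, assuming a 48 kHz sample rate and that the reference signal carries no acoustic delay of its own (the patent does not specify this method):

```python
import numpy as np

FS = 48_000             # sample rate (Hz), assumed
SPEED_OF_SOUND = 343.0  # m/s, air at ~20 °C

def distance_from_reference(array_sig, ref_sig, fs=FS, c=SPEED_OF_SOUND):
    """Estimate source-to-array distance from the propagation delay between a
    close-miked reference signal and one microphone of the array, found by
    cross-correlation."""
    corr = np.correlate(array_sig, ref_sig, mode="full")
    lag = np.argmax(corr) - (len(ref_sig) - 1)  # samples the array lags behind
    return max(lag, 0) / fs * c

# Synthetic check: a click delayed by the time sound takes to travel ~10 m
ref = np.zeros(4096)
ref[100] = 1.0
arr = np.roll(ref, 1399)  # 1399 samples ≈ 10 m / 343 m/s at 48 kHz
print(round(distance_from_reference(arr, ref), 2))  # → 10.0
```

Real signals would need windowing and some robustness against reverberation, but the lag-to-distance conversion is the core idea.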

Associating Audio with Three-Dimensional Objects in Videos
20170366896 · 2017-12-21 ·

Disclosed is a system and method for generating a model of the geometric relationships between various audio sources recorded by a multi-camera system. The spatial audio scene module associates source signals of audio sources, extracted from the recorded audio, with visual objects identified in videos recorded by one or more cameras. This association may be based on positions of the audio sources estimated from the relative signal gains and delays of each source signal received at each microphone. The estimated positions of audio sources are tracked indirectly by tracking the associated visual objects with computer vision. A virtual microphone module may receive a position for a virtual microphone and synthesize a signal corresponding to the virtual microphone position based on the estimated positions of the audio sources.
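The gain-based part of this position estimation can be reduced to a toy 1-D case: under the free-field inverse-distance law, amplitude falls off as 1/r, so the gain ratio at two microphones fixes the ratio of the two distances. A minimal sketch assuming the source lies on the baseline between the mics (the patent combines gains with delays across many microphones; this shows only the gain cue):

```python
def locate_on_baseline(g1, g2, mic_distance):
    """Toy 1-D localization from relative gains at two microphones, assuming
    free-field 1/r amplitude decay and a source on the baseline:
    r1/r2 = g2/g1 and r1 + r2 = mic_distance."""
    ratio = g2 / g1                      # equals r1 / r2
    r2 = mic_distance / (1.0 + ratio)
    r1 = mic_distance - r2
    return r1, r2

# Source 1 m from mic A and 3 m from mic B on a 4 m baseline → gains 1.0 and 1/3
r1, r2 = locate_on_baseline(1.0, 1 / 3, 4.0)
print(round(r1, 6), round(r2, 6))  # → 1.0 3.0
```

With three or more non-collinear microphones the same constraints become a 2-D or 3-D least-squares problem, and adding relative delays disambiguates cases the gains alone cannot.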

DETECTION OF DEVICE PROVIDING AUDIBLE NOTIFICATION AND PRESENTATION OF ID/LOCATION OF DEVICE IN RESPONSE

In one aspect, a first device may include at least one processor and storage accessible to the at least one processor. The storage may include instructions executable by the at least one processor to receive input from at least one microphone, with the input indicating an audible notification from a second device different from the first device. The instructions may then be executable to, based on the input from the at least one microphone, provide an output indicating a location of the second device and/or an identifier of the second device.
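Indicating the location of the notifying device from microphone input typically starts with a direction-of-arrival estimate. For a far-field source and a two-microphone pair, the time difference of arrival maps to an angle via arcsin(c·τ/d). A minimal sketch (the patent does not specify this method; spacing and speed of sound are assumptions):

```python
import numpy as np

def doa_from_tdoa(tau, mic_spacing, c=343.0):
    """Direction of arrival (radians from broadside) of a far-field source,
    from the time difference of arrival between two microphones."""
    s = np.clip(c * tau / mic_spacing, -1.0, 1.0)  # guard against |s| > 1 from noise
    return np.arcsin(s)

# A source 30° off broadside of a 0.2 m microphone pair arrives with this delay:
tau = 0.2 * np.sin(np.radians(30)) / 343.0
print(round(np.degrees(doa_from_tdoa(tau, 0.2)), 1))  # → 30.0
```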

UNDERWATER ACOUSTIC RANGING AND LOCALIZATION

A method is provided for localizing an underwater vehicle using acoustic ranging. The method includes receiving, using an acoustic receiver, a time series signal based on one or more acoustic signals transmitted from an acoustic source having a known location; determining a travel time of the received waveform from the known location of the acoustic source to the acoustic receiver; and determining a range of the underwater vehicle with respect to the acoustic source based on the travel time of the waveform and a sound speed field taken along a ray trajectory extending from the known location of the acoustic source and intersecting the acoustic receiver at the expected arrival time and depth of the acoustic signal at the underwater vehicle.
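At its core, the range computation is travel time multiplied by an effective sound speed. The sketch below deliberately simplifies the patent's ray-traced sound speed field to the mean sound speed along an assumed straight path (underwater sound speed varies with depth, temperature, and salinity, which is why the patent integrates along a ray):

```python
import numpy as np

def acoustic_range(travel_time, sound_speed_profile):
    """Range estimate from one-way acoustic travel time, using the mean sound
    speed along the (assumed straight) path as a stand-in for the full
    ray-traced sound speed field."""
    c_eff = float(np.mean(sound_speed_profile))
    return travel_time * c_eff

# Sound speeds (m/s) sampled along the path, e.g. from a CTD cast
profile = [1500.0, 1495.0, 1490.0, 1495.0]
print(acoustic_range(2.0, profile))  # → 2990.0
```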

System for Receiving Communications
20230168331 · 2023-06-01 ·

Methods and systems for spatial filtering transmitters and receivers capable of simultaneous communication with one or more receivers and transmitters, respectively, the receivers capable of outputting source directions to humans or devices. The methods and systems use spherical wave field partial wave expansion (PWE) models for the transmitted and received fields at antennas and for the waves generated by contributing sources. The source PWE models have expansion coefficients expressed as functions of the directional coordinates of the sources. For spatial filtering receivers, a processor uses the source PWE model and the output signals from at least one sensor, sampled consistently with the Nyquist criterion and representative of the wave field, to determine the directional coordinates of the sources (with a reduced number of floating-point operations) and outputs the directional coordinates and communications to a reporter configured for reporting information to humans. For spatial filtering transmitters, a processor uses the known receiver directions and the source partial wave expansions to generate signals for transducers producing a composite total wave field that conveys communications to the specified receivers. The methods and systems reduce the processing required for transmitting and receiving spatially filtered communications.
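The essence of a partial wave expansion is that a wave field is written as a sum of basis waves whose coefficients encode the source direction. The sketch below uses the 2-D circular-harmonic analog (the Jacobi-Anger expansion of a plane wave) rather than the patent's full spherical PWE; the function name and truncation order are illustrative:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def plane_wave_pwe(kr, theta, order=20):
    """Reconstruct a 2-D plane wave from its truncated partial wave expansion
    (Jacobi-Anger): e^{i kr cos(theta)} = sum_n i^n J_n(kr) e^{i n theta}."""
    n = np.arange(-order, order + 1)
    return np.sum((1j ** n) * jv(n, kr) * np.exp(1j * n * theta))

# The truncated expansion matches the exact plane wave to high precision
kr, theta = 2.0, 0.7
exact = np.exp(1j * kr * np.cos(theta))
print(abs(plane_wave_pwe(kr, theta) - exact) < 1e-10)  # → True
```

A receiver in this framework measures the expansion coefficients of the incoming field and fits them against the known angular dependence of the source model to recover direction.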

System and methods for non-parametric technique based geolocation and cognitive sensor activation
09804253 · 2017-10-31 ·

The present invention relates to a geolocation system and method for a multi-path environment. The geolocation system comprises one or more emitters (201a . . . 201n) and one or more sensors (202a . . . 202n) comprising at least one processor. A first processor (204) estimates the angle of arrival (AOA) and time of arrival (TOA) from the signals received from said one or more emitters (201a . . . 201n). A second processor (205) determines clusters based on the AOA and TOA data. The system also comprises a central node (207) in communication with at least one sensor (202a . . . 202n) and configured to estimate the geolocation of the one or more emitters (201a . . . 201n), wherein said second processor (205) clusters data for the one or more emitters (201a . . . 201n) by executing a non-parametric Bayesian technique and said central node (207) utilizes a hybrid angle of arrival-time difference of arrival (AOA-TDOA) technique to determine the geolocation of each of the emitters (201a . . . 201n).
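The AOA half of such a hybrid scheme reduces to intersecting bearing lines: each sensor's angle estimate defines a line through that sensor, and the emitter is the least-squares intersection point. A toy AOA-only sketch (the patent's actual fusion adds TDOA constraints and Bayesian clustering, omitted here):

```python
import numpy as np

def geolocate_from_bearings(sensor_positions, bearings):
    """Least-squares emitter location from 2-D bearing lines. Each bearing
    through a sensor gives one linear constraint n . x = n . p, where n is
    the line's unit normal; stacking them yields an overdetermined system."""
    A, b = [], []
    for (x, y), theta in zip(sensor_positions, bearings):
        n = np.array([-np.sin(theta), np.cos(theta)])  # normal to the bearing line
        A.append(n)
        b.append(n @ np.array([x, y]))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Emitter at (2, 3) observed from two sensors
sensors = [(0.0, 0.0), (5.0, 0.0)]
angles = [np.arctan2(3, 2), np.arctan2(3, -3)]
print(np.round(geolocate_from_bearings(sensors, angles), 3))  # → [2. 3.]
```

With noisy bearings from many sensors, the same least-squares formulation averages out individual angle errors, and TDOA hyperbolae can be appended as further (linearized) rows.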

WEARABLE AUDITORY FEEDBACK DEVICE
20170303052 · 2017-10-19 ·

A wearable auditory feedback device includes a frame, a plurality of microphone arrays, a plurality of feedback motors, and a processor. The frame is wearable on a user's head or neck. The microphone arrays are embedded in the frame on a left side, a right side, and a rear side with respect to the user. The feedback motors are also embedded in the frame on the left side, the right side, and the rear side with respect to the user. The processor is configured to receive a plurality of sound waves collected with the microphone arrays from a sound wave source, determine an originating direction of the sound waves, and activate a feedback motor on the side of the frame corresponding to the originating direction.
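A crude version of the direction-to-motor mapping is to compare the signal energy at the three arrays and fire the motor on the loudest side. This sketch substitutes an RMS comparison for a proper direction-of-arrival estimate (the side names and frame sizes are illustrative, not from the patent):

```python
import numpy as np

SIDES = ("left", "right", "rear")

def side_to_activate(frames):
    """Choose which feedback motor to fire by comparing the RMS energy of the
    audio frames captured by the left, right, and rear microphone arrays
    (a crude stand-in for a full direction-of-arrival estimate)."""
    rms = [np.sqrt(np.mean(np.square(f))) for f in frames]
    return SIDES[int(np.argmax(rms))]

# A 440 Hz tone that is 10x louder at the rear array
t = np.linspace(0.0, 0.1, 4800)
tone = np.sin(2 * np.pi * 440 * t)
print(side_to_activate([0.1 * tone, 0.1 * tone, tone]))  # → rear
```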