G01S3/8086

POSITION DETERMINATION SYSTEM HAVING A DECONVOLUTION DECODER
20240288531 · 2024-08-29

The present disclosure relates to an acoustic position determination system that includes a mobile communication device and at least one base transmitter unit. The mobile communication device is configured to transmit and receive acoustic signals. Because of relative movement between the mobile communication device and the base transmitter unit, the frequencies of the received signals shift due to the Doppler effect. The mobile communication device is configured to compensate for Doppler frequency shifts in the received acoustic signals before performing a deconvolution decoding process. The mobile communication device is further configured to compensate for Doppler frequency shifts and perform the deconvolution decoding process on acoustic signals received over multiple signal transmission paths.
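A minimal sketch of the two steps described above, assuming the Doppler compensation is done by resampling and the "deconvolution decoding" reduces to matched filtering against the known transmit template (the function names, the resampling approach, and the chirp template are illustrative assumptions, not the patent's method):

```python
import numpy as np

def doppler_compensate(signal, v_rel, c=343.0):
    """Resample a received signal to undo a Doppler shift (hypothetical helper).

    v_rel > 0 means transmitter and receiver approach each other, so the
    received waveform is time-compressed by factor (1 + v_rel / c); we
    stretch it back by the same factor before decoding.
    """
    factor = 1.0 + v_rel / c
    n_out = int(len(signal) * factor)
    idx = np.arange(n_out) / factor          # fractional sample positions
    return np.interp(idx, np.arange(len(signal)), signal)

def matched_filter_decode(rx, template):
    """Deconvolution-style decoding: correlate against the known transmit
    template and return the sample index of the strongest peak."""
    corr = np.correlate(rx, template, mode="valid")
    return int(np.argmax(np.abs(corr)))
```

In a multipath setting, each propagation path would be compensated and decoded this way, yielding one correlation peak per path.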

SYSTEMS AND METHODS FOR ANALYZING AND DISPLAYING ACOUSTIC DATA

Some systems include an acoustic sensor array configured to receive acoustic signals, an electromagnetic imaging tool configured to receive electromagnetic radiation, a user interface, a display, and a processor. The processor can receive electromagnetic data from the electromagnetic imaging tool and acoustic data from the acoustic sensor array. The processor can generate acoustic image data of the scene based on the received acoustic data, generate a display image comprising combined acoustic image data and electromagnetic image data, and present the display image on the display. The processor can receive an annotation input from the user interface and update the display image based on the received annotation input. The processor can be configured to determine one or more acoustic parameters associated with the received acoustic signal and determine a criticality associated with the acoustic signal. A user can annotate the display image with the determined criticality information or other determined information.
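The combined display image could be sketched as a threshold-gated overlay of the acoustic intensity map on the electromagnetic image; the function name, red-channel choice, and threshold below are illustrative assumptions, not the patent's method:

```python
import numpy as np

def blend_display_image(em_image, acoustic_map, alpha=0.5, threshold=0.2):
    """Overlay a normalized acoustic intensity map onto an electromagnetic
    (visible-light) image, painting only pixels whose acoustic energy
    exceeds a threshold (illustrative sketch)."""
    acoustic = acoustic_map / max(float(acoustic_map.max()), 1e-12)
    mask = acoustic >= threshold
    out = em_image.astype(float)
    # Blend acoustic energy into the red channel where the mask is set.
    out[..., 0] = np.where(mask,
                           (1 - alpha) * out[..., 0] + alpha * 255 * acoustic,
                           out[..., 0])
    return out.astype(np.uint8)
```

An annotation input would then draw criticality labels on top of the blended image before it is presented on the display.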

RADIATION ULTRASONIC WAVE VISUALIZATION METHOD AND ELECTRONIC APPARATUS FOR PERFORMING RADIATION ULTRASONIC WAVE VISUALIZATION METHOD

A radiation ultrasonic wave visualization method, in which an ultrasonic wave radiated by a sound source is visualized, comprises: heterodyne-converting ultrasonic signals in a band of at least 20 kHz, acquired by an ultrasonic sensor array constituted by a plurality of ultrasonic sensors, into low-frequency signals; and thereafter beamforming the converted low-frequency signals, either directly or based on resampled signals. The low-frequency signals are thereby handled without distorting ultrasonic sound-location information, reducing the amount of data handled in the beamforming step.
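A minimal sketch of the heterodyne step for one channel, assuming a complex local oscillator, a Butterworth low-pass, and simple decimation (all of these are illustrative choices, not the patent's specific design):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def heterodyne_downconvert(x, fs, f_lo, cutoff=5000.0, decimate=4):
    """Mix an ultrasonic channel down to baseband with a complex local
    oscillator at f_lo, low-pass filter, and decimate, shrinking the
    data rate before beamforming (parameter values are illustrative)."""
    t = np.arange(len(x)) / fs
    mixed = x * np.exp(-2j * np.pi * f_lo * t)     # shift the band down by f_lo
    b, a = butter(4, cutoff / (fs / 2))            # 4th-order low-pass
    base = filtfilt(b, a, mixed.real) + 1j * filtfilt(b, a, mixed.imag)
    return base[::decimate]                        # new sample rate: fs / decimate
```

Because the mixing preserves relative phase between channels, delay-and-sum beamforming on the decimated baseband signals still recovers the sound-source direction.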

PORTABLE ULTRASONIC FACILITIES DIAGNOSIS DEVICE

A portable facility failure diagnosis device using detection of radiated ultrasonic waves comprises: an ultrasonic sensor array; a data acquisition (DAQ) board on which an electronic circuit is mounted that acquires, at a given sampling frequency, the ultrasonic signals sensed by the ultrasonic sensor array; a main board on which an operation processing device is mounted that processes the ultrasonic signals received from the DAQ board and transmits the processed ultrasonic sound-source information to a display device; a data storage medium storing the data processed by the operation processing device of the main board; a display device visually displaying the processed data; and an optical camera capturing an image of the sensing direction.

LOCALIZATION ALGORITHM FOR SOUND SOURCES WITH KNOWN STATISTICS
20180299527 · 2018-10-18

The proposed method localizes a target sound source from a plurality of sound sources, where a multi-channel recording of the sources comprises a plurality of microphone channel signals. The method comprises: converting each microphone channel signal into a respective channel spectrogram in the time-frequency domain; blindly separating the channel spectrograms to obtain a plurality of separated source signals; identifying, among the separated source signals, the one that best matches a target source model; estimating, based on the identified separated source signal, a binary mask reflecting where the target sound source is active in the channel spectrograms in time and frequency; applying the binary mask to the channel spectrograms to obtain masked channel spectrograms; and localizing the target sound source based on the masked channel spectrograms.
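The mask estimation and application steps might be sketched as follows, assuming a simple magnitude-quantile threshold on the separated target signal (an illustrative choice, not the patent's estimator):

```python
import numpy as np

def apply_binary_mask(channel_spectrograms, target_spectrogram, quantile=0.75):
    """Build a time-frequency binary mask from the separated target
    signal's magnitude and apply it to every microphone channel
    spectrogram (sketch; the quantile threshold is illustrative)."""
    mag = np.abs(target_spectrogram)
    mask = mag >= np.quantile(mag, quantile)   # bins where the target is active
    return [spec * mask for spec in channel_spectrograms]
```

Localization then runs only on the surviving time-frequency bins, so interfering sources that dominate the masked-out bins no longer bias the estimate.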

Detection of audio communication signals present in a high noise environment

A system comprises error microphones disposed in a predetermined pattern to capture audio signals including speech, and one or more reference sensors to capture a reference noise signal from a noise source. A portion of the reference noise signal is removed from the captured audio signals to generate partially processed audio signals, which are paired to form signal pairs. For each signal pair, a modified cross-correlation vector, with the effects of low-frequency content removed, is generated and then converted to a rotated angular-domain cross-correlation vector based in part on a physical angle associated with the locations of the associated pair of error microphones. The rotated angular-domain cross-correlation vectors are summed, and a weighting vector is applied to the sum to identify direction information for a desired audio signal associated with the speech. The direction information is used to beamform the partially processed audio signals and output the desired audio signal.
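The per-pair cross-correlation step resembles generalized cross-correlation; below is a standard GCC-PHAT sketch for one microphone pair (the patent's rotated angular-domain conversion and summation would build on such per-pair vectors and are not shown):

```python
import numpy as np

def gcc_phat(sig_a, sig_b, fs, max_tau=None):
    """Time-difference of arrival for one microphone pair via generalized
    cross-correlation with the phase transform. Positive tau means sig_a
    lags sig_b (i.e., the source is closer to microphone b)."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n)
    B = np.fft.rfft(sig_b, n)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)          # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(R, n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

The estimated delay for each pair maps to an angle through the pair's physical geometry, which is where the rotation into a common angular domain comes in.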

Systems and methods for representing acoustic signatures from a target scene

Acoustic imaging systems can include an acoustic sensor array, an electromagnetic imaging tool, a display, and an audio device. A processor can receive data from the acoustic sensor array and the electromagnetic imaging tool to generate a display image combining acoustic image data and electromagnetic image data. Systems can include an audio device that receives an audio output from the processor and outputs audio feedback signals to a user. The audio feedback signals can represent acoustic signals from an acoustic scene. Systems can provide a display image to a user including acoustic image data, and a user can select an acoustic signal for which to provide a corresponding audio output to an audio device. Audio outputs and display images can change dynamically in response to a change in the pointing of the acoustic sensor array, such as by changing a stereo audio output.
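The stereo audio output mentioned above could be sketched as a constant-power pan driven by the selected source's bearing, so the feedback shifts as the user re-points the array (an illustrative mapping, not the patent's specific one):

```python
import numpy as np

def stereo_pan(mono, angle_deg):
    """Constant-power pan: place a selected acoustic signal in the stereo
    field according to its bearing relative to the array axis.
    angle_deg = -90 is hard left, 0 is centre, +90 is hard right."""
    theta = np.clip(angle_deg, -90.0, 90.0)
    pan = (theta + 90.0) / 180.0 * (np.pi / 2)   # map -90..90 deg to 0..pi/2
    return np.stack([np.cos(pan) * mono, np.sin(pan) * mono])
```

The cosine/sine gain pair keeps total power constant across the pan range, which is why this law is commonly preferred over linear panning.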

Systems and methods for projecting and displaying acoustic data

Systems can include an acoustic sensor array configured to receive acoustic signals, an illuminator configured to emit electromagnetic radiation, an electromagnetic imaging tool configured to receive electromagnetic radiation, a distance measuring tool, and a processor. The processor can illuminate the target scene via the illuminator, receive electromagnetic image data from the electromagnetic imaging tool representative of the illuminated scene, receive acoustic data from the acoustic sensor array, and receive distance information from the distance measuring tool. The processor can be further configured to generate acoustic image data of the scene based on the received acoustic data and received distance information and generate a display image comprising combined acoustic image data and electromagnetic image data. The processor can determine depths of various acoustic signals within a scene and generate a representation of the scene that shows the determined depths, including floorplan and volumetric representations.
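Using the distance information to generate acoustic image data can be sketched as computing per-microphone focusing delays for a delay-and-sum beamformer aimed at a point at the measured range (an illustrative reading of how the distance tool's output might be used):

```python
import numpy as np

def focus_delays(mic_positions, focal_point, c=343.0):
    """Relative per-microphone delays for focusing a delay-and-sum
    beamformer at a 3-D point whose range comes from the distance
    measuring tool (sketch; names are illustrative)."""
    dist = np.linalg.norm(mic_positions - focal_point, axis=1)
    return (dist - dist.min()) / c     # seconds, relative to the closest mic
```

Sweeping the focal point over a grid of candidate positions and depths, and recording the beamformed power at each, yields the depth-resolved acoustic image behind the floorplan and volumetric representations.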

SYSTEMS AND METHODS FOR PROJECTING AND DISPLAYING ACOUSTIC DATA

Systems can include an acoustic sensor array configured to receive acoustic signals, an illuminator configured to emit electromagnetic radiation, an electromagnetic imaging tool configured to receive electromagnetic radiation, a distance measuring tool, and a processor. The processor can illuminate the target scene via the illuminator, receive electromagnetic image data from the electromagnetic imaging tool representative of the illuminated scene, receive acoustic data from the acoustic sensor array, and receive distance information from the distance measuring tool. The processor can be further configured to generate acoustic image data of the scene based on the received acoustic data and received distance information and generate a display image comprising combined acoustic image data and electromagnetic image data. The processor can determine depths of various acoustic signals within a scene and generate a representation of the scene that shows the determined depths, including floorplan and volumetric representations.

Position determination system having a deconvolution decoder
12449504 · 2025-10-21

The present disclosure relates to an acoustic position determination system that includes a mobile communication device and at least one base transmitter unit. The mobile communication device is configured to transmit and receive acoustic signals. Because of relative movement between the mobile communication device and the base transmitter unit, the frequencies of the received signals shift due to the Doppler effect. The mobile communication device is configured to compensate for Doppler frequency shifts in the received acoustic signals before performing a deconvolution decoding process. The mobile communication device is further configured to compensate for Doppler frequency shifts and perform the deconvolution decoding process on acoustic signals received over multiple signal transmission paths.