G01S3/80

ULTRASOUND TRANSDUCER INCLUDING A COMBINATION OF A BENDING AND PISTON MODE

An ultrasound transducer of a vehicle system, comprising a membrane configured to vibrate to generate ultrasound when a voltage is applied and further configured to vibrate in an out-of-plane movement, wherein the membrane includes a first piezoelectric film at a center of the membrane, and a supporting member including a second piezoelectric film, the supporting member supporting and surrounding the membrane, wherein, in response to a translation of motion or actuation from the membrane, the supporting member does not move when there is the out-of-plane movement from the membrane.

System for receiving communications
11536795 · 2022-12-27

Methods and systems for spatial filtering transmitters and receivers capable of simultaneous communication with one or more receivers and transmitters, respectively, the receivers capable of outputting source directions to humans or devices. The methods and systems use spherical wave field partial wave expansion (PWE) models for transmitted and received fields at antennas and for waves generated by contributing sources. The source PWE models have expansion coefficients expressed as functions of the directional coordinates of the sources. For spatial filtering receivers, a processor uses the output signals from at least one sensor outputting signals consistent with Nyquist criteria representative of the wave field, together with the source PWE model, to determine directional coordinates of sources (wherein the number of floating-point operations is reduced) and outputs the directional coordinates and communications to a reporter configured for reporting information to humans. For spatial filtering transmitters, a processor uses known receiver directions and source partial wave expansions to generate signals for transducers producing a composite total wave field conveying communications to the specified receivers. The methods and systems reduce the processing required for transmitting and receiving spatially filtered communications.
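As a loose illustration of the PWE idea (not the patent's spherical formulation), the sketch below uses a 2-D circular-harmonic expansion in which the coefficients are known functions of the source direction, so the direction can be read back from a coefficient's phase; the function names and the first-order readout are illustrative assumptions:

```python
import numpy as np

def pwe_coefficients(source_angle, max_order):
    """Circular (2-D) partial-wave expansion coefficients of a unit plane wave
    arriving from source_angle: c_m = exp(-1j * m * source_angle)."""
    orders = np.arange(-max_order, max_order + 1)
    return np.exp(-1j * orders * source_angle), orders

def estimate_angle(coeffs, orders):
    """Recover the arrival angle from the phase of the first-order coefficient."""
    c1 = coeffs[orders == 1][0]
    return -np.angle(c1)

true_angle = np.deg2rad(40.0)
coeffs, orders = pwe_coefficients(true_angle, max_order=3)
est = estimate_angle(coeffs, orders)
```

Because each coefficient is a closed-form function of the direction, the inverse step is a phase read-out rather than an exhaustive search, which is one way the operation count can be kept small.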

Methods and systems for acoustic machine perception for an aircraft
11531100 · 2022-12-20

In an example, a method is described. The method includes: causing one or more sensors arranged on an aircraft to acquire, over a window of time, first data associated with a first object that is within an environment of the aircraft, where the one or more sensors include one or more of a light detection and ranging (LIDAR) sensor, a radar sensor, or a camera; causing an array of microphones arranged on the aircraft to acquire, over approximately the same window of time as the first data is acquired, first acoustic data associated with the first object; and training a machine learning model by using the first acoustic data as an input value to the machine learning model and by using an azimuth, a range, an elevation, and a type of the first object identified from the first data as ground truth output labels for the machine learning model.
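A minimal sketch of the supervised setup described above, with ordinary least squares standing in for the machine learning model and synthetic arrays standing in for the acoustic features and the LIDAR/radar/camera-derived azimuth labels (all names, shapes, and values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is an acoustic feature vector from the
# microphone array; the label (azimuth here) comes from the LIDAR/radar/camera
# data acquired over the same window of time, serving as ground truth.
n_samples, n_features = 200, 8
W_true = rng.normal(size=(n_features, 1))
X = rng.normal(size=(n_samples, n_features))                    # acoustic features (input)
azimuth = X @ W_true + 0.01 * rng.normal(size=(n_samples, 1))   # ground-truth labels

# "Training": least squares stands in for fitting the machine learning model.
W_hat, *_ = np.linalg.lstsq(X, azimuth, rcond=None)
pred = X @ W_hat
```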

Beamformer enhanced direction of arrival estimation in a reverberant environment with directional noise

An estimator of the direction of arrival (DOA) of speech from a far-field talker to a device in the presence of room reverberation and directional noise operates on audio inputs received from multiple microphones and on one or more beamformer outputs generated by processing the microphone inputs. A first DOA estimate is obtained by performing generalized cross-correlation between two or more of the microphone inputs. A second DOA estimate is obtained by performing generalized cross-correlation between one of the one or more beamformer outputs and one or more of the microphone inputs and/or other of the one or more beamformer outputs. A selector selects the first or second DOA estimate based on an SNR estimate at the microphone inputs and a noise-reduction-amount estimate at the beamformer outputs. The SNR and noise reduction estimates may be obtained based on the detection of a keyword spoken by a desired talker.
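The first DOA estimate relies on generalized cross-correlation between microphone pairs; a common variant is GCC with the phase transform (GCC-PHAT), sketched below on a synthetically delayed signal (the function name and parameters are illustrative, not taken from the patent):

```python
import numpy as np

def gcc_phat(x, y, fs):
    """GCC-PHAT: estimate the delay (in seconds) of x relative to y.
    The phase transform whitens the cross-spectrum, which makes the
    correlation peak sharper under room reverberation."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    R = X * np.conj(Y)
    R /= np.maximum(np.abs(R), 1e-12)          # phase transform weighting
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 16000
rng = np.random.default_rng(1)
sig = rng.normal(size=2048)
delay_samples = 5
mic1 = sig
mic2 = np.concatenate((np.zeros(delay_samples), sig[:-delay_samples]))  # delayed copy
tau = gcc_phat(mic2, mic1, fs)   # mic2 lags mic1 by delay_samples
```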

Wake and sub-sonic blast gunshot detection
11519696 · 2022-12-06

A trajectory estimate for a sub-sonic projectile can be derived from sampling the wake contribution of an acoustic signal detected at a multi-detector array. The wake contribution is sampled in time, and the samples are processed to determine a bearing estimate for the projectile from which the acoustic wake derives.
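Once a time-difference of arrival between two detectors has been extracted from the wake samples, a far-field bearing follows from elementary geometry; a minimal sketch, with illustrative detector spacing and delay values:

```python
import numpy as np

def bearing_from_tdoa(tau, d, c=343.0):
    """Far-field bearing (degrees from broadside) of an acoustic wavefront,
    given the time-difference of arrival tau (s) between two detectors
    separated by d meters, with speed of sound c (m/s)."""
    return float(np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0))))

# Wake samples arriving 0.29 ms apart at detectors spaced 0.2 m apart.
bearing = bearing_from_tdoa(2.9e-4, 0.2)
```

With more than two detectors, the pairwise bearings can be intersected over successive wake samples to build up the projectile trajectory estimate.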

Orientation-based playback device microphone selection
11516610 · 2022-11-29

Aspects of a multi-orientation playback device including at least one microphone array are discussed. A method may include determining an orientation of the playback device which includes at least one microphone array and determining at least one microphone training response for the playback device from a plurality of microphone training responses based on the orientation of the playback device. The at least one microphone array can detect a sound input, and the location information of a source of the sound input can be determined based on the at least one microphone training response and the detected sound input. Based on the location information of the source, the directional focus of the at least one microphone array can be adjusted, and the sound input can be captured based on the adjusted directional focus.
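A minimal sketch of the flow described above, assuming the training response reduces to a per-microphone gain vector looked up by orientation and the directional focus is a simple delay-and-sum alignment (the orientation names, gains, and delays are all illustrative assumptions):

```python
import numpy as np

# Hypothetical lookup: one pre-measured microphone training response per
# supported orientation of the playback device.
training_responses = {
    "horizontal": np.array([1.0, 0.8, 0.8, 1.0]),
    "vertical":   np.array([0.9, 1.0, 1.0, 0.9]),
}

def select_training_response(orientation):
    return training_responses[orientation]

def steer(mic_signals, delays_samples, gains):
    """Delay-and-sum focus: align each channel by its integer sample delay,
    weight it by the training-response gain, and sum."""
    out = np.zeros(mic_signals.shape[1])
    for ch, (d, g) in enumerate(zip(delays_samples, gains)):
        out += g * np.roll(mic_signals[ch], -d)
    return out / np.sum(gains)

rng = np.random.default_rng(2)
src = rng.normal(size=1024)
delays = [0, 2, 4, 6]                               # per-mic arrival delays (samples)
mics = np.stack([np.roll(src, d) for d in delays])  # simulated array capture
gains = select_training_response("horizontal")
focused = steer(mics, delays, gains)
```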

Object-localization and tracking using ultrasonic pulses with reflection rejection
11486961 · 2022-11-01

Methods and systems are disclosed for determining pose information for at least one of a transmitter and a receiver, both of which comprise ultrasonic transducers. A relative position between the transmitter and the receiver is determined, and an orientation for at least one of them is also determined. After obtaining field-of-view data for at least one of the transmitter and the receiver, a field-of-view relationship between them is determined, based at least in part on the field-of-view data, the determined relative position, and the determined orientation. The pose information is then determined by weighting measurements of an ultrasonic signal emitted by the transmitter and received by the receiver based at least in part on the determined field-of-view relationship.
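One simple way to realize the field-of-view weighting is to down-weight measurements whose arrival angle is far off the transducer's boresight, since those are more likely reflections than direct paths; the sketch below combines repeated range measurements this way (the linear taper and all values are illustrative assumptions, not the patent's weighting):

```python
import numpy as np

def fov_weight(angle_deg, half_fov_deg):
    """Illustrative field-of-view weight: full weight on-axis,
    tapering linearly to zero at the edge of the field of view."""
    return max(0.0, 1.0 - abs(angle_deg) / half_fov_deg)

def weighted_range(ranges, angles_deg, half_fov_deg):
    """Combine repeated ultrasonic range measurements, down-weighting
    pulses that arrived near the edge of the field of view."""
    w = np.array([fov_weight(a, half_fov_deg) for a in angles_deg])
    return float(np.sum(w * ranges) / np.sum(w))

# Three direct-path measurements near boresight plus one off-axis reflection.
ranges = np.array([1.00, 1.02, 0.98, 1.80])
angles = np.array([2.0, -3.0, 1.0, 55.0])
est = weighted_range(ranges, angles, half_fov_deg=60.0)
```

The off-axis 1.80 m reading barely moves the combined estimate, which is the intended reflection-rejection effect.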

SOUND SOURCE LOCALIZATION MODEL TRAINING AND SOUND SOURCE LOCALIZATION METHOD, AND APPARATUS

The present disclosure provides a method for training a sound source localization model and a sound source localization method, and relates to artificial intelligence technologies such as voice processing and deep learning. The method for training a sound source localization model includes: obtaining a sample audio according to an audio signal including a wake-up word; extracting an audio feature of at least one audio frame in the sample audio, and marking a direction label and a mask label of the at least one audio frame; and training a neural network model by using the audio feature, the direction label, and the mask label of the at least one audio frame, to obtain a sound source localization model. The sound source localization method includes: acquiring a to-be-processed audio signal, and extracting an audio feature of each audio frame in the to-be-processed audio signal; inputting the audio feature of each audio frame into a sound source localization model, to obtain sound source direction information output by the sound source localization model for each audio frame; determining a wake-up word endpoint frame in the to-be-processed audio signal; and obtaining a sound source direction of the to-be-processed audio signal according to the sound source direction information corresponding to the wake-up word endpoint frame.
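The mask label's role in training can be sketched as a masked per-frame loss: the direction labels drive a cross-entropy over direction bins, while the mask zeroes out frames (e.g. non-speech frames) that should not contribute (a minimal NumPy sketch; the bin count, mask semantics, and function names are assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def masked_direction_loss(logits, direction_labels, mask_labels):
    """Cross-entropy over per-frame direction classes; the mask label
    zeroes out frames that should not drive the model update."""
    probs = softmax(logits)
    n_frames = logits.shape[0]
    ce = -np.log(probs[np.arange(n_frames), direction_labels] + 1e-12)
    return float(np.sum(ce * mask_labels) / max(np.sum(mask_labels), 1))

rng = np.random.default_rng(3)
logits = rng.normal(size=(6, 36))          # 6 frames, 36 direction bins (10° each)
directions = rng.integers(0, 36, size=6)   # per-frame direction labels
mask = np.array([1, 1, 1, 0, 0, 1])        # frames 3-4 excluded from the loss
loss = masked_direction_loss(logits, directions, mask)
```

At inference time, the analogous step is to read the model's per-frame direction output only at the wake-up word endpoint frame, which plays the role the mask plays during training.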