Patent classifications
H04R2410/01
Wearable Audio Device Feedforward Instability Detection
A system for detecting feedforward instability in a wearable audio device. The audio device includes an electro-acoustic transducer that is configured to develop sound for a user, a housing that holds the transducer, a feedforward microphone that is configured to detect sound outside of the housing and output a microphone signal, and an opening in the housing that emits sound pressure from the transducer that can reach the microphone. A feedforward instability detector is configured to apply two filters to the microphone signal. A first filter passes more energy in a frequency band than does a second filter, to develop a filtered signal. The filtered signal is compared to the microphone signal outside of the frequency band, to develop a comparison signal that is indicative of feedforward instability in the frequency band.
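The two-filter comparison described in the abstract can be sketched numerically: energy passed inside the monitored band is compared against energy outside it. Everything below (function names, the naive-DFT "filters", the 1 kHz test tone, the threshold) is an illustrative assumption, not the patented implementation.

```python
import math

def band_energy(signal, fs, f_lo, f_hi):
    """Energy of `signal` inside [f_lo, f_hi] Hz via a naive DFT."""
    n = len(signal)
    energy = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            energy += re * re + im * im
    return energy

def instability_detected(mic, fs, band=(900.0, 1100.0), threshold=10.0):
    """Compare in-band energy (the first 'filter') against out-of-band
    energy (the second 'filter'); a large ratio suggests the feedforward
    path is ringing in the monitored band."""
    in_band = band_energy(mic, fs, *band)
    total = band_energy(mic, fs, 0.0, fs / 2)
    out_band = total - in_band
    return in_band > threshold * max(out_band, 1e-12)

# A pure 1 kHz tone concentrates its energy in the monitored band,
# mimicking narrowband feedforward oscillation:
fs = 8000
tone = [math.sin(2 * math.pi * 1000 * i / fs) for i in range(256)]
print(instability_detected(tone, fs))  # -> True
```

In practice the two filters would be real-time IIR/FIR stages rather than block DFTs; the energy-ratio decision is the part this sketch illustrates.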
System and methods for wireless remote control over cameras with audio processing to generate a refined audio signal
Systems and methods for wireless remote control operation of cameras are provided. The system includes a remote controller with an interface for receiving commands from a user (such as turning a camera on or off) and a transceiver for transmitting the commands to one or more camera transceivers coupled to cameras. The remote controller may also include a display that indicates battery levels, camera status, and even video feeds. Camera status and video feeds are transmitted from the camera transceiver, which is coupled to the camera via an electrical bus interface. The camera transceiver may include a video converter that accepts raw video data from the camera and converts it into a video feed for transmission. Additionally, the camera transceiver may include an advanced audio circuit which subtracts measured pressure data from audio feeds to cancel out wind sounds.
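The wind-cancellation step at the end of this abstract reduces to a subtraction of scaled pressure-sensor samples from the audio feed. The scale factor, sample alignment, and the idea that wind appears identically in both channels are simplifying assumptions for this sketch.

```python
def cancel_wind(audio, pressure, scale=1.0):
    """Subtract scaled pressure-sensor samples from the audio feed,
    removing the wind component common to both channels."""
    return [a - scale * p for a, p in zip(audio, pressure)]

# Wind contributes the same low-frequency component to both channels:
audio = [0.5, 1.25, -0.25, 0.75]     # speech + wind
pressure = [0.5, 0.75, -0.75, 0.25]  # wind as seen by the pressure sensor
print(cancel_wind(audio, pressure))  # -> [0.0, 0.5, 0.5, 0.5]
```

A real circuit would estimate `scale` adaptively and compensate for latency between the pressure sensor and the microphone path.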
Automated pause of media content playback based on sound level
An example system for playing media content with a media playback device in a vehicle can be programmed to obtain a sound measurement indicative of a sound level associated with playback of the media content by the media playback device in the vehicle. The example system can also determine a deviation from an expected sound level based upon the sound measurement. Finally, the system can modify playback of the media content by the media playback device based upon the deviation.
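The measure-compare-modify loop above can be sketched in a few lines. The dB reference, the 6 dB tolerance, and the "pause"/"continue" actions are assumptions chosen for illustration.

```python
import math

def rms_db(samples, ref=1.0):
    """Root-mean-square level of a sample block, in dB re `ref`."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12) / ref)

def playback_action(measured_db, expected_db, tolerance_db=6.0):
    """Pause when the measured level deviates too far from the expected
    level (e.g. the cabin suddenly gets loud or the output is muffled)."""
    if abs(measured_db - expected_db) > tolerance_db:
        return "pause"
    return "continue"

# A block far quieter than the expected -10 dB triggers a pause:
quiet = [0.01 * math.sin(0.1 * i) for i in range(1000)]
print(playback_action(rms_db(quiet), expected_db=-10.0))  # -> "pause"
```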
AUDIO PROCESSING USING AN INTELLIGENT MICROPHONE
The present disclosure relates generally to improving audio processing using an intelligent microphone and, more particularly, to techniques for processing audio received at a microphone with integrated analog-to-digital conversion, digital signal processing, and acoustic source separation, and for further processing by a speech recognition system. Embodiments of the present disclosure include intelligent microphone systems designed to collect and process high-quality audio input efficiently. Systems and methods for audio processing using an intelligent microphone include an integrated package with one or more microphones, analog-to-digital converters (ADCs), digital signal processors (DSPs), source separation modules, memory, and automatic speech recognition. Systems and methods are also provided for audio processing using an intelligent microphone that includes a microphone array and uses a preprogrammed audio beamformer calibrated to the included microphone array.
Wind noise reduction by microphone placement
An image capture device includes a housing having a lens snout protruding from a front housing surface. A front microphone is mounted below the lens snout. A top microphone is mounted under a top housing surface. The top microphone is positioned to receive direct freestream air flow at a first pitched forward angle. The front microphone is positioned to receive turbulent air flow at a second pitched forward angle. The second pitched forward angle is greater than or equal to the first pitched forward angle. An audio processor receives a first audio signal and a second audio signal from the top microphone and front microphone, respectively. The audio processor generates frequency sub-bands from the first and second audio signals. The audio processor selects the frequency sub-bands with the lowest noise metric and combines them to generate an output audio signal.
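The per-band selection described in the last two sentences can be sketched as follows: for each frequency sub-band, keep the microphone whose band has the lower noise metric, then recombine. The variance-based metric and the pre-computed band decomposition are assumptions; the patent leaves the metric unspecified here.

```python
def noise_metric(band_samples):
    """Sample variance as a stand-in noise metric for one sub-band."""
    mean = sum(band_samples) / len(band_samples)
    return sum((s - mean) ** 2 for s in band_samples) / len(band_samples)

def combine_lowest_noise(top_bands, front_bands):
    """top_bands / front_bands: per-sub-band sample lists from the top
    and front microphones (same band ordering for both). Returns the
    lower-noise band from each pair."""
    output_bands = []
    for top, front in zip(top_bands, front_bands):
        output_bands.append(top if noise_metric(top) <= noise_metric(front) else front)
    return output_bands

# Band 0: front mic is noisier; band 1: top mic is noisier.
top = [[0.1, -0.1, 0.1, -0.1], [0.9, -0.9, 0.9, -0.9]]
front = [[0.5, -0.5, 0.5, -0.5], [0.2, -0.2, 0.2, -0.2]]
print(combine_lowest_noise(top, front))
# -> [[0.1, -0.1, 0.1, -0.1], [0.2, -0.2, 0.2, -0.2]]
```

The final output signal would then be synthesized from the selected bands, e.g. by an inverse filter bank.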
Audio systems, devices, MEMS microphones, and methods thereof
In one embodiment, a MEMS microphone can be coupled to an acoustic horn to provide various benefits and improvements including, but not limited to, at-a-distance acoustic signal reception with improvements in signal-to-noise ratio and directional preference.
Method and device for processing audio signal, and storage medium
An original noisy signal is acquired for each of at least two microphones by capturing, with the at least two microphones, the audio signal emitted by each sound source. For each frame in the time domain, an estimated frequency-domain signal of each sound source is acquired from the original noisy signals of the at least two microphones. A frequency collection containing a plurality of predetermined static frequencies and dynamic frequencies is determined within a predetermined frequency band range. A weighting coefficient for each frequency in the frequency collection is determined according to the estimated frequency-domain signal at that frequency. A separation matrix for each frequency is determined according to the weighting coefficient. The audio signal emitted by each of the at least two sound sources is then recovered based on the separation matrices and the original noisy signals.
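The final step of the method above — applying an already-estimated per-frequency separation matrix to the microphones' frequency-domain signals — can be sketched directly. Estimation of the matrices (the weighting coefficients and associated statistics) is omitted; the 2x2 matrices and mixing model here are illustrative assumptions.

```python
def apply_separation(sep_matrices, mic_spectra):
    """sep_matrices: per-bin 2x2 matrices W[f]; mic_spectra: per-bin
    2-vectors x[f] = (mic1_bin, mic2_bin). Returns y[f] = W[f] @ x[f],
    the per-bin estimates of the two source signals."""
    separated = []
    for W, x in zip(sep_matrices, mic_spectra):
        y0 = W[0][0] * x[0] + W[0][1] * x[1]
        y1 = W[1][0] * x[0] + W[1][1] * x[1]
        separated.append((y0, y1))
    return separated

# One bin: mixtures x = A @ s with sources s = (1+0j, 0+2j).
# W is the inverse of the mixing matrix A, so W @ x recovers s.
A = [[1.0, 0.5], [0.25, 1.0]]
x = [(1 + 1j, 0.25 + 2j)]            # A applied to s for this bin
W = [[[8/7, -4/7], [-2/7, 8/7]]]     # inverse of A
print(apply_separation(W, x))
```

An inverse STFT over all bins and frames would then return each separated source to the time domain.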
Centrally controlling communication at a venue
One example may include a method that includes initiating an audio recording to capture audio data, comparing the audio data received from a microphone of a mobile device to an audio data range, determining whether the audio data is above an optimal level based on a result of the comparison, and queuing the audio data in an audio data queue when the audio data is above the optimal level.
System and method for listener controlled beamforming
A system and method for providing assistive listening to a plurality of listeners in an environment containing a plurality of acoustic sources. A microphone array, in combination with an acoustic beamforming processor, is configured to receive the acoustic signals within the environment and to process them based upon the target location of an acoustic signal selected on a listener-controlled interface device, generating a steered beam pattern. The acoustic beamforming processor is further configured to transmit the steered beam pattern to the listener-controlled interface device based on the selected target location. The listener-controlled interface device is configured to provide the steered beam pattern to an ear-level transducer of a hearing-impaired listener.
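The steering step behind such a beam pattern can be illustrated with a minimal delay-and-sum beamformer. Integer-sample delays and the two-element array are simplifying assumptions; a production steered beam would use fractional delays derived from the selected target location and calibrated element positions.

```python
def delay_and_sum(mic_signals, delays):
    """mic_signals: equal-length sample lists, one per array element;
    delays: integer sample delays that time-align the target source
    across elements before averaging."""
    n = len(mic_signals[0])
    out = [0.0] * n
    for sig, d in zip(mic_signals, delays):
        for i in range(n):
            j = i - d
            if 0 <= j < n:
                out[i] += sig[j]
    return [v / len(mic_signals) for v in out]

# The target arrives one sample later at mic 2: delaying mic 1 by one
# sample re-aligns it, so the target adds coherently in the output.
mic1 = [0.0, 1.0, 0.0, -1.0, 0.0]
mic2 = [0.0, 0.0, 1.0, 0.0, -1.0]
print(delay_and_sum([mic1, mic2], delays=[1, 0]))
# -> [0.0, 0.0, 1.0, 0.0, -1.0]
```

Sources from other directions arrive with different inter-element delays and so average toward zero, which is what produces the steered beam.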
DIRECTIONAL ACOUSTIC SENSOR, AND METHODS OF ADJUSTING DIRECTIONAL CHARACTERISTICS AND ATTENUATING ACOUSTIC SIGNAL IN SPECIFIC DIRECTION USING THE SAME
Disclosed are a directional acoustic sensor, a method of adjusting directional characteristics using the directional acoustic sensor, and a method of attenuating an acoustic signal in a specific direction using the directional acoustic sensor. The directional acoustic sensor includes a plurality of resonance units arranged to have different directionalities and a signal processor configured to adjust directional characteristics by calculating at least one of a sum of and a difference between outputs of the resonance units. In this configuration, the signal processor attenuates an acoustic signal in a specific direction by using a plurality of directional characteristics obtained by calculating at least one of the sum of and the difference between the outputs of the resonance units at a certain ratio.
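The sum/difference combination at a certain ratio can be sketched with an idealized model: two resonance units modeled as dipoles with cosine/sine directivity (an assumption, not the disclosed hardware). Mixing their sum and difference at a computed ratio places a response null at the chosen attenuation direction.

```python
import math

def unit_outputs(theta):
    """Idealized directional responses of two differently oriented units."""
    return math.cos(theta), math.sin(theta)

def null_ratio(theta0):
    """Ratio a such that (g1 + g2) + a * (g1 - g2) vanishes at theta0."""
    g1, g2 = unit_outputs(theta0)
    return -(g1 + g2) / (g1 - g2)

def combined_response(theta, a):
    """Response of the sum/difference combination in direction theta."""
    g1, g2 = unit_outputs(theta)
    return (g1 + g2) + a * (g1 - g2)

a = null_ratio(math.pi / 3)                    # attenuate sound from 60 degrees
print(abs(combined_response(math.pi / 3, a)))  # ~0 at the null direction
print(abs(combined_response(0.0, a)) > 0.1)    # other directions still pass
```

Choosing the ratio per direction is what lets the same pair of fixed units attenuate different directions purely in the signal processor.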