Patent classifications
H04R2430/20
Microphone occlusion detection
A system configured to perform microphone occlusion event detection. When a device detects a microphone occlusion event, the device will modify audio processing performed prior to speech processing, such as by disabling spatial processing and only processing audio data from a single microphone. The device detects the microphone occlusion event by determining inter-level difference (ILD) values between two microphone signals and using the ILD values as input features to a classifier. For example, when a far-end reference signal is inactive, the classifier may process a first ILD value within a high frequency band. However, when the far-end reference signal is active, the classifier may process the first ILD value and a second ILD value within a low frequency band.
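The band-dependent ILD feature extraction described above can be sketched as follows. The band edges, frame length, and dB computation are illustrative assumptions, not values from the patent; the classifier itself is not shown.

```python
import numpy as np

def ild_features(mic1, mic2, sample_rate, far_end_active, eps=1e-12):
    """Compute inter-level difference (ILD) features for occlusion
    classification. Band edges are illustrative assumptions."""
    # Power spectra of one frame from each microphone
    spec1 = np.abs(np.fft.rfft(mic1)) ** 2
    spec2 = np.abs(np.fft.rfft(mic2)) ** 2
    freqs = np.fft.rfftfreq(len(mic1), 1.0 / sample_rate)

    def band_ild(lo, hi):
        band = (freqs >= lo) & (freqs < hi)
        p1 = spec1[band].sum() + eps
        p2 = spec2[band].sum() + eps
        return 10.0 * np.log10(p1 / p2)  # level difference in dB

    high_ild = band_ild(4000, 8000)    # high-frequency band
    if not far_end_active:
        return [high_ild]              # reference inactive: one feature
    low_ild = band_ild(100, 1000)      # low-frequency band
    return [high_ild, low_ild]         # reference active: two features
```

An occluded microphone attenuates the signal it captures, so a large positive ILD between the clear and the blocked microphone is the cue the classifier would pick up on.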
Orientation-based playback device microphone selection
Aspects of a multi-orientation playback device including at least one microphone array are discussed. A method may include determining an orientation of the playback device which includes at least one microphone array and determining at least one microphone training response for the playback device from a plurality of microphone training responses based on the orientation of the playback device. The at least one microphone array can detect a sound input, and the location information of a source of the sound input can be determined based on the at least one microphone training response and the detected sound input. Based on the location information of the source, the directional focus of the at least one microphone array can be adjusted, and the sound input can be captured based on the adjusted directional focus.
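The first step above, selecting a training response from a plurality based on orientation, might look like the following sketch. The angular keys and the nearest-neighbour rule are assumptions for illustration, not the patent's method.

```python
def select_training_response(orientation_deg, responses):
    """Pick the microphone training response measured closest to the
    playback device's current orientation (angle keys in degrees)."""
    def ang_dist(a, b):
        # Shortest angular distance on a circle, in degrees
        return abs((a - b + 180) % 360 - 180)

    best = min(responses, key=lambda k: ang_dist(k, orientation_deg))
    return responses[best]
```

The selected response would then be combined with the detected sound input to localize the source and steer the array's directional focus.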
Audio device, server, audio system, and method of controlling audio device
An audio device includes a network interface, an amplifier that amplifies an audio signal received through the network interface, and a processor configured to obtain an output value of a signal from the amplifier and send the output value of the signal through the network interface.
Emergency sound localization
Techniques for determining information associated with sounds detected in an environment based on audio data are discussed herein. Audio sensors of a vehicle may determine audio data associated with sounds from the environment. Sounds may be caused by objects in the environment such as emergency vehicles, construction zones, non-emergency vehicles, humans, audio speakers, nature, etc. A model may determine a classification of the audio data and/or a probability value representing a likelihood that sound in the audio data is associated with the classification. A direction of arrival may be determined based on receiving classification values from multiple audio sensors of the vehicle, and other actions can be performed or the vehicle can be controlled based on the direction of arrival.
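One simple way to turn per-sensor classification values into a direction of arrival, as a score-weighted circular mean of the sensors' mounting directions, is sketched below. This is a simplification assumed for illustration; the model architecture and the vehicle's actual fusion logic are not described by the abstract.

```python
import math

def doa_from_sensor_scores(sensor_angles_deg, scores):
    """Estimate direction of arrival by weighting each audio sensor's
    mounting direction with its classification score."""
    x = sum(s * math.cos(math.radians(a))
            for a, s in zip(sensor_angles_deg, scores))
    y = sum(s * math.sin(math.radians(a))
            for a, s in zip(sensor_angles_deg, scores))
    return math.degrees(math.atan2(y, x)) % 360.0
```

With four sensors facing 0°, 90°, 180°, and 270°, a dominant siren score on the 90° sensor pulls the estimate toward 90°.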
Splitting frequency-domain processing between multiple DSP cores
An audio processing system may split frequency-domain processing between multiple DSP cores. Processing multi-channel audio data—e.g., from devices with multiple speakers—may require more computing power than available on a single DSP core. Such processing typically occurs in the frequency domain; DSP cores, however, typically communicate via ports configured for transferring data in the time-domain. Converting frequency-domain data into the time domain for transfer requires additional resources and introduces lag. Furthermore, transferring frequency-domain data may result in scheduling issues due to a mismatch between buffer size, bit rate, and the size of the frequency-domain data chunks transferred. However, the buffer size and bit rate may be artificially configured to transfer a chunk of frequency-domain data corresponding to a delay in the communication mechanism used by the DSP cores. In this manner, frequency-domain data can be transferred with a proper periodicity.
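The buffer-size and bit-rate matching described above amounts to simple arithmetic: size the port buffer to exactly one frequency-domain chunk and pace it at one chunk per hop interval. The function name and the byte layout (complex bins as two 32-bit floats) are assumptions for illustration.

```python
def port_config_for_fd_transfer(fft_size, hop, sample_rate, bytes_per_bin=8):
    """Configure a time-domain inter-core port to carry one
    frequency-domain chunk per hop interval."""
    bins = fft_size // 2 + 1              # one-sided spectrum of a real frame
    chunk_bytes = bins * bytes_per_bin    # one frequency-domain chunk
    frames_per_sec = sample_rate / hop    # chunks produced per second
    buffer_bytes = chunk_bytes            # buffer holds exactly one chunk
    byte_rate = chunk_bytes * frames_per_sec
    return buffer_bytes, byte_rate
```

For a 512-point FFT with a 256-sample hop at 48 kHz, this yields 2056-byte chunks at 187.5 chunks per second, so the port's periodicity lines up with the frequency-domain frame rate and no time-domain round trip is needed.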
Partial HRTF compensation or prediction for in-ear microphone arrays
In some embodiments, an ear-mounted sound reproduction system is provided. The system includes an ear-mountable housing that sits within the pinna of the ear and occludes the ear canal. In some embodiments, the ear-mountable housing includes a plurality of external-facing microphones. Because the external-facing microphones may be situated within the pinna of the ear but outside of the ear canal, the microphones will experience some, but not all, of the three-dimensional acoustic effects of the pinna. In some embodiments, sound is reproduced by an internal-facing driver element of the housing using a plurality of filters applied to the signals received by the plurality of external-facing microphones to preserve three-dimensional localization cues that would be present at the eardrum in the absence of the housing, such that the housing is essentially transparent to the user. In some embodiments, techniques are provided for deriving the plurality of filters.
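The filter bank applied to the external-facing microphones can be sketched as below: each microphone signal is convolved with its compensation filter and the results are summed to drive the internal-facing driver. The FIR form is an assumption; deriving the filters themselves (the partial-HRTF compensation) is the substance of the patent and is not shown.

```python
import numpy as np

def render_transparency(mic_signals, filters):
    """Sum of per-microphone FIR-filtered signals for the
    internal-facing driver."""
    out = None
    for sig, taps in zip(mic_signals, filters):
        y = np.convolve(sig, taps)[: len(sig)]  # causal FIR filtering
        out = y if out is None else out + y
    return out
```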
Electronic device and method for voice recording in electronic device
An electronic device is provided. The electronic device includes a first housing structure including a first microphone, a second housing structure including a second microphone and foldably coupled to the first housing structure, a sensor module disposed in the first housing structure or the second housing structure, and a processor in the first housing structure or the second housing structure, operationally connected with the first microphone, the second microphone, and the sensor module, the processor configured to receive a folding event related to the first housing structure and the second housing structure through the sensor module while a specific voice recording mode is performed, identify folding state information in response to receiving the folding event, identify a recording function configuration related to the first microphone and the second microphone, based on the folding state information and the specific voice recording mode, and perform recording by applying the recording function configuration.
Hearing device with omnidirectional sensitivity
A method performed by a first hearing device comprising one or more microphones configured to generate a first input signal, a communication unit configured to receive a second input signal from a second hearing device, an output unit, and a processor. The method comprises: generating a first intermediate signal including or based on a first weighted combination of the first input signal and the second input signal, wherein the first weighted combination is based on a first gain value and/or a second gain value; and generating an output signal for the output unit based on the first intermediate signal. One or both of the first gain value and the second gain value are determined such that, in the weighted combination, the power of the first input signal and the power of the second input signal differ by a preset power level difference greater than 2 dB.
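The gain determination above can be sketched as scaling the contralateral signal so that its power sits a preset number of decibels below the own-microphone power in the mix. The 6 dB default and the single-gain normalization are illustrative assumptions, not the patent's specific rule.

```python
import numpy as np

def weighted_combination(own_sig, contra_sig, delta_db=6.0):
    """Mix the own and contralateral microphone signals with the
    contralateral power delta_db below the own power."""
    p_own = np.mean(own_sig ** 2)
    p_contra = np.mean(contra_sig ** 2)
    target = p_own / (10.0 ** (delta_db / 10.0))  # desired contra power
    g2 = np.sqrt(target / max(p_contra, 1e-12))   # gain on second input
    return own_sig + g2 * contra_sig, g2
```

Keeping the contralateral contribution a fixed level below the local microphone widens the effective pickup pattern toward omnidirectional while preserving the dominance of the ear's own signal.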
Dual listener positions for mixed reality
A method of presenting audio comprises: identifying a first ear listener position and a second ear listener position in a mixed reality environment; identifying a first virtual sound source in the mixed reality environment; identifying a first object in the mixed reality environment; determining a first audio signal in the mixed reality environment, wherein the first audio signal originates at the first virtual sound source and intersects the first ear listener position; determining a second audio signal in the mixed reality environment, wherein the second audio signal originates at the first virtual sound source, intersects the first object, and intersects the second ear listener position; determining a third audio signal based on the second audio signal and the first object; presenting, to a first ear of a user, the first audio signal; and presenting, to a second ear of the user, the third audio signal.
Systems and methods for radar detection having intelligent acoustic activation
The disclosed system and method for smart detection of an armament projectile can reduce the risk of the system's radar being detected by counter-radar systems. In particular, the system may include an array of acoustic sensors for sensing one or more volleys associated with an armament projectile. An intelligent filtering module, coupled to the array of acoustic sensors, may select a volley based upon a learning algorithm applied to a target profile of historical system data logs. Based upon sensed parameters of the volley, the intelligent filtering module can calculate a radiation duration and a search fan width for radar transmission. Specifically, a controller within the intelligent filtering module is coupled to actuate the radar at the calculated search fan width for the calculated radiation duration. In some embodiments, the intelligent filtering module can selectively actuate one radar based upon the highest expanded detection probability relative to location and status.