Patent classifications
H04R5/04
XR RENDERING FOR 3D AUDIO CONTENT AND AUDIO CODEC
A device includes a memory configured to store instructions and also includes one or more processors configured to execute the instructions to obtain audio data corresponding to a sound source and metadata indicative of a direction of the sound source. The one or more processors are configured to execute the instructions to obtain direction data indicating a viewing direction associated with a user of a playback device. The one or more processors are configured to execute the instructions to determine a resolution setting based on a similarity between the viewing direction and the direction of the sound source. The one or more processors are also configured to execute the instructions to process the audio data based on the resolution setting to generate processed audio data.
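The abstract's core idea, spending rendering resolution where the user is looking, can be illustrated with a minimal sketch. The function name, the cosine-similarity measure, and the specific threshold and resolution values below are all illustrative assumptions, not taken from the patent:

```python
def resolution_setting(view_dir, source_dir, high=3, low=1, threshold=0.5):
    """Pick a rendering resolution (e.g., an ambisonic order) from the
    cosine similarity of the viewing and source directions.
    Both arguments are assumed to be 3-D unit vectors."""
    similarity = sum(v * s for v, s in zip(view_dir, source_dir))
    return high if similarity >= threshold else low

# A source straight ahead of the viewer gets the high resolution setting;
# a source behind the viewer gets the low one.
resolution_setting((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))   # high
resolution_setting((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))  # low
```

Because the inputs are unit vectors, the dot product is the cosine of the angle between them, so the threshold maps directly onto an angular field of attention.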
SIMULTANEOUS DECONVOLUTION OF LOUDSPEAKER-ROOM IMPULSE RESPONSES WITH LINEARLY-OPTIMAL TECHNIQUES
One embodiment provides a method comprising determining stimuli for simultaneously exciting a plurality of speakers within a spatial area. The method further comprises simultaneously exciting the plurality of speakers by providing the stimuli to the plurality of speakers at the same time for reproduction. The method further comprises recording, during the reproduction, one or more measurements of sound arriving at one or more microphones within the spatial area. The method further comprises simultaneously deconvolving a plurality of impulse responses of the plurality of speakers based on the stimuli and the one or more measurements.
AUDIO BEAM STEERING, TRACKING AND AUDIO EFFECTS FOR AR/VR APPLICATIONS
A method for audio beam steering, tracking, and audio effects for an immersive reality application is provided. The method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, identifying a perceived direction for the first acoustic source relative to the headset based on a location of the first acoustic source, and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the perceived direction. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
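The per-speaker time delay and amplitude derived from the perceived direction can be sketched as follows. The Woodworth-style delay approximation, the head-radius constant, and the simple sine-based gain are illustrative assumptions standing in for whatever delay/amplitude model the patented method actually uses:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, at room temperature
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def delay_and_gain(azimuth_deg):
    """Return a time delay (Woodworth interaural-time-difference
    approximation) and a toy direction-dependent amplitude for a source
    at the given azimuth (0 deg = straight ahead, positive = right)."""
    az = math.radians(azimuth_deg)
    delay = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    gain = 0.5 * (1.0 + math.sin(az))  # louder toward the source side
    return delay, gain

delay, gain = delay_and_gain(90.0)  # source at the listener's right ear
```

A source dead ahead yields zero delay and equal gain, while a source at 90 degrees yields the maximum delay (roughly 0.65 ms for this head radius), matching the intuition the abstract describes.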
Smart audio system capable of determining speaker type and position
There is provided a smart audio system including multiple audio devices and a central server. In a scan mode, the central server identifies the model of each audio device and its position in an operation area. In an operation mode, the central server determines a user position or a user state and accordingly controls the output power of the speaker of each of the multiple audio devices.
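One plausible use of the scanned positions is distance compensation: driving far speakers harder so perceived loudness at the user stays roughly constant. The inverse-square compensation and the clamping values below are illustrative assumptions, not the patent's control law:

```python
import math

def speaker_power(speaker_pos, user_pos, base_power=1.0, max_power=4.0):
    """Scale a speaker's output power with its distance to the user,
    using a toy inverse-square-law compensation, clamped to a maximum."""
    d = math.dist(speaker_pos, user_pos)
    return min(max_power, base_power * max(d, 0.1) ** 2)

# A speaker 1 m away runs at base power; a distant one is clamped.
speaker_power((0.0, 0.0), (1.0, 0.0))  # base power
speaker_power((0.0, 0.0), (3.0, 4.0))  # clamped at max_power
```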
Acoustic monitoring using a sound masking emitter as a sensor
Example embodiments may include one or more of receiving an electrical sound emission signal from a sound controller, interrupting reception of the electrical sound emission signal, by a sound emission interruption circuit connected to a sound emitter, and receiving an electrical ambient sound signal via a sound detection circuit, based on ambient sound sensed by the sound emitter when the reception of the electrical sound emission signal is interrupted by the sound emission interruption circuit.
Audio apparatus and method of audio processing for rendering audio elements of an audio scene
An audio apparatus comprises a receiver (201) receiving data describing an audio scene. The data comprises audio data for a set of audio elements corresponding to audio sources in the scene and further includes metadata comprising at least an audio rendering property indicator for a first audio element of the set of audio elements. A first renderer (205) renders audio elements by generating a first set of audio signals for a set of loudspeakers and a second renderer (207) renders audio elements by generating a second set of audio signals for a headphone. Further, a selector (209) is arranged to select between the first renderer and the second renderer for rendering of at least a first part of the first audio element in response to the first audio rendering property indicator. The approach may for example provide improved virtual reality experiences using loudspeakers and headphone hybrid rendering.
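The selector logic the abstract describes, routing an audio element to the loudspeaker renderer or the headphone renderer based on its metadata indicator, reduces to a simple dispatch. The indicator values and function names here are illustrative, not taken from the patent:

```python
def select_renderer(indicator, loudspeaker_render, headphone_render):
    """Choose between two rendering callables based on an audio
    element's rendering-property indicator from its metadata."""
    if indicator == "loudspeaker":
        return loudspeaker_render
    return headphone_render

# Hypothetical renderers; real ones would produce speaker/headphone feeds.
render = select_renderer("headphone",
                         loudspeaker_render=lambda e: f"LS:{e}",
                         headphone_render=lambda e: f"HP:{e}")
render("ambience")
```

In a hybrid setup this choice can be made per element, or even per part of an element, so diffuse ambience might go to loudspeakers while head-locked dialogue goes to the headphones.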