Patent classifications
H04S2400/01
Electronic system for producing a coordinated output using wireless localization of multiple portable electronic devices
Device localization (e.g., ultra-wideband device localization) may be used to provide coordinated outputs and/or receive coordinated inputs using multiple devices. Providing coordinated outputs may include providing partial outputs using multiple devices, modifying an output of a device based on its position and/or orientation relative to another device, and the like. In some cases, each device of a set of multiple devices may provide a partial output, which combines with partial outputs of the remaining devices to produce a coordinated output.
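The combining of partial outputs described above can be illustrated with a minimal sketch. This is not the patented method, only an assumed scheme in which each localized device scales a shared output by a distance-derived gain; the function name, rolloff model, and normalization are all illustrative.

```python
import math

def assign_partial_gains(device_positions, source_position, rolloff=1.0):
    """Give each device a gain based on its localized distance to a virtual
    source position, normalized so the partial outputs combine into one
    coordinated output at unit level (illustrative model only)."""
    gains = []
    for pos in device_positions:
        d = math.dist(pos, source_position)
        gains.append(1.0 / (1.0 + rolloff * d))
    total = sum(gains)
    # normalize so the per-device partial outputs sum to the full output
    return [g / total for g in gains]
```

A device at the virtual source location would take most of the output, with farther devices contributing progressively smaller partial outputs.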
QUANTIZATION OF SPATIAL AUDIO DIRECTION PARAMETERS
A method for spatial audio signal encoding comprising: obtaining a plurality of audio direction parameters, wherein each parameter comprises an elevation value and an azimuth value and wherein each parameter has an ordered position; deriving for each of the plurality of audio direction parameters a corresponding derived audio direction parameter (SP) comprising an elevation and an azimuth value, the corresponding derived audio direction parameters (SP) being arranged in a manner determined by a spatial utilization defined by the elevation values and the azimuth values of the plurality of audio direction parameters; rotating each derived audio direction parameter (SP) by the azimuth value (φ.sub.0) of an audio direction parameter in the first position of the plurality of audio direction parameters and quantizing the rotation to determine for each a corresponding quantized rotated derived audio direction parameter; changing the ordered position of an audio direction parameter to a further position coinciding with a position of a rotated derived audio direction parameter when the azimuth value of the audio direction parameter is closest to the azimuth value of the further rotated derived audio direction parameter compared to the azimuth values of other rotated derived audio direction parameters, followed by determining for each of the plurality of audio direction parameters a difference between each audio direction parameter and their corresponding quantized rotated derived audio direction parameter; and quantizing a difference for each of the plurality of audio direction parameters, wherein a difference quantization resolution for each of the plurality of audio direction parameters is defined based on a spatial extent of the audio direction parameters.
IMMERSIVE AUDIO PLATFORM
Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
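The decode-then-spatialize pipeline can be sketched as two stand-in services. This is only a minimal illustration: the decoder is a trivial 16-bit-to-float conversion, the spatializer is a constant-power pan driven by head yaw and a virtual-speaker azimuth, and all names are hypothetical.

```python
import math

def decode(encoded):
    """Stand-in for the first service: 16-bit integer samples -> float PCM."""
    return [s / 32768.0 for s in encoded]

def spatialize(pcm, head_yaw_deg, speaker_azimuth_deg):
    """Stand-in for the second service: pans decoded audio toward a virtual
    speaker, compensating for the wearer's head yaw (the sensor input)."""
    rel = (speaker_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    p = max(-90.0, min(90.0, rel))                # clamp to the pan range
    lg = math.cos(math.radians(p / 2.0 + 45.0))   # constant-power pan law
    rg = math.sin(math.radians(p / 2.0 + 45.0))
    return [s * lg for s in pcm], [s * rg for s in pcm]
```

Turning the head toward the virtual speaker drives `rel` toward zero, so the image stays anchored at the speaker position rather than to the head.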
SPATIAL AUDIO MONAURALIZATION VIA DATA EXCHANGE
A device includes a memory configured to store instructions and one or more processors configured to execute the instructions to obtain spatial audio data at a first audio output device. The one or more processors are further configured to perform data exchange, between the first audio output device and a second audio output device, of exchange data based on the spatial audio data. The one or more processors are also configured to generate first monaural audio output at the first audio output device based on the spatial audio data.
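One way to read this abstract is that each output device (e.g., earbud) renders its own mono signal from the spatial data, and the exchanged data keeps the two sides consistent. The sketch below assumes a crude interaural-level model and uses an exchanged gain as the shared reference; all of this is illustrative, not the patented exchange format.

```python
import math

def ear_gain(azimuth_deg, ear):
    """Very rough level model: louder on the side the source is on."""
    s = math.sin(math.radians(azimuth_deg))
    return 0.5 * (1.0 + s) if ear == "right" else 0.5 * (1.0 - s)

def monauralize(samples, azimuth_deg, ear, peer_gain=None):
    """Render this device's mono stream; exchanging the computed gain with the
    peer device lets both sides normalize to a common reference level."""
    g = ear_gain(azimuth_deg, ear)
    ref = max(g, peer_gain) if peer_gain is not None else g
    if not ref:
        return [0.0] * len(samples)
    return [s * g / ref for s in samples]
```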
APPARATUS AND METHOD FOR PROCESSING MULTI-CHANNEL AUDIO SIGNAL
An apparatus for processing audio includes at least one processor configured to obtain a down-mixed audio signal from a bitstream, to obtain down-mixing-related information from the bitstream, to de-mix the down-mixed audio signal by using the down-mixing-related information, and to reconstruct an audio signal including at least one frame based on the de-mixed audio signal. The down-mixing-related information is information generated in units of frames by using an audio scene type.
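De-mixing with per-frame, scene-dependent information can be sketched as inverting a small down-mix matrix chosen per frame. This is a toy two-channel illustration with an assumed mid/side-style matrix; the real codec's matrices and scene types are not specified here.

```python
def inv2(m):
    """Invert a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def demix_frame(frame, matrix):
    """frame: list of down-mixed sample pairs; returns reconstructed pairs
    by applying the inverse of the down-mix matrix."""
    inv = inv2(matrix)
    return [(inv[0][0] * x0 + inv[0][1] * x1,
             inv[1][0] * x0 + inv[1][1] * x1) for x0, x1 in frame]

def demix(frames, scene_types, matrices):
    """De-mix frame by frame; the matrix is selected per frame by scene type,
    mirroring the per-frame down-mixing-related information."""
    return [demix_frame(f, matrices[t]) for f, t in zip(frames, scene_types)]
```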
SYSTEM AND METHOD FOR AUTOMATICALLY TUNING DIGITAL SIGNAL PROCESSING CONFIGURATIONS FOR AN AUDIO SYSTEM
Embodiments include a processing device communicatively coupled to a plurality of audio devices comprising at least one microphone and at least one speaker, and to a digital signal processing (DSP) component having a plurality of audio input channels for receiving audio signals captured by the at least one microphone, the processing device being configured to identify one or more of the audio devices based on a unique identifier associated with each of said one or more audio devices; obtain device information from each identified audio device; and adjust one or more settings of the DSP component based on the device information. A computer-implemented method of automatically configuring an audio conferencing system, comprising a digital signal processing (DSP) component and a plurality of audio devices including at least one speaker and at least one microphone, is also provided.
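The identify-then-configure loop can be sketched as a profile lookup keyed by device model. The profile table, setting names, and fallback behavior below are hypothetical placeholders, not the patented configuration scheme.

```python
# Hypothetical profile table: device model -> DSP channel settings.
DEVICE_PROFILES = {
    "MIC-A1": {"gain_db": 12, "aec": True,  "noise_suppression": "high"},
    "SPK-B2": {"gain_db": 0,  "aec": False, "noise_suppression": "off"},
}

def configure_dsp(discovered_devices, default=None):
    """Map each discovered device (unique_id, model) to the DSP settings its
    profile calls for; unknown models fall back to a default profile."""
    default = default or {"gain_db": 6, "aec": True, "noise_suppression": "medium"}
    config = {}
    for unique_id, model in discovered_devices:
        # copy so later per-device tweaks don't mutate the shared profile
        config[unique_id] = dict(DEVICE_PROFILES.get(model, default))
    return config
```

Keying the lookup on the unique identifier reported by each device is what removes the manual per-channel tuning step the abstract describes.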
System for and method of generating an audio image
A system for and a method of generating an audio image for use in rendering audio. The method comprises accessing an audio stream; accessing positional information, the positional information comprising a first position, a second position and a third position; and generating an audio image. In some embodiments, generating the audio image comprises generating, based on the audio stream, a first virtual wave front to be perceived by a listener as emanating from the first position; generating, based on the audio stream, a second virtual wave front to be perceived by the listener as emanating from the second position; and generating, based on the audio stream, a third virtual wave front to be perceived by the listener as emanating from the third position.
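A wave front perceived as emanating from a position can be approximated, at its simplest, by a propagation delay and a distance attenuation; summing one such front per position gives an audio image. The delay/gain model below is an assumed simplification, not the patented rendering.

```python
import math

def wave_front(stream, position, listener=(0.0, 0.0, 0.0), fs=48000, c=343.0):
    """Delay and attenuate the stream so it is perceived as emanating from
    'position': delay = distance / c, gain ~ 1 / distance (floored at 1 m)."""
    d = math.dist(position, listener)
    delay = round(fs * d / c)
    gain = 1.0 / max(d, 1.0)
    return [0.0] * delay + [gain * s for s in stream]

def audio_image(stream, positions):
    """Sum one virtual wave front per position into a single rendered signal."""
    fronts = [wave_front(stream, p) for p in positions]
    n = max(len(f) for f in fronts)
    return [sum(f[i] for f in fronts if i < len(f)) for i in range(n)]
```

Calling `audio_image(stream, [p1, p2, p3])` mirrors the three-position case in the abstract: the same audio stream arrives three times, each copy shaped by its own position.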
Augmented reality and virtual reality feedback enhancement system, apparatus and method
- Chandrasekaran Sakthivel ,
- Michael Apodaca ,
- Kai Xiao ,
- Altug Koker ,
- Jeffery S. Boles ,
- Adam T. Lake ,
- Nikos Kaburlasos ,
- Joydeep Ray ,
- John H. Feit ,
- Travis T. Schluessler ,
- Jacek Kwiatkowski ,
- James M. Holland ,
- Prasoonkumar Surti ,
- Jonathan Kennedy ,
- Louis Feng ,
- Barnan Das ,
- Narayan Biswal ,
- Stanley J. Baran ,
- Gokcen Cilingir ,
- Nilesh V. Shah ,
- Archie Sharma ,
- Mayuresh M. Varerkar
Systems, apparatuses and methods may provide a way to render augmented reality and virtual reality (VR/AR) environment information. More particularly, systems, apparatuses and methods may provide a way to selectively suppress and enhance VR/AR renderings of n-dimensional environments. The systems, apparatuses and methods may deepen a user's VR/AR experience by focusing on particular feedback information, while suppressing other feedback information from the environment.
Generating sound zones using variable span filters
The invention provides a method for generating output filters for a plurality of loudspeakers at respective positions for playback of a plurality of different input signals in respective spatially different sound zones by means of a processor system. The method comprises computing spatio-temporal correlation matrices in response to spatial information, e.g. measured transfer functions, and in response to desired sound pressures in the plurality of sound zones. A joint eigenvalue decomposition of the spatial correlation matrices, or at least an approximation thereof, is then computed to arrive at eigenvectors accordingly. Next, variable span filters are formed from a linear combination of the eigenvectors in response to a desired trade-off between acoustic contrast and acoustic errors in the sound zones. Finally, an output filter is generated for each of the plurality of loudspeakers, for each of the plurality of input signals, in accordance with the variable span filters. The method is also applicable to optimization in one zone, e.g. for room equalization.
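The joint eigenvalue decomposition and the span/trade-off idea can be sketched numerically. The sketch below assumes a bright-zone correlation matrix `R_b`, a dark-zone matrix `R_d`, a target vector `d_b`, and a variable-span-style filter built from the `rank` strongest generalized eigenvectors with a regularizer `mu`; the exact filter definition in the patent may differ.

```python
import numpy as np

def variable_span_filter(R_b, R_d, d_b, mu, rank):
    """Solve the generalized eigenproblem R_b u = lam * R_d u via Cholesky
    whitening of R_d, then build a filter from the 'rank' dominant
    eigenvectors; larger mu trades acoustic contrast against target error."""
    L = np.linalg.cholesky(R_d)
    Linv = np.linalg.inv(L)
    M = Linv @ R_b @ Linv.T          # whitened problem: M v = lam v
    lam, V = np.linalg.eigh(M)       # ascending eigenvalues
    U = Linv.T @ V                   # generalized eigenvectors, U.T @ R_d @ U = I
    order = np.argsort(lam)[::-1]    # strongest contrast first
    lam, U = lam[order], U[:, order]
    Uv, lamv = U[:, :rank], lam[:rank]
    # linear combination of the retained eigenvectors (variable span)
    return Uv @ (np.diag(1.0 / (lamv + mu)) @ (Uv.T @ d_b))
```

With `rank` equal to the full dimension the filter approaches a pressure-matching solution, while `rank = 1` and small `mu` favors maximum acoustic contrast, which is exactly the trade-off the abstract describes.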
Emergency sound localization
Techniques for determining information associated with sounds detected in an environment based on audio data are discussed herein. Audio sensors of a vehicle may determine audio data associated with sounds from the environment. Sounds may be caused by objects in the environment such as emergency vehicles, construction zones, non-emergency vehicles, humans, audio speakers, nature, etc. A model may determine a classification of the audio data and/or a probability value representing a likelihood that sound in the audio data is associated with the classification. A direction of arrival may be determined based on receiving classification values from multiple audio sensors of the vehicle, and other actions can be performed or the vehicle can be controlled based on the direction of arrival.
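Combining per-sensor classification values into a direction of arrival can be sketched with a probability-weighted circular mean. The bearing model, threshold, and fusion rule are assumptions for illustration; the patented model-based approach is not reproduced here.

```python
import math

def fuse_direction(sensor_bearings_deg, probs, threshold=0.5):
    """Each audio sensor reports a bearing and a classification probability
    (e.g., that the sound is an emergency siren).  Fuse the bearings of
    confident sensors with a probability-weighted circular mean; returns a
    direction of arrival in [0, 360) degrees, or None if no sensor is
    confident enough."""
    x = y = 0.0
    for b, p in zip(sensor_bearings_deg, probs):
        if p >= threshold:
            x += p * math.cos(math.radians(b))
            y += p * math.sin(math.radians(b))
    if x == 0.0 and y == 0.0:
        return None
    return math.degrees(math.atan2(y, x)) % 360.0
```

Thresholding before fusing keeps a sensor that mostly hears, say, construction noise from dragging the estimated siren direction off course.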