Display apparatus and method for processing audio

A display apparatus and a method for processing audio are provided. The display apparatus includes a circuit board provided with a hybrid circuit, a filter circuit and a speaker. The hybrid circuit is configured to receive an original audio signal and superpose a first sub-signal of the original audio signal on a second sub-signal of the original audio signal to obtain a hybrid audio signal; the first sub-signal includes at least one channel of audio signal, and the second sub-signal includes at least two channels of audio signals. The filter circuit is configured to filter the hybrid audio signal according to frequency characteristics of the first sub-signal and the second sub-signal to obtain a restored original audio signal; and the speaker, connected with the filter circuit, is configured to output the restored original audio signal.
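As a rough illustration of the superpose-then-filter idea: a low-frequency sub-signal and a higher-frequency sub-signal share one line, and a filter keyed to their frequency characteristics separates them again. The sample rate, tone frequencies, and boxcar low-pass are assumptions for the sketch, not the patented circuit.

```python
import math

RATE = 8000          # sample rate in Hz (illustrative)
N = 1024

# First sub-signal: one low-frequency channel; second: a higher-frequency one.
low  = [math.sin(2 * math.pi * 50 * n / RATE) for n in range(N)]
high = [math.sin(2 * math.pi * 3000 * n / RATE) for n in range(N)]

# Hybrid circuit: superpose the two sub-signals on a single line.
hybrid = [l + h for l, h in zip(low, high)]

def moving_average(x, width):
    """Crude zero-delay low-pass: centered boxcar average of `width` samples."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half]
        out.append(sum(window) / len(window))
    return out

# Filter circuit: split the hybrid by frequency characteristic.
restored_low  = moving_average(hybrid, 32)                  # passes the 50 Hz part
restored_high = [h - l for h, l in zip(hybrid, restored_low)]  # the complement
```

The 32-sample boxcar at 8 kHz happens to place a null exactly at 3 kHz, so the high component cancels almost completely in the low branch.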

Electronic system for producing a coordinated output using wireless localization of multiple portable electronic devices
11520550 · 2022-12-06 ·

Device localization (e.g., ultra-wideband device localization) may be used to provide coordinated outputs and/or receive coordinated inputs using multiple devices. Providing coordinated outputs may include providing partial outputs using multiple devices, modifying an output of a device based on its position and/or orientation relative to another device, and the like. In some cases, each device of a set of multiple devices may provide a partial output, which combines with partial outputs of the remaining devices to produce a coordinated output.
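One way the partial-output idea can be pictured: each localized device derives a gain from its position relative to a virtual source, and the per-device partial outputs sum to the coordinated output. The positions, the distance-based panning law, and the power normalization are assumptions for the sketch.

```python
import math

# Hypothetical device positions from UWB ranging (metres).
devices = {"left": (-1.0, 0.0), "right": (1.0, 0.0)}
source  = (0.5, 2.0)   # point the coordinated output should image at

def pan_gains(devices, source):
    """Distance-based amplitude panning: nearer devices get more of the signal."""
    inv = {name: 1.0 / math.dist(pos, source) for name, pos in devices.items()}
    norm = math.sqrt(sum(g * g for g in inv.values()))   # preserve total power
    return {name: g / norm for name, g in inv.items()}

gains = pan_gains(devices, source)   # each device scales its partial output by this
```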

QUANTIZATION OF SPATIAL AUDIO DIRECTION PARAMETERS
20220386056 · 2022-12-01 ·

A method for spatial audio signal encoding comprising: obtaining a plurality of audio direction parameters, wherein each parameter comprises an elevation value and an azimuth value and wherein each parameter has an ordered position; deriving for each of the plurality of audio direction parameters a corresponding derived audio direction parameter (SP) comprising an elevation and an azimuth value, corresponding derived audio direction parameters (SP) being arranged in a manner determined by a spatial utilization defined by the elevation values and the azimuth values of the plurality of audio direction parameters; rotating each derived audio direction parameter (SP) by the azimuth value (φ₀) of an audio direction parameter in the first position of the plurality of audio direction parameters and quantizing the rotation to determine for each a corresponding quantized rotated derived audio direction parameter; changing the ordered position of an audio direction parameter to a further position coinciding with a position of a rotated derived audio direction parameter when the azimuth value of the audio direction parameter is closest to the azimuth value of the further rotated derived audio direction parameter compared to the azimuth values of other rotated derived audio direction parameters, followed by determining for each of the plurality of audio direction parameters a difference between each audio direction parameter and their corresponding quantized rotated derived audio direction parameter; and quantizing a difference for each of the plurality of audio direction parameters, wherein a difference quantization resolution for each of the plurality of audio direction parameters is defined based on a spatial extent of the audio direction parameters.
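The core rotate-then-quantize step can be sketched as follows: azimuths are rotated by the first parameter's azimuth φ₀, snapped to a coarse grid, and only the small residuals remain for fine coding. The grid step and example angles are illustrative assumptions, not values from the claim.

```python
azimuths = [10.0, 95.0, 187.0, 272.0]   # degrees; the first entry supplies phi_0
phi_0 = azimuths[0]

def quantize(az, step):
    """Uniform azimuth quantizer with wrap-around to [0, 360)."""
    return (round(az / step) * step) % 360.0

step = 90.0                                        # coarse grid for rotated values
rotated   = [(a - phi_0) % 360.0 for a in azimuths]
quantized = [quantize(r, step) for r in rotated]
# Signed residual in (-180, 180]: the small value that still needs fine coding.
residuals = [((r - q + 180.0) % 360.0) - 180.0 for r, q in zip(rotated, quantized)]
```

After rotation the azimuths cluster near the grid points, so the residuals stay small and can be coded at a resolution tied to the spatial extent, as the claim describes.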

SIGNALLING OF AUDIO EFFECT METADATA IN A BITSTREAM

Methods, systems, computer-readable media, and apparatuses for manipulating a soundfield are presented. Some configurations include receiving a bitstream that comprises metadata and a soundfield description; parsing the metadata to obtain an effect identifier and at least one effect parameter value; and applying, to the soundfield description, an effect identified by the effect identifier. The applying may include using the at least one effect parameter value to apply the identified effect to the soundfield description.
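A minimal sketch of the parsing step, assuming a hypothetical 6-byte metadata layout (effect identifier as a big-endian uint16 followed by one float32 parameter); the real bitstream syntax is not specified in the abstract.

```python
import struct

# Hypothetical layout: effect id (uint16, big-endian) + one parameter (float32).
EFFECT_ROTATE = 0x0001   # assumed identifier for a soundfield-rotation effect

def pack_effect(effect_id, value):
    return struct.pack(">Hf", effect_id, value)

def parse_effect(blob):
    """Parse the metadata to obtain an effect id and its parameter value."""
    effect_id, value = struct.unpack(">Hf", blob)
    return effect_id, value

blob = pack_effect(EFFECT_ROTATE, 45.0)   # e.g. rotate the soundfield by 45 deg
effect_id, angle = parse_effect(blob)
```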

IMMERSIVE AUDIO PLATFORM

Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
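The spatialization step (combining head-pose sensor input with a virtual speaker position) can be sketched with a simple constant-power pan; the coordinate convention and panning law are assumptions, not the platform's actual renderer.

```python
import math

def head_relative_azimuth(speaker_xy, head_yaw):
    """Azimuth of a virtual speaker in the listener's frame (radians)."""
    x, y = speaker_xy
    world = math.atan2(x, y)          # 0 rad straight ahead, positive to the right
    return world - head_yaw

def stereo_gains(azimuth):
    """Constant-power pan between the left and right ear signals."""
    pan = (math.sin(azimuth) + 1.0) / 2.0          # 0 = hard left, 1 = hard right
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

# Virtual speaker ahead and to the right; head yaw from the wearable's sensors.
az = head_relative_azimuth((1.0, 1.0), head_yaw=0.0)
left, right = stereo_gains(az)
```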

AUDIO PROCESSING METHOD AND APPARATUS, READABLE MEDIUM, AND ELECTRONIC DEVICE
20220386061 · 2022-12-01 ·

Provided are an audio processing method and apparatus, a readable medium, and an electronic device. The method includes: acquiring an original image captured by a terminal; determining a three-dimensional relative position of a target object relative to the terminal as a first three-dimensional relative position according to the original image; and performing three-dimensional effect processing on a target sound according to the first three-dimensional relative position to enable a sound source position of the target sound in audio obtained after the three-dimensional effect processing and the first three-dimensional relative position to conform to a positional relationship between the target object and a sound effect object corresponding to the target object, where the target sound is an effect sound corresponding to the sound effect object.
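Once the first three-dimensional relative position is known, placing the effect sound reduces to converting that position into a direction and distance. The camera-frame axis convention below is an assumption for the sketch.

```python
import math

def direction_from_position(pos):
    """Convert a camera-frame position (x right, y up, z forward, metres)
    into the azimuth/elevation/distance used to place the effect sound."""
    x, y, z = pos
    azimuth   = math.degrees(math.atan2(x, z))
    elevation = math.degrees(math.atan2(y, math.hypot(x, z)))
    distance  = math.sqrt(x * x + y * y + z * z)
    return azimuth, elevation, distance

# Target object up and to the right of the terminal, one metre along each axis.
az, el, dist = direction_from_position((1.0, 1.0, 1.0))
```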

SPATIAL AUDIO MONAURALIZATION VIA DATA EXCHANGE
20220386054 · 2022-12-01 ·

A device includes a memory configured to store instructions and one or more processors configured to execute the instructions to obtain spatial audio data at a first audio output device. The one or more processors are further configured to perform data exchange, between the first audio output device and a second audio output device, of exchange data based on the spatial audio data. The one or more processors are also configured to generate first monaural audio output at the first audio output device based on the spatial audio data.
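One plausible reading of the exchange-then-monauralize flow, assuming the spatial audio data is a stereo pair and the exchange gives each output device access to the other channel; the weighting is an illustrative assumption.

```python
# Frames held by each of the two audio output devices.
left_frames  = [0.5, 0.25, -0.5]
right_frames = [0.1, 0.75,  0.3]

def monauralize(own, exchanged, own_weight=0.8):
    """Weighted mono downmix favouring the device's own channel,
    using the exchange data received from the other device."""
    return [own_weight * o + (1 - own_weight) * e
            for o, e in zip(own, exchanged)]

# First device's monaural output after the exchange.
mono_first = monauralize(left_frames, right_frames)
```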

APPARATUS AND METHOD FOR PROCESSING MULTI-CHANNEL AUDIO SIGNAL

An apparatus for processing audio includes at least one processor configured to obtain a down-mixed audio signal from a bitstream, to obtain down-mixing-related information from the bitstream, to de-mix the down-mixed audio signal by using the down-mixing-related information, and to reconstruct an audio signal including at least one frame based on the de-mixed audio signal. The down-mixing-related information is information generated in units of frames by using an audio scene type.
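The frame-by-frame de-mix can be pictured as inverting a scene-dependent down-mix matrix. The two-channel matrices and the scene labels below are illustrative assumptions, not the standardized coefficients.

```python
# Scene-dependent 2x2 down-mix matrices (illustrative values).
DOWNMIX = {
    "dialogue": [[0.7, 0.3], [0.3, 0.7]],
    "music":    [[0.5, 0.5], [0.5, -0.5]],
}

def apply(m, v):
    """Multiply a 2x2 matrix by a 2-vector (one audio frame)."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def inverse2x2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

frame = [1.0, -0.5]
scene = "dialogue"                     # signalled per frame via the bitstream
down  = apply(DOWNMIX[scene], frame)               # encoder side: down-mix
restored = apply(inverse2x2(DOWNMIX[scene]), down)  # decoder side: de-mix
```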

SYSTEM AND METHOD FOR AUTOMATICALLY TUNING DIGITAL SIGNAL PROCESSING CONFIGURATIONS FOR AN AUDIO SYSTEM
20220386025 · 2022-12-01 ·

Embodiments include a processing device communicatively coupled to a plurality of audio devices comprising at least one microphone and at least one speaker, and to a digital signal processing (DSP) component having a plurality of audio input channels for receiving audio signals captured by the at least one microphone, the processing device being configured to identify one or more of the audio devices based on a unique identifier associated with each of said one or more audio devices; obtain device information from each identified audio device; and adjust one or more settings of the DSP component based on the device information. A computer-implemented method of automatically configuring an audio conferencing system, comprising a digital signal processing (DSP) component and a plurality of audio devices including at least one speaker and at least one microphone, is also provided.
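The identify-then-configure flow can be sketched as a lookup from unique device identifiers to per-channel DSP settings. The registry contents, identifiers, and setting names are assumptions, not the patented protocol.

```python
# Hypothetical registry keyed by each device's unique identifier.
DEVICE_REGISTRY = {
    "MIC-0001": {"model": "ceiling-mic", "gain_db": -6, "aec": True},
    "SPK-0042": {"model": "speaker",     "gain_db": 0,  "aec": False},
}

def configure_dsp(discovered_ids):
    """Adjust DSP settings for each identified device; skip unknown ids."""
    settings = {}
    for uid in discovered_ids:
        info = DEVICE_REGISTRY.get(uid)
        if info is None:
            continue                    # unknown device: leave channel untouched
        settings[uid] = {"input_gain_db": info["gain_db"],
                         "echo_cancel": info["aec"]}
    return settings

dsp = configure_dsp(["MIC-0001", "SPK-0042", "UNKNOWN-9"])
```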

Providing a multi-channel and a multi-zone audio environment

A multi-channel and multi-zone audio environment is provided. Various inventions are disclosed that allow playback devices on one or more networks to provide an effective multi-channel and a multi-zone audio environment using timing information. According to one example, timing information is used to coordinate playback devices connected over a low-latency network to provide audio along with a video display. In another example, timing information is used to coordinate playback devices connected over a mesh network to provide audio in one or more zones or zone groups.
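The timing-information idea can be sketched as follows: a coordinator stamps a common "play at" time, and each player converts it through its measured clock offset into a local sample count, so all players start in lockstep. The clock model and numbers are illustrative assumptions.

```python
def samples_until_start(play_at_ms, local_now_ms, clock_offset_ms, rate_hz=48000):
    """Samples of silence a player inserts before starting playback.
    clock_offset_ms = this player's clock minus the coordinator's clock."""
    wait_ms = (play_at_ms + clock_offset_ms) - local_now_ms
    return max(0, round(wait_ms * rate_hz / 1000))

# Two players whose clocks disagree still agree on the coordinated start.
a = samples_until_start(play_at_ms=10_000, local_now_ms=9_900, clock_offset_ms=0)
b = samples_until_start(play_at_ms=10_000, local_now_ms=9_880, clock_offset_ms=-20)
```

Because each player corrects through its own offset, both compute the same number of samples to wait, which is what keeps multi-zone playback aligned.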