
Graphical user interface and parametric equalizer in gaming systems

A system that incorporates the subject disclosure may include, for example, a gaming system that cooperates with a graphical user interface to enable user modification and enhancement of one or more audio streams associated with the gaming system. In embodiments, the audio streams may include a game audio stream, a chat audio stream of conversation among players of a video game, and a microphone audio stream of a player of the video game. Additional embodiments are disclosed.
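As a minimal sketch of per-stream equalization like that described above, the following applies a single parametric (peaking) EQ band to one of several named audio streams. The biquad formulas follow the well-known RBJ audio EQ cookbook; the stream names and EQ settings are illustrative, not from the patent.

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Biquad coefficients for one parametric EQ band (RBJ cookbook form)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def apply_biquad(samples, b, a):
    """Direct-form I filtering of one mono stream."""
    out, x_prev, y_prev = [], [0.0, 0.0], [0.0, 0.0]
    for x in samples:
        y = (b[0] * x + b[1] * x_prev[0] + b[2] * x_prev[1]
             - a[1] * y_prev[0] - a[2] * y_prev[1])
        x_prev, y_prev = [x, x_prev[0]], [y, y_prev[0]]
        out.append(y)
    return out

# Illustrative per-stream settings: boost chat intelligibility near 2 kHz,
# leave the game stream untouched.
streams = {"game": [0.0, 1.0, 0.0, -1.0], "chat": [0.0, 1.0, 0.0, -1.0]}
eq = {"chat": peaking_biquad(48000, 2000, 6.0, 1.0)}
mixed = {name: apply_biquad(s, *eq[name]) if name in eq else s
         for name, s in streams.items()}
```

A GUI would simply edit the `(f0, gain_db, q)` triples per stream and recompute the coefficients.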

Apparatus and Method for Rendering a Sound Scene Using Pipeline Stages
20230007435 · 2023-01-05 ·

Apparatus for rendering a sound scene, including: a first pipeline stage including a first control layer and a reconfigurable first audio data processor that operates in accordance with a first configuration of that processor; a second pipeline stage located, with respect to a pipeline flow, subsequent to the first pipeline stage and including a second control layer and a reconfigurable second audio data processor that operates in accordance with a first configuration of that processor; and a central controller for controlling the first control layer and the second control layer in response to the sound scene. Under this control, the first control layer prepares a second configuration of the first audio data processor during or subsequent to the processor's operation in its first configuration, or the second control layer prepares a second configuration of the second audio data processor during or subsequent to that processor's operation in its first configuration. The central controller is further configured to control the first or second control layer using a switch control, so that the first or second audio data processor is reconfigured to its respective second configuration at a certain time instant.
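The prepare-then-switch behavior described above can be sketched as a double-buffered pipeline: each stage holds an active configuration and a prepared one, and a central controller swaps them atomically on a switch signal. Class and field names here are assumptions for illustration, not the patent's terminology.

```python
class PipelineStage:
    """A stage whose control layer can prepare a new configuration
    while the audio data processor still runs with the active one."""
    def __init__(self, config):
        self.active = config      # configuration the processor runs with
        self.prepared = None      # configuration being prepared in background

    def prepare(self, config):
        self.prepared = config    # control layer: prepare during operation

    def switch(self):
        if self.prepared is not None:   # triggered by the switch control
            self.active, self.prepared = self.prepared, None

    def process(self, block):
        gain = self.active["gain"]      # trivial stand-in for real DSP
        return [x * gain for x in block]

class CentralController:
    def __init__(self, stages):
        self.stages = stages

    def update_scene(self, stage_configs):
        for name, cfg in stage_configs.items():
            self.stages[name].prepare(cfg)

    def switch_all(self):
        # reconfigure every stage at the same time instant
        for s in self.stages.values():
            s.switch()

stages = {"early": PipelineStage({"gain": 1.0}),
          "late": PipelineStage({"gain": 0.5})}
ctrl = CentralController(stages)
block1 = stages["late"].process(stages["early"].process([1.0, -1.0]))
ctrl.update_scene({"early": {"gain": 2.0}})   # prepare while running
ctrl.switch_all()                             # switch at one instant
block2 = stages["late"].process(stages["early"].process([1.0, -1.0]))
```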

SYSTEMS, METHODS AND APPARATUS FOR CONVERSION FROM CHANNEL-BASED AUDIO TO OBJECT-BASED AUDIO

Embodiments are disclosed for channel-based audio (CBA) (e.g., 22.2-ch audio) to object-based audio (OBA) conversion. The conversion includes converting CBA metadata to object audio metadata (OAMD) and reordering the CBA channels based on channel shuffle information derived in accordance with channel ordering constraints of the OAMD. The OBA with reordered channels is rendered in a playback device using the OAMD or in a source device, such as a set-top box or audio/video recorder. In an embodiment, the CBA metadata includes signaling that indicates a specific OAMD representation to be used in the conversion of the metadata. In an embodiment, pre-computed OAMD is transmitted in a native audio bitstream (e.g., AAC) for transmission (e.g., over HDMI) or for rendering in a source device. In an embodiment, pre-computed OAMD is transmitted in a transport layer bitstream (e.g., ISO BMFF, MPEG4 audio bitstream) to a playback device or source device.
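The channel-shuffle step above can be illustrated with a small sketch: given a source channel order and a target order imposed by metadata constraints, derive a permutation and apply it. The channel labels and the target ordering here are assumptions; real 22.2-ch layouts and OAMD ordering constraints are more involved.

```python
# Hypothetical channel labels and target constraint for illustration.
cba_order = ["FL", "FR", "FC", "LFE", "BL", "BR"]
oamd_order = ["FC", "FL", "FR", "BL", "BR", "LFE"]  # assumed OAMD constraint

def derive_shuffle(src, dst):
    """Channel shuffle information: for each output slot, the index of
    the source channel that must be placed there."""
    return [src.index(name) for name in dst]

def reorder(channels, shuffle):
    """Apply the shuffle to a list of per-channel payloads."""
    return [channels[i] for i in shuffle]

shuffle = derive_shuffle(cba_order, oamd_order)
reordered = reorder(cba_order, shuffle)
```

The same `shuffle` list would be applied to the audio buffers themselves, while the converted OAMD describes the reordered channels.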

Audio decoder and decoding method

A method for representing a second presentation of audio channels or objects as a data stream, the method comprising the steps of: (a) providing a set of base signals, the base signals representing a first presentation of the audio channels or objects; (b) providing a set of transformation parameters, the transformation parameters intended to transform the first presentation into the second presentation; the transformation parameters further being specified for at least two frequency bands and including a set of multi-tap convolution matrix parameters for at least one of the frequency bands.
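A minimal sketch of the multi-tap convolution matrix: each output signal of the second presentation is a sum of FIR-filtered base signals, with one tap list per (output, input) pair. In a full decoder this would run separately per frequency band; the single-band, single-channel example below is purely illustrative.

```python
def apply_multitap_matrix(base, taps):
    """Transform base signals into a second presentation.

    base: list of input channels, each a list of samples.
    taps[o][i]: FIR coefficients mapping input channel i to output o.
    """
    n_out, n = len(taps), len(base[0])
    out = [[0.0] * n for _ in range(n_out)]
    for o in range(n_out):
        for i, fir in enumerate(taps[o]):
            for t, c in enumerate(fir):        # convolve: y[s] += c*x[s-t]
                for s in range(t, n):
                    out[o][s] += c * base[i][s - t]
    return out

base = [[1.0, 0.0, 0.0]]   # one base signal: a unit impulse
taps = [[[0.5, 0.25]]]     # 1x1 matrix with two taps
second = apply_multitap_matrix(base, taps)
```

With a single tap per entry this degenerates to an ordinary mixing matrix; the extra taps are what let the transform approximate frequency-dependent (e.g. binaural) processing within a band.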

Systems and methods to control spatial audio rendering
11564053 · 2023-01-24 ·

A method of controlling spatial audio rendering includes comparing a first heartbeat pattern to a second heartbeat pattern to generate a comparison result. The first heartbeat pattern is based on sensor information associated with a first sensor of a first sensor type, and the second heartbeat pattern is based on sensor information associated with a second sensor of a second sensor type. The method also includes, based on the comparison result, controlling a spatial audio rendering function associated with media playback.
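The comparison step can be sketched as matching inter-beat intervals from two sensor types and gating the spatial renderer on the result. The tolerance, sensor types, and control states below are assumptions for illustration only.

```python
def compare_patterns(a, b, tol_ms=5.0):
    """Compare two heartbeat patterns (inter-beat intervals in ms)
    from sensors of two different types; True when they plausibly
    describe the same heartbeat."""
    if len(a) != len(b):
        return False
    return all(abs(x - y) <= tol_ms for x, y in zip(a, b))

def control_rendering(match):
    # e.g. enable head-tracked spatial audio only when both sensors
    # agree (suggesting the same wearer holds both devices)
    return "spatial_enabled" if match else "spatial_bypassed"

optical = [812.0, 798.0, 805.0]   # e.g. PPG-derived intervals
motion = [810.0, 800.0, 803.0]    # e.g. accelerometer-derived intervals
state = control_rendering(compare_patterns(optical, motion))
```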

Multi-input push-to-talk switch with binaural spatial audio positioning

Various embodiments provide a multi-audio input, stereo audio output, push-to-talk (PTT) switch device. The device may include an audio processing unit configured to perform spatial separation/positioning for one or a plurality of audio sources. The audio processing unit of the device may apply unique head-related transfer functions (HRTFs) to produce left and right audio outputs that correspond to predetermined or dynamically positioned spatial locations for each of the incoming audio streams.
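As a simplified stand-in for HRTF-based positioning, the sketch below places each incoming PTT audio source at an azimuth using constant-power amplitude panning and sums the results into left/right outputs. True HRTF rendering would convolve each source with a measured filter pair; the panning here only models the level cue.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power pan gains; -90 = hard left, +90 = hard right."""
    theta = math.radians((azimuth_deg + 90) / 2)  # map [-90, 90] -> [0, 90] deg
    return math.cos(theta), math.sin(theta)

def mix_ptt_sources(sources):
    """sources: list of (samples, azimuth_deg). Returns (left, right)."""
    n = max(len(s) for s, _ in sources)
    left, right = [0.0] * n, [0.0] * n
    for samples, az in sources:
        gl, gr = pan_gains(az)
        for i, x in enumerate(samples):
            left[i] += gl * x
            right[i] += gr * x
    return left, right

radio_a = ([1.0, 1.0], -90)   # first net: hard left
radio_b = ([1.0, 1.0], 90)    # second net: hard right
L, R = mix_ptt_sources([radio_a, radio_b])
```

Spatially separating each radio net this way is what lets an operator distinguish simultaneous talkers over a single stereo headset.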

Converting Binaural Signals to Stereo Audio Signals
20220417691 · 2022-12-29 ·

An apparatus including circuitry configured to: obtain a binaural audio signal; obtain, based on the binaural audio signal, at least one direction parameter of at least one frequency band of the binaural audio signal; process the binaural audio signal to generate at least two audio signals for loudspeaker reproduction by modifying an inter-channel difference of the at least one frequency band of the binaural audio signal based on the at least one direction parameter for the at least one frequency band; and output the at least two audio signals for loudspeaker reproduction.
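A heavily simplified, single-band sketch of the idea: estimate a direction-related parameter from the binaural inter-channel level difference, then re-render the band for loudspeakers by re-applying that difference as amplitude panning rather than the original binaural cues. A real implementation works per frequency band and also handles phase/time differences; everything below is illustrative.

```python
import math

def ild_db(left_level, right_level):
    """Crude direction parameter: inter-channel level difference (dB)
    of one frequency band of the binaural signal."""
    eps = 1e-12
    return 20 * math.log10((left_level + eps) / (right_level + eps))

def rerender_for_loudspeakers(left, right, ild):
    """Collapse the band to mid and re-impose the level difference
    symmetrically, producing an amplitude-panned loudspeaker pair."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    g = 10 ** (ild / 40)   # split the difference between the channels
    return [m * g for m in mid], [m / g for m in mid]

left, right = [1.0, 1.0], [0.5, 0.5]
direction = ild_db(1.0, 0.5)
out_l, out_r = rerender_for_loudspeakers(left, right, direction)
```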

Apparatus and method for screen related audio object remapping

An apparatus for generating loudspeaker signals includes an object renderer configured to receive an audio object and to generate the loudspeaker signals depending on the audio object and on position information, and an object metadata processor configured to receive metadata indicating a first position of the audio object. If the audio object is indicated in the metadata as being screen-related, the object metadata processor calculates a second position of the audio object depending on the first position and on a size of a screen, and feeds the second position into the object renderer as the position information; if the audio object is indicated as not being screen-related, the object metadata processor feeds the first position into the object renderer as the position information.
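The remapping decision can be sketched as a small function that rescales a screen-related object's position by the actual screen size and passes non-screen-related objects through unchanged. The dictionary fields and the reference-width scaling are assumptions for illustration, not the patent's metadata syntax.

```python
def remap_for_screen(obj, screen_width, ref_width=1.0):
    """Return the position to feed into the object renderer.

    Screen-related objects get a second position scaled to the actual
    screen; other objects keep their first (authored) position.
    """
    if not obj.get("screen_related", False):
        return obj["position"]          # first position, unchanged
    x, y, z = obj["position"]
    scale = screen_width / ref_width    # shrink/stretch to the real screen
    return (x * scale, y, z)            # second position

onscreen = {"position": (0.5, 0.0, 1.0), "screen_related": True}
ambient = {"position": (0.5, 0.0, 1.0), "screen_related": False}
p_onscreen = remap_for_screen(onscreen, screen_width=0.5)
p_ambient = remap_for_screen(ambient, screen_width=0.5)
```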

SYSTEM AND METHOD FOR AUTOMATICALLY TUNING DIGITAL SIGNAL PROCESSING CONFIGURATIONS FOR AN AUDIO SYSTEM
20220386025 · 2022-12-01 ·

Embodiments include a processing device communicatively coupled to a plurality of audio devices comprising at least one microphone and at least one speaker, and to a digital signal processing (DSP) component having a plurality of audio input channels for receiving audio signals captured by the at least one microphone, the processing device being configured to identify one or more of the audio devices based on a unique identifier associated with each of said one or more audio devices; obtain device information from each identified audio device; and adjust one or more settings of the DSP component based on the device information. A computer-implemented method of automatically configuring an audio conferencing system, comprising a digital signal processing (DSP) component and a plurality of audio devices including at least one speaker and at least one microphone, is also provided.
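The identify-then-configure flow above can be sketched as a lookup from each discovered device's unique identifier to per-channel DSP settings. The device IDs and profile fields below are hypothetical; a real system would query the devices themselves for this information.

```python
# Hypothetical device profiles keyed by unique identifier.
DEVICE_PROFILES = {
    "MIC-1234": {"type": "mic", "gain_db": 12, "aec": True},
    "SPK-5678": {"type": "speaker", "eq_preset": "ceiling"},
}

def auto_configure(discovered_ids):
    """Map each identified device to DSP settings for its input channel."""
    settings = {}
    for channel, dev_id in enumerate(discovered_ids):
        profile = DEVICE_PROFILES.get(dev_id)
        if profile is None:
            continue   # unknown device: leave the channel at defaults
        settings[channel] = profile
    return settings

dsp_settings = auto_configure(["MIC-1234", "UNKNOWN", "SPK-5678"])
```

This replaces the manual step of an installer typing gain, AEC, and EQ values per channel, which is the tuning burden the disclosure aims to remove.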

SYSTEM FOR INTELLIGENT AUDIO RENDERING USING HETEROGENEOUS SPEAKER NODES AND METHOD THEREOF
20220386026 · 2022-12-01 ·

A system for intelligent audio rendering using speaker nodes is provided. A source device determines a spatial location and speaker capability of one or more speaker nodes based on information embedded in each node, selects the speaker most suitable for each audio channel based on the speaker capability and spatial location of each speaker, generates speaker profiles for the speakers, maps an audio channel to each speaker based on its corresponding speaker profile, estimates a media path between the source device and each speaker, detects changes in the estimated media paths, and renders audio on the speakers in real time based on the speaker profiles and the changes in the media paths.
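The selection and mapping steps can be sketched as a greedy assignment: score each speaker node for each channel from its capability and its distance to the channel's ideal position, then assign the best unused node per channel. The scoring heuristic, capability scale, and node names are assumptions for illustration.

```python
def score(speaker, channel):
    """Suitability of a speaker node for a channel: capability match
    minus distance from the channel's ideal position. Heuristic only."""
    cap = 1.0 if channel["needs"] <= speaker["capability"] else 0.0
    dx = speaker["pos"][0] - channel["pos"][0]
    dy = speaker["pos"][1] - channel["pos"][1]
    return cap - (dx * dx + dy * dy) ** 0.5

def map_channels(channels, speakers):
    """Greedy channel-to-speaker assignment by best score."""
    assignment, used = {}, set()
    for ch in channels:
        best = max((s for s in speakers if s["id"] not in used),
                   key=lambda s: score(s, ch))
        assignment[ch["name"]] = best["id"]
        used.add(best["id"])
    return assignment

speakers = [{"id": "soundbar", "pos": (0.0, 1.0), "capability": 2},
            {"id": "smart_spk", "pos": (1.0, -1.0), "capability": 1}]
channels = [{"name": "front", "pos": (0.0, 1.0), "needs": 2},
            {"name": "rear", "pos": (1.0, -1.0), "needs": 1}]
mapping = map_channels(channels, speakers)
```

A complete system would re-run this mapping whenever a media-path change is detected, which is what keeps the rendering adaptive in real time.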