H04S2420/01

MODIFYING AUDIO DATA TRANSMITTED TO A RECEIVING DEVICE TO ACCOUNT FOR ACOUSTIC PARAMETERS OF A USER OF THE RECEIVING DEVICE

A communication system provides audio content to one or more client devices capable of playing spatialized audio content. For example, the communication system receives audio content from a client device and transmits the audio content to other client devices to be played for users. The communication system dynamically modifies audio content transmitted to different client devices based on a payload including audio parameters (e.g., local area acoustic properties, an audiogram for a user, a head related transfer function for a user, etc.) received from a client device.
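One of the payload parameters mentioned above is an audiogram. As a sketch only (the patent does not fix a compensation rule), per-user modification could apply a frequency-dependent gain derived from the audiogram; the half-gain rule used here is a hypothetical simplification:

```python
import numpy as np

def compensate_for_audiogram(audio, sample_rate, audiogram):
    """Boost frequency regions where the user's audiogram shows elevated
    hearing thresholds. Uses a hypothetical half-gain rule:
    gain_dB = threshold_dB_HL / 2. `audiogram` maps frequency (Hz) to
    threshold (dB HL)."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    band_freqs = sorted(audiogram)
    # interpolate the sparse audiogram onto every FFT bin
    thresholds = np.interp(freqs, band_freqs,
                           [audiogram[f] for f in band_freqs])
    gain_db = thresholds / 2.0            # half-gain rule (assumed)
    spectrum *= 10.0 ** (gain_db / 20.0)  # per-bin linear gain
    return np.fft.irfft(spectrum, n=len(audio))
```

In a real system the server would apply this (or an HRTF convolution) per receiving device, using the parameters carried in that device's payload.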

Arrangement for generating head related transfer function filters

An arrangement for acquiring images for producing a head related transfer function filter is disclosed. In the arrangement, the camera of a mobile phone or similar portable device is adjusted for the imaging. All acquired images are analyzed, and only suitable images are sent on for producing the head related transfer function filter. The arrangement is further configured to provide instructions to the user so that the whole head and other relevant body parts are sufficiently covered.
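The abstract does not say how suitability is judged; one plausible on-device check is a focus measure, so blurry frames are rejected before upload. A minimal sketch using the variance-of-Laplacian measure (the threshold value is a hypothetical tuning parameter):

```python
import numpy as np

def is_suitable_for_hrtf_capture(gray_image, blur_threshold=100.0):
    """Reject blurry frames before sending them on for HRTF filter
    production. Uses the variance-of-Laplacian focus measure; a sharp
    image has strong local intensity changes, so the Laplacian response
    has high variance. `blur_threshold` is an assumed tuning value."""
    img = np.asarray(gray_image, dtype=np.float64)
    # 4-neighbour discrete Laplacian over the interior pixels
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var()) >= blur_threshold
```

Coverage checking (has the whole head been imaged?) would need pose estimation on top of this, which is outside the scope of the sketch.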

Methods and system for adjusting level of tactile content when presenting audio content

An audio system presented herein includes a transducer array, a sensor array, and a controller. The transducer array presents audio content to a user. The controller controls the transducer array to adjust a level of tactile content imparted to the user via actuation of at least one transducer in the transducer array while presenting the audio content to the user. The audio system can be part of a headset.
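Tactile content from an audio transducer is dominated by low frequencies, so one plausible reading of "adjusting the level of tactile content" is scaling a low-passed copy of the audio that drives the transducer. A sketch under that assumption (the 120 Hz cutoff and one-pole filter are illustrative choices, not the patent's method):

```python
import numpy as np

def tactile_drive(audio, sample_rate, tactile_level, cutoff_hz=120.0):
    """Derive a drive signal for the tactile path: low-pass the audio
    with a one-pole IIR filter and scale it by the requested tactile
    level. The 120 Hz cutoff is an assumed tactile band edge."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.empty(len(audio), dtype=np.float64)
    state = 0.0
    for i, x in enumerate(audio):
        state += alpha * (x - state)   # one-pole low-pass step
        out[i] = tactile_level * state
    return out
```

The controller in the abstract would vary `tactile_level` (for example, per content type or user preference) while the unmodified audio continues to play.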

Audio decoder and decoding method

A method for representing a second presentation of audio channels or objects as a data stream, the method comprising the steps of: (a) providing a set of base signals, the base signals representing a first presentation of the audio channels or objects; (b) providing a set of transformation parameters intended to transform the first presentation into the second presentation, the transformation parameters being specified for at least two frequency bands and including a set of multi-tap convolution matrix parameters for at least one of the frequency bands.
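The core of the decoder side is applying a matrix of multi-tap FIR filters to the base signals per frequency band. The sketch below shows only that matrixing step, with the band splitting assumed to happen upstream; the parameter layout `w[out][in][tap]` is an illustrative convention, not the stream syntax:

```python
import numpy as np

def apply_transform(base_signals, band_filters):
    """Transform base signals (first presentation, shape (n_in, n_samples))
    into a second presentation. Each element of `band_filters` is a
    multi-tap convolution matrix w of shape (n_out, n_in, n_taps) for one
    frequency band; band contributions are summed."""
    n_in, n_samp = base_signals.shape
    n_out = band_filters[0].shape[0]
    out = np.zeros((n_out, n_samp))
    for w in band_filters:
        for o in range(n_out):
            for i in range(n_in):
                # convolve input i with its FIR taps and accumulate
                out[o] += np.convolve(base_signals[i], w[o, i])[:n_samp]
    return out
```

With a single tap per matrix entry this degenerates to ordinary per-band matrixing; the multi-tap case lets the transform model delays and short reverberant responses between presentations.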

Calibrating listening devices

Techniques for calibrating listening devices are disclosed herein. The techniques include emitting a predetermined audio signal using an outward-facing transducer located on a first portion of a head-mounted device worn by the user, receiving the predetermined audio signal at a microphone located on a second portion of the head-mounted device, the second portion being different from the first portion, determining a transfer function for the user based on the received predetermined audio signal, and applying the transfer function to audio signals transmitted to the user.
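Since the emitted test signal is known, the transfer function can be estimated by regularized spectral division of the microphone capture by the test signal. This is a common sketch of such a calibration; the patent does not commit to this exact estimator:

```python
import numpy as np

def estimate_transfer_function(emitted, received, eps=1e-8):
    """Estimate the transducer-to-microphone transfer function H by
    regularized deconvolution in the frequency domain. `eps` guards
    against division by near-zero spectral bins."""
    E = np.fft.rfft(emitted)
    R = np.fft.rfft(received)
    return R * np.conj(E) / (np.abs(E) ** 2 + eps)

def apply_transfer_function(audio, H):
    """Filter playback audio with the estimated transfer function."""
    return np.fft.irfft(np.fft.rfft(audio) * H, n=len(audio))
```

In practice the signals would be windowed and averaged over repeated sweeps, and the resulting per-user correction applied to all audio transmitted to the headset.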

Systems and methods to control spatial audio rendering
11564053 · 2023-01-24

A method of controlling spatial audio rendering includes comparing a first heartbeat pattern to a second heartbeat pattern to generate a comparison result. The first heartbeat pattern is based on sensor information associated with a first sensor of a first sensor type, and the second heartbeat pattern is based on sensor information associated with a second sensor of a second sensor type. The method also includes, based on the comparison result, controlling a spatial audio rendering function associated with media playback.
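A minimal sketch of the comparison step, assuming normalized correlation as the comparison metric and an enable/disable decision as the controlled rendering function (the 0.8 threshold and the sensor pairing, e.g. earbud PPG vs. watch accelerometer, are hypothetical):

```python
import numpy as np

def heartbeats_match(pattern_a, pattern_b, threshold=0.8):
    """Compare two heartbeat patterns from different sensor types via
    normalized correlation. A high score suggests both sensors observe
    the same user's heartbeat. `threshold` is an assumed decision value."""
    a = (pattern_a - pattern_a.mean()) / (pattern_a.std() + 1e-12)
    b = (pattern_b - pattern_b.mean()) / (pattern_b.std() + 1e-12)
    return float(np.dot(a, b) / len(a)) >= threshold

def control_spatial_rendering(pattern_a, pattern_b):
    """Enable spatial rendering only when the comparison result indicates
    both sensors belong to the same listener; otherwise fall back."""
    return "spatial" if heartbeats_match(pattern_a, pattern_b) else "stereo"
```

A production version would first align the two patterns in time (the sensors have different latencies) before correlating.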

Spherical harmonic decomposition of a sound field detected by an equatorial acoustic sensor array

An audio system includes an equatorial acoustic sensor array (EASA) that may be coupled to an object. The audio system is configured to detect, via the EASA, signals corresponding to a portion of a sound field in a local area. The detected signals are converted into a plurality of corresponding abstract representations that describe the portion of the sound field. Effects of scattering by the object are removed from the abstract representations to create adjusted abstract representations. A set of spherical harmonic (SH) coefficients is determined using the adjusted abstract representations; the set of SH coefficients describes the entirety of the sound field. The set of SH coefficients and head related transfer functions of a user are then used for binaural rendering of the reconstructed sound field to the user.
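As a heavily simplified stand-in for the decomposition step (ignoring the scattering compensation that is central to the patent), the horizontal first-order SH coefficients can be fit to a ring of equatorial microphones by least squares:

```python
import numpy as np

def equatorial_sh_coefficients(mic_signals, mic_azimuths):
    """Least-squares fit of horizontal first-order real spherical
    harmonics (W, Y, X) to signals from microphones on the equator.
    `mic_signals` has shape (n_mics, n_samples); `mic_azimuths` is in
    radians. This sketch omits the patent's scattering removal and the
    recovery of the full 3-D field."""
    az = np.asarray(mic_azimuths)
    # steering matrix: SH basis values at each microphone direction
    A = np.stack([np.ones_like(az),      # W (order 0, omnidirectional)
                  np.sin(az),            # Y (first order)
                  np.cos(az)], axis=1)   # X (first order)
    coeffs, *_ = np.linalg.lstsq(A, mic_signals, rcond=None)
    return coeffs                        # shape (3, n_samples)
```

The resulting coefficient signals are what a binaural renderer would combine with the user's HRTFs to produce the headphone feeds.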

APPARATUS AND METHOD FOR RENDERING AN AUDIO SCENE USING VALID INTERMEDIATE DIFFRACTION PATHS
20230019204 · 2023-01-19

An apparatus for rendering an audio scene comprising an audio source at an audio source position and a plurality of diffracting objects comprises: a diffraction path provider for providing a plurality of intermediate diffraction paths through the plurality of diffracting objects, each intermediate diffraction path having a starting point, an output edge of the plurality of diffracting objects, and associated filter information; and a renderer for rendering the audio source at a listener position. The renderer is configured for determining, based on the output edges of the intermediate diffraction paths and the listener position, one or more valid intermediate diffraction paths from the audio source position to the listener position; determining, for each valid intermediate diffraction path, a filter representation for a full diffraction path; and calculating audio output signals for the audio scene using an audio signal associated with the audio source and the filter representation for each full diffraction path.
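One simple reading of the validity test is geometric: an intermediate path is usable only if its output edge has an unobstructed segment to the listener. A 2-D sketch under that assumption (the patent's actual criterion may be richer):

```python
def segments_intersect(p1, p2, q1, q2):
    """Proper 2-D segment intersection test via orientation signs."""
    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def valid_paths(intermediate_paths, listener, occluders):
    """Keep only intermediate diffraction paths whose output edge can
    reach the listener without crossing an occluder segment. Each path
    is a tuple (start_point, output_edge_xy, filter_info); `occluders`
    is a list of 2-D segments. A geometric sketch of the validity test."""
    valid = []
    for path in intermediate_paths:
        edge = path[1]
        blocked = any(segments_intersect(edge, listener, a, b)
                      for a, b in occluders)
        if not blocked:
            valid.append(path)
    return valid
```

Precomputing the intermediate paths and filtering them per listener position is what makes this cheaper than tracing full diffraction paths every frame.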

Multi-input push-to-talk switch with binaural spatial audio positioning

Various embodiments provide a multi-audio input, stereo audio output, push-to-talk (PTT) switch device. The device may include an audio processing unit configured to perform spatial separation/positioning for one or a plurality of audio sources. The audio processing unit of the device may apply unique head-related transfer functions (HRTFs) to produce left and right audio outputs that correspond to predetermined or dynamically positioned spatial locations for each of the incoming audio streams.
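As a lightweight stand-in for the per-source HRTFs described above, each incoming stream can be given a spatial position with constant-power stereo panning; a full implementation would instead convolve each stream with a left/right HRTF pair for its assigned direction:

```python
import numpy as np

def position_streams(streams, azimuths_deg):
    """Mix mono PTT streams into one stereo output, placing each stream
    at an azimuth from -90 (hard left) to +90 (hard right) with
    constant-power panning. A gain-only sketch; HRTF convolution would
    add the interaural time and spectral cues."""
    n = max(len(s) for s in streams)
    left, right = np.zeros(n), np.zeros(n)
    for s, az in zip(streams, azimuths_deg):
        pan = (np.deg2rad(az) + np.pi / 2) / np.pi   # 0 (left) .. 1 (right)
        gl, gr = np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)
        sig = np.asarray(s, dtype=float)
        left[:len(sig)] += gl * sig
        right[:len(sig)] += gr * sig
    return left, right
```

Assigning each radio net a fixed azimuth this way lets an operator separate simultaneous talkers by apparent direction rather than by voice alone.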

SPATIALIZED AUDIO CHAT IN A VIRTUAL METAVERSE

Implementations described herein relate to methods, systems, and computer-readable media to provide spatialized audio in virtual experiences. The spatialized audio may be used in voice communications such as, for example, voice and/or video chats. The chats may include spatialized audio that is combined at a client device, or at an online experience platform, and is targeted to a particular user. Individual audio streams may be collected from a plurality of avatars and other objects, and combined based on the target user. The audio may also include background and/or ambient sounds to provide a rich, immersive audio stream in virtual experiences.
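A minimal sketch of combining avatar streams for one target user, assuming inverse-distance attenuation and a hearing-range cull (the reference and maximum distances are illustrative values, and a full implementation would also pan each source by direction):

```python
import numpy as np

def mix_for_listener(streams, positions, listener_pos,
                     ref_dist=1.0, max_dist=50.0):
    """Combine avatar voice streams into one mono mix targeted at a
    particular user: each stream is attenuated by inverse distance from
    the listener, and sources beyond `max_dist` are culled entirely."""
    mix = None
    for s, pos in zip(streams, positions):
        d = float(np.linalg.norm(np.asarray(pos, dtype=float)
                                 - np.asarray(listener_pos, dtype=float)))
        if d > max_dist:
            continue                      # too far away to hear
        gain = ref_dist / max(d, ref_dist)
        contrib = np.asarray(s, dtype=float) * gain
        mix = contrib if mix is None else mix + contrib
    return mix if mix is not None else np.zeros(0)
```

This per-target mixing can run on the client or on the platform, as the abstract notes; ambient and background beds would be summed into the same mix.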