H04S2400/11

Integration of remote audio into a performance venue
11700353 · 2023-07-11 ·

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for integrating remote audio into a performance venue. In some implementations, a link that includes at least one of audio data and video data is established between a wireless device of a remote participant and a computational mixer. A profile for the remote participant is referenced. A venue signal related to the at least one of audio data and video data is generated based on the profile for the remote participant and using the computational mixer. The venue signal is transmitted.
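The flow in this abstract can be sketched as a profile-driven gain stage ahead of the venue feed. This is a minimal illustration; the profile fields (`gain_db`, `muted`) are invented here and are not from the patent:

```python
def venue_signal(stream, profile):
    """Shape a remote participant's stream using their stored profile
    before it is mixed into the venue output (fields are illustrative)."""
    if profile.get("muted", False):
        return [0.0] * len(stream)
    # Convert a profile gain in dB to a linear amplitude factor.
    gain = 10 ** (profile.get("gain_db", 0.0) / 20.0)
    return [gain * s for s in stream]
```

In a fuller sketch, the computational mixer would call this per participant and sum the results before transmission.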

Audio communication device

An audio communication device includes: a sound position determiner that determines sound localization positions for N audio signals in a virtual space having first and second walls; N sound localizers, each performing sound localization processing to localize sound at the sound localization position determined by the sound position determiner and outputting a localized sound signal; and an adder that sums the N localized sound signals and outputs a summed localized sound signal. Each sound localizer performs the processing using: a first head-related transfer function (HRTF) assuming that a sound wave emitted from the sound localization position of that sound localizer directly reaches each ear of a hearer virtually present at the hearer position; and a second HRTF assuming that the sound wave emitted from the sound localization position reaches each ear of the hearer after being reflected by the closer of the first and second walls.
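The direct-plus-nearest-wall-reflection structure can be sketched with a mirrored image source, as below. Real HRTFs are measured filter pairs per ear; the per-path gains and delays here are crude stand-ins, and the 2-D geometry is invented for illustration:

```python
import numpy as np

def localize(signal, pos, hearer, walls, fs=48000):
    """Toy localizer: direct path plus one reflection off the nearer wall.
    `walls` gives the x-coordinates of the first and second walls."""
    c = 343.0  # speed of sound, m/s
    out = np.zeros(len(signal) + fs // 10)
    for wall_x in (None, min(walls, key=lambda w: abs(w - pos[0]))):
        # Mirror the source across the nearer wall to model the reflection.
        src = pos if wall_x is None else (2 * wall_x - pos[0], pos[1])
        d = np.hypot(src[0] - hearer[0], src[1] - hearer[1])
        delay = int(round(d / c * fs))
        gain = 1.0 / max(d, 0.1) * (0.7 if wall_x is not None else 1.0)
        out[delay:delay + len(signal)] += gain * signal
    return out

def mix(signals, positions, hearer, walls):
    # The adder: sum the N localized sound signals.
    return sum(localize(s, p, hearer, walls) for s, p in zip(signals, positions))
```

A per-ear version would run this twice with slightly offset hearer positions, which is where the two HRTFs of the abstract come in.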

Audio Processing Apparatus
20230213349 · 2023-07-06 ·

An apparatus configured to: determine, with a position sensor, position information; determine at least one keyword within at least one audio signal, wherein at least the at least one keyword is configured to be spatially processed; obtain at least one spatial processing parameter based at least partially on the position information, wherein the at least one spatial processing parameter is configured to be used to spatially process at least the at least one keyword to be perceived from a direction during rendering, wherein the direction indicates a navigation direction; generate at least one processed audio signal, comprising processing at least the at least one keyword based on the at least one spatial processing parameter; and provide the at least one processed audio signal, comprising the at least one processed keyword, for generation of a virtual audio image.
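The core idea, rendering a spoken keyword so it appears to come from the navigation direction, can be sketched with a bearing calculation plus simple amplitude panning. All names and the panning law are illustrative, not from the patent:

```python
import math

def spatial_params(position, waypoint):
    """Derive a panning direction (azimuth) for a spoken keyword from the
    listener's position and the next navigation waypoint."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    return {"azimuth_deg": math.degrees(math.atan2(dx, dy))}  # 0 deg = straight ahead

def pan_keyword(samples, azimuth_deg):
    # Crude amplitude panning: -90 deg fully left, +90 deg fully right.
    r = (azimuth_deg + 90.0) / 180.0
    r = min(max(r, 0.0), 1.0)
    return [((1.0 - r) * s, r * s) for s in samples]  # (left, right) pairs
```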

Audio Conferencing Using a Distributed Array of Smartphones
20230216965 · 2023-07-06 ·

Described is a method of hosting a teleconference among a plurality of client devices arranged in two or more acoustic spaces, each client device having an audio capturing capability and/or an audio rendering capability, the method comprising: grouping the plurality of client devices into two or more groups based on their belonging to respective acoustic spaces, receiving first audio streams from the plurality of client devices, generating second audio streams from the first audio streams for rendering by respective client devices among the plurality of client devices, based on the grouping of the plurality of client devices into the two or more groups, and outputting the generated second audio streams to respective client devices. Further described are corresponding computation devices, computer programs, and computer-readable storage media.
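One natural reading of the grouping step is that a device's rendered mix should exclude streams captured in its own acoustic space, so co-located phones do not echo each other back into the room. A minimal sketch, with all data shapes invented for illustration:

```python
from collections import defaultdict

def group_by_space(devices):
    """devices: dict mapping device_id -> acoustic_space_id."""
    groups = defaultdict(set)
    for dev, space in devices.items():
        groups[space].add(dev)
    return groups

def second_streams(devices, first_streams):
    """For each device, mix only streams captured in *other* acoustic spaces,
    so co-located devices are not echoed back into their own room."""
    out = {}
    for dev, space in devices.items():
        out[dev] = [s for d, s in first_streams.items() if devices[d] != space]
    return out
```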

Devices, Methods, and User Interfaces for Adaptively Providing Audio Outputs
20230215415 · 2023-07-06 ·

An electronic device includes one or more pose sensors for detecting a pose of a user of the electronic device relative to a first physical environment and is in communication with one or more audio output devices. While a first pose of the user meets first presentation criteria, the electronic device provides audio content at a first simulated spatial location relative to the user. The electronic device detects a change in the pose of the user from the first pose to a second pose. In response to detecting the change in the pose of the user, and in accordance with a determination that the second pose of the user does not meet the first presentation criteria, the electronic device provides audio content at a second simulated spatial location relative to the user that is different from the first simulated spatial location.
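The pose-dependent switch between two simulated source locations can be sketched as a simple threshold test. The criterion used here (user roughly facing the content, within an invented yaw limit) and the location values are illustrative only:

```python
class AdaptiveAudioOutput:
    """Sketch of the abstract's behavior: re-anchor audio when the user's
    pose stops meeting the first presentation criteria (thresholds invented)."""
    def __init__(self, anchor, fallback, yaw_limit_deg=30.0):
        self.anchor, self.fallback = anchor, fallback
        self.yaw_limit = yaw_limit_deg

    def location_for(self, yaw_deg):
        # First presentation criteria: user roughly facing the content.
        return self.anchor if abs(yaw_deg) <= self.yaw_limit else self.fallback
```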

Method and apparatus for an interactive user interface

A method, apparatus and computer program product are provided to facilitate user interaction with, such as modification of, respective audio objects. An example method may include causing a multimedia file to be presented that includes at least two images. The images are configured to provide animation associated with respective audio objects and representative of a direction of the respective audio objects. The method may also include receiving user input in relation to an animation associated with an audio object or the direction of the audio object represented by an animation. The method may further include causing replay of the audio object for which the user input was received to be modified.

PHYSICS-BASED AUDIO AND HAPTIC SYNTHESIS

Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device. In accordance with a determination that the one or more audio models does not comprise an audio model corresponding to the first object, an acoustic property of the first object can be determined, a custom audio model based on the acoustic property of the first object can be generated, an audio signal can be synthesized, wherein the audio signal is based on the collision and the custom audio model, and the audio signal can be presented, via a speaker of a head-wearable device, to a user.
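The cache-then-fallback control flow of this abstract (use a stored audio model if one exists, otherwise derive a custom model from an acoustic property) can be sketched as follows. The model contents, the stiffness-to-resonance mapping, and the synthesis output are stand-ins, not the patent's actual algorithms:

```python
def collision_speed(a, b):
    # Relative speed of the two colliding objects (1-D for simplicity).
    return abs(a.get("v", 0.0) - b.get("v", 0.0))

def synthesize_collision_audio(first_obj, second_obj, model_cache):
    """Look up an audio model for the first object; if none is stored,
    build a custom model from an acoustic property and cache it."""
    model = model_cache.get(first_obj["id"])
    if model is None:
        stiffness = first_obj.get("stiffness", 1.0)
        model = {"resonance_hz": 220.0 * stiffness}  # invented mapping
        model_cache[first_obj["id"]] = model
    impact = collision_speed(first_obj, second_obj)
    return {"freq": model["resonance_hz"], "amp": impact}
```

The returned parameters would feed an actual synthesizer and then the head-wearable device's speaker.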

TAZ GENE OR ENZYME REPLACEMENT THERAPY

Provided herein, in some aspects, are compositions and methods (e.g., gene therapy or enzyme replacement therapy) for treating Barth syndrome (BTHS) using human tafazzin gene therapy or enzyme replacement therapy. It was demonstrated herein that certain human tafazzin (hTAZ) isoforms and the full-length protein, as well as nucleic acids encoding them, are effective in treating BTHS.

SPATIAL AUDIO SERVICE

A method, apparatus and computer program are provided for assessing continuity of audio service by comparing a previous audio service with a first spatial audio service, which uses user head-tracking, and with a second audio service, to identify which of the first or second audio service provides continuity with respect to the previous audio service. Whichever service is assessed to provide continuity of audio service is selectively enabled. The first spatial audio service controls or sets at least one directional property of at least one sound source, and it is assessed to provide continuity of audio service if it can use head-tracking to control or set that directional property.
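The selection rule in this abstract reduces to a short decision function. The field names (`head_tracking`, `controls`, `directional`) are invented for illustration:

```python
def select_service(previous, first, second):
    """Continuity test from the abstract: the head-tracked service qualifies
    only if it can keep controlling the directional property the previous
    service used; otherwise fall back to the second service if it can."""
    if first.get("head_tracking") and previous["directional"] in first.get("controls", []):
        return "first"
    if previous["directional"] in second.get("controls", []):
        return "second"
    return None  # neither preserves continuity
```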

IN-VEHICLE INDEPENDENT SOUND ZONE CONTROL METHOD, SYSTEM AND RELATED DEVICE
20230217200 · 2023-07-06 ·

The present disclosure provides an in-vehicle independent sound zone control method, a system and a related device, applied to a vehicle. The method includes the following steps: presetting a control area and a non-control area; arranging a speaker array behind a front seat of the vehicle to generate a first acoustic response, and arranging a headrest speaker at a headrest of a rear seat of the vehicle to generate a second acoustic response; fitting a virtual target speaker, wherein the virtual target speaker is configured to generate a target acoustic response within the control area; and controlling the sound quality of the in-vehicle independent sound zone through audio algorithm processing of the target acoustic response, the first acoustic response and the second acoustic response.
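Fitting a virtual target speaker over a control area while keeping the non-control area quiet is commonly posed as a least-squares problem; the sketch below uses that standard formulation as a stand-in for whatever audio algorithm the patent actually employs, with all matrix shapes illustrative:

```python
import numpy as np

def zone_filter(H_control, H_dark, target):
    """Drive the real speakers so microphones in the control zone hear
    `target` and microphones in the non-control ("dark") zone hear as
    little as possible. H_*: (mics x speakers) transfer matrices."""
    A = np.vstack([H_control, H_dark])
    b = np.concatenate([target, np.zeros(H_dark.shape[0])])
    # Least-squares fit of per-speaker drive weights.
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

In practice this would be solved per frequency bin, with regularization to limit speaker effort.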