H04S7/304

Spatial Audio Capture, Transmission and Reproduction
20230232182 · 2023-07-20

An apparatus configured to: obtain at least one spatial audio signal that defines an audio scene forming at least in part an immersive media content; obtain metadata associated with the at least one spatial audio signal; obtain at least one augmentation control parameter associated with the at least one spatial audio signal; obtain at least one augmentation audio signal; render an output audio signal that is based, at least partially, on the at least one spatial audio signal, the metadata associated with the at least one spatial audio signal, the at least one augmentation control parameter, and the at least one augmentation audio signal; and obtain an indication that at least part of the at least one spatial audio signal has been omitted from the output audio signal based, at least partially, on at least part of the at least one augmentation audio signal included in the output audio signal.
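The rendering and omission-indication behavior described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the patented method: the renderer mixes an augmentation signal into the spatial audio and records which samples of the spatial signal were ducked out to make room for it. All names and the ducking rule are invented for the example.

```python
def render_with_augmentation(spatial, augmentation, duck_gain=0.0):
    """Mix two equal-length sample lists; where the augmentation is
    active (non-zero), attenuate the spatial signal by duck_gain and
    record that part of the spatial signal was omitted."""
    output, omitted = [], []
    for s, a in zip(spatial, augmentation):
        if a != 0.0:                      # augmentation present here
            output.append(s * duck_gain + a)
            omitted.append(True)          # spatial content omitted at this sample
        else:
            output.append(s)
            omitted.append(False)
    return output, omitted

out, omitted_flags = render_with_augmentation([1.0, 1.0, 1.0], [0.0, 0.5, 0.0])
# omitted_flags marks which samples of the spatial signal were replaced
```

The `omitted` list plays the role of the claimed indication that part of the spatial audio signal was left out of the output because augmentation audio was included.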

Methods and system for adjusting level of tactile content when presenting audio content

An audio system presented herein includes a transducer array, a sensor array, and a controller. The transducer array presents audio content to a user. The controller controls the transducer array to adjust a level of tactile content imparted to the user via actuation of at least one transducer in the transducer array while presenting the audio content to the user. The audio system can be part of a headset.
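One plausible reading of "adjusting a level of tactile content" is scaling the low-frequency (felt rather than heard) part of the signal. The sketch below is an assumption-laden illustration, not the patented controller: a one-pole low-pass filter stands in for the tactile band, and a gain rescales it before the signal reaches the transducers.

```python
def adjust_tactile_level(samples, tactile_gain, alpha=0.1):
    """Split each sample into a low-frequency (tactile) part and a
    residual part, then reassemble with the tactile part rescaled."""
    low, out = 0.0, []
    for x in samples:
        low += alpha * (x - low)          # one-pole low-pass: tactile band
        high = x - low                    # residual audible band
        out.append(high + tactile_gain * low)
    return out

# With tactile_gain=0 the sustained (DC-like) component is progressively removed.
silent_bass = adjust_tactile_level([1.0] * 8, tactile_gain=0.0)
```

The filter coefficient `alpha` and the two-band split are illustrative choices; a real system would use a proper crossover tuned to the transducers' tactile range.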

Devices, methods, and user interfaces for adaptively providing audio outputs
11562729 · 2023-01-24

An electronic device includes one or more pose sensors for detecting a pose of a user of the electronic device relative to a first physical environment and is in communication with one or more audio output devices. While a first pose of the user meets first presentation criteria, the electronic device provides audio content at a first simulated spatial location relative to the user. The electronic device detects a change in the pose of the user from the first pose to a second pose. In response to detecting the change in the pose of the user, and in accordance with a determination that the second pose of the user does not meet the first presentation criteria, the electronic device provides audio content at a second simulated spatial location relative to the user that is different from the first simulated spatial location.
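The pose-dependent placement logic reads naturally as a threshold test. In the sketch below, the "first presentation criteria" are modeled as the user's head yaw staying within a tolerance of facing forward; outside it, audio is re-anchored at a second simulated location. The threshold, coordinates, and function names are all invented for illustration.

```python
def simulated_audio_location(head_yaw_deg, tolerance_deg=30.0,
                             first_loc=(0.0, 1.0), second_loc=(1.0, 0.0)):
    """Return the simulated spatial location for the current pose:
    first_loc while the pose meets the (assumed) first presentation
    criteria, second_loc otherwise."""
    meets_first_criteria = abs(head_yaw_deg) <= tolerance_deg
    return first_loc if meets_first_criteria else second_loc

loc_facing = simulated_audio_location(10.0)    # pose meets the criteria
loc_turned = simulated_audio_location(90.0)    # pose does not
```

In the claim the criteria could equally involve position or orientation relative to the physical environment; yaw alone is a simplification.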

Spatial Audio for Wayfinding
20230228585 · 2023-07-20

The technology employs spatial audio information to enhance wayfinding for pickup, drop-off and in-vehicle situations. The spatial information has a directional component, and a sense of distance can also be incorporated into the audio information. Audio cues or other spatial information is provided via headphones worn by a user. The spatial audio gives the user direction information, which can help locate the vehicle. In addition, this approach can be used when the rider is in the vehicle prior to exiting. For instance, spatial audio can be provided to the rider to give them contextual information about the environment outside the vehicle prior to exiting, such as whether a bicyclist is approaching on the side they will be exiting. This contextual information can alert the rider to wait or otherwise be more situationally aware when departing the vehicle.
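A directional cue with a sense of distance can be derived from the user's and vehicle's positions. The following is a minimal sketch under assumed conventions (2-D coordinates, 0° bearing = straight ahead along +y, and a basic constant-power pan standing in for full spatial rendering); none of it is taken from the patent.

```python
import math

def wayfinding_cue(user_xy, user_heading_deg, vehicle_xy):
    """Return (relative bearing in degrees, distance, (left, right) gains)
    for a headphone cue pointing the user toward the vehicle."""
    dx = vehicle_xy[0] - user_xy[0]
    dy = vehicle_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))        # 0 deg = straight ahead (+y)
    relative = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
    theta = math.radians((relative + 90.0) / 2.0)     # map [-90, 90] to [0, 90]
    left, right = math.cos(theta), math.sin(theta)    # constant-power pan
    return relative, distance, (round(left, 3), round(right, 3))

rel, dist, gains = wayfinding_cue((0.0, 0.0), 0.0, (0.0, 10.0))
# vehicle straight ahead: relative bearing 0, equal left/right gains
```

Distance could drive cue loudness or repetition rate; a production system would use HRTF-based binaural rendering rather than a stereo pan.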

Calibrating listening devices

Techniques for calibrating listening devices are disclosed herein. The techniques include emitting a predetermined audio signal using an outward-facing transducer located on a first portion of a head-mounted device worn by the user, receiving the predetermined audio signal at a microphone located on a second portion of the head-mounted device, the second portion being different from the first portion, determining a transfer function for the user based on the received predetermined audio signal, and applying the transfer function to audio signals transmitted to the user.
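Determining a transfer function from a known emitted signal and its received copy is classically done by dividing spectra. The sketch below uses that standard system-identification approach as a stand-in for the patent's method; the tiny DFT and the regularization constant are illustrative.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for short test signals)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def estimate_transfer_function(emitted, received, eps=1e-12):
    """Per-bin ratio of received to emitted spectrum, with near-zero
    emitted bins regularized to avoid division blow-up."""
    E, R = dft(emitted), dft(received)
    return [r / (e if abs(e) > eps else eps) for e, r in zip(E, R)]

# A pure gain of 0.5 between emitted and received gives H = 0.5 in the
# bins where the emitted signal has energy.
emitted = [1.0, 0.0, -1.0, 0.0]
received = [0.5 * s for s in emitted]
H = estimate_transfer_function(emitted, received)
```

The resulting `H` would then be applied (e.g. as an equalization filter) to subsequent audio transmitted to the user.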

Schemes for effectively estimating user behavior to achieve a variety of automatic applications by detecting the angle of the transmitted signal to generate head pose direction estimation
20230232181 · 2023-07-20

A method for a wireless communication locator station disposed at a specific location includes: detecting rotation angle information of a client-based portable device, carried or worn by a user, according to a specific wireless communication standard between the wireless communication locator station and the client-based portable device when the client-based portable device is within signal range of the locator station; generating a head pose direction estimation according to the calculated rotation angle information; and, when the head pose direction estimation indicates that the user has turned to face the locator station, sending a packet signal from the locator station to a server-based portable device, successfully paired and securely connected with the client-based portable device, so that the server-based portable device can transfer the packet signal to the client-based portable device after receiving it.
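The trigger logic in this method can be sketched simply. Everything below is an assumption for illustration: the angle threshold, the angle-wrapping convention, and the idea of modeling the station's outbound queue as a list.

```python
def facing_station(rotation_angle_deg, threshold_deg=15.0):
    """Head pose direction estimation: the user is judged to face the
    station if the detected rotation angle is within the threshold of 0."""
    angle = (rotation_angle_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return abs(angle) <= threshold_deg

def maybe_send_packet(rotation_angle_deg, packet, outbox):
    """Queue the packet for the paired server-based device only when the
    pose estimate indicates the user faces the station."""
    if facing_station(rotation_angle_deg):
        outbox.append(packet)   # station -> server device -> client device
    return outbox

queue = maybe_send_packet(350.0, b"hello", [])   # 350 deg wraps to -10 deg
```

In the claim the packet travels indirectly, station to server-based device to client-based device; the list here only models the first hop's gating decision.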

Systems and methods to control spatial audio rendering
11564053 · 2023-01-24

A method of controlling spatial audio rendering includes comparing a first heartbeat pattern to a second heartbeat pattern to generate a comparison result. The first heartbeat pattern is based on sensor information associated with a first sensor of a first sensor type, and the second heartbeat pattern is based on sensor information associated with a second sensor of a second sensor type. The method also includes, based on the comparison result, controlling a spatial audio rendering function associated with media playback.
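One way to realize the comparison is a normalized correlation between the two heartbeat patterns, with rendering enabled only while the two sensor types agree. This is a minimal sketch under assumptions; the correlation measure, the 0.9 threshold, and the interpretation of the patterns as inter-beat intervals are all invented.

```python
def normalized_correlation(a, b):
    """Cosine-style similarity between two equal-length patterns."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def spatial_rendering_enabled(pattern_a, pattern_b, threshold=0.9):
    """Comparison result gates the spatial audio rendering function."""
    return normalized_correlation(pattern_a, pattern_b) >= threshold

optical = [0.80, 0.82, 0.81, 0.79]     # e.g. PPG inter-beat intervals (s)
electrical = [0.80, 0.81, 0.81, 0.80]  # e.g. ECG inter-beat intervals (s)
enabled = spatial_rendering_enabled(optical, electrical)
```

Agreement between two different sensor types is a plausible proxy for "the same user is wearing both devices", which is one reason such a comparison might gate spatialized playback.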

Spherical harmonic decomposition of a sound field detected by an equatorial acoustic sensor array

An audio system includes an equatorial acoustic sensor array (EASA) that may be coupled to an object. The audio system is configured to detect, via the EASA, signals corresponding to a portion of a sound field in a local area. The detected signals are converted into a plurality of corresponding abstract representations that describe the portion of the sound field. Effects of scattering by the object are removed from the abstract representations to create adjusted abstract representations. A set of spherical harmonic (SH) coefficients is determined using the adjusted abstract representations; the set of SH coefficients describes the entirety of the sound field. The set of SH coefficients and head-related transfer functions of the user are then used for binaural rendering of the reconstructed sound field to the user.
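For intuition, the azimuthal part of what an equatorial ring of sensors captures can be decomposed by discrete Fourier analysis over the ring into circular harmonics cos(m·φ), sin(m·φ), which feed the spherical-harmonic reconstruction. The sketch below shows only that azimuthal step; the scattering compensation and the elevation information recovered via the scattering object, which the patent relies on for the full SH set, are omitted.

```python
import math

def circular_harmonic_coeffs(pressures, max_order):
    """Fourier-analyze pressures sampled at equally spaced azimuths on an
    equatorial ring; returns {order m: (cosine coeff, sine coeff)}."""
    n = len(pressures)
    coeffs = {}
    for m in range(max_order + 1):
        c = sum(p * math.cos(m * 2 * math.pi * k / n)
                for k, p in enumerate(pressures)) * (2.0 / n)
        s = sum(p * math.sin(m * 2 * math.pi * k / n)
                for k, p in enumerate(pressures)) * (2.0 / n)
        coeffs[m] = (c if m else c / 2.0, s)   # DC term uses half weight
    return coeffs

# A pure cos(phi) field over 8 sensors yields an order-1 cosine
# coefficient of 1 and vanishing order-0 and order-2 terms.
ring = [math.cos(2 * math.pi * k / 8) for k in range(8)]
coeffs = circular_harmonic_coeffs(ring, max_order=2)
```

The function names and normalization convention are assumptions for this example, not taken from the patent.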

Devices, systems and processes for providing adaptive audio environments
11706568 · 2023-07-18

Devices, systems and processes for providing an adaptive audio environment are disclosed. For an embodiment, a system may include a wearable device and a hub. The hub may include an interface module configured to communicatively couple the wearable device and the hub, and a processor configured to execute non-transient computer-executable instructions for a machine learning engine and a sounds engine. The machine learning engine is configured to apply a first machine learning process to at least one data packet received from the wearable device and output an action-reaction data set; the sounds engine is configured to apply a sound adapting process to the action-reaction data set and provide audio output data to the wearable device via the interface module.
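The hub architecture, two engines chained behind an interface, can be shown structurally. This is a structural sketch only: the class and method names, the stand-in motion heuristic in place of a real learned model, and the gain-based sound adaptation are all invented.

```python
class MachineLearningEngine:
    def action_reaction(self, packet):
        """Stand-in for a learned mapping from a wearable's data packet
        to an action-reaction data set."""
        action = "moving" if packet["motion"] > 0.5 else "still"
        return {"action": action,
                "reaction": "duck" if action == "moving" else "none"}

class SoundsEngine:
    def adapt(self, action_reaction):
        """Sound adapting process: turn the action-reaction data set
        into audio output data (here just a playback gain)."""
        gain = 0.5 if action_reaction["reaction"] == "duck" else 1.0
        return {"gain": gain}

class Hub:
    """Couples the two engines; the interface module is implicit here."""
    def __init__(self):
        self.ml, self.sounds = MachineLearningEngine(), SoundsEngine()

    def process(self, packet):
        return self.sounds.adapt(self.ml.action_reaction(packet))

audio_out = Hub().process({"motion": 0.9})
```

In the embodiment the audio output data would travel back to the wearable over the interface module rather than being returned to a caller.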

Apparatus and method for rendering an audio scene using valid intermediate diffraction paths
20230019204 · 2023-01-19

An apparatus for rendering an audio scene comprising an audio source at an audio source position and a plurality of diffracting objects, comprises: a diffraction path provider for providing a plurality of intermediate diffraction paths through the plurality of diffracting objects, an intermediate diffraction path having a starting point, an output edge of the plurality of diffracting objects, and associated filter information for the intermediate diffraction path; and a renderer for rendering the audio source at a listener position, wherein the renderer is configured for determining, based on the output edges of the intermediate diffraction paths and the listener position, one or more valid intermediate diffraction paths from the audio source position to the listener position, determining, for each valid intermediate diffraction path of the one or more valid intermediate diffraction paths, a filter representation for a full diffraction path, and calculating audio output signals for the audio scene using an audio signal associated with the audio source and the filter representation for each full diffraction path.
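The path-validation step can be sketched as a filter over precomputed intermediate paths. This is a hypothetical illustration: each path's filter representation is reduced to a scalar gain, and the geometric test of whether an output edge reaches the listener is replaced by a pluggable visibility stub.

```python
def valid_full_paths(intermediate_paths, listener_pos, edge_visible):
    """Keep only intermediate paths whose output edge can reach the
    listener; intermediate_paths is a list of dicts with keys
    'output_edge' and 'gain' (a stand-in for full filter information)."""
    return [p for p in intermediate_paths
            if edge_visible(p["output_edge"], listener_pos)]

def render_gain(paths):
    """Combine the per-path filter representations; with scalar gains the
    valid full diffraction paths simply sum at the listener."""
    return sum(p["gain"] for p in paths)

paths = [{"output_edge": "e1", "gain": 0.2},
         {"output_edge": "e2", "gain": 0.3}]
visible = lambda edge, pos: edge == "e1"   # stub visibility test
g = render_gain(valid_full_paths(paths, (0.0, 0.0, 0.0), visible))
```

Precomputing intermediate paths and validating only the final edge-to-listener leg at render time is what lets such a renderer track a moving listener cheaply.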