H04S2400/11

METHOD AND APPARATUS FOR ESTIMATING SPATIAL CONTENT OF SOUNDFIELD AT DESIRED LOCATION
20220386063 · 2022-12-01

In general, the present embodiments relate to a method and apparatus for estimating the spatial content of a soundfield at a desired location, including a location where the actual sound content is obstructed or distorted. According to certain aspects, the present embodiments aim to present more natural, spatially accurate sound, for example to a user at the desired location who is wearing a helmet, mimicking the sound the user would experience if they were not wearing any headgear. Modes for enhanced spatial hearing may be applied, including situation-dependent processing for augmented hearing. According to other aspects, methods and apparatuses record or capture the sound experienced by a number of participants and devices on and near the field of play, analyze the captured sound for its various components and their associated spatial content, and make those components available to participants and spectators.

Audio processing
11516615 · 2022-11-29

A method for rendering a spatial audio signal that represents a sound field in a selectable-viewpoint audio environment that includes one or more audio objects, each associated with respective audio content and a respective position in the audio environment. The method includes receiving an indication of a selected listening position and orientation in the audio environment; detecting an interaction concerning a first audio object on the basis of one or more predefined interaction criteria; modifying the first audio object and one or more further audio objects linked thereto; and deriving the spatial audio signal that includes at least audio content associated with the modified first audio object in a first spatial position of the sound field that corresponds to its position in the audio environment in relation to said selected listening position and orientation, and audio content associated with the modified one or more further audio objects.
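The core geometric step described above — placing an object's audio content at a spatial position that corresponds to the selected listening position and orientation — can be sketched as follows. This is an illustrative minimal implementation, not the patented method; the function names and the constant-power stereo pan are assumptions chosen for the example.

```python
import math

def relative_azimuth(obj_pos, listener_pos, listener_yaw):
    """Object bearing relative to the listener's facing direction, in radians.

    Positive angles are to the listener's left (counterclockwise).
    """
    world = math.atan2(obj_pos[1] - listener_pos[1], obj_pos[0] - listener_pos[0])
    # Wrap the listener-relative angle into (-pi, pi].
    return (world - listener_yaw + math.pi) % (2 * math.pi) - math.pi

def stereo_gains(azimuth):
    """Constant-power stereo pan for a source at the given relative azimuth."""
    clamped = max(-math.pi / 2, min(math.pi / 2, azimuth))
    pan = (math.pi / 2 - clamped) / math.pi        # 0 = hard left, 1 = hard right
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)
```

An object straight ahead of the listener yields equal channel gains; one at 90° to the left drives the left channel only. A full renderer would repeat this per object, including the further objects linked to the interacted one.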

System for and method of generating an audio image
11516616 · 2022-11-29

A system for and a method of generating an audio image for use in rendering audio. The method comprises accessing an audio stream; accessing positional information, the positional information comprising a first position, a second position and a third position; and generating an audio image. In some embodiments, generating the audio image comprises generating, based on the audio stream, a first virtual wave front to be perceived by a listener as emanating from the first position; generating, based on the audio stream, a second virtual wave front to be perceived by the listener as emanating from the second position; and generating, based on the audio stream, a third virtual wave front to be perceived by the listener as emanating from the third position.
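The idea of deriving several virtual wave fronts from one audio stream, each perceived as emanating from a distinct position, can be sketched with per-position propagation delay and distance attenuation. This is a simplified stand-in under assumed physics (free field, 1/r attenuation); the function name and parameters are illustrative, not from the patent.

```python
import math

def virtual_wave_fronts(stream, positions, listener, speed=343.0, rate=48000):
    """Mix one mono stream into delayed/attenuated copies, one per position."""
    out = [0.0] * (len(stream) + rate)  # headroom for up to one second of delay
    for pos in positions:
        dist = math.dist(pos, listener)
        delay = int(round(dist / speed * rate))   # propagation delay in samples
        gain = 1.0 / max(dist, 1.0)               # simple 1/r distance attenuation
        for i, sample in enumerate(stream):
            out[i + delay] += gain * sample
    return out
```

Passing three positions produces the three wave fronts of the claim: the same content arrives at the listener with three distinct delay/level signatures, which the auditory system localizes as three apparent origins.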

Augmented reality and virtual reality feedback enhancement system, apparatus and method

Systems, apparatuses and methods may provide a way to render virtual reality and augmented reality (VR/AR) environment information. More particularly, systems, apparatuses and methods may provide a way to selectively suppress and enhance VR/AR renderings of n-dimensional environments. The systems, apparatuses and methods may deepen a user's VR/AR experience by focusing on particular feedback information while suppressing other feedback information from the environment.

Emergency sound localization

Techniques for determining information associated with sounds detected in an environment based on audio data are discussed herein. Audio sensors of a vehicle may determine audio data associated with sounds from the environment. Sounds may be caused by objects in the environment such as emergency vehicles, construction zones, non-emergency vehicles, humans, audio speakers, nature, etc. A model may determine a classification of the audio data and/or a probability value representing a likelihood that sound in the audio data is associated with the classification. A direction of arrival may be determined based on receiving classification values from multiple audio sensors of the vehicle, and other actions can be performed or the vehicle can be controlled based on the direction of arrival.
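The abstract's direction-of-arrival step combines values from multiple audio sensors. A classical way to do this for a single microphone pair — offered here only as an illustrative sketch, not the patented model-based approach — is to estimate the time difference of arrival by cross-correlation and convert it to an angle under a far-field assumption. All names below are hypothetical.

```python
import math

def estimate_tdoa(sig_a, sig_b, rate):
    """Lag (seconds) of sig_b relative to sig_a via brute-force cross-correlation."""
    n = len(sig_a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n + 1, n):  # O(n^2); FFT-based correlation scales better
        score = sum(sig_a[i] * sig_b[i + lag]
                    for i in range(max(0, -lag), min(n, n - lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / rate

def direction_of_arrival(tdoa, mic_spacing, speed=343.0):
    """Broadside angle (radians) of a far-field source from its TDOA."""
    ratio = max(-1.0, min(1.0, tdoa * speed / mic_spacing))
    return math.asin(ratio)
```

A real system like the one described would fuse such estimates (or per-sensor classification values) across many sensor pairs before deciding how to control the vehicle.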

Apparatus and Method for Reproducing a Spatially Extended Sound Source or Apparatus and Method for Generating a Description for a Spatially Extended Sound Source Using Anchoring Information
20220377489 · 2022-11-24

An apparatus for reproducing a spatially extended sound source having a defined position or orientation and geometry in a space has an interface for receiving a listener position. The apparatus has a projector for calculating a projection of a two- or three-dimensional hull associated with the sound source onto a projection plane using the listener position, information on the geometry of the sound source, and information on the position of the sound source; a sound position calculator for calculating positions of at least two sound sources for the spatially extended sound source using the projection plane; and a renderer for rendering the at least two sound sources at the calculated positions to obtain a reproduction of the sound source having two or more output signals, the renderer configured to use different sound signals for the different positions.
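The projector/sound-position-calculator pipeline can be illustrated in two dimensions: score each hull vertex by its angle as seen from the listener and take the angular extremes as the two substitute source positions. This is a crude sketch of the general idea, not the patented projection-plane construction; names and the 2-D simplification are assumptions.

```python
import math

def two_edge_sources(hull_pts, listener):
    """Pick the two hull vertices at the lateral extremes, as seen from the listener.

    Points are scored by their bearing about the listener relative to the
    hull centroid's bearing; the angular extremes become the left and right
    substitute sources for the spatially extended source.
    """
    cx = sum(p[0] for p in hull_pts) / len(hull_pts)
    cy = sum(p[1] for p in hull_pts) / len(hull_pts)
    center = math.atan2(cy - listener[1], cx - listener[0])

    def rel_angle(p):
        a = math.atan2(p[1] - listener[1], p[0] - listener[0]) - center
        return (a + math.pi) % (2 * math.pi) - math.pi  # wrap into (-pi, pi]

    return max(hull_pts, key=rel_angle), min(hull_pts, key=rel_angle)
```

Feeding each extreme position a decorrelated variant of the source signal (the "different sound signals" of the claim) is what makes the source sound extended rather than point-like.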

METHODS, APPARATUS AND SYSTEMS FOR REPRESENTATION, ENCODING, AND DECODING OF DISCRETE DIRECTIVITY DATA

The present disclosure relates to a method of processing audio content including directivity information for at least one sound source, the directivity information comprising a first set of first directivity unit vectors representing directivity directions and associated first directivity gains. The disclosure further relates to corresponding methods of encoding and decoding audio content including directivity information for at least one sound source.
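A minimal way to consume such discrete directivity data — a set of unit vectors with associated gains — is nearest-neighbor lookup by cosine similarity. This sketch only illustrates the data representation; the disclosure's actual encoding and interpolation schemes are not shown, and the function name is an assumption.

```python
def directivity_gain(query_dir, unit_vectors, gains):
    """Gain for a query direction: nearest stored direction by cosine similarity."""
    best = max(range(len(unit_vectors)),
               key=lambda i: sum(q * v for q, v in zip(query_dir, unit_vectors[i])))
    return gains[best]
```

A production decoder would typically interpolate between neighboring directivity directions rather than snapping to the nearest one.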

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD, AND PROGRAM
20220377488 · 2022-11-24

The present technology relates to an information processing apparatus, an information processing method, and a program capable of realizing content reproduction that reflects the intention of a content creator. An information processing apparatus includes: a listener position information acquisition unit that acquires listener position information of a viewpoint of a listener; a reference viewpoint information acquisition unit that acquires position information of a first reference viewpoint and object position information of an object at the first reference viewpoint, and position information of a second reference viewpoint and object position information of the object at the second reference viewpoint; and an object position calculation unit that calculates position information of the object at the viewpoint of the listener on the basis of the listener position information, the position information of the first reference viewpoint and the object position information at the first reference viewpoint, and the position information of the second reference viewpoint and the object position information at the second reference viewpoint. The present technology can be applied to content reproduction systems.
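The object position calculation described above — deriving an object's position at an arbitrary listener viewpoint from its creator-authored positions at two reference viewpoints — can be sketched as a blend weighted by where the listener sits between the reference viewpoints. The linear interpolation and all names are assumptions for illustration; the patent does not specify this exact formula.

```python
def object_position_at_listener(listener, vp1, obj_at_vp1, vp2, obj_at_vp2):
    """Interpolate an object's position for an arbitrary listener viewpoint.

    Weight = projection of the listener onto the segment between the two
    reference viewpoints, clamped to [0, 1]; the object's two authored
    positions are blended with the same weight.
    """
    seg = [b - a for a, b in zip(vp1, vp2)]
    seg_len2 = sum(s * s for s in seg)
    if seg_len2 == 0:
        return tuple(obj_at_vp1)
    t = sum((l - a) * s for l, a, s in zip(listener, vp1, seg)) / seg_len2
    t = max(0.0, min(1.0, t))
    return tuple(a + t * (b - a) for a, b in zip(obj_at_vp1, obj_at_vp2))
```

A listener halfway between the reference viewpoints hears the object halfway between its two authored positions, which is one way the creator's per-viewpoint intent could carry over to viewpoints that were never explicitly authored.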

Modeling Acoustic Effects of Scenes With Dynamic Portals

The description relates to modeling acoustic effects in scenes with dynamic portals. One implementation includes obtaining a representation of a scene having a plurality of portals and simulating sound travel in the scene using a plurality of probes deployed in the scene. The implementation also includes determining acoustic parameters for initial sound traveling between respective probes based at least on the simulating. The implementation also includes identifying, using particular acoustic parameters for a particular source location and a particular listener location, one or more intercepted portals in the scene that are intercepted by a particular initial sound path from the particular source location to the particular listener location.
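The portal-interception test at the heart of this — does the initial sound path from source to listener cross a portal? — reduces in two dimensions to segment-segment intersection. The sketch below is an illustrative geometry helper (collinear edge cases ignored), not the implementation from the description.

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a): which side of ab is c on?"""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def intercepted_portals(source, listener, portals):
    """Portals (2-D segments) crossed by the straight source-to-listener path."""
    hits = []
    for p0, p1 in portals:
        # Segments cross when each straddles the other's supporting line.
        if (_orient(source, listener, p0) != _orient(source, listener, p1)
                and _orient(p0, p1, source) != _orient(p0, p1, listener)):
            hits.append((p0, p1))
    return hits
```

Once the intercepted portals are known, their current open/closed state can gate or attenuate the precomputed probe-to-probe acoustic parameters, which is what makes the portals "dynamic".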

AUDIO ENHANCED AUGMENTED REALITY
20220377486 · 2022-11-24

Devices, media, and methods are presented for an audio enhanced augmented reality (AR) experience using an eyewear device. The eyewear device has a microphone system, a presentation system, a support structure configured to be head-mounted on a user, and a processor. The support structure supports the microphone system and the presentation system. The eyewear device is configured to capture, with the microphone system, audio information of an environment surrounding the eyewear device, identify an audio signal within the audio information, detect a direction of the audio signal with respect to the eyewear device, classify the audio signal, and present, by the presentation system, an application associated with the classification of the audio signal.
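The detect-direction-then-dispatch flow of this eyewear device can be caricatured with a coarse level-difference direction estimate and a dictionary lookup from sound class to application. Everything here — the 1.25 threshold, the names, the string labels — is a toy assumption, not the patented pipeline.

```python
def handle_audio_event(left_level, right_level, classify, app_registry):
    """Toy eyewear pipeline: level difference -> coarse direction, then dispatch.

    `classify` is a caller-supplied function returning a sound-class label;
    the label-to-application mapping is a plain dict lookup.
    """
    if left_level > right_level * 1.25:
        direction = "left"
    elif right_level > left_level * 1.25:
        direction = "right"
    else:
        direction = "ahead"
    return {"direction": direction, "app": app_registry.get(classify())}
```

In the described device the presentation system would then render the selected application (e.g. a visual alert) oriented toward the detected direction of the sound.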