H04S7/306

Environment acoustics persistence

Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors configured to execute a method. A method for execution by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
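The staged pipeline in the abstract (identify environment → retrieve stored components → first model → second model → modified signal) can be sketched as plain Python. All names and values here are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch of the persistence pipeline: look up persisted
# acoustic model components for an identified environment, compose two
# audio models, and filter the requested signal through the second one.

def retrieve_components(environment_id):
    # stand-in for a persisted acoustic database keyed by environment
    store = {"living_room": {"rt60": 0.5, "gain": 0.8}}
    return store[environment_id]

def generate_first_model(components):
    # first audio model built directly from the stored components
    return {"decay": components["rt60"], "gain": components["gain"]}

def generate_second_model(first_model):
    # second model derived from the first, e.g. adapted to the listener
    return {**first_model, "gain": first_model["gain"] * 0.9}

def modify_signal(samples, model):
    # toy modification: scale the requested audio by the model gain
    return [s * model["gain"] for s in samples]

model = generate_second_model(generate_first_model(retrieve_components("living_room")))
out = modify_signal([1.0, -0.5], model)
```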

DETERMINING A VIRTUAL LISTENING ENVIRONMENT

One or more acoustic parameters of a current acoustic environment of a user may be determined based on sensor signals captured by one or more sensors of the device. One or more preset acoustic parameters may be determined based on those parameters and on an acoustic environment of an audio file comprising audio signals, where the acoustic environment of the audio file is determined from the audio signals of the file or from its metadata. The audio signals may be spatially rendered by applying spatial filters that incorporate the one or more preset acoustic parameters, resulting in binaural audio signals. The binaural audio signals may be used to drive speakers of a headset. Other aspects are described and claimed.
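A minimal sketch of the idea, under assumed heuristics: blend a reverberation parameter from the user's measured environment with the file's environment to pick a preset, then apply a crude spatial (level-difference) filter to produce a left/right pair. The function names, the averaging rule, and the toy wet-level term are all assumptions:

```python
def preset_params(current_rt60, file_rt60):
    # assumed heuristic: pick a preset RT60 between the two environments
    return {"rt60": 0.5 * (current_rt60 + file_rt60)}

def spatial_render(mono, azimuth_gain, params):
    # toy spatial filter: interaural level difference plus a small
    # "wet" term scaled by the preset reverberation time
    wet = params["rt60"] * 0.1
    left = [s * azimuth_gain + s * wet for s in mono]
    right = [s * (1 - azimuth_gain) + s * wet for s in mono]
    return left, right

params = preset_params(current_rt60=0.4, file_rt60=0.8)
left, right = spatial_render([1.0, 0.5], azimuth_gain=0.7, params=params)
```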

Headphone device for reproducing three-dimensional sound therein, and associated method
11653163 · 2023-05-16 ·

3D audio virtualization within headphone-type sound reproduction devices comprises: deriving an HRTF comprising a PRTF, which includes acoustical effects due to the pinnae and ear canals, and a remainder HRTF, which includes acoustical effects due to the head, shoulders, torso, and other body parts while excluding those of the pinnae and ear canals; wherein the remainder HRTF is electronically implemented and omits acoustical effects due to the pinnae and ear canals; and wherein the PRTF is acoustically implemented and personalized to the user through use of two or more transducers positioned such that the front plane of the transducer, the front plane of the transducer's diaphragm, the transducer's mechanical center, or the transducer's acoustical center point is 25 mm or more from the user's ear-canal entrance, and/or oriented so that the 0° axis of acoustical output is aligned with the acoustical output axes of typical external loudspeakers positioned in the acoustical far field.
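The underlying factorization is that the full HRTF is the product of the pinna-related part and the remainder, so only the remainder needs to be applied in DSP while the transducer geometry supplies the PRTF acoustically. A toy frequency-domain illustration, with made-up magnitudes:

```python
# Per-frequency-bin magnitudes (illustrative values only):
hrtf      = [1.0, 0.8, 0.5]   # full head-related transfer function
prtf      = [1.0, 0.4, 0.25]  # pinna/ear-canal contribution (acoustic)
# remainder = HRTF / PRTF is what the electronics implement
remainder = [h / p for h, p in zip(hrtf, prtf)]
```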

SYSTEM AND METHOD FOR RENDERING REAL-TIME SPATIAL AUDIO IN VIRTUAL ENVIRONMENT
20230143917 · 2023-05-11 ·

A new real-time spatial audio rendering system includes a real-time spatial audio rendering computer software application adapted to run on a communication device. The application renders stereo audio from mono audio sources in a virtual room of a listener. The listener can be mobile. The stereo audio is rendered for each listener within the room. The real-time spatial audio rendering system has two different modes, with and without reverberation. Reverberation can provide a sense of the dimensions of the room. First, the anechoic processing module produces the anechoic stereo audio that provides the sense of direction and distance of spatial audio. When reverberation is desired, the reverberation processing module is also applied to provide a sense of the room's dimensions through the spatial audio.
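The two-stage structure (anechoic module always, reverberation module optionally) can be sketched as follows; the gain/delay cues and the reverb scaling are illustrative assumptions, not the patented processing:

```python
def anechoic(mono, gain_l, gain_r, delay):
    # direction/distance cues via interaural level and time differences:
    # delay one ear and scale each ear's level (toy values)
    left = [0.0] * delay + [s * gain_l for s in mono]
    right = [s * gain_r for s in mono] + [0.0] * delay
    return left, right

def with_reverb(stereo, room_gain):
    # optional second stage: add room energy proportional to room size
    left, right = stereo
    return ([s * (1 + room_gain) for s in left],
            [s * (1 + room_gain) for s in right])

dry = anechoic([1.0], gain_l=0.8, gain_r=0.6, delay=1)
wet = with_reverb(dry, room_gain=0.5)
```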

Spatial Augmentation

An apparatus (131) comprising means for: obtaining media content (122), wherein the media content (122) comprises data for at least one object; obtaining priority content information (126), the priority content information (126) comprising a priority identification identifying and classifying the at least one object; and rendering the at least one object based on the priority content information (126).

Generating Binaural Audio in Response to Multi-Channel Audio Using at Least One Feedback Delay Network

In some embodiments, virtualization methods generate a binaural signal in response to channels of a multi-channel audio signal by applying a binaural room impulse response (BRIR) to each channel, in part by using at least one feedback delay network (FDN) to apply a common late reverberation to a downmix of the channels. In some embodiments, input signal channels are processed in a first processing path to apply to each channel a direct response and early reflection portion of a single-channel BRIR for the channel, and the downmix of the channels is processed in a second processing path including at least one FDN which applies the common late reverberation. Typically, the common late reverberation emulates collective macro attributes of late reverberation portions of at least some of the single-channel BRIRs. Other aspects are headphone virtualizers configured to perform any embodiment of the method.
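The two-path structure described above can be sketched directly: each channel is convolved with its own short direct/early BRIR, while the downmix passes through one shared FDN for late reverberation. The filter values, delay lengths, and the Householder feedback matrix are illustrative assumptions, not the patent's coefficients, and the sketch produces a mono sum rather than a true binaural pair:

```python
import numpy as np

def render_binaural(channels, early_brirs, fdn_delays, fdn_gain):
    # Path 1: direct + early reflections, one short BRIR per channel
    # (all BRIRs assumed equal length so the outputs sum directly)
    early = sum(np.convolve(ch, brir)
                for ch, brir in zip(channels, early_brirs))

    # Path 2: common late reverberation via one feedback delay network
    downmix = np.sum(channels, axis=0)
    n = len(fdn_delays)
    # Householder feedback matrix: orthogonal, so the FDN stays stable
    # for |fdn_gain| < 1
    A = np.eye(n) - (2.0 / n) * np.ones((n, n))
    lines = [np.zeros(d) for d in fdn_delays]
    out = np.zeros(len(downmix))
    for t, x in enumerate(downmix):
        taps = np.array([line[0] for line in lines])   # delay-line outputs
        fed = fdn_gain * (A @ taps)                    # mixed feedback
        out[t] = taps.sum()
        for i in range(n):
            lines[i] = np.append(lines[i][1:], x + fed[i])

    # mix the early portion with the common late portion
    late = np.zeros(len(early))
    late[: len(out)] += out
    return early + late
```

Using one FDN on the downmix, instead of a full late-reverb convolution per channel, is what makes the late tail cheap to share across all channels.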

Sound spatialization with room effect
09848274 · 2017-12-19 ·

A method of sound spatialization, in which at least one filtering process, including summation, is applied to at least two input signals, the filtering process comprising: the application of at least one first room effect transfer function, the first transfer function being specific to each input signal, and the application of at least one second room effect transfer function, the second transfer function being common to all input signals. The method further comprises a step of weighting at least one input signal with a weighting factor, said weighting factor being specific to each of the input signals.
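The claimed structure, signal-specific filters plus one shared room filter with per-input weighting, can be sketched with plain FIR convolutions. The filter taps and weights below are illustrative assumptions:

```python
import numpy as np

def spatialize(inputs, specific_filters, common_filter, weights):
    # per-input path: each weighted input through its own transfer function
    specific = sum(np.convolve(w * x, h)
                   for x, h, w in zip(inputs, specific_filters, weights))
    # shared path: the weighted sum through one common room filter
    common_in = sum(w * x for x, w in zip(inputs, weights))
    common = np.convolve(common_in, common_filter)
    # sum both paths (pad to the longer result)
    out = np.zeros(max(len(specific), len(common)))
    out[: len(specific)] += specific
    out[: len(common)] += common
    return out
```

Sharing the second transfer function across all inputs is what keeps the room-effect cost independent of the number of sources.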

INFORMATION PROCESSING METHOD, RECORDING MEDIUM, AND SOUND REPRODUCTION DEVICE
20230199428 · 2023-06-22 ·

An information processing method includes: (i) determining whether a type of a predetermined sound and a type of an external sound match; (ii) determining whether the incoming direction of the predetermined sound and the incoming direction of the external sound overlap by comparing the incoming direction of the predetermined sound with the analyzed incoming direction of the external sound; and performing at least one of the following based on a result of (i) and a result of (ii): (a) adjusting at least one of a sound pressure of the predetermined sound or a sound pressure of the external sound; or (b) adjusting the incoming direction of the predetermined sound.
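The decision logic of steps (i) and (ii) can be sketched as below; the field names, the 15° overlap threshold, and the −6 dB / 30° adjustments are hypothetical values chosen for illustration:

```python
def resolve_overlap(predetermined, external, shift_deg=30):
    # (i) do the sound types match?
    same_type = predetermined["type"] == external["type"]
    # (ii) do the incoming directions overlap? (assumed 15-degree window)
    overlap = abs(predetermined["azimuth"] - external["azimuth"]) < 15
    if same_type and overlap:
        predetermined["level_db"] -= 6          # (a) adjust sound pressure
        predetermined["azimuth"] += shift_deg   # (b) adjust direction
    return predetermined

adjusted = resolve_overlap(
    {"type": "siren", "azimuth": 10, "level_db": 0},
    {"type": "siren", "azimuth": 5},
)
```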

Methods and Apparatus to Assist Listeners in Distinguishing Between Electronically Generated Binaural Sound and Physical Environment Sound
20170359467 · 2017-12-14 ·

Methods and apparatus assist listeners in distinguishing between electronically generated binaural sound and physical environment sound while the listener wears a wearable electronic device that provides the binaural sound to the listener. The wearable electronic device generates a visual alert or audio alert when the electronically generated binaural sound occurs.

REAL-WORLD ROOM ACOUSTICS, AND RENDERING VIRTUAL OBJECTS INTO A ROOM THAT PRODUCE VIRTUAL ACOUSTICS BASED ON REAL WORLD OBJECTS IN THE ROOM
20230199420 · 2023-06-22 ·

Methods and systems are provided for augmenting voice output of a virtual character in an augmented reality (AR) scene. The method includes examining, by a server, the AR scene, said AR scene including a real-world space and the virtual character overlaid into the real-world space at a location, the real-world space including a plurality of real-world objects present in the real-world space. The method includes processing, by the server, to identify an acoustics profile associated with the real-world space, said acoustics profile including reflective sound and absorbed sound associated with real-world objects proximate to the location of the virtual character. The method includes processing, by the server, the voice output by the virtual character while interacting in the AR scene; the processing is configured to augment the voice output based on the acoustics profile of the real-world space, the augmented voice output being audible to an AR user viewing the virtual character in the real-world space. In this way, when the voice output of the virtual character is augmented, the augmented voice output may sound more realistic to the AR user, as if the virtual character were physically present in the same real-world space as the AR user.
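A rough illustration of an acoustics profile built from objects near the character, then used to scale the voice's reverberant energy. The absorption table, the averaging rule, and the wet-mix factor are all made-up values, not the claimed method:

```python
# Hypothetical per-object absorption coefficients (illustrative only)
ABSORPTION = {"sofa": 0.7, "glass_wall": 0.1, "bookshelf": 0.4}

def acoustics_profile(nearby_objects):
    # average absorption of the objects near the character's location
    absorbed = sum(ABSORPTION[o] for o in nearby_objects) / len(nearby_objects)
    return {"absorbed": absorbed, "reflected": 1.0 - absorbed}

def augment_voice(samples, profile, wet=0.3):
    # more reflective surroundings -> more reverberant energy in the voice
    gain = 1.0 + wet * profile["reflected"]
    return [s * gain for s in samples]

profile = acoustics_profile(["sofa", "glass_wall"])
voiced = augment_voice([1.0], profile)
```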