Patent classifications
H04S2420/01
ENVIRONMENT ACOUSTICS PERSISTENCE
Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
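As a rough illustration of the sequence above, the following is a minimal, self-contained Python sketch: identify the environment, retrieve stored model components, derive a first and a second audio model, and use the second model to modify the requested signal. The AudioModel fields, the contents of ACOUSTIC_STORE, and the sensor stub are illustrative assumptions, not data structures defined in the abstract.

```python
from dataclasses import dataclass

@dataclass
class AudioModel:
    rt60: float   # reverberation time in seconds
    gain: float   # linear gain applied to the source signal

# Persistent store of per-environment audio model components (hypothetical values).
ACOUSTIC_STORE = {
    "office": {"rt60": 0.4, "gain": 0.9},
    "atrium": {"rt60": 1.8, "gain": 0.7},
}

def identify_environment(sensor_reading: str) -> str:
    # Stand-in for sensor-based environment recognition on the head-wearable device.
    return "atrium" if "large" in sensor_reading else "office"

def present_audio(signal: list[float], sensor_reading: str) -> list[float]:
    env = identify_environment(sensor_reading)          # identify the environment
    components = ACOUSTIC_STORE[env]                    # retrieve stored components
    first_model = AudioModel(**components)              # first audio model
    second_model = AudioModel(first_model.rt60 * 0.5,   # second model derived
                              first_model.gain)         # from the first
    return [s * second_model.gain for s in signal]      # modified audio signal

print(present_audio([0.1, 0.2, 0.3], "large open space"))
```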
METHOD FOR GENERATING CUSTOMIZED SPATIAL AUDIO WITH HEAD TRACKING
A headphone for spatial audio rendering includes a first database having an impulse response pair corresponding to a reference speaker location. A head sensor provides head orientation information to a second database having rotation filters, the filters corresponding to different azimuth and elevation positions relative to the reference speaker location. A digital signal processor combines the rotation filters with the impulse response pair to generate an output binaural audio signal to transducers of the headphone. Efficiencies in creating impulse response or HRTF databases are achieved by sampling the impulse response less frequently than in conventional methods. This sampling at coarser intervals reduces the number of data measurements required to generate a spherical grid and reduces the time involved in capturing the impulse responses. Impulse responses for data points falling between the sampled data points are generated by interpolating in the frequency domain.
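The frequency-domain interpolation mentioned above can be sketched as follows, assuming a simple linear blend of magnitude and phase between two coarsely sampled impulse responses; the toy impulse responses and the blending weights are illustrative assumptions rather than the headphone's actual database or filters.

```python
import numpy as np

def interpolate_hrir(hrir_a: np.ndarray, hrir_b: np.ndarray, frac: float) -> np.ndarray:
    """Blend two impulse responses at a position `frac` of the way from A to B
    by interpolating magnitude and phase in the frequency domain."""
    spec_a, spec_b = np.fft.rfft(hrir_a), np.fft.rfft(hrir_b)
    mag = (1 - frac) * np.abs(spec_a) + frac * np.abs(spec_b)
    phase = (1 - frac) * np.angle(spec_a) + frac * np.angle(spec_b)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(hrir_a))

# Toy impulse responses measured at two coarse azimuth samples.
hrir_0 = np.zeros(64); hrir_0[3] = 1.0      # shorter path, higher level
hrir_30 = np.zeros(64); hrir_30[7] = 0.8    # longer path, lower level
hrir_15 = interpolate_hrir(hrir_0, hrir_30, 0.5)   # unsampled point in between
```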
USER INTERFACE FOR MULTI-USER COMMUNICATION SESSION
The present disclosure generally relates to user interfaces for multi-user communication sessions. In some examples, a device initiates a live stream in a communication session. In some examples, a device transitions between streaming live audio and live video. In some examples, a device enables synchronizing media playback during a live stream. In some examples, a device displays synchronized media playback and plays a reaction from a first participant of the communication session.
MIXED REALITY VIRTUAL REVERBERATION
A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
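A minimal sketch of that step sequence is given below: a transfer function (here a single reflection tap with delay and 1/r attenuation) is determined from the virtual object and listener coordinates and then applied to the first audio signal by convolution. The geometry and the one-tap transfer function are simplifying assumptions, not the disclosed method.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 48000      # Hz

def transfer_function(source, virtual_object, listener, length=4800):
    """One-reflection impulse response: source -> virtual object -> listener."""
    path = (np.linalg.norm(np.subtract(virtual_object, source)) +
            np.linalg.norm(np.subtract(listener, virtual_object)))
    delay = int(path / SPEED_OF_SOUND * SAMPLE_RATE)
    h = np.zeros(length)
    h[min(delay, length - 1)] = 1.0 / max(path, 1.0)   # 1/r attenuation
    return h

def apply_transfer(first_signal, h):
    return np.convolve(first_signal, h)   # second audio signal

h = transfer_function(source=(0, 0, 0), virtual_object=(2, 0, 0), listener=(0, 1, 0))
second = apply_transfer(np.random.randn(SAMPLE_RATE), h)
```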
THREE-DIMENSIONAL AUDIO SYSTEMS
A three-dimensional sound generation system includes one or more processors of a computing device configured to: receive sound tracks, each of the sound tracks comprising one or more sound sources, each of the one or more sound sources corresponding to one or more respective sound categories; receive or determine a first configuration in a three-dimensional space, the first configuration comprising a listener position and a computing device location relative to the listener position; determine a second configuration comprising a change to at least one of the listener position or the computing device location relative to the listener position; generate, using the one or more sound tracks and the second configuration, one or more channels of sound signals; and provide the one or more channels of sound signals to drive one or more sound generation devices to generate a three-dimensional sound field.
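To make the configuration-to-channels step concrete, here is a minimal sketch that pans a mono sound track into two channels from the device location relative to the listener position; the Configuration fields and the equal-power pan law are illustrative assumptions, not the system's actual rendering.

```python
import math
from dataclasses import dataclass

@dataclass
class Configuration:
    listener_pos: tuple[float, float]   # (x, y) of the listener
    device_pos: tuple[float, float]     # (x, y) of the computing device

def render_channels(track: list[float], cfg: Configuration) -> tuple[list[float], list[float]]:
    dx = cfg.device_pos[0] - cfg.listener_pos[0]
    dy = cfg.device_pos[1] - cfg.listener_pos[1]
    azimuth = math.atan2(dx, dy)              # angle of the device from the listener
    pan = (azimuth / math.pi + 1.0) / 2.0     # 0 = hard left, 1 = hard right
    left_gain = math.cos(pan * math.pi / 2)
    right_gain = math.sin(pan * math.pi / 2)
    return ([s * left_gain for s in track], [s * right_gain for s in track])

first = Configuration(listener_pos=(0.0, 0.0), device_pos=(1.0, 1.0))
second = Configuration(listener_pos=(0.0, 0.0), device_pos=(-1.0, 1.0))   # device moved
left, right = render_channels([0.5, 0.5, 0.5], second)
```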
LOCATION BASED AUDIO SIGNAL MESSAGE PROCESSING
A method of incorporating environmental acoustic sources into a virtual environment by measuring real-environment acoustic sources and their locations and combining them with virtual acoustic sources in the virtual environment.
APPARATUS AND METHOD FOR DETERMINING VIRTUAL SOUND SOURCES
An acoustic image source model for early reflections in a room is generated by iteratively mirroring (305) rooms around boundaries (e.g. walls) of rooms of the previous iteration. The boundaries around which to mirror in each iteration are determined (303) by a specific selection criterion, including requiring that mirror directions cannot be reversed, cannot be in an excluded direction, and cannot be repeated unless in a continuous series of mirrorings.
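A minimal 2D sketch of the iterative mirroring, with a simplified form of the selection rule (a mirror direction may not be immediately reversed), is shown below; the rectangular room, wall set, and two-iteration depth are illustrative assumptions.

```python
def mirror(point, wall):
    """Reflect a 2D point across an axis-aligned wall given as ('x' or 'y', position)."""
    axis, pos = wall
    x, y = point
    return (2 * pos - x, y) if axis == "x" else (x, 2 * pos - y)

WALLS = {"left": ("x", 0.0), "right": ("x", 5.0), "floor": ("y", 0.0), "ceiling": ("y", 3.0)}
OPPOSITE = {"left": "right", "right": "left", "floor": "ceiling", "ceiling": "floor"}

def image_sources(source, depth):
    """Breadth-first expansion of image sources up to the given reflection order."""
    frontier = [(source, None)]          # (position, last mirror direction)
    images = []
    for _ in range(depth):
        next_frontier = []
        for pos, last in frontier:
            for name, wall in WALLS.items():
                if last is not None and name == OPPOSITE[last]:
                    continue             # mirror direction cannot be reversed
                img = mirror(pos, wall)
                images.append(img)
                next_frontier.append((img, name))
        frontier = next_frontier
    return images

print(len(image_sources((1.0, 1.0), depth=2)))   # first- and second-order image sources
```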
ACOUSTIC MEASUREMENT
A method for determining subject specific digital audio data can comprise providing at least one respective audio signal input to each of a plurality of loudspeaker elements supported in a predetermined spatial relationship, in which the respective locations of an effective point source of each loudspeaker element all lie in an imaginary surface that at least partially contains a spatial region where at least one aural cavity of a subject is located, thereby providing a distance between each respective location and each aural cavity of less than 1.5 meters. Responsive to at least one audio signal output from at least one of the loudspeaker elements, respective subject specific audio data output is provided via at least one microphone element located at or within an aural cavity of the subject, and the subject specific audio data output is processed via an audio processing system, thereby providing subject specific digital audio data.
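The geometric constraint stated above (every effective point source within 1.5 meters of each aural cavity) can be checked with a short sketch like the following; the speaker and ear coordinates are illustrative assumptions.

```python
import math

def within_constraint(speaker_points, ear_points, max_dist=1.5):
    """True if every loudspeaker point source lies within max_dist of every aural cavity."""
    return all(
        math.dist(s, e) <= max_dist
        for s in speaker_points
        for e in ear_points
    )

speakers = [(0.8, 0.0, 0.0), (0.0, 0.8, 0.0), (-0.8, 0.0, 0.0)]   # on an imaginary surface
ears = [(0.07, 0.0, 0.0), (-0.07, 0.0, 0.0)]                      # subject's aural cavities
print(within_constraint(speakers, ears))
```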
APPARATUS AND METHOD FOR RENDERING A SOUND SCENE COMPRISING DISCRETIZED CURVED SURFACES
An apparatus for rendering a sound scene having reflection objects and a sound source at a sound source position, includes: a geometry data provider for providing an analysis of the reflection objects of the sound scene to determine a reflection object represented by a first polygon and a second adjacent polygon having associated a first image source position for the first polygon and a second image source position for the second polygon, wherein the first and second image source positions result in a sequence including a first visible zone related to the first image source position, an invisible zone and a second visible zone related to the second image source position; an image source position generator for generating an additional image source position such that the additional image source position is placed between the first image source position and the second image source position; and a sound renderer for rendering the sound source at the sound source position and, additionally for rendering the sound source at the first image source position, when a listener position is located within the first visible zone, for rendering the sound source at the additional image source position, when the listener position is located within the invisible zone, or for rendering the sound source at the second image source position, when the listener position is located within the second visible zone.
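The renderer's selection logic can be sketched as below: depending on which zone contains the listener, the sound source is rendered at the first, the additional, or the second image source position. Representing zones as azimuth intervals and placing the additional image source at the midpoint are illustrative assumptions, not the apparatus's actual geometry analysis.

```python
from dataclasses import dataclass

@dataclass
class Zones:
    first_visible: tuple[float, float]    # azimuth interval (radians)
    invisible: tuple[float, float]
    second_visible: tuple[float, float]

def select_image_source(listener_azimuth, zones, first_pos, second_pos):
    # Additional image source placed between the first and second image sources.
    additional_pos = tuple((a + b) / 2 for a, b in zip(first_pos, second_pos))
    lo, hi = zones.first_visible
    if lo <= listener_azimuth < hi:
        return first_pos
    lo, hi = zones.invisible
    if lo <= listener_azimuth < hi:
        return additional_pos             # bridges the gap between the two visible zones
    return second_pos

zones = Zones(first_visible=(0.0, 0.4), invisible=(0.4, 0.6), second_visible=(0.6, 1.0))
print(select_image_source(0.5, zones, first_pos=(3.0, 1.0), second_pos=(3.0, -1.0)))
```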
APPARATUS AND METHOD FOR RENDERING A SOUND SCENE USING PIPELINE STAGES
Apparatus for rendering a sound scene, including: a first pipeline stage including a first control layer and a reconfigurable first audio data processor, wherein the reconfigurable first audio data processor is configured to operate in accordance with a first configuration of the reconfigurable first audio data processor; a second pipeline stage located, with respect to a pipeline flow, subsequent to the first pipeline stage, the second pipeline stage including a second control layer and a reconfigurable second audio data processor, wherein the reconfigurable second audio data processor is configured to operate in accordance with a first configuration of the reconfigurable second audio data processor; and a central controller for controlling the first control layer and the second control layer in response to the sound scene, so that the first control layer prepares a second configuration of the reconfigurable first audio data processor during or subsequent to an operation of the reconfigurable first audio data processor in the first configuration of the reconfigurable first audio data processor, or so that the second control layer prepares a second configuration of the reconfigurable second audio data processor during or subsequent to an operation of the reconfigurable second audio data processor in the first configuration of the reconfigurable second audio data processor, and wherein the central controller is configured to control the first control layer or the second control layer using a switch control to reconfigure the reconfigurable first audio data processor to the second configuration for the reconfigurable first audio data processor or to reconfigure the reconfigurable second audio data processor to the second configuration for the reconfigurable second audio data processor at a certain time instant.
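A minimal sketch of the double-configuration pattern described above follows: each stage keeps processing with its active configuration while its control layer prepares the next one, and the central controller issues the switch at a chosen instant. Class and method names are illustrative assumptions, not the apparatus's interfaces.

```python
class PipelineStage:
    def __init__(self, name, config):
        self.name = name
        self.active_config = config      # configuration currently in use
        self.prepared_config = None      # configuration prepared by the control layer

    def prepare(self, new_config):       # control layer: prepare during operation
        self.prepared_config = new_config

    def switch(self):                    # switch control: activate the prepared configuration
        if self.prepared_config is not None:
            self.active_config, self.prepared_config = self.prepared_config, None

    def process(self, samples):          # audio data processor: apply the active gain
        return [s * self.active_config["gain"] for s in samples]

class CentralController:
    def __init__(self, stages):
        self.stages = stages

    def update_scene(self, scene_gains):
        for stage, gain in zip(self.stages, scene_gains):
            stage.prepare({"gain": gain})   # prepare while the stages keep running

    def switch_all(self):
        for stage in self.stages:
            stage.switch()                  # reconfigure at a certain time instant

stages = [PipelineStage("early", {"gain": 1.0}), PipelineStage("late", {"gain": 0.5})]
controller = CentralController(stages)
block = stages[1].process(stages[0].process([0.2, 0.4]))   # old configurations
controller.update_scene([0.8, 0.3])
controller.switch_all()
block = stages[1].process(stages[0].process([0.2, 0.4]))   # new configurations
```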