H04S7/304

NON-TRANSITORY COMPUTER-READABLE MEDIUM HAVING COMPUTER-READABLE INSTRUCTIONS AND SYSTEM
20230239650 · 2023-07-27

A sound controlling system including a user terminal having a sound source, a wireless communication device, a digital-to-analog converter (DAC), and first processing electronics. The first processing electronics are configured to: provide data of a backing sound to the sound source; control the sound source to generate a sound signal based on the data; receive a first input instruction that is either a first instruction to transmit the sound signal or a second instruction to play back the backing sound; provide the sound signal to the wireless communication device when the first input instruction is the first instruction, and to the DAC when the first input instruction is the second instruction; control the wireless communication device to convert the sound signal to a wireless signal and transmit the wireless signal; and control the DAC to convert the sound signal from a digital signal to an analog signal for playback of the backing sound.
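
The two-way routing described above can be sketched as a simple dispatch (an illustrative sketch only; the function and endpoint names are hypothetical, not taken from the patent):

```python
def route_sound_signal(sound_signal, instruction, wireless_tx, dac):
    """Route a digital sound signal according to the received input instruction.

    'transmit'  -> hand the signal to the wireless transmitter,
    'playback'  -> hand the signal to the DAC for analog conversion.
    """
    if instruction == "transmit":
        return wireless_tx(sound_signal)   # converted to a wireless signal and sent
    if instruction == "playback":
        return dac(sound_signal)           # digital-to-analog conversion for playback
    raise ValueError(f"unknown instruction: {instruction!r}")

# Stub endpoints standing in for the wireless communication device and the DAC.
sent = route_sound_signal([0.1, -0.2], "transmit",
                          wireless_tx=lambda s: ("radio", s),
                          dac=lambda s: ("analog", s))
```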

ENVIRONMENT ACOUSTICS PERSISTENCE

Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.

METHOD FOR GENERATING CUSTOMIZED SPATIAL AUDIO WITH HEAD TRACKING

A headphone for spatial audio rendering includes a first database having an impulse response pair corresponding to a reference speaker location. A head sensor provides head orientation information to a second database having rotation filters, the filters corresponding to different azimuth and elevation positions relative to the reference speaker location. A digital signal processor combines the rotation filters with the impulse response pair to generate an output binaural audio signal for the transducers of the headphone. Efficiencies in creating impulse response or HRTF databases are achieved by sampling the impulse response less frequently than in conventional methods. Sampling at these coarser intervals reduces the number of measurements required to cover a spherical grid and reduces the time involved in capturing the impulse responses. Impulse responses for points falling between the sampled points are generated by interpolating in the frequency domain.
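
Frequency-domain interpolation between two coarsely sampled impulse responses might look like the following sketch (a generic magnitude/phase interpolation, assumed here for illustration; the abstract does not specify the exact scheme):

```python
import numpy as np

def interpolate_ir(ir_a, ir_b, weight):
    """Interpolate between two measured impulse responses in the frequency domain.

    weight = 0 returns ir_a, weight = 1 returns ir_b. Interpolating magnitude and
    unwrapped phase separately avoids the comb-filter artifacts that a plain
    time-domain cross-fade between the two responses can introduce.
    """
    A, B = np.fft.rfft(ir_a), np.fft.rfft(ir_b)
    mag = (1 - weight) * np.abs(A) + weight * np.abs(B)
    pha = (1 - weight) * np.unwrap(np.angle(A)) + weight * np.unwrap(np.angle(B))
    return np.fft.irfft(mag * np.exp(1j * pha), n=len(ir_a))
```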

MIXED REALITY VIRTUAL REVERBERATION
20230007332 · 2023-01-05

A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
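
Once the transfer function is expressed as an impulse response, applying it to the first audio signal reduces to convolution, conventionally computed in the frequency domain (a generic sketch, not the patent's specific rendering pipeline):

```python
import numpy as np

def apply_transfer_function(signal, impulse_response):
    """Produce the second audio signal by FFT convolution of the first
    audio signal with the determined transfer function."""
    n = len(signal) + len(impulse_response) - 1   # full convolution length
    spectrum = np.fft.rfft(signal, n) * np.fft.rfft(impulse_response, n)
    return np.fft.irfft(spectrum, n)
```

For example, an impulse response of [0, 0, 1] simply delays the input by two samples.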

THREE-DIMENSIONAL AUDIO SYSTEMS

A three-dimensional sound generation system includes one or more processors of a computing device configured to: receive sound tracks, each of the sound tracks comprising one or more sound sources, each of the one or more sound sources corresponding to one or more respective sound categories; receive or determine a first configuration in a three-dimensional space, the first configuration comprising a listener position and a computing device location relative to the listener position; determine a second configuration comprising a change to at least one of the listener position or the computing device location relative to the listener position; generate, using the one or more sound tracks and the second configuration, one or more channels of sound signals; and provide the one or more channels of sound signals to drive one or more sound generation devices to generate a three-dimensional sound field.
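
As a minimal illustration of deriving channel signals from a listener/source configuration, here is a constant-power stereo pan driven by the source's azimuth relative to the listener (an assumed toy renderer; the patent does not prescribe this method):

```python
import math

def pan_stereo(mono, listener_pos, source_pos, listener_facing=0.0):
    """Constant-power stereo pan of a mono track from the source's azimuth
    relative to the listener (positions are (x, y); azimuth 0 = straight ahead,
    i.e. along +y when listener_facing is 0)."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    azimuth = math.atan2(dx, dy) - listener_facing
    # Clamp azimuth to [-pi/2, pi/2] and map it to a pan position in [0, 1].
    pan = max(-math.pi / 2, min(math.pi / 2, azimuth)) / math.pi + 0.5
    gl, gr = math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)  # gl^2 + gr^2 = 1
    return [s * gl for s in mono], [s * gr for s in mono]
```

A change in the listener or device position (the "second configuration") simply re-runs the same mapping with updated coordinates.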

LOCATION BASED AUDIO SIGNAL MESSAGE PROCESSING
20230007431 · 2023-01-05

A method of incorporating environmental acoustic sources into a virtual environment by measuring the locations and characteristics of real-environment acoustic sources and representing them in the virtual environment as virtual acoustic sources.

APPARATUS AND METHOD FOR DETERMINING VIRTUAL SOUND SOURCES
20230007426 · 2023-01-05

An acoustic image source model for early reflections in a room is generated by iteratively mirroring (305) rooms around the boundaries (e.g., walls) of the rooms of the previous iteration. The boundaries around which to mirror in each iteration are determined (303) by a selection criterion requiring that mirroring directions cannot be reversed, cannot lie in an excluded direction, and cannot be repeated except within a continuous series of mirrorings.
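
The mirroring step itself is simple; the patent's contribution lies in the selection criterion for which boundaries to mirror in each iteration. A first-order sketch for an axis-aligned (shoebox) room follows, with hypothetical names and without the higher-order pruning:

```python
def mirror_point(p, axis, wall_coord):
    """Reflect point p across an axis-aligned wall, yielding an image source."""
    q = list(p)
    q[axis] = 2 * wall_coord - q[axis]
    return tuple(q)

def first_order_images(source, room):
    """First-order image sources of a shoebox room.

    room = ((xmin, xmax), (ymin, ymax)); each wall contributes one mirror image.
    Higher orders would re-mirror these images, subject to the selection criterion.
    """
    images = []
    for axis, (lo, hi) in enumerate(room):
        images.append(mirror_point(source, axis, lo))
        images.append(mirror_point(source, axis, hi))
    return images
```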

AUDIO SCENE CHANGE SIGNALING

There is disclosed, inter alia, a method for rendering a virtual reality audio scene, comprising: receiving information defining a limited-area audio scene within the virtual reality audio scene (301), wherein the limited-area audio scene defines a subspace of the virtual reality audio scene (304) by defining the extent to which a user can move within the virtual audio scene; determining whether the movement of the user within the limited-area audio scene meets a condition of an audio scene change (302); and processing the audio scene change when the movement of the user within the limited-area audio scene meets the condition (306).
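
One plausible form of the scene-change condition is a proximity test against the boundary of the limited-area audio scene (an assumption for illustration; the abstract leaves the condition abstract):

```python
def check_scene_change(user_pos, limited_area, threshold=0.9):
    """Signal an audio scene change when the user approaches the boundary of the
    limited-area audio scene, modeled as an axis-aligned box of (lo, hi) ranges.

    threshold is the fraction of the half-extent at which the change triggers.
    """
    for (lo, hi), x in zip(limited_area, user_pos):
        center, half = (lo + hi) / 2, (hi - lo) / 2
        if half and abs(x - center) / half >= threshold:
            return True
    return False
```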

ACOUSTIC OUTPUT APPARATUS

The present disclosure discloses an acoustic output apparatus including at least one acoustic driver, a controller, and a supporting structure. The at least one acoustic driver may be configured to output sounds through at least two sound guiding holes. The at least two sound guiding holes may include a first sound guiding hole and a second sound guiding hole. The controller may be configured to control a phase and an amplitude of the sounds generated by the at least one acoustic driver using a control signal such that the sounds output by the at least one acoustic driver through the first and second sound guiding holes have opposite phases. The supporting structure may be provided with a baffle and configured to support the at least one acoustic driver such that the first and second sound guiding holes are located on both sides of the baffle.
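
The effect of driving the two sound guiding holes in opposite phase can be illustrated numerically: at a distance, the two out-of-phase contributions largely cancel, so far-field leakage is much lower than for a single source (idealized free-field point sources; the wavelength and spacing below are arbitrary assumptions):

```python
import cmath

WAVELENGTH = 0.34                      # ~1 kHz in air (c ~ 340 m/s), arbitrary choice
K = 2 * cmath.pi / WAVELENGTH          # wavenumber

def leakage(r, d, k=K):
    """|pressure| at distance r on the axis of two opposite-phase point
    sources spaced d apart (the two sound guiding holes)."""
    r1, r2 = r - d / 2, r + d / 2
    return abs(cmath.exp(1j * k * r1) / r1 - cmath.exp(1j * k * r2) / r2)

def single_source(r, k=K):
    """|pressure| of one in-phase source at the same distance, for comparison."""
    return abs(cmath.exp(1j * k * r) / r)
```

The baffle in the supporting structure separates the two holes so that the cancellation occurs in the far field rather than at the listener's ear.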

SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
20230007430 · 2023-01-05

The present technology relates to a signal processing device, a signal processing method, and a program that allow for prevention of distortion of a sound space. The signal processing device includes: a relative azimuth prediction unit configured to predict, on the basis of a delay time in accordance with the distance from a virtual sound source to a listener, the relative azimuth of the virtual sound source at the moment the sound of the virtual sound source reaches the listener; and a BRIR generation unit configured to acquire a head-related transfer function of the relative azimuth for each of a plurality of virtual sound sources and generate a BRIR on the basis of the plurality of acquired head-related transfer functions. The present technology can be applied to, for example, signal processing devices.
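
Setting aside the relative-azimuth prediction, the BRIR assembly reduces to summing per-source HRIR pairs, each delayed by its propagation time and attenuated with distance (a sketch; all names, the 1/r attenuation, and parameter values are illustrative assumptions):

```python
import numpy as np

def build_brir(sources, hrtf_lookup, fs=48000, c=343.0, length=4800):
    """Sum per-source HRIR pairs into one binaural room impulse response.

    sources: list of (distance_m, azimuth_deg) virtual sound sources.
    hrtf_lookup(azimuth) -> (left_hrir, right_hrir) as 1-D arrays; in the
    patent, the azimuth passed here would be the predicted relative azimuth
    at arrival time.
    """
    brir = np.zeros((2, length))
    for distance, azimuth in sources:
        delay = int(round(distance / c * fs))   # samples until the sound arrives
        gain = 1.0 / max(distance, 1e-3)        # simple 1/r attenuation
        for ch, h in enumerate(hrtf_lookup(azimuth)):
            n = min(len(h), length - delay)
            if n > 0:
                brir[ch, delay:delay + n] += gain * np.asarray(h)[:n]
    return brir
```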