H04S7/306

AUDIO FILTER EFFECTS VIA SPATIAL TRANSFORMATIONS
20230217201 · 2023-07-06 ·

An audio system of a client device applies transformations to audio received over a computer network. The transformations (e.g., HRTFs) effect changes in apparent source positions of the received audio, or of segments thereof. Such transformations may be used to achieve “animation” of audio, in which the source positions of the audio or audio segments appear to change over time (e.g., circling around the listener). Additionally, segmentation of audio into distinct semantic audio segments, and application of separate transformations for each audio segment, can be used to intuitively differentiate the different audio segments by causing them to sound as if they emanated from different positions around the listener.
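The "animation" idea above, with a source circling the listener, can be sketched without real HRTF data by updating interaural level and time differences per block. Everything here (function name, orbit rate, the crude ILD/ITD model standing in for per-position HRTF filtering) is an illustrative assumption, not the patent's method:

```python
import numpy as np

def animate_circling(mono, sr, orbit_hz=0.25, block=1024):
    """Make a mono signal appear to circle the listener by updating
    per-block interaural level and time differences (a crude stand-in
    for per-position HRTF filtering)."""
    max_itd = int(0.0007 * sr)           # ~0.7 ms max interaural delay
    out = np.zeros((len(mono) + max_itd, 2))
    for start in range(0, len(mono), block):
        seg = mono[start:start + block]
        az = 2 * np.pi * orbit_hz * (start / sr)   # current source azimuth
        # Level difference: louder in the ear the source faces.
        gl = 0.5 * (1 - np.sin(az)) + 0.1
        gr = 0.5 * (1 + np.sin(az)) + 0.1
        # Time difference: delay the far ear by up to max_itd samples.
        d = int(max_itd * np.sin(az))
        dl, dr = max(0, -d), max(0, d)
        out[start + dl:start + dl + len(seg), 0] += gl * seg
        out[start + dr:start + dr + len(seg), 1] += gr * seg
    return out
```

A full implementation would instead select (or interpolate) an HRTF pair for each block's azimuth and filter the block with it; the block-wise structure would stay the same.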

Apparatus, method and computer program for providing notifications

An apparatus, method and computer program, the apparatus including means for determining that perspective mediated content is available within content provided to a rendering device; and means for adding a notification to the content indicative that perspective mediated content is available; wherein the notification includes spatial audio effects added to the content.

Sound reproduction system and sound quality control method

A sound reproduction system includes an acoustic device, a sensor, and a sound processor. The acoustic device is configured to be worn by a user. The sensor is configured to detect a movement of a shielding object. The sound processor is configured to generate sound with a first sound quality for a block state, in which the shielding object blocks a virtual sound source localized on the opposite side of the shielding object, and to emit the sound from the acoustic device. The sound processor is further configured to change the sound quality from the first sound quality to a second sound quality for a non-block state, in which the shielding object does not block the virtual sound source, in response to the sensor detecting that the shielding object has moved away from the position blocking the virtual sound source.
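The block/non-block switch can be sketched as a renderer that darkens and attenuates the source while it is occluded. The one-pole lowpass and the 0.7 attenuation are hypothetical stand-ins for the patent's "first sound quality":

```python
import numpy as np

def render_block_state(signal, blocked):
    """Switch between two sound qualities depending on whether a shielding
    object blocks the virtual source. A one-pole lowpass plus attenuation
    is an assumed rendering of the blocked ('first') quality."""
    if not blocked:
        return signal.copy()             # non-block: unmodified quality
    alpha = 0.15                         # heavier smoothing = duller timbre
    out = np.empty_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)         # one-pole lowpass
        out[i] = acc
    return 0.7 * out                     # occluded source is also quieter
```

In the described system, the `blocked` flag would flip when the sensor reports that the shielding object has moved off the source-to-listener line, triggering the quality change.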

System and method for an audio reproduction device

A system and method for enhancing audio reproduced by an audio reproduction device with a first channel and a second channel is described. X samples of audio signals are received and stored in a portion of an input buffer with 2x positions, and the remaining x positions are padded with zeros for both channels. The contents of the input buffer are transformed to frequency-domain (FD) components. The FD components are multiplied by a first filter coefficient to generate FD components with a short echo effect, and by a second filter coefficient to generate FD components with a long echo effect. These are then converted to time-domain (TD) components with the short echo effect and TD components with the long echo effect. Selected TD components with the short and long echo effects are combined to generate a convolved first channel output and a convolved second channel output.
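The zero-padded buffer and dual filtering can be sketched for a single block and channel. The filter names and mix weights are illustrative, and the overlap bookkeeping between successive blocks is omitted:

```python
import numpy as np

def dual_echo_block(x_block, h_short, h_long, mix=(0.7, 0.3)):
    """Process one block of x samples: zero-pad into a 2x buffer, multiply
    its spectrum by two filter responses (short and long echo), and return
    a weighted sum of both time-domain results."""
    n = len(x_block)
    buf = np.zeros(2 * n)
    buf[:n] = x_block                    # first x positions: samples
    X = np.fft.rfft(buf)                 # remaining x positions: zeros
    Hs = np.fft.rfft(h_short, 2 * n)     # first filter coefficients
    Hl = np.fft.rfft(h_long, 2 * n)      # second filter coefficients
    ys = np.fft.irfft(X * Hs)            # TD components, short echo
    yl = np.fft.irfft(X * Hl)            # TD components, long echo
    return mix[0] * ys + mix[1] * yl     # combined channel output
```

The 2x buffer with zero padding is what makes the FD multiplication equivalent to linear (rather than circular) convolution for filters up to x taps long.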

Apparatus and method for processing volumetric audio

A method including receiving an audio scene including at least one source captured using at least one near-field microphone and at least one far-field microphone. The method includes determining at least one room impulse response (RIR) associated with the audio scene based on the at least one near-field microphone and the at least one far-field microphone, accessing a predetermined scene geometry corresponding to the audio scene, and identifying the best match to the predetermined scene geometry in a scene geometry database. The method also includes performing an RIR comparison based on the at least one RIR and at least one geometric RIR associated with the best-matching geometry, and rendering a volumetric audio scene based on a result of the RIR comparison.
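The best-match step can be sketched as scoring the measured RIR against candidate geometric RIRs. Normalized cross-correlation is an assumed similarity metric, and `geometry_db` (hypothetical names mapped to candidate RIRs) is illustrative:

```python
import numpy as np

def best_matching_geometry(measured_rir, geometry_db):
    """Pick the scene geometry whose simulated ('geometric') RIR is most
    similar to the RIR estimated from the near/far-field microphones."""
    def similarity(a, b):
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(np.dot(a, b) / denom) if denom else 0.0
    return max(geometry_db,
               key=lambda name: similarity(measured_rir, geometry_db[name]))
```

A real system might instead compare derived descriptors (decay time, direct-to-reverberant ratio) rather than raw impulse responses, but the database-ranking structure is the same.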

Extrapolation of acoustic parameters from mapping server

Determination of a set of acoustic parameters for a headset is presented herein. The set of acoustic parameters can be determined based on a virtual model of physical locations stored at a mapping server. The virtual model describes a plurality of spaces and the acoustic properties of those spaces, wherein a location in the virtual model corresponds to a physical location of the headset. A location in the virtual model for the headset is determined based on information, received from the headset, describing at least a portion of the local area. The set of acoustic parameters associated with the physical location of the headset is determined based in part on the determined location in the virtual model and any acoustic parameters associated with that location. The headset presents audio content using the set of acoustic parameters received from the mapping server.
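The server-side lookup can be sketched as a nearest-location query over the virtual model, with a fallback to the nearest location that actually has stored parameters as a simple stand-in for extrapolation. The model layout and parameter names (`rt60`, `drr_db`) are assumptions:

```python
import math

def acoustic_params_for_headset(headset_pos, virtual_model):
    """Return the acoustic parameter set for the model location nearest the
    headset's physical position; if that location has no stored parameters,
    fall back to the nearest location that has some.
    virtual_model maps (x, y, z) tuples to parameter dicts."""
    ranked = sorted(virtual_model, key=lambda loc: math.dist(loc, headset_pos))
    for loc in ranked:
        params = virtual_model[loc]
        if params:                        # e.g. {"rt60": 0.4, "drr_db": 8.0}
            return loc, params
    return None, {}
```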

Own voice reinforcement using extra-aural speakers

A system includes an audio source device having a first microphone and a first speaker for directing sound into the environment in which the audio source device is located, and a wireless audio receiver device having a second microphone and a second speaker for directing sound into a user's ear. The audio source device is configured to 1) capture, using the first microphone, speech of the user as a first audio signal, 2) reduce noise in the first audio signal to produce a speech signal, and 3) drive the first speaker with the speech signal. The wireless audio receiver device is configured to 1) capture, using the second microphone, a reproduction of the speech produced by the first speaker as a second audio signal and 2) drive the second speaker with the second audio signal to output the reproduction of the speech.
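The "reduce noise" step can be sketched with frame-by-frame magnitude spectral subtraction; the abstract does not specify a method, so this particular technique (and the no-overlap framing) is purely an illustrative assumption:

```python
import numpy as np

def reduce_noise(mic_signal, noise_estimate, n_fft=256):
    """Crude magnitude spectral subtraction: subtract an averaged noise
    magnitude spectrum from each frame of the microphone signal to obtain
    the speech signal that drives the extra-aural speaker."""
    noise_mag = np.abs(np.fft.rfft(noise_estimate[:n_fft]))
    out = np.zeros_like(mic_signal, dtype=float)
    for start in range(0, len(mic_signal) - n_fft + 1, n_fft):
        frame = mic_signal[start:start + n_fft]
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # floor at zero
        phase = np.angle(spec)
        out[start:start + n_fft] = np.fft.irfft(mag * np.exp(1j * phase))
    return out
```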

Combined HRTF for spatial audio plus hearing aid support and other enhancements

An HRTF used for 3D spatialized audio is combined, e.g., by concatenation, with additional settings to provide a more comfortable, accessible, and enjoyable experience for a listener, such as a player of a computer game listening to audio through a headset. A single transfer function is thus created that includes the other settings; once that transfer function is computed, run-time processing can treat it as a single combined transfer function rather than as multiple separate stages, resulting in computational savings. The additional settings pertain to hearing aids normally worn by the listener, as well as a room-related function specific to a particular listening venue.
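The single-filter idea rests on the fact that a cascade of linear filters collapses into one filter by convolving their impulse responses. A minimal sketch, with the stage names (HRTF, hearing-aid compensation, room response) as assumed labels:

```python
import numpy as np

def combine_transfer_functions(hrtf_ir, hearing_aid_ir, room_ir):
    """Fold three processing stages into one impulse response by
    convolution, so run-time rendering applies a single filter instead
    of three cascaded ones."""
    return np.convolve(np.convolve(hrtf_ir, hearing_aid_ir), room_ir)

def render(x, combined_ir):
    """One pass with the precomputed combined filter."""
    return np.convolve(x, combined_ir)
```

Because convolution is associative, rendering with the combined filter is numerically equivalent to running the three stages in sequence, while doing the combination work once, offline.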

Modeling Acoustic Effects of Scenes With Dynamic Portals

The description relates to modeling acoustic effects in scenes with dynamic portals. One implementation includes obtaining a representation of a scene having a plurality of portals and simulating sound travel in the scene using a plurality of probes deployed in the scene. The implementation also includes determining acoustic parameters for initial sound traveling between respective probes based at least on the simulating. The implementation also includes identifying one or more portals in the scene that are intercepted by a particular initial sound path from a particular source location to a particular listener location, and determining particular acoustic parameters for the particular source location and the particular listener location.
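The portal-interception test can be sketched geometrically: check which portal openings the straight source-to-listener path crosses. This is a 2D top-down sketch using a standard segment-intersection test; the probe-based simulation and parameter lookup are omitted:

```python
def intercepted_portals(source, listener, portals):
    """Return the portals (2D openings given as segment endpoints) crossed
    by the straight initial sound path from source to listener."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def segments_intersect(p1, p2, q1, q2):
        # Proper intersection: each segment's endpoints straddle the other.
        d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
        d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    return [name for name, (a, b) in portals.items()
            if segments_intersect(source, listener, a, b)]
```

The intercepted portals matter because each one's state (open, closed, partially open) can then modulate the acoustic parameters applied along that path.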

SOUND EFFECT SIMULATION BY CREATING VIRTUAL REALITY OBSTACLE
20220377482 · 2022-11-24 ·

According to one embodiment, a method, computer system, and computer program product for modulating external sounds to reflect the acoustic effects of virtual objects in a mixed-reality environment is provided. The present invention may include creating a knowledge corpus; recording a sound effect occurring externally to a mixed-reality environment experienced by a user operating a mixed-reality device; identifying one or more objects within the mixed-reality environment, including at least one virtual object; modulating the sound effect based on the knowledge corpus to simulate one or more acoustic effects of the one or more objects within the mixed-reality environment; and playing the modulated sound effect to the user.
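The modulation step can be sketched as a per-material occlusion filter. The "knowledge corpus" here is reduced to a hypothetical table of attenuation and smoothing strengths per obstacle material, not the patent's learned corpus:

```python
import numpy as np

def occlude(external_sound, obstacle_material, corpus=None):
    """Modulate an externally recorded sound as if a virtual obstacle of
    the given material stood between it and the user. Gains and smoothing
    factors per material are illustrative assumptions."""
    corpus = corpus or {"glass": (0.8, 0.6),
                        "brick": (0.3, 0.15),
                        "curtain": (0.6, 0.35)}
    gain, alpha = corpus[obstacle_material]
    out = np.empty_like(external_sound, dtype=float)
    acc = 0.0
    for i, x in enumerate(external_sound):
        acc += alpha * (x - acc)          # duller behind the obstacle
        out[i] = gain * acc
    return out
```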