XR RENDERING FOR 3D AUDIO CONTENT AND AUDIO CODEC
A device includes a memory configured to store instructions and also includes one or more processors configured to execute the instructions to obtain audio data corresponding to a sound source and metadata indicative of a direction of the sound source. The one or more processors are configured to execute the instructions to obtain direction data indicating a viewing direction associated with a user of a playback device. The one or more processors are configured to execute the instructions to determine a resolution setting based on a similarity between the viewing direction and the direction of the sound source. The one or more processors are also configured to execute the instructions to process the audio data based on the resolution setting to generate processed audio data.
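The abstract does not say how "similarity" between the viewing direction and the sound-source direction is measured or how it maps to a resolution setting. A minimal sketch, assuming cosine similarity of direction vectors and hypothetical high/mid/low settings and thresholds:

```python
import math

def direction_similarity(view_dir, source_dir):
    """Cosine similarity between two 3-D direction vectors."""
    dot = sum(v * s for v, s in zip(view_dir, source_dir))
    norm = (math.sqrt(sum(v * v for v in view_dir))
            * math.sqrt(sum(s * s for s in source_dir)))
    return dot / norm

def resolution_setting(view_dir, source_dir, high=3, mid=2, low=1):
    """Map similarity to a rendering resolution: sources near the gaze
    direction get the high setting, sources behind the user the low one.
    The thresholds here are illustrative, not from the patent."""
    sim = direction_similarity(view_dir, source_dir)
    if sim > 0.7:       # roughly within ~45 degrees of the viewing direction
        return high
    elif sim > 0.0:     # elsewhere in the frontal hemisphere
        return mid
    return low          # at or behind the listener's sides
```

A renderer could then, for example, use fewer HRTF taps or a coarser ambisonic order for sources that receive the low setting.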
Audio Processing Methods and Systems for a Multizone Augmented Reality Space
An illustrative audio processing system identifies an experience location with which an augmented reality presentation device is associated. The experience location is included within a multizone augmented reality space that is presented by the augmented reality presentation device. The audio processing system determines that the experience location is within both a first sound zone and a second sound zone of the multizone augmented reality space, and, based on the determining that the experience location is within both the first and second sound zones, generates a binaural audio stream for presentation by the augmented reality presentation device. The binaural audio stream includes an environmental audio component implemented by a mix of a first environmental audio stream associated with the first sound zone and a second environmental audio stream associated with the second sound zone. Corresponding methods and systems are also disclosed.
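The mix of the two environmental streams is not specified beyond being "a mix." A minimal sketch, assuming a linear crossfade whose weight comes from the experience location's relative distance to each zone's center (both the weighting law and the function names are assumptions):

```python
import math

def zone_weight(pos, center_a, center_b):
    """Weight for sound zone A, based on how close the experience
    location is to each zone center: the nearer zone dominates."""
    da = math.dist(pos, center_a)
    db = math.dist(pos, center_b)
    return db / (da + db) if (da + db) else 0.5

def mix_zone_streams(stream_a, stream_b, weight_a):
    """Crossfade two equal-length environmental streams with a single
    mix weight in [0, 1] for the first zone."""
    weight_b = 1.0 - weight_a
    return [weight_a * a + weight_b * b for a, b in zip(stream_a, stream_b)]
```

At a zone boundary the weight approaches 0.5, so the listener hears both environments equally, which matches the overlap behavior the abstract describes.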
MULTI-TRACK AUDIO IN A SECURITY SYSTEM
A method, system, server and device are disclosed. According to one or more embodiments, a server is provided. A first audio track is received which includes first audio originating from a premises client at a premises location. A second audio track is received which includes second audio originating from a remote client. A first pan angle is determined for the first audio track and a second pan angle is determined for the second audio track. The second pan angle is different from the first pan angle. A stereo composite track is generated based on the first pan angle and the second pan angle, where the stereo composite track includes the first audio track and the second audio track.
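The abstract says a stereo composite is generated from two pan angles but not which panning law is used. A minimal sketch, assuming a standard constant-power (sine/cosine) pan law over a hypothetical [-45, +45] degree range:

```python
import math

def pan_gains(pan_angle_deg):
    """Constant-power gains for a pan angle in [-45, +45] degrees,
    where -45 is hard left and +45 is hard right."""
    theta = math.radians(pan_angle_deg + 45.0)  # map to [0, 90] degrees
    return math.cos(theta), math.sin(theta)     # (left gain, right gain)

def stereo_composite(track_a, pan_a, track_b, pan_b):
    """Mix two mono tracks (equal-length sample lists) into one stereo
    track, placing each at its own pan angle."""
    la, ra = pan_gains(pan_a)
    lb, rb = pan_gains(pan_b)
    return [(la * a + lb * b, ra * a + rb * b)
            for a, b in zip(track_a, track_b)]
```

Panning the premises audio and the remote-client audio to distinct angles, as the claim requires, lets an operator separate the two sources by ear.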
AUDIO BEAM STEERING, TRACKING AND AUDIO EFFECTS FOR AR/VR APPLICATIONS
A method for audio beam steering, tracking, and audio effects for an immersive reality application is provided. The method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, identifying a perceived direction for the first acoustic source relative to the headset based on a location of the first acoustic source, and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the perceived direction. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
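The time delay and amplitude derived from the perceived direction correspond to the classic interaural time and level differences. A minimal sketch of one common approximation (a Woodworth-style spherical-head delay and a crude cosine level roll-off; both models and all constants are assumptions, not taken from the patent):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def interaural_cues(azimuth_deg):
    """Simplified interaural cues for a source at the given azimuth
    (0 = straight ahead, +90 = fully to one side). Returns
    (delay_seconds, amplitude_gain) to apply to the far-ear signal."""
    az = math.radians(azimuth_deg)
    # Woodworth path-length difference around a spherical head
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # crude level roll-off as the source moves to the opposite side
    ild_gain = 0.5 * (1.0 + math.cos(az))
    return itd, ild_gain
```

A frontal source yields zero delay and unity gain at both ears; a lateral source delays and attenuates the far ear, which is the cue pair the claimed audio signal encodes.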
ONE-TOUCH SPATIAL EXPERIENCE WITH FILTERS FOR AR/VR APPLICATIONS
A method to assess user condition for wearable devices using electromagnetic sensors is provided. The method includes receiving a signal from an electromagnetic sensor, the signal being indicative of a health condition of a user of a wearable device, selecting a salient attribute from the signal, and determining, based on the salient attribute, the health condition of the user of the wearable device. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
Reconstruction of audio scenes from a downmix
Audio objects are associated with positional metadata. A received downmix signal comprises downmix channels that are linear combinations of one or more audio objects and are associated with respective positional locators. In a first aspect, the downmix signal, the positional metadata and frequency-dependent object gains are received. An audio object is reconstructed by applying the object gain to an upmix of the downmix signal in accordance with coefficients based on the positional metadata and the positional locators. In a second aspect, audio objects have been encoded together with at least one bed channel positioned at a positional locator of a corresponding downmix channel. The decoding system receives the downmix signal and the positional metadata of the audio objects. A bed channel is reconstructed by suppressing the content representing audio objects from the corresponding downmix channel on the basis of the positional locator of the corresponding downmix channel.
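The first aspect, reconstructing an object by applying its gain to an upmix whose coefficients depend on the positional metadata and the channel locators, can be sketched as follows. The inverse-distance weighting stands in for whatever panning law the codec actually uses, and the gain is treated as frequency-independent for brevity; both are assumptions:

```python
import math

def upmix_coefficients(object_pos, channel_locators):
    """Per-channel upmix weights from how close the object's positional
    metadata lies to each downmix channel's positional locator
    (inverse-distance weighting; a hypothetical panning law)."""
    inv = [1.0 / (math.dist(object_pos, loc) + 1e-9)
           for loc in channel_locators]
    total = sum(inv)
    return [w / total for w in inv]

def reconstruct_object(downmix, object_pos, channel_locators, object_gain):
    """Rebuild one audio object: apply the object gain to a weighted
    upmix of the downmix channels (lists of samples)."""
    coeffs = upmix_coefficients(object_pos, channel_locators)
    n = len(downmix[0])
    return [object_gain * sum(c * ch[i] for c, ch in zip(coeffs, downmix))
            for i in range(n)]
```

An object positioned exactly at one channel's locator draws essentially all of its content from that channel, scaled by the object gain.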
Sound effect simulation by creating virtual reality obstacle
According to one embodiment, a method, computer system, and computer program product for modulating external sounds to reflect the acoustic effects of virtual objects in a mixed-reality environment are provided. The present invention may include creating a knowledge corpus; recording a sound effect occurring externally to a mixed-reality environment experienced by a user operating a mixed-reality device; identifying one or more objects within the mixed-reality environment, including at least one virtual object; modulating the sound effect based on the knowledge corpus to simulate one or more acoustic effects of the one or more objects within the mixed-reality environment; and playing the modulated sound effect to the user.
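The abstract does not say how the sound effect is modulated. One common way to simulate a virtual obstacle's acoustic effect is attenuation plus a muffling low-pass filter when the object occludes the source; this sketch assumes that model and a one-pole filter (all names and constants are hypothetical):

```python
def apply_occlusion(samples, occluded, cutoff_alpha=0.2, attenuation=0.5):
    """Hypothetical occlusion model: when a virtual object sits between
    the user and the external sound source, attenuate the sound and
    low-pass it with a one-pole filter (muffling); otherwise pass it
    through unchanged."""
    if not occluded:
        return list(samples)
    out, prev = [], 0.0
    for s in samples:
        prev = prev + cutoff_alpha * (s - prev)   # one-pole low-pass step
        out.append(attenuation * prev)
    return out
```

The knowledge corpus in the claim would presumably supply material-dependent values for the attenuation and cutoff rather than the fixed defaults used here.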
Surround sound location virtualization
A computer program product having a non-transitory computer-readable medium including computer program logic encoded thereon that, when performed on a surround sound audio system that is configured to render left front, right front, and center front audio signals, and also render left and right near-field binaurally-encoded audio signals, causes the surround sound audio system to develop the left and right near-field binaurally-encoded audio signals, and provide the left near-field binaurally-encoded audio signal to a left non-occluding near-field driver and provide the right near-field binaurally-encoded audio signal to a right non-occluding near-field driver.
DEVICE AND METHOD FOR THREE-DIMENSIONAL SOUND REPRODUCTION
Described is a device for the reproduction of three-dimensional sound, in particular headphones, including a pair of specular pads having a shape such as to substantially form a portion of a geoid or a hemisphere, where each pad defines a concave inner surface having a plurality of recesses distributed according to a predetermined distribution. The device includes a plurality of loudspeakers, designed for sound reproduction and housed in the recesses. The device also includes a control unit, connected to the plurality of loudspeakers, configured to perform an analysis of a digital sound source and to determine a sound reproduction configuration of the loudspeakers as a function of the analysis of the sound source.
SOUND EFFECT ADJUSTMENT
A sound effect adjustment method is provided. In the method, a video frame and an audio signal of a corresponding time unit of a target video are obtained. A sound source orientation and a sound source distance of a sound source object in the video frame are determined. Scene information corresponding to the video frame is determined. The audio signal is filtered based on the sound source orientation and the sound source distance. An echo coefficient is determined according to the scene information. Further, an adjusted audio signal with an adjusted sound effect is generated based on the filtered audio signal and the echo coefficient.
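The final step, combining the filtered audio with an echo coefficient derived from the scene, can be sketched as a simple feed-forward echo: a delayed, scaled copy of the signal mixed back in. The delay length and the mapping from scene to coefficient are assumptions, since the abstract specifies neither:

```python
def add_echo(samples, echo_coefficient, delay_samples):
    """Mix a delayed, scaled copy of the signal back in. The echo
    coefficient would come from the scene information (e.g., larger for
    a reverberant indoor scene, near zero outdoors)."""
    out = list(samples)
    for i in range(delay_samples, len(samples)):
        out[i] += echo_coefficient * samples[i - delay_samples]
    return out
```

Applied after the orientation/distance filtering the abstract describes, this yields the adjusted audio signal with the scene-appropriate echo.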