Patent classifications
H04S7/302
AUDIO OUTPUT CONFIGURATION FOR MOVING DEVICES
Described herein is a system for recalibrating an audio configuration for mobile or moving devices. The system may configure a multi-device output group to generate synchronous output audio using multiple devices. For example, the output group may include a first device generating a first portion of output audio corresponding to a first channel and a second device generating a second portion of the output audio corresponding to a second channel. If the second device detects motion indicating a change in its location, the system may recalibrate the output group to continue generating the output audio without the second device. For example, the first device or a new device can generate the second portion of the output audio instead of the second device. When the second device returns, the system can recalibrate the output group to include the second device again.
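The recalibration flow described above amounts to reassigning channels in a channel-to-device mapping; the `OutputGroup` class and its method names below are illustrative assumptions, not taken from the patent:

```python
class OutputGroup:
    """Minimal sketch of a multi-device synchronous output group."""

    def __init__(self):
        self.assignments = {}  # channel name -> device name

    def assign(self, channel, device):
        self.assignments[channel] = device

    def recalibrate_on_departure(self, departed, fallback):
        # Reassign every channel the departed device was playing
        # so output continues without it.
        for channel, device in self.assignments.items():
            if device == departed:
                self.assignments[channel] = fallback

    def recalibrate_on_return(self, returned, channel):
        # Give the returning device its channel back.
        self.assignments[channel] = returned


group = OutputGroup()
group.assign("left", "device_a")
group.assign("right", "device_b")
group.recalibrate_on_departure("device_b", fallback="device_a")
# device_a now carries both channels until device_b returns
```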
AUDIO BEAM STEERING, TRACKING AND AUDIO EFFECTS FOR AR/VR APPLICATIONS
A method for audio beam steering, tracking, and audio effects for an immersive reality application is provided. The method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, identifying a perceived direction for the first acoustic source relative to the headset based on a location of the first acoustic source, and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the perceived direction. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
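A time delay and amplitude derived from a perceived direction can be illustrated with a textbook interaural model; the Woodworth-style time-difference formula and the 6 dB level term below are common approximations assumed for this sketch, not the patent's actual model:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def itd_ild(azimuth_deg):
    """Rough interaural time/level cues for a source azimuth.

    azimuth_deg: 0 = straight ahead, positive = toward the right ear.
    Returns (time difference in seconds, level difference in dB).
    """
    az = math.radians(azimuth_deg)
    # Woodworth approximation of the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # Simple sinusoidal interaural level difference
    ild_db = 6.0 * math.sin(az)
    return itd, ild_db
```

A renderer would delay and attenuate the far-ear copy of the waveform by these amounts before feeding the speakers.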
ONE-TOUCH SPATIAL EXPERIENCE WITH FILTERS FOR AR/VR APPLICATIONS
A method to assess user condition for wearable devices using electromagnetic sensors is provided. The method includes receiving a signal from an electromagnetic sensor, the signal being indicative of a health condition of a user of a wearable device, selecting a salient attribute from the signal, and determining, based on the salient attribute, the health condition of the user of the wearable device. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
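The receive-signal, select-attribute, determine-condition pipeline might be sketched as follows; the peak-to-peak feature and the threshold are invented for illustration and are not from the patent:

```python
def salient_attribute(samples):
    # Use peak-to-peak amplitude as the (assumed) salient attribute.
    return max(samples) - min(samples)

def classify_condition(attribute, threshold=1.0):
    # Map the attribute to a coarse condition label.
    return "elevated" if attribute > threshold else "normal"

reading = [0.1, 0.4, -0.2, 0.9]  # hypothetical sensor signal
condition = classify_condition(salient_attribute(reading))
```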
IMMERSIVE SOUND REPRODUCTION USING MULTIPLE TRANSDUCERS
One or more embodiments include techniques for generating immersive audio for an acoustic system. The techniques include determining an apparent location associated with a portion of audio; calculating, for each speaker included in a plurality of speakers of the acoustic system, a perceptual distance between the speaker and the apparent location; selecting a subset of speakers included in the plurality of speakers based on the perceptual distances between the plurality of speakers and the apparent location; generating a set of filters based on the subset of speakers and one or more target characteristics of the acoustic system; and generating, for each speaker included in the subset of speakers, a speaker signal using one or more filters included in the set of filters.
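The subset-selection step could be sketched with plain Euclidean distance standing in for the patent's unspecified perceptual distance:

```python
import math

def select_speakers(speakers, apparent_loc, k=2):
    """Pick the k speakers closest to an apparent source location.

    speakers: dict of speaker name -> (x, y, z) position.
    Euclidean distance is an assumed stand-in for the perceptual
    distance metric, which the abstract does not define.
    """
    ranked = sorted(speakers.items(),
                    key=lambda kv: math.dist(kv[1], apparent_loc))
    return [name for name, _ in ranked[:k]]

speakers = {
    "front_left":  (-1.0,  1.0, 0.0),
    "front_right": ( 1.0,  1.0, 0.0),
    "rear_left":   (-1.0, -1.0, 0.0),
    "rear_right":  ( 1.0, -1.0, 0.0),
}
subset = select_speakers(speakers, (0.8, 0.9, 0.0), k=2)
```

The selected subset would then drive the filter-generation stage for the per-speaker signals.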
ACOUSTIC SIGNAL PROCESSING APPARATUS, ACOUSTIC SIGNAL PROCESSING METHOD, AND PROGRAM
The present technology relates to an acoustic signal processing apparatus, an acoustic signal processing method, and a program for expanding a range of listening positions in which an effect of a transaural reproduction system can be obtained. First and second output signals for localizing a sound image in front of or behind and on the left of a first position located on the left of a listening position are output from first and second speakers, respectively. Third and fourth output signals for localizing a sound image in front of or behind and on the right of a second position located on the right of the listening position are output from third and fourth speakers, respectively. The first speaker is disposed in a first direction in front of or behind the listening position and on the left of the listening position. The second speaker is disposed in the first direction and on the right of the listening position. The third speaker is disposed in the first direction and on the left of the listening position and on the right of the first speaker. The fourth speaker is disposed in the first direction of the listening position and on the right of the second speaker. The present technology can be applied, for example, to an acoustic processing system.
Graphical user interface and parametric equalizer in gaming systems
A system that incorporates the subject disclosure may include, for example, a gaming system that cooperates with a graphical user interface to enable user modification and enhancement of one or more audio streams associated with the gaming system. In embodiments, the audio streams may include a game audio stream, a chat audio stream of conversation among players of a video game, and a microphone audio stream of a player of the video game. Additional embodiments are disclosed.
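A parametric equalizer band of the kind such a GUI might expose is commonly built from a biquad peaking filter; the sketch below uses the well-known Audio EQ Cookbook coefficient formulas, which the abstract does not itself specify:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Normalized biquad coefficients (b0, b1, b2, a1, a2) for a
    peaking EQ band, per the RBJ Audio EQ Cookbook formulas.

    fs: sample rate in Hz; f0: center frequency; q: bandwidth control.
    """
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a
    a0 = 1.0 + alpha / a
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / a
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)
```

A GUI slider for boost/cut would map directly to `gain_db`; at 0 dB the band reduces to an identity filter.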
Moving an emoji to move a location of binaural sound
During an electronic communication between a first user and a second user, an electronic device of the second user displays a graphical representation at a location selected by the first user. The graphical representation provides an indication to the second user of where binaural sound associated with the graphical representation will externally localize to the second user. Subsequent movement of the graphical representation changes the location where the binaural sound externally localizes to the second user.
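The mapping from the graphical representation's on-screen position to a binaural localization direction might look like the following; the linear screen-to-azimuth mapping is an assumption for illustration only:

```python
def azimuth_from_screen(x, screen_width):
    """Map a horizontal screen position to a localization azimuth
    in degrees (-90 = far left, +90 = far right).

    The actual mapping in the patent is not specified; this linear
    version just shows how moving the graphic moves the sound.
    """
    norm = (x / screen_width) * 2.0 - 1.0   # -1 .. +1
    return max(-90.0, min(90.0, norm * 90.0))
```

Each time the graphic is dragged, the renderer would re-localize the binaural sound at the newly computed azimuth.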
MEDIA PLAYBACK BASED ON SENSOR DATA
Example techniques relate to playback based on acoustic signals in a system including a first network device and a second network device. A first network device may detect a presence of a user using a camera and/or infrared sensors. The first network device sends, in response to detecting the presence of the user, a particular signal via the first network interface. The second network device receives data corresponding to the particular signal and plays back an audio output corresponding to the particular signal.
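The detect-then-notify handshake between the two network devices can be sketched as follows; the class name and the `"chime"` signal are illustrative assumptions, not from the patent:

```python
class NetworkDevice:
    """Stand-in for the second network device, which plays back
    an audio output corresponding to a received signal."""

    def __init__(self, name):
        self.name = name
        self.played = []

    def on_signal(self, signal):
        self.played.append(signal)


def detect_and_notify(sensor_reading, peer):
    """First device: on detecting the user's presence, send the
    particular signal to the peer over the network interface."""
    if sensor_reading["presence"]:
        peer.on_signal("chime")
        return True
    return False
```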
THREE-DIMENSIONAL AUDIO SYSTEMS
A three-dimensional sound generation system includes one or more processors of a computing device configured to receive one or more sound tracks, each sound track comprising one or more sound sources and each sound source corresponding to one or more respective sound categories; receive or determine a first configuration in a three-dimensional space, the first configuration comprising a listener position and a computing device location relative to the listener position; determine a second configuration comprising a change to at least one of the listener position or the computing device location; generate, using the one or more sound tracks and the second configuration, one or more channels of sound signals; and provide the one or more channels of sound signals to drive one or more sound generation devices to generate a three-dimensional sound field.
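Generating channel signals from a listener/device configuration can be illustrated with a constant-power pan law; the mapping below is a minimal stand-in, as the abstract does not specify the rendering:

```python
import math

def pan_for_configuration(source_x, listener_x, half_width=1.0):
    """Map the source's lateral offset from the listener to
    constant-power (left_gain, right_gain) channel gains.

    half_width is an assumed scale for full-left/full-right.
    When the listener moves, calling this again with the new
    position re-derives the channel gains.
    """
    pan = max(-1.0, min(1.0, (source_x - listener_x) / half_width))
    theta = (pan + 1.0) * math.pi / 4.0   # 0 .. pi/2
    return math.cos(theta), math.sin(theta)
```

Constant-power panning keeps the summed energy of the two channels constant as the configuration changes, avoiding loudness dips mid-pan.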
Apparatus, Method, or Computer Program for Processing an Encoded Audio Scene using a Parameter Smoothing
Apparatus for processing an audio scene representing a sound field, the audio scene having information on a transport signal and a first set of parameters. The apparatus has a parameter processor for processing the first set of parameters to obtain a second set of parameters, wherein the parameter processor is configured to calculate at least one raw parameter for each output time frame using at least one parameter of the first set of parameters for the input time frame, to calculate smoothing information, such as a smoothing factor, for each raw parameter in accordance with a smoothing rule, and to apply the corresponding smoothing information to the corresponding raw parameter to derive the parameter of the second set of parameters for the output time frame. The apparatus further has an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.
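The per-frame smoothing step can be illustrated with a one-pole recursion; a fixed smoothing factor is assumed here for simplicity, whereas the abstract derives smoothing information per raw parameter from a smoothing rule:

```python
def smooth_parameters(raw_frames, smoothing_factor=0.8):
    """One-pole recursive smoothing across output time frames:

        p_smooth[n] = f * p_smooth[n-1] + (1 - f) * p_raw[n]

    raw_frames: raw parameter value per output time frame.
    Larger f means slower, smoother parameter evolution.
    """
    smoothed = []
    prev = raw_frames[0]
    for raw in raw_frames:
        prev = smoothing_factor * prev + (1.0 - smoothing_factor) * raw
        smoothed.append(prev)
    return smoothed
```

Smoothing the decoded parameters this way suppresses frame-to-frame jumps that would otherwise be audible as artifacts in the rendered scene.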