Patent classification: H04S7/00
METHODS AND SYSTEMS FOR GENERATING AND RENDERING OBJECT BASED AUDIO WITH CONDITIONAL RENDERING METADATA
Methods and audio processing units for generating an object-based audio program that includes conditional rendering metadata corresponding to at least one object channel of the program. The conditional rendering metadata is indicative of at least one rendering constraint, based on the playback speaker array configuration, that applies to each corresponding object channel. Also disclosed are methods for rendering audio content determined by such a program, including rendering content of at least one audio channel of the program in a manner compliant with each applicable rendering constraint in response to at least some of the conditional rendering metadata. Rendering a selected mix of the program's content may provide an immersive experience.
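The constraint check described above can be sketched as a filter over object channels. The metadata shape here (an `allowed_layouts` set per object) is a hypothetical illustration, not the encoding the patent specifies:

```python
def applicable_objects(objects, speaker_layout):
    """Select object channels whose conditional rendering metadata
    permits the current playback speaker configuration.
    The 'allowed_layouts' field is a hypothetical metadata shape
    used only for illustration."""
    return [o["name"] for o in objects
            if speaker_layout in o["allowed_layouts"]]

objs = [
    {"name": "dialog",    "allowed_layouts": {"2.0", "5.1", "7.1"}},
    {"name": "height_fx", "allowed_layouts": {"7.1.4"}},
]
print(applicable_objects(objs, "5.1"))  # ['dialog']
```

A renderer would then mix only the surviving object channels into the speaker feeds, satisfying each applicable constraint.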
Binaural Sound in Visual Entertainment Media
A method provides binaural sound to a listener while the listener watches a movie, so that sounds from the movie localize to the location of a character in the movie. Sound is convolved with head-related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener, who wears a wearable electronic device.
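The core operation described, convolving a sound with a listener's HRTF pair, can be sketched in a few lines. The toy head-related impulse responses (HRIRs) below are invented for illustration; a real system would select measured HRIRs for the character's position and update them as the scene changes:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with an HRIR pair to produce a
    two-channel binaural signal (illustrative sketch only)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs: the right ear is attenuated and delayed relative to
# the left, so the sound localizes to the listener's left.
mono = np.random.randn(480)
hrir_l = np.array([1.0, 0.3, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.5, 0.15])
out = render_binaural(mono, hrir_l, hrir_r)
print(out.shape)  # (2, 483)
```

The output length is `len(mono) + len(hrir) - 1`; practical renderers use block-wise FFT convolution for efficiency.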
NON-TRANSITORY COMPUTER-READABLE MEDIUM HAVING COMPUTER-READABLE INSTRUCTIONS AND SYSTEM
A sound-controlling system including a user terminal having a sound source, a wireless communication device, a digital-to-analog converter (DAC), and first processing electronics. The first processing electronics are configured to: provide data of a backing sound to the sound source; control the sound source to generate a sound signal based on the data; receive a first input instruction comprising either a first instruction to transmit the sound signal or a second instruction to play back the backing sound; provide the sound signal to the wireless communication device when the first input instruction is the first instruction, and to the DAC when it is the second instruction; control the wireless communication device to convert the sound signal to a wireless signal and transmit the wireless signal; and convert the sound signal from a digital signal to an analog signal for playback of the backing sound.
ENVIRONMENT ACOUSTICS PERSISTENCE
Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
METHOD FOR GENERATING CUSTOMIZED SPATIAL AUDIO WITH HEAD TRACKING
A headphone for spatial audio rendering includes a first database having an impulse response pair corresponding to a reference speaker location. A head sensor provides head orientation information to a second database having rotation filters, the filters corresponding to different azimuth and elevation positions relative to the reference speaker location. A digital signal processor combines the rotation filters with the impulse response pair to generate an output binaural audio signal for the transducers of the headphone. Efficiencies in creating impulse response or HRTF databases are achieved by sampling the impulse response less frequently than in conventional methods. This sampling at coarser intervals reduces the number of data measurements required to generate a spherical grid and reduces the time involved in capturing the impulse responses. Impulse responses for data points falling between the sampled data points are generated by interpolating in the frequency domain.
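The frequency-domain interpolation between coarsely sampled grid points can be sketched as a linear blend of complex spectra. This is a minimal stand-in for the method described; production systems often interpolate magnitude and phase (or minimum-phase components) separately:

```python
import numpy as np

def interpolate_hrir(h_a, h_b, frac):
    """Estimate an impulse response between two measured grid points
    by linearly blending their complex spectra (sketch only)."""
    H_a = np.fft.rfft(h_a)
    H_b = np.fft.rfft(h_b)
    H = (1.0 - frac) * H_a + frac * H_b
    return np.fft.irfft(H, n=len(h_a))

h0 = np.zeros(64); h0[0] = 1.0      # measured at, say, azimuth 0
h10 = np.zeros(64); h10[2] = 1.0    # measured at azimuth 10 degrees
h5 = interpolate_hrir(h0, h10, 0.5) # estimate for azimuth 5 degrees
print(round(float(h5[0]), 2), round(float(h5[2]), 2))  # 0.5 0.5
```

Blending the spectra of two offset unit impulses yields half-amplitude impulses at both delays, which shows why plain complex interpolation smears transients; that is one reason practical databases interpolate magnitude and delay separately.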
THREE-DIMENSIONAL AUDIO SYSTEMS
A three-dimensional sound generation system includes one or more processors of a computing device configured to: receive sound tracks, each comprising one or more sound sources, each sound source corresponding to one or more respective sound categories; receive or determine a first configuration in a three-dimensional space, the first configuration comprising a listener position and a computing device location relative to the listener position; determine a second configuration comprising a change to at least one of the listener position or the computing device location relative to the listener position; generate, using the one or more sound tracks and the second configuration, one or more channels of sound signals; and provide the one or more channels of sound signals to drive one or more sound generation devices to generate a three-dimensional sound field.
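A minimal sketch of deriving channel signals from listener-relative geometry is constant-power stereo panning from the source azimuth. This is an illustrative simplification, not the patent's rendering method, which may use more channels and richer spatial cues:

```python
import math

def pan_gains(listener_pos, source_pos):
    """Constant-power left/right gains from the source azimuth
    relative to the listener (2-D sketch; 0 azimuth = straight
    ahead, positive = to the right)."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    az = math.atan2(dx, dy)
    theta = (az + math.pi / 2) / 2   # map [-90, +90] deg to [0, 90]
    return math.cos(theta), math.sin(theta)  # (left, right)

l, r = pan_gains((0.0, 0.0), (1.0, 1.0))  # source to the front-right
print(round(l, 3), round(r, 3))
```

When the listener or device configuration changes, the gains are simply recomputed from the new relative positions, which is the essence of the first-to-second configuration update described above.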
LOCATION BASED AUDIO SIGNAL MESSAGE PROCESSING
A method of incorporating environmental acoustic sources into a virtual environment by measuring the acoustic sources and their locations in the real environment and representing them as virtual acoustic sources in the virtual environment.
APPARATUS AND METHOD FOR DETERMINING VIRTUAL SOUND SOURCES
An acoustic image source model for early reflections in a room is generated by iteratively mirroring (305) rooms around boundaries (e.g., walls) of the rooms of the previous iteration. The boundaries around which to mirror in each iteration are determined (303) by a specific selection criterion, including the requirements that mirror directions cannot be reversed, cannot be in an excluded direction, and cannot be repeated unless they occur in a continuous series of mirrorings.
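The first iteration of such a model can be sketched for an axis-aligned shoebox room: mirror the source across each of the six boundary planes to obtain the first-order image sources. The selection rules quoted above (no reversed or repeated directions) then constrain which of these images get mirrored again at higher orders; this sketch shows only order one:

```python
def mirror_point(p, axis, wall):
    """Reflect a 3-D point across an axis-aligned wall plane."""
    q = list(p)
    q[axis] = 2.0 * wall - q[axis]
    return tuple(q)

def first_order_images(source, room):
    """First-order image sources for a shoebox room with dimensions
    room = (Lx, Ly, Lz): one mirror per boundary plane (sketch;
    higher orders would apply the selection criterion recursively)."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            images.append(mirror_point(source, axis, wall))
    return images

src = (1.0, 2.0, 1.5)
print(first_order_images(src, (5.0, 4.0, 3.0)))
```

Each image source, together with the direct path, gives the arrival time and direction of one early reflection at the listener.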
Apparatus, Method, or Computer Program for Processing an Encoded Audio Scene using a Parameter Conversion
An apparatus for processing an encoded audio scene representing a sound field related to a virtual listener position, the encoded audio scene including information on a transport signal and a first set of parameters related to the virtual listener position. The apparatus includes a parameter converter for converting the first set of parameters into a second set of parameters related to a channel representation comprising two or more channels for reproduction at predefined spatial positions, and an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.
Apparatus, Method, or Computer Program for Processing an Encoded Audio Scene using a Parameter Smoothing
An apparatus for processing an audio scene representing a sound field, the audio scene having information on a transport signal and a first set of parameters. The apparatus has a parameter processor for processing the first set of parameters to obtain a second set of parameters. The parameter processor is configured to calculate at least one raw parameter for each output time frame using at least one parameter of the first set of parameters for the input time frame, to calculate smoothing information, such as a smoothing factor, for each raw parameter in accordance with a smoothing rule, and to apply the corresponding smoothing information to the corresponding raw parameter to derive the parameter of the second set of parameters for the output time frame. The apparatus further has an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.
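A simple instance of applying a smoothing factor to per-frame raw parameters is one-pole (exponential) smoothing. The constant `alpha` below stands in for the smoothing information; the apparatus described would derive it per parameter and per frame from its smoothing rule:

```python
def smooth_parameters(raw, alpha):
    """Exponentially smooth a sequence of per-frame raw parameter
    values: out[n] = alpha * raw[n] + (1 - alpha) * out[n-1].
    A sketch with one constant factor; a real smoothing rule would
    adapt alpha per parameter and frame."""
    out = []
    state = raw[0]  # initialize from the first frame
    for r in raw:
        state = alpha * r + (1.0 - alpha) * state
        out.append(state)
    return out

frames = [0.0, 1.0, 1.0, 1.0]
print(smooth_parameters(frames, 0.5))  # [0.0, 0.5, 0.75, 0.875]
```

The smoothed sequence approaches the raw value gradually, which suppresses frame-to-frame parameter jumps that would otherwise be audible as artifacts.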