Patent classifications
H04S7/305
Spatial audio for interactive audio environments
Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment. The respective second intermediate audio signal is determined based on a location of the respective sound source, and further based on an acoustic property of the virtual environment. The respective second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first bus and the second bus.
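The two-bus routing this abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented method: the function name, the 1/distance direct-path gain, and the distance- and absorption-dependent reverb send are all assumptions made for the example.

```python
import math

def mix_to_buses(input_signal, sources, listener_pos, room_absorption=0.3):
    """Route one input signal into a direct-path bus and a reverb bus,
    accumulating one intermediate signal per sound source.
    The gain models below are illustrative assumptions, not the patent's.
    """
    n = len(input_signal)
    direct_bus = [0.0] * n   # first bus: per-source direct renderings
    reverb_bus = [0.0] * n   # second bus: per-source reverberation sends

    for src in sources:
        d = math.dist(src, listener_pos)
        # Direct path: simple 1/distance attenuation (assumed model).
        direct_gain = 1.0 / max(d, 1.0)
        # Reverb send: grows with distance, scaled by an acoustic
        # property (absorption) of the virtual environment.
        reverb_gain = (1.0 - room_absorption) * min(d, 10.0) / 10.0
        for i, x in enumerate(input_signal):
            direct_bus[i] += direct_gain * x
            reverb_bus[i] += reverb_gain * x

    return direct_bus, reverb_bus
```

The signal presented to the listener would then be a mix of the two buses, with the second bus typically passed through a reverberator first.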
System and a processing method for customizing audio experience
The present disclosure relates to a system and a processing method in association with the system for customizing audio experience. Customization of audio experience can be based on derivation of at least one customized audio response characteristic which can be applied to an audio device used by a person. The customized audio response characteristic(s) can be unique to the person.
Information processing device, method, and program
The present technology relates to an information processing device, a method, and a program that enable easy production of 3D Audio content. The information processing device includes a determination unit that determines one or more parameters constituting the metadata of an object on the basis of one or more pieces of attribute information of the object. The present technology can be applied to information processing devices.
COLORLESS GENERATION OF ELEVATION PERCEPTUAL CUES USING ALL-PASS FILTER NETWORKS
A system includes one or more computing devices that encode spatial perceptual cues into a monaural channel to generate a plurality of output channels. A computing device determines a target amplitude response for the mid and side channels of the plurality of output channels, defining a spatial perceptual cue associated with one or more frequency-dependent phase shifts. The computing device determines a transfer function of a single-input, multi-output allpass filter based on the target amplitude response, determines coefficients of the allpass filter based on the transfer function, and processes the monaural channel with the coefficients of the allpass filter to generate the plurality of output channels having the encoded spatial perceptual cues. The allpass filter is configured to be colorless with respect to the individual output channels, allowing the placement of spatial cues into the audio stream to be decoupled from the overall coloration of the audio.
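The colorless property at the heart of this abstract is easy to demonstrate with a first-order allpass section: its magnitude response is unity at every frequency, so only the phase (and hence the spatial cue) is altered. The sketch below is a textbook single-channel allpass, not the patent's single-input, multi-output design.

```python
def allpass_first_order(x, a):
    """First-order allpass filter: H(z) = (-a + z^-1) / (1 - a * z^-1).
    |H| = 1 at all frequencies (colorless); only the phase shift varies
    with frequency. 'a' is the real pole location, |a| < 1 for stability.
    """
    y = []
    x_prev = 0.0
    y_prev = 0.0
    for xn in x:
        yn = -a * xn + x_prev + a * y_prev  # direct-form difference equation
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y
```

A multi-output elevation-cue network would run sections like this with different coefficients per output channel, chosen so that the mid/side amplitude relationship matches the target response.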
Calibration Assistance
Example techniques relate to calibration interfaces that facilitate calibration of a playback device. An example implementation may involve outputting a sequence of prompts to guide calibration of a playback device during a calibration sequence comprising (i) a spatial calibration component and (ii) a spectral calibration component. Outputting the sequence of prompts includes outputting one or more first audio prompts representing a guide to perform the spatial calibration component of the calibration sequence. The spatial calibration component involves calibration of the playback device for a particular location within an environment. Outputting the sequence of prompts also includes outputting one or more second audio prompts representing a guide to perform the spectral calibration component of the calibration sequence. The spectral calibration component involves calibration of the playback device for the environment.
METHOD AND SYSTEM FOR ARTIFICIAL REVERBERATION EMPLOYING REVERBERATION IMPULSE RESPONSE SYNTHESIS
The present embodiments relate to audio effect processing, and more particularly to a method for creating reverberation impulse responses from prerecorded or live source material that forms the basis of a family of reverberation effects. In one embodiment, segments of audio are selected and processed to form an evolving sequence of reverberation impulse responses that are applied to the original source material: that is, an audio stream reverberating itself. In another embodiment, impulse responses derived from one audio track are applied to another audio track. In a further embodiment, reverberation impulse responses are formed by summing randomly selected segments of the source audio and imposing reverberation characteristics, including reverberation time, wet equalization, wet-dry mix, and predelay. By controlling the number and timing of the selected source audio segments, the method produces a collection of impulse responses that represent a trajectory through the source material. In so doing, the evolving impulse responses have the character of room reverberation while also expressing the changing timbre and dynamics of the source audio.
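A minimal sketch of the "summing randomly selected segments" embodiment follows. The exponential decay envelope (-60 dB over `rt60_samples`) and the segment scheduling are assumptions made for illustration; the patent additionally imposes wet equalization, wet-dry mix, and predelay, which are omitted here.

```python
import random

def synth_reverb_ir(source, ir_len, num_segments, rt60_samples, seed=0):
    """Form a reverberation impulse response by summing randomly chosen
    segments of the source audio under an exponential decay envelope,
    so the IR decays like a room while carrying the source's timbre.
    """
    rng = random.Random(seed)
    ir = [0.0] * ir_len
    seg_len = max(1, ir_len // num_segments)
    for _ in range(num_segments):
        # Randomly pick where the segment comes from in the source...
        start_src = rng.randrange(max(1, len(source) - seg_len))
        # ...and where it lands inside the impulse response.
        start_ir = rng.randrange(ir_len - seg_len + 1)
        for i in range(seg_len):
            t = start_ir + i
            env = 10.0 ** (-3.0 * t / rt60_samples)  # -60 dB at rt60
            ir[t] += env * source[start_src + i]
    return ir
```

Re-running this on successive windows of the source yields the evolving sequence of impulse responses the abstract describes; convolving the source with each IR produces the self-reverberating effect.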
Sound Localization for an Electronic Call
During an electronic call between two individuals, a sound localization point simulates a location in empty space from where an origin of a voice of one individual occurs for the other individual.
METHOD AND APPARATUS FOR PROCESSING ACOUSTIC SPATIAL INFORMATION
A method and apparatus for processing acoustic spatial information are provided. The method of processing acoustic spatial information includes identifying at least one mesh disposed in an acoustic space, setting a minimum cuboid surrounding the mesh as a bounding box, and generating acoustic spatial information including information about the bounding box.
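The "minimum cuboid" of this abstract is an axis-aligned bounding box computed from the mesh's vertex coordinates. A minimal sketch, assuming vertices arrive as (x, y, z) tuples:

```python
def bounding_box(vertices):
    """Return the minimum axis-aligned cuboid enclosing a mesh's vertices
    as (min_corner, max_corner). The vertex format is an assumption here."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

The acoustic spatial information would then carry the two corner points (or, equivalently, an origin plus extents) in place of the full mesh.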
METHOD OF PROCESSING MESH DATA FOR PROCESSING AUDIO SIGNAL FOR AUDIO RENDERING IN VIRTUAL REALITY SPACE
Provided is a method of processing mesh data for processing an audio signal for audio rendering in a virtual reality (VR) space. The method includes receiving mesh data defining the geometry of a certain space, the mesh data including data regarding three-dimensional (3D) coordinates of points for configuring spatial information of the certain space, and processing the mesh data by identifying outermost points among the points.
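"Identifying outermost points" is, in computational-geometry terms, a convex-hull step. As a compact illustration, here is Andrew's monotone-chain hull in 2-D; the method itself operates on 3-D coordinates, where a 3-D hull algorithm (e.g. quickhull) would be used instead.

```python
def convex_hull_2d(points):
    """Return the outermost points of a 2-D point set in counter-clockwise
    order (Andrew's monotone chain); a 2-D stand-in for the 3-D case."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()  # interior or collinear point: drop it
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints are shared between chains
```

Points inside the hull contribute nothing to the space's outer geometry, so discarding them reduces the mesh data passed to the audio renderer.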
APPARATUS FOR IMMERSIVE SPATIAL AUDIO MODELING AND RENDERING
Disclosed is an apparatus for immersive spatial audio modeling and rendering for effectively transmitting and playing immersive spatial audio content. The apparatus for immersive spatial audio modeling and rendering disclosed herein may model a spatial audio scene, generate and transmit parameters necessary for spatial audio rendering, and generate various spatial audio effects using the spatial audio parameters, to provide an immersive three-dimensional (3D) audio source coinciding with visual experience in a virtual reality space in response to free changes in the position and direction of a remote user in the space.