Patent classifications
H04S2400/15
IMPULSE RESPONSE GENERATION SYSTEM AND METHOD
A system for determining the impulse response of an environment, the system comprising an audio emitting unit operable to emit a predetermined sound in the environment, an audio detection unit operable to record the sound output by the audio emitting unit, and an impulse response generation unit operable to identify an impulse response of the environment in dependence upon a frequency response of the audio emitting unit and/or the audio detection unit, and a difference between the predetermined sound and the recorded sound.
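Recovering an impulse response from the difference between a known emitted signal and its recording, while compensating for the equipment's frequency response, is commonly done by regularised frequency-domain deconvolution. A minimal sketch under that assumption — function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def estimate_impulse_response(emitted, recorded, device_freq_response=None, eps=1e-8):
    """Estimate an environment impulse response by frequency-domain deconvolution.

    `emitted` is the predetermined test signal, `recorded` is what the
    audio detection unit captured. `device_freq_response` (optional) is the
    combined frequency response of the emitting/detection units, divided out
    to compensate for the equipment. Illustrative sketch only.
    """
    n = len(emitted) + len(recorded) - 1          # avoid circular aliasing
    E = np.fft.rfft(emitted, n)
    R = np.fft.rfft(recorded, n)
    H = R * np.conj(E) / (np.abs(E) ** 2 + eps)   # regularised deconvolution
    if device_freq_response is not None:
        H = H / (device_freq_response + eps)      # remove equipment colouration
    return np.fft.irfft(H, n)
```

With a broadband test signal (e.g. a noise burst or sine sweep), the leading samples of the result approximate the environment's impulse response.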
TRACKING AND COMMUNICATION SYSTEM FOR MICROPHONES
A tracking and communication system for microphones (10), comprising a plurality of microphones (10) installed in one or more zones (14) of contiguous or separate environments. Each microphone (10) is equipped with a radio unit which is connected to the microphone (10). The tracking and communication system also comprises a network of radio transceivers (11), which are positioned in said one or more zones (14) and connected to each other wirelessly or through one or more cables, and which are configured to receive signals from said microphones (10) and to transmit a plurality of data to them, and a command and control unit (12), to which said radio transceivers (11) are connected wirelessly or via cables.
SYSTEM FOR DELIVERABLES VERSIONING IN AUDIO MASTERING
Some implementations of the disclosure relate to using a model trained on mixing console data of sound mixes to automate the process of sound mix creation. In one implementation, a non-transitory computer-readable medium has executable instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising: obtaining a first version of a sound mix; extracting first audio features from the first version of the sound mix; obtaining mixing metadata; automatically calculating, with a trained model, using at least the mixing metadata and the first audio features, mixing console features; and deriving a second version of the sound mix using at least the mixing console features calculated by the trained model.
Three-dimensional audio systems
A sound generation system and related method include a user interface device and a processing device to obtain a specification of a three-dimensional space, obtain one or more sound tracks each comprising a corresponding sound signal associated with a corresponding sound source, present, in a user interface, representations of one or more listeners and of the one or more sound sources corresponding to the one or more sound signals in the three-dimensional space, responsive to a configuration of the locations of the one or more listeners or the locations of the one or more sound sources in the three-dimensional space in the user interface, determine filters based on the configuration and pre-determined locations of one or more loudspeakers, and apply the filters to the one or more sound signals to generate filtered sound signals for driving the one or more loudspeakers.
AUDIO LEVEL METERING FOR LISTENER POSITION AND OBJECT POSITION
Playback of an audio signal is simulated from a playback position to a listening position. The simulation is performed with respect to a model of a listening area. The resulting loudness of the audio, perceived at the listening position, is rendered to a display. Other aspects are described and claimed.
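A simple way to simulate the level perceived at a listening position is a free-field point-source model, where the level falls 6 dB per doubling of distance. This sketch ignores room reflections, which a full model of the listening area would include; all names are illustrative, not from the patent:

```python
import numpy as np

def simulated_loudness_db(source_level_db, playback_pos, listening_pos, ref_dist=1.0):
    """Level at the listening position under 1/r spherical spreading.

    `source_level_db` is the level at the reference distance from the
    playback position. Distances inside `ref_dist` are clamped so the
    meter never reads above the source level. Illustrative sketch only.
    """
    d = np.linalg.norm(np.asarray(playback_pos) - np.asarray(listening_pos))
    d = max(d, ref_dist)
    return source_level_db - 20.0 * np.log10(d / ref_dist)
```

The returned value is what a loudness meter rendered to the display would show for that listener position.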
AUDIO PROCESSING APPARATUS AND METHOD, AND PROGRAM
The present technology relates to an audio processing apparatus and method and a program that make it possible to obtain sound of higher quality. An acquisition unit acquires an audio signal and metadata of an object. A vector calculation unit calculates, based on a horizontal direction angle and a vertical direction angle included in the metadata of the object and indicative of an extent of a sound image, a spread vector indicative of a position in a region indicative of the extent of the sound image. A gain calculation unit calculates, based on the spread vector, a VBAP gain of the audio signal in regard to each speaker by VBAP. The present technology can be applied to an audio processing apparatus.
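The abstract names VBAP (vector base amplitude panning, Pulkki 1997), where gains for a speaker triplet are found by solving a small linear system. A minimal sketch of standard VBAP, plus a hypothetical way of combining gains over several spread vectors to widen the sound image — the combination rule here is an assumption for illustration, not the patent's exact gain calculation:

```python
import numpy as np

def vbap_gains(source_dir, speaker_dirs):
    """VBAP gains for a source direction over a loudspeaker triplet.

    `speaker_dirs` is a 3x3 matrix whose rows are unit vectors toward the
    three loudspeakers; `source_dir` is a unit vector toward the panning
    direction. Solves L^T g = p, then applies constant-power normalisation.
    """
    g = np.linalg.solve(speaker_dirs.T, source_dir)
    return g / np.linalg.norm(g)

def spread_gains(spread_vectors, speaker_dirs):
    """Combine gains over several spread vectors (hypothetical scheme).

    Each spread vector marks a position in the region describing the
    extent of the sound image; summing per-vector gain magnitudes and
    renormalising yields a widened source.
    """
    g = sum(np.abs(vbap_gains(v, speaker_dirs)) for v in spread_vectors)
    return g / np.linalg.norm(g)
```

For a source aligned with one loudspeaker the solve returns gain 1 for that speaker and 0 for the others, as expected.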
Augmented reality with motion sensing
The technology relates to a motion sensory and imaging device capable of acquiring imaging information of a scene and providing at least a near-real-time pass-through of the imaging information to a user. The sensory and imaging device can be used stand-alone or coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtualized or created presentations of information.
Audio Rendering with Spatial Metadata Interpolation
An apparatus comprising circuitry configured to: obtain two or more audio signal sets, wherein each audio signal set is associated with a position; obtain at least one parameter value for at least two of the audio signal sets; obtain the positions associated with at least the at least two of the audio signal sets; obtain a listener position; generate at least one audio signal based on at least one audio signal from at least one of the two or more audio signal sets based on the positions associated with the at least the at least two of the audio signal sets and the listener position; generate at least one modified parameter value based on the obtained at least one parameter value for the at least two of the audio signal sets, the positions associated with the at least two of the audio signal sets and the listener position; and process the at least one audio signal based on the at least one modified parameter value to generate a spatial audio output.
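Generating a modified parameter value from the per-set values, the set positions, and the listener position amounts to position-dependent interpolation. The claim does not specify the interpolation method; a common choice, shown here as a hypothetical sketch, is inverse-distance weighting:

```python
import numpy as np

def interpolate_parameter(values, positions, listener_pos, eps=1e-9):
    """Inverse-distance-weighted interpolation of a spatial metadata parameter.

    `values`: one parameter value (e.g. a direct-to-total energy ratio) per
    audio signal set; `positions`: the positions associated with those sets;
    `listener_pos`: the current listener position. The weighting scheme is
    an assumption for illustration, not the patent's method.
    """
    d = np.linalg.norm(np.asarray(positions, float) - np.asarray(listener_pos, float), axis=1)
    w = 1.0 / (d + eps)        # nearer capture positions dominate
    w /= w.sum()
    return float(np.dot(w, values))
```

As the listener approaches one capture position, the interpolated value converges to that set's parameter value.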
Service for targeted crowd sourced audio for virtual interaction
An audio generation system is provided to enable coordinated control of multiple IoT devices for audio collection and distribution of one or more audio sources according to location and user preference. The audio generation system enables a location sensitive acoustic control of sound, both as a shaped envelope for a particular source, and as an individualized experience. The audio generation system also facilitates an interactive visual system for visualization and manipulation of the audio environment including via the use of augmented reality and/or virtual reality to depict soundscapes. The audio generation system can also facilitate a system for improving and achieving an audio environment (sound influence zone) and an intuitive way to understand where sounds will be heard.
Immersive sound for teleoperators
Immersive experiences for users are described herein. In an example, audio data from a plurality of audio sensors associated with a vehicle can be received by an audio data processing system. The audio data processing system can combine individual captured audio channels (e.g., from the plurality of audio sensors) into two or more audio channels for output via two or more speakers proximate a user. A first audio channel of the two or more audio channels can be output via a first speaker and a second audio channel of the two or more audio channels can be output via a second speaker, wherein output of the first audio channel and the second audio channel causes a resulting sound corresponding to at least a portion of a sound scene associated with the vehicle. In an example, a user computing device operable by the user can receive an input from the user.