Patent classifications
H04S2400/15
SYSTEMS AND METHODS FOR GENERATING VIDEO-ADAPTED SURROUND-SOUND
Audiovisual presentations, such as film recordings, may have been originally created with a soundtrack whose multiple audio tracks were mixed for a surround-sound system comprising a set of speakers that physically surround a user. The present disclosure presents systems and methods for remixing these soundtracks into 3D audio that, when presented to a user's ears, is perceived as a virtual surround-sound system mimicking the physical system. Moreover, the disclosed systems and methods can enhance the virtual surround-sound system by adjusting its virtual speakers according to the video content of the audiovisual presentation. Further enhancement may be possible by adjusting the virtual speakers according to a sensed position of the user.
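The virtual-speaker adjustment described above can be pictured with a simplified sketch. The code below is not the disclosed method: it substitutes constant-power stereo panning for a true binaural (HRTF) renderer, and the speaker names, base azimuths, and the `pan_virtual_speakers` function are all hypothetical.

```python
import math

# Nominal 5.0 virtual-speaker azimuths in degrees (0 = front); a hypothetical layout.
BASE_AZIMUTHS = {"L": -30.0, "R": 30.0, "C": 0.0, "Ls": -110.0, "Rs": 110.0}

def pan_virtual_speakers(channels, azimuth_offsets=None):
    """Mix named surround channels down to stereo with adjustable virtual-speaker azimuths.

    `channels` maps speaker name -> list of samples. `azimuth_offsets` lets a
    caller shift individual virtual speakers (e.g. toward on-screen action).
    Constant-power panning stands in for a real HRTF renderer here.
    """
    azimuth_offsets = azimuth_offsets or {}
    n = max(len(s) for s in channels.values())
    left = [0.0] * n
    right = [0.0] * n
    for name, samples in channels.items():
        az = BASE_AZIMUTHS[name] + azimuth_offsets.get(name, 0.0)
        # Map the azimuth to a pan angle in [-45, 45] degrees for the sine/cosine law.
        pan = max(-45.0, min(45.0, az / 2.0))
        theta = math.radians(pan + 45.0)  # 0 = hard left, 90 degrees = hard right
        gl, gr = math.cos(theta), math.sin(theta)
        for i, x in enumerate(samples):
            left[i] += gl * x
            right[i] += gr * x
    return left, right
```

Shifting a speaker's azimuth offset, for example toward on-screen action, changes its contribution to the left/right mix accordingly.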
Augmented Reality with Motion Sensing
The technology disclosed relates to a motion-sensory and imaging device capable of acquiring imaging information of a scene and providing at least a near-real-time pass-through of that imaging information to a user. The sensory and imaging device can be used stand-alone or coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtualized or created presentations of information.
MODULAR CONFERENCING SYSTEM
In some examples, a conferencing system includes a modular electronic device having a device housing configured to removably couple to each of a plurality of speaker modules; amplifier circuitry disposed within the device housing, wherein the amplifier circuitry is configured to amplify audio signals for output to a speaker module of the plurality of speaker modules while the electronic device is coupled to the speaker module; and processing circuitry disposed within the device housing, wherein the processing circuitry is configured to: determine one or more parameters associated with the speaker module after the device housing is coupled to the speaker module; and determine, based on the one or more parameters associated with the speaker module, a set of corresponding audio-configuration settings for processing audio during operation of the conferencing system.
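The parameter-to-settings determination this abstract describes might be sketched as a simple lookup keyed by values read from the attached module. Everything here — the module IDs, impedances, and `AudioConfig` fields — is an illustrative assumption, not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AudioConfig:
    gain_db: float
    eq_preset: str
    limiter_threshold_dbfs: float

# Hypothetical table keyed by (module_id, impedance_ohms) read from the module.
MODULE_CONFIGS = {
    ("tabletop-4in", 4): AudioConfig(gain_db=-3.0, eq_preset="voice",
                                     limiter_threshold_dbfs=-6.0),
    ("ceiling-8in", 8): AudioConfig(gain_db=0.0, eq_preset="flat",
                                    limiter_threshold_dbfs=-3.0),
}

def settings_for_module(module_id: str, impedance_ohms: int) -> AudioConfig:
    """Return audio-configuration settings for a detected speaker module,
    falling back to a conservative default for unknown hardware."""
    return MODULE_CONFIGS.get(
        (module_id, impedance_ohms),
        AudioConfig(gain_db=-12.0, eq_preset="flat", limiter_threshold_dbfs=-12.0),
    )
```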
Methods and systems for recording mixed audio signal and reproducing directional audio
Methods and systems are provided for recording a mixed audio signal and reproducing directional audio. A method includes receiving a mixed audio signal via a plurality of microphones; determining an audio parameter associated with the mixed audio signal received at each of the plurality of microphones; determining active audio sources and a number of the active audio sources from the mixed audio signal; determining direction and positional information of each of the active audio sources; dynamically selecting a set of microphones from the plurality of microphones based on at least one of the number of the active audio sources, the direction of each of the active audio sources, the positional information of each of the active audio sources, the audio parameter, or a predefined condition; and recording, based on the selected set of microphones, the mixed audio signal for reproducing directional audio.
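The dynamic microphone-selection step could look roughly like the sketch below, which ranks microphones by angular proximity to the active sources after filtering on a signal-quality parameter (SNR is used as a stand-in for the unspecified audio parameter); the field names and thresholds are hypothetical.

```python
def select_microphones(mic_params, source_directions, max_mics=4, min_snr_db=10.0):
    """Pick a microphone subset for recording directional audio.

    `mic_params` is a list of dicts with 'id', 'azimuth_deg', and 'snr_db' per
    microphone; `source_directions` lists active-source azimuths in degrees.
    A mic qualifies if its SNR clears `min_snr_db`; qualifying mics are ranked
    by the smallest angular distance to any active source.
    """
    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    candidates = [m for m in mic_params if m["snr_db"] >= min_snr_db]

    def score(m):
        return min(angular_distance(m["azimuth_deg"], s) for s in source_directions)

    return [m["id"] for m in sorted(candidates, key=score)[:max_mics]]
```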
APPARATUS, METHODS AND COMPUTER PROGRAMS FOR ENABLING REPRODUCTION OF SPATIAL AUDIO SIGNALS
An apparatus (101) for enabling reproduction of spatial audio signals. The apparatus comprises means for obtaining (401) audio signals (501) comprising one or more channels and obtaining (403) spatial metadata (503) relating to the audio signals (501). The spatial metadata (503) comprises information that indicates how to spatially reproduce the audio signals. The apparatus also comprises means for obtaining (405) information relating to a field of view of video (505) wherein the video is for display on a display (205) of a rendering device (201) and wherein the video is associated with the audio signals (501). The apparatus also comprises means for aligning (407) spatial reproduction of the audio signals based, at least in part, on the obtained spatial metadata (503), with objects (309A, 309B) in the video according to the obtained information relating to the field of view of video; and enabling (409) reproduction of the audio signals based on the aligning (407).
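One simple way to picture the alignment step is as a rescaling of metadata azimuths when the display's field of view differs from the capture field of view. This is only a plausible reading of the abstract; the function and its pass-through behavior for off-screen sources are assumptions.

```python
def align_azimuth(audio_azimuth_deg, capture_fov_deg, display_fov_deg):
    """Rescale a spatial-metadata azimuth so audio sources line up with
    on-screen objects when the display's field of view differs from the
    capture FOV. Sources outside the capture FOV pass through unchanged."""
    half = capture_fov_deg / 2.0
    if abs(audio_azimuth_deg) > half:
        return audio_azimuth_deg
    return audio_azimuth_deg * (display_fov_deg / capture_fov_deg)
```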
RENDERING REVERBERATION
An apparatus comprising means configured to: obtain at least one impulse response; obtain at least one reflection filter based on the obtained at least one impulse response, wherein the at least one reflection filter is configured to determine at least one early reflection from an acoustic surface which is not overlapped in time by any other reflection, wherein a duration of the at least one early reflection is shorter than a duration of the obtained at least one impulse response. In addition, an apparatus comprising means configured to: obtain at least one impulse response, wherein the at least one impulse response is configured with a perceivable timbre during rendering; create a timbral modification filter; obtain at least one audio signal; and render at least one output audio signal based on the at least one audio signal, wherein the at least one output audio signal is based on an application of the timbral modification filter.
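Locating an early reflection that no other reflection overlaps in time can be sketched as a peak search with an isolation window. The threshold/window parameterization and the exclusion of the direct sound are assumptions for illustration, not the patented filter construction.

```python
def isolated_early_reflection(ir, threshold, window):
    """Find the first early reflection in an impulse response `ir` (a list of
    samples) that no other reflection overlaps in time: an above-threshold
    sample whose neighbourhood of `window` samples contains no other
    above-threshold sample. The direct sound (strongest sample) is excluded
    from the search but still counts as a potential overlapper."""
    direct = max(range(len(ir)), key=lambda i: abs(ir[i]))
    peaks = [i for i, x in enumerate(ir) if abs(x) >= threshold and i != direct]
    for i in peaks:
        near = [j for j in peaks + [direct] if j != i and abs(j - i) <= window]
        if not near:
            return i
    return None
```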
SOUND PROCESSING APPARATUS, SOUND PROCESSING SYSTEM, SOUND PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
A sound processing apparatus capable of appropriately providing a user with a sound related to a desired area of the field of a large venue is provided. A positional relation specifying unit (2) specifies a positional relation between a field and a user. A sound acquisition unit (4) acquires a sound from at least one sound collection device that collects a sound in each of a plurality of areas of the field. A direction determination unit (6) determines a direction in which the user is facing. A gaze position determination unit (8) determines a gaze position, which is a position viewed by the user, based on the positional relation and the direction in which the user is facing. A sound providing unit (10) performs processing for providing the user with the sound related to the area corresponding to the gaze position.
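The gaze-position determination — combining the user's position relative to the field with the direction the user is facing — can be approximated by marching a ray across a map of named areas. The rectangle representation and step sampling below are illustrative simplifications, not the disclosed units.

```python
import math

def gaze_area(user_xy, heading_deg, areas):
    """Project the user's gaze onto the field and pick the matching area.

    `user_xy` is the user's position, `heading_deg` the facing direction
    (0 = +x axis), and `areas` maps area names to (xmin, xmax, ymin, ymax)
    rectangles. The gaze ray is sampled at unit steps; the first rectangle
    it enters is returned, or None if the ray leaves all areas.
    """
    dx = math.cos(math.radians(heading_deg))
    dy = math.sin(math.radians(heading_deg))
    x, y = user_xy
    for step in range(1, 200):
        px, py = x + dx * step, y + dy * step
        for name, (x0, x1, y0, y1) in areas.items():
            if x0 <= px <= x1 and y0 <= py <= y1:
                return name
    return None
```

The sound providing step would then fetch the collected audio for the returned area name.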
SYSTEM AND METHOD FOR TRANSMITTING AT LEAST ONE MULTICHANNEL AUDIO MATRIX ACROSS A GLOBAL NETWORK
A system and method for transmitting at least one multichannel audio matrix across a global network is shown and described. The method begins by capturing audio from a production source. The captured audio is then converted from an analog format to a digital format. The digital audio may then be encoded using an audio codec. The audio is sent to a network and received at a second location, which uses a specialized computing device to ensure the audio is properly received. If the audio was encoded, it is now decoded. Once decoded, if needed, the audio is converted back to an analog format, allowing the audio to be mixed on a mixing device.
PROCESSING OF AUDIO SIGNALS FROM MULTIPLE MICROPHONES
A first device includes a memory configured to store instructions and one or more processors configured to receive audio signals from multiple microphones. The one or more processors are configured to process the audio signals to generate direction-of-arrival information corresponding to one or more sources of sound represented in one or more of the audio signals. The one or more processors are also configured to send, to a second device, data based on the direction-of-arrival information and a class or embedding associated with the direction-of-arrival information.
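Generating and sending direction-of-arrival data might be sketched with the classic two-microphone far-field relation sin θ = cτ/d, bundling the estimate with a class label for the second device. The packet format and the `label` default are invented for the example and are not from the disclosure.

```python
import json
import math

def doa_packet(mic_positions, delay_s, speed_of_sound=343.0, label="speech"):
    """Estimate a 2D direction of arrival from the inter-microphone delay and
    package it with a class label for transmission to a second device.

    Uses the far-field relation sin(theta) = c * tau / d for a two-mic pair;
    `mic_positions` gives the two mic x-coordinates (metres) on a common axis.
    """
    d = abs(mic_positions[1] - mic_positions[0])
    s = max(-1.0, min(1.0, speed_of_sound * delay_s / d))  # clamp for safety
    azimuth = math.degrees(math.asin(s))
    return json.dumps({"azimuth_deg": round(azimuth, 1), "class": label})
```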
Fiber microphone
A microphone comprising at least two electrodes, spaced apart and configured to have a magnetic field within a space between them, and a conductive fiber suspended between the at least two electrodes in an air or fluid space subject to waves; the conductive fiber has a radius and length such that a movement of at least a central portion of the fiber approximates the oscillating movement of the air or fluid surrounding it along an axis normal to the fiber. An electrical signal is produced between two of the at least two electrodes by the movement of the conductive fiber within the magnetic field, which is driven by viscous drag of the moving air or fluid surrounding the fiber. The microphone may have a noise floor of less than 69 dBA using an amplifier having an input noise of 10 nV/√Hz.
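The transduction principle — a conductor moving through a magnetic field — follows the motional-EMF relation V = B·L·v. The sketch below merely evaluates that relation; the field strength, fiber length, and velocity used in testing are illustrative values, not the device's parameters.

```python
def fiber_emf(b_tesla, fiber_length_m, fiber_velocity_ms):
    """Open-circuit voltage induced across a conductive fiber moving through
    a magnetic field, from the motional-EMF relation V = B * L * v."""
    return b_tesla * fiber_length_m * fiber_velocity_ms
```

A 0.3 T field across a 10 mm fiber moving at 1 mm/s would induce on the order of microvolts, which is why amplifier input noise matters for the achievable noise floor.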