Patent classifications
H04S3/00
Method and device for processing audio signal, using metadata
Disclosed is a device for processing an audio signal, which renders the audio signal. The device includes a processor. The processor receives an audio signal and metadata including first element reference distance information, and renders a first element signal on the basis of the first element reference distance information, wherein the first element reference distance information indicates the reference distance of an element signal. The audio signal may include a second element signal that can be rendered simultaneously with the first element signal, and the metadata may include second element distance information indicating the distance of the second element signal. The number of bits required to represent the first element reference distance information is smaller than the number of bits required to represent the second element distance information.
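The rendering described above can be illustrated with a minimal sketch, assuming a simple inverse-distance gain law normalized at the reference distance (the abstract does not specify the attenuation model; the function name and signature are hypothetical):

```python
def distance_gain(signal, distance, reference_distance):
    """Attenuate an element signal so that gain is exactly 1.0 at the
    reference distance and falls off inversely with distance beyond it.
    Illustrative model only; the patent does not specify the law."""
    gain = reference_distance / max(distance, reference_distance)
    return [s * gain for s in signal]
```

Because the reference distance is a single coarse anchor for the whole element signal (rather than a continuously varying distance), it can plausibly be coded with fewer bits than per-signal distance information.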
CHAIR INCLUDING MULTI-CHANNEL SOUND SYSTEM
The present invention relates to a chair including a multi-channel sound system, and more particularly, to a chair including a multi-channel sound system capable of providing rich sound effects to a user who sits in the chair through sounds output from a plurality of channels.
Apparatus and method for screen related audio object remapping
An apparatus for generating loudspeaker signals includes an object metadata processor and an object renderer. The object metadata processor is configured to receive metadata indicating a first position of an audio object, to calculate a second position of the audio object depending on the first position and on a size of a screen if the audio object is indicated in the metadata as being screen-related, to feed the first position of the audio object as position information into the object renderer if the audio object is indicated in the metadata as being not screen-related, and to feed the second position of the audio object as position information into the object renderer if the audio object is indicated in the metadata as being screen-related. The object renderer is configured to receive the audio object and to generate the loudspeaker signals depending on the audio object and on the position information.
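The position-selection logic above can be sketched as follows. The linear azimuth rescaling and the 29-degree nominal half-width are assumptions for illustration (29 degrees is the conventional nominal screen azimuth in object-audio systems), not the patented mapping:

```python
def remap_to_screen(azimuth_deg, nominal_half_width_deg, screen_half_width_deg):
    """Linearly rescale an object's azimuth from a nominal reproduction
    range onto the actual screen extent (hypothetical mapping)."""
    return azimuth_deg * (screen_half_width_deg / nominal_half_width_deg)

def object_position(metadata):
    """Return the second (remapped) position only when the object is
    flagged as screen-related; otherwise pass the first position through."""
    if metadata["screen_related"]:
        return remap_to_screen(metadata["azimuth"], 29.0,
                               metadata["screen_half_width"])
    return metadata["azimuth"]
```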
Integration of high frequency audio reconstruction techniques
A method for decoding an encoded audio bitstream is disclosed. The method includes receiving the encoded audio bitstream and decoding its audio data to generate a decoded lowband audio signal. The method further includes extracting high frequency reconstruction metadata and filtering the decoded lowband audio signal with an analysis filterbank to generate a filtered lowband audio signal. The method also includes extracting a flag indicating whether spectral translation or harmonic transposition is to be performed on the audio data, and regenerating a highband portion of the audio signal using the filtered lowband audio signal and the high frequency reconstruction metadata in accordance with the flag. The high frequency regeneration is performed as a post-processing operation with a delay of 3010 samples per audio channel.
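The difference between the two flag-selected modes can be shown with a toy magnitude-domain sketch. Real high frequency reconstruction operates on QMF subbands with envelope metadata; here the highband is simply the bin range [n, 2n) of the full spectrum, where n is the lowband length (all names are illustrative):

```python
def regenerate_highband(low, flag_harmonic):
    """Fill highband bins [n, 2n) from lowband bins [0, n).
    flag_harmonic=False: spectral translation, bin k -> bin k + n.
    flag_harmonic=True : 2nd-order harmonic transposition, bin k -> bin 2k.
    Returned list is indexed relative to the start of the highband."""
    n = len(low)
    high = [0.0] * n
    for k, v in enumerate(low):
        if flag_harmonic:
            t = 2 * k - n          # absolute bin 2k, highband-relative
            if 0 <= t < n:
                high[t] = v
        else:
            high[k] = v            # absolute bin k + n
    return high
```

Translation shifts the whole lowband up as a block, while harmonic transposition preserves harmonic relationships by mapping bin k to bin 2k.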
Distributed audio capturing techniques for virtual reality (VR), augmented reality (AR), and mixed reality (MR) systems
Systems, devices, and methods for capturing audio which can be used in applications such as virtual reality, augmented reality, and mixed reality systems. Some systems can include a plurality of distributed monitoring devices. Each monitoring device can include a microphone and a location tracking unit. The monitoring devices can capture audio signals in an environment, as well as location tracking signals which respectively indicate the locations of the monitoring devices over time during capture of the audio signals. The system can also include a processor to receive the audio signals and the location tracking signals. The processor can determine one or more acoustic properties of the environment based on the audio signals and the location tracking signals.
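One acoustic quantity that falls directly out of the combination of audio and location-tracking signals is the expected propagation delay between two monitoring devices, sketched below (the speed-of-sound constant and function are illustrative, not from the patent):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def expected_delay_s(pos_a, pos_b):
    """Acoustic propagation delay between two tracked monitoring
    devices, from their location-tracking samples (3-D positions in
    metres). Comparing this against the measured cross-correlation lag
    of the captured audio is one way to probe the environment."""
    d = sum((a - b) ** 2 for a, b in zip(pos_a, pos_b)) ** 0.5
    return d / SPEED_OF_SOUND
```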
Sum-difference arrays for audio playback devices
In some embodiments, a method comprises receiving audio content comprising left input channel signals and right input channel signals, and generating first and second input signals from the left and right input channel signals. The first input signal is based on a sum of the left and right input channel signals, and the second input signal is based on a difference of the left and right input channel signals. An array transfer function is applied to the first and second input signals to produce audio output signals, which can be provided to a plurality of audio transducers on one or more playback devices.
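The sum/difference formation above can be sketched sample by sample; the 0.5 normalization is a common convention assumed here, not stated in the abstract:

```python
def sum_difference(left, right):
    """Form the first (sum, 'mid') and second (difference, 'side')
    input signals from the left and right input channel signals."""
    first = [(l + r) * 0.5 for l, r in zip(left, right)]
    second = [(l - r) * 0.5 for l, r in zip(left, right)]
    return first, second
```

With this normalization the mapping is trivially invertible (left = first + second, right = first - second), which is why the array transfer function can operate entirely in the sum/difference domain.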
METHOD FOR TRANSMITTING AUDIO DATA USING SHORT-RANGE COMMUNICATION IN WIRELESS COMMUNICATION SYSTEM, AND DEVICE FOR SAME
Disclosed are a method and apparatus for transmitting, by a first terminal, a first positioning reference signal (PRS) for relative positioning in a communication system that supports sidelink communication, according to various embodiments. The method comprises the steps of: receiving, from a second terminal, a second PRS requesting transmission of the first PRS; measuring an angle of arrival (AoA) on the basis of the second PRS; determining a first PRS pattern of the first PRS on the basis of the AoA; determining, on the basis of a second PRS pattern of the second PRS, a time resource region in which the transmission of the first PRS is requested; and transmitting the first PRS on the basis of the first PRS pattern and the determined time resource region.
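The AoA-to-pattern step can be illustrated with a uniform sectorization rule. This mapping is purely hypothetical; the abstract only states that the pattern is determined on the basis of the AoA:

```python
def select_prs_pattern(aoa_deg, num_patterns=4):
    """Map a measured angle of arrival in [-90, 90] degrees to one of
    `num_patterns` first-PRS pattern indices by dividing the angular
    range into equal sectors (illustrative rule only)."""
    sector = int((aoa_deg + 90.0) / 180.0 * num_patterns)
    return min(sector, num_patterns - 1)  # clamp the +90-degree edge
```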
SPATIAL AUDIO CONTROLLER
A method performed by a local device that is communicatively coupled with several remote devices, the method includes: receiving, from each remote device with which the local device is engaged in a communication session, an input audio stream; receiving, for each remote device, a set of parameters; determining, for each input audio stream, whether the input audio stream is to be 1) rendered individually or 2) rendered as a mix of input audio streams, based on the set of parameters; for each input audio stream that is determined to be rendered individually, spatially rendering the input audio stream as an individual virtual sound source that contains only that input audio stream; and for input audio streams that are determined to be rendered as the mix of input audio streams, spatially rendering the mix of input audio streams as a single virtual sound source that contains the mix of input audio streams.
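The individual-versus-mix decision can be sketched as a partitioning step; the `individual` flag inside each stream's parameter set is a hypothetical stand-in for whatever criterion the parameters actually encode:

```python
def render_plan(streams):
    """Partition per-remote-device input streams into individual virtual
    sound sources and one shared mixed source, based on a hypothetical
    'individual' flag in each stream's parameter set. Returns a list of
    (source_kind, stream_ids) pairs for the spatial renderer."""
    individual = [s["id"] for s in streams if s["params"].get("individual")]
    mixed = [s["id"] for s in streams if not s["params"].get("individual")]
    plan = [("individual", [i]) for i in individual]
    if mixed:
        plan.append(("mix", mixed))  # all remaining streams share one source
    return plan
```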
Device, system and method for identifying a scene based on an ordered sequence of sounds captured in an environment
An identification device, method and system for identifying a scene in an environment. The environment includes at least one sound capture device. The identification device is configured to identify the scene based on at least two sounds captured in the environment. Each of the at least two sounds is associated with at least one sound class. The scene is identified by taking account of the chronological order in which the at least two sounds were captured.
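One way order-aware identification could work is ordered subsequence matching of sound classes against per-scene signatures, a minimal sketch (the signature dictionary and matching rule are assumptions for illustration):

```python
def identify_scene(captured_classes, scene_signatures):
    """Return the first scene whose ordered class signature occurs as a
    chronological subsequence of the captured sound classes, or None.
    Order matters: 'door' then 'kettle' is not 'kettle' then 'door'."""
    def is_subseq(pattern, events):
        it = iter(events)               # each match consumes the iterator,
        return all(p in it for p in pattern)  # enforcing chronological order
    for scene, signature in scene_signatures.items():
        if is_subseq(signature, captured_classes):
            return scene
    return None
```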
Display apparatus and method for processing audio
A display apparatus and a method for processing audio are provided. The display apparatus includes a circuit board provided with a hybrid circuit, a filter circuit and a speaker. The hybrid circuit is configured to receive an original audio signal and superpose a first sub-signal of the original audio signal on a second sub-signal of the original audio signal to obtain a hybrid audio signal; the first sub-signal includes at least one channel of audio signal, and the second sub-signal includes at least two channels of audio signal. The filter circuit is configured to filter the hybrid audio signal according to frequency characteristics of the first sub-signal and the second sub-signal to obtain a restored original audio signal. The speaker, connected with the filter circuit, is configured to output the restored original audio signal.
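The superpose-then-filter idea can be illustrated with toy complementary filters: a slowly varying sub-signal and an alternating-sign sub-signal are summed, then separated again by frequency character. The 2-tap moving average is a stand-in for the patent's unspecified filter circuit:

```python
def mix(first, second):
    """Superpose the first sub-signal onto the second (sample-wise sum)."""
    return [a + b for a, b in zip(first, second)]

def split(hybrid):
    """Separate the hybrid by frequency character: a 2-tap moving
    average recovers the slowly varying first sub-signal; subtracting
    it leaves the rapidly alternating second sub-signal. Outputs are
    one sample shorter than the input due to the filter's startup."""
    low = [(hybrid[i] + hybrid[i - 1]) * 0.5 for i in range(1, len(hybrid))]
    high = [hybrid[i] - low[i - 1] for i in range(1, len(hybrid))]
    return low, high
```

Separation is exact here only because the two toy sub-signals occupy opposite ends of the spectrum; a practical filter circuit would need the sub-signals' bands to be similarly disjoint.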