Patent classifications
H04R5/00
Modal based architecture for controlling the directivity of loudspeaker arrays
A directivity pattern generator for producing sound patterns using a modal architecture is described. The directivity pattern generator may include a beam pattern mixing unit, which defines sound patterns to be emitted by an audio system in terms of a set of frequency-invariant modes or modal patterns. The beam pattern mixing unit produces a set of modal gains representing the level or degree to which each of the predefined modal patterns is to be applied to a set of audio streams. Modal filters may be used to generate modal amplitudes that compensate for inefficiencies of each modal pattern at low frequencies. The directivity pattern generator may include a modal decomposition unit for generating driving signals for each transducer in one or more loudspeaker arrays based on weighted values for the modal gains/amplitudes.
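A minimal sketch of the modal decomposition step, assuming a circular loudspeaker array and cosine circular-harmonic modal patterns; the function name, modal basis, and normalization are illustrative and not taken from the patent:

```python
import numpy as np

def modal_driver_gains(modal_gains, num_drivers):
    """Decompose a target beam into per-transducer driving gains for a
    circular array. modal_gains[m] weights the mode of order m
    (hypothetical parameterization of the modal architecture)."""
    angles = 2.0 * np.pi * np.arange(num_drivers) / num_drivers
    gains = np.zeros(num_drivers)
    for m, g in enumerate(modal_gains):
        # Mode 0 is omnidirectional; order m is a cos(m*phi) pattern.
        gains += g * np.cos(m * angles)
    return gains / num_drivers

# A cardioid-like beam: equal weight on mode 0 and mode 1.
g = modal_driver_gains([1.0, 1.0], num_drivers=8)
```

In a full system the modal gains would first pass through the modal filters described above, so the weighting becomes frequency dependent at low frequencies.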
Method and apparatus for decoding a bitstream including encoded higher order ambisonics representations
Higher Order Ambisonics represents three-dimensional sound independent of a specific loudspeaker set-up. However, transmission of an HOA representation results in a very high bit rate. Therefore, compression with a fixed number of channels is used, in which directional and ambient signal components are processed differently. For coding, portions of the original HOA representation are predicted from the directional signal components. This prediction provides side information which is required for a corresponding decoding. By using some additional specific-purpose bits, a known side-information coding process is improved so that the number of bits required to code that side information is reduced on average.
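A rough sketch of the prediction step, assuming a simple scalar predictor: portions of the HOA representation are approximated as weighted sums of the directional signal components, and the weights are the side information a decoder would need. The function name and the linear form are assumptions, not the patent's actual predictor:

```python
import numpy as np

def predict_from_directional(directional, weights):
    """Predict a portion of the HOA representation as a weighted sum of
    directional signal components. `weights` is the side information
    that must be transmitted (hypothetical scalar predictor)."""
    return np.asarray(weights) @ np.asarray(directional)

# Two directional component signals, three time samples each.
d = np.array([[1.0, 2.0, 3.0],
              [0.5, 0.5, 0.5]])
w = np.array([0.8, 0.4])   # side information
p = predict_from_directional(d, w)
```

The coding improvement described in the abstract concerns how such weights are packed into bits, which this sketch does not model.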
Audio processing to compensate for time offsets
A method of processing each of a first plurality of temporal windows of first and second input audio signals to generate first and second output audio signals comprises (a) detecting a time offset between respective portions of the first and second input audio signals corresponding to a given temporal window by: (i) detecting a correlation between one or more properties of the respective portions according to each of a group of candidate time offsets under test; and (ii) selecting, as a detected time offset for the given temporal window, an offset for which the detecting step (i) detects a correlation which meets a predetermined criterion such as greatest correlation; and (b) for each of a second plurality of temporal windows, generating a portion of the first and second output signals by applying a relative delay between portions of the first and second input audio signals in order to correct one or both of the input audio signals to generate a pair of output audio signals (such as a stereo pair) having a reduced temporal disparity between the audio content of the two signals.
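The two steps above (search candidate offsets for the greatest correlation, then apply a relative delay) can be sketched as follows; the truncation-based alignment and the plain dot-product correlation are simplifying assumptions:

```python
import numpy as np

def detect_offset(x, y, max_offset):
    """Step (a): test a group of candidate time offsets and select the
    one whose correlation meets the criterion of greatest correlation."""
    n = len(x)
    best_k, best_corr = 0, -np.inf
    for k in range(-max_offset, max_offset + 1):
        if k >= 0:
            a, b = x[k:n], y[0:n - k]
        else:
            a, b = x[0:n + k], y[-k:n]
        corr = float(np.dot(a, b))
        if corr > best_corr:
            best_corr, best_k = corr, k
    return best_k

def align(x, y, k):
    """Step (b): apply a relative delay so the output pair has reduced
    temporal disparity (here by simple truncation)."""
    if k >= 0:
        return x[k:], y[:len(y) - k]
    return x[:len(x) + k], y[-k:]

rng = np.random.default_rng(0)
y = rng.standard_normal(256)
x = np.concatenate([np.zeros(5), y])[:256]   # x is y delayed by 5 samples
k = detect_offset(x, y, max_offset=16)
left, right = align(x, y, k)
```

A production system would run this per temporal window, as the claim describes, and typically use a normalized cross-correlation rather than a raw dot product.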
Stereo parameters for stereo decoding
An apparatus includes a receiver and a decoder. The receiver is configured to receive a bitstream that includes a first frame and a second frame. The first frame includes a first portion of a mid channel and a first quantized stereo parameter. The second frame includes a second portion of the mid channel and a second quantized stereo parameter. The decoder is configured to generate a first portion of a channel based on the first portion of the mid channel and the first quantized stereo parameter. The decoder is configured to, in response to the second frame being unavailable for decoding operations, estimate the second quantized stereo parameter based on stereo parameters of one or more preceding frames and generate a second portion of the channel based on the estimated second quantized stereo parameter. The second portion of the channel corresponds to a decoded version of the second frame.
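A minimal sketch of the frame-loss behavior, assuming the stereo parameter is an interchannel level difference in dB and the estimator is a simple average of preceding frames; both choices are illustrative, the patent fixes neither:

```python
import numpy as np

def estimate_stereo_param(history):
    """Estimate a missing quantized stereo parameter from the stereo
    parameters of one or more preceding frames (simple average here)."""
    return float(np.mean(history))

def decode_side(mid, ild_db):
    """Generate a left/right pair from the mid channel and an
    interchannel level difference in dB (hypothetical parameter)."""
    g = 10.0 ** (ild_db / 20.0)
    left = mid * (2.0 * g / (1.0 + g))
    right = mid * (2.0 / (1.0 + g))
    return left, right

# Second frame unavailable: estimate its parameter from earlier frames,
# then decode the second portion of the channel with the estimate.
est = estimate_stereo_param([3.0, 5.0])
left, right = decode_side(np.ones(4), est)
```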
Display apparatus
A display apparatus is capable of outputting a stereo sound. The display apparatus includes a display panel configured to display an image; a sound generating device on a rear surface of the display panel; a rear cover on the rear surface of the display panel and configured to support the sound generating device; a partition member between the rear surface of the display panel and the rear cover and configured to divide the display panel into first, second, third, fourth and fifth areas; and first, second, third, fourth, and fifth sound generating devices attached to the rear surface of the display panel and configured to vibrate the display panel. The first, second, third, fourth and fifth sound generating devices are in the first, second, third, fourth and fifth areas, respectively.
Display apparatus and method of controlling the same
The disclosure provides a display apparatus capable of increasing user convenience by outputting a screen and a sound under predetermined conditions when outputting a split screen, and a method of controlling the display apparatus. The display apparatus of the disclosure includes a speaker; a display; and a controller configured to, in response to input of an output command for outputting a plurality of contents to a plurality of regions of the display, output image signals of the plurality of contents to the plurality of regions and output sound signals of the plurality of contents to the speaker. The sizes of the plurality of regions and the output intensities of the sound signals corresponding to the plurality of contents are determined based on priority information predetermined for the plurality of contents.
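One plausible policy for the priority-based allocation is strict proportionality; the abstract only says sizes and sound intensities are determined from predetermined priority information, so the proportional rule below is an assumption:

```python
def split_outputs(priorities, total_width=1920, total_volume=1.0):
    """Allocate region widths and sound output intensities in
    proportion to per-content priorities (hypothetical policy)."""
    s = sum(priorities)
    widths = [total_width * p / s for p in priorities]
    volumes = [total_volume * p / s for p in priorities]
    return widths, volumes

# Content A has priority 3, content B has priority 1.
widths, volumes = split_outputs([3, 1])
```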
AUDIO DECODER FOR AUDIO CHANNEL RECONSTRUCTION
A method and apparatus for reconstructing N audio channels from M audio channels is disclosed. The method includes receiving a bitstream containing an encoded audio signal representing the M audio channels and decoding the encoded audio signal to obtain a frequency domain representation of the M audio channels. The method further includes extracting a parameter from the bitstream and reconstructing at least one of the N audio channels using the parameter. The parameter represents an angle between two signals, at least one of which is included in the M audio channels.
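A toy illustration of reconstructing two channels from one using an angle parameter, in the style of panning-based upmix; the rotation form is an assumption, not the patent's exact mapping:

```python
import math

def reconstruct_pair(downmix, theta):
    """Split a downmix sample into two channels using a transmitted
    angle parameter (hypothetical energy-preserving reconstruction)."""
    return downmix * math.cos(theta), downmix * math.sin(theta)

# An angle of pi/4 places the source midway between the two channels.
left, right = reconstruct_pair(1.0, math.pi / 4)
```

Because cos²θ + sin²θ = 1, the two reconstructed channels carry the same energy as the downmix, which is one reason angle parameters are attractive for channel reconstruction.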
TRANSFORM AMBISONIC COEFFICIENTS USING AN ADAPTIVE NETWORK FOR PRESERVING SPATIAL DIRECTION
A device includes a memory configured to store untransformed ambisonic coefficients at different time segments. The device includes one or more processors configured to obtain the untransformed ambisonic coefficients at the different time segments, where the untransformed ambisonic coefficients at the different time segments represent a soundfield at the different time segments. The one or more processors are configured to apply one adaptive network, based on a constraint that includes preservation of a spatial direction of one or more audio sources in the soundfield at the different time segments, to the untransformed ambisonic coefficients at the different time segments to generate transformed ambisonic coefficients at the different time segments, wherein the transformed ambisonic coefficients at the different time segments represent a modified soundfield at the different time segments, modified based on the constraint. The one or more processors are also configured to apply an additional adaptive network.
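One way such a direction-preservation constraint could be expressed is as a training loss term on first-order ambisonic (W, X, Y, Z) coefficients; the dominant-direction estimate and the cosine-based penalty below are illustrative assumptions, not the patent's formulation:

```python
import numpy as np

def dominant_direction(foa):
    """Unit vector for the dominant source direction, taken from the
    first-order X, Y, Z coefficients (W is the omni component)."""
    v = foa[1:4]
    return v / (np.linalg.norm(v) + 1e-12)

def direction_preservation_loss(original, transformed):
    """Constraint term an adaptive network could be trained with:
    1 - cosine similarity between the dominant directions of the
    original and transformed soundfields (0 when direction is kept)."""
    return 1.0 - float(np.dot(dominant_direction(original),
                              dominant_direction(transformed)))
```

A transform that rescales the soundfield but keeps its direction incurs zero loss, while one that flips the source to the opposite direction incurs the maximum penalty of 2.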