Binaural decoder to output spatial stereo sound and a decoding method thereof
09800987 · 2017-10-24
CPC classification: H04S2420/07 · H04S2420/01 · H04S2420/03 · H04S7/30 · H04S3/008 (ELECTRICITY)
International classification: H04S7/00 · H04S3/00 (ELECTRICITY)
Abstract
A binaural decoder for an MPEG surround stream, which decodes an MPEG surround stream into a stereo 3D signal, and a decoding method thereof. The method includes dividing a compressed audio stream and head related transfer function (HRTF) data into subbands, selecting predetermined subbands of the HRTF data divided into subbands and filtering the HRTF data to obtain the selected subbands, decoding the audio stream divided into subbands into a stream of multi-channel audio data with respect to subbands according to spatial additional information, and binaural-synthesizing the HRTF data of the selected subbands with the multi-channel audio data of corresponding subbands.
Claims
1. A method of generating a binaural signal, the method comprising: generating a quadrature mirror filter (QMF)-domain audio signal by performing a QMF analysis on a time-domain audio signal, the QMF-domain audio signal comprising a plurality of frequency bands; generating QMF-domain impulse response data for binaural by performing a QMF analysis on impulse response data for binaural; and generating a QMF-domain binaural signal by processing the QMF-domain audio signal based on the QMF-domain impulse response data for binaural according to a predetermined number of bands.
2. The method of claim 1, wherein the QMF-domain impulse response data for binaural is applied to the QMF-domain audio signal based on a result of comparing a frequency band of the QMF-domain audio signal with a frequency band corresponding to the predetermined number of bands.
3. The method of claim 1, wherein the processing comprises skipping application of the QMF-domain impulse response data for binaural to the QMF-domain audio signal in a frequency band higher than the frequency band corresponding to the predetermined number of bands.
4. The method of claim 1, wherein the QMF-domain impulse response data for binaural is applied to a part of QMF bands.
5. The method of claim 1, wherein the QMF-domain impulse response data for binaural comprises a head-related transfer function (HRTF).
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and/or other aspects and utilities of the present general inventive concept will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(10) Reference will now be made in detail to the embodiments of the present general inventive concept, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present general inventive concept by referring to the figures.
(12) An encoder (not illustrated) generates an audio stream and channel additional information, by downmixing N-channels of audio data into M-channels of audio data.
(13) The binaural decoder 200 of
(14) First and second audio signals (input 1, input 2) encoded in the encoder (not illustrated), preset head related transfer function (HRTF) data, and spatial parameters corresponding to additional information are input to the binaural decoder 200. At this time, the spatial parameters are channel-related additional information, such as a channel time difference (CTD), a channel level difference (CLD), an inter-channel correlation (ICC), and a channel prediction coefficient (CPC).
(15) Also, the HRTF is a function obtained by mathematically modeling the path through which sound is transferred from a sound source to the eardrum of a listener. The characteristics of the HRTF vary with the positional relation between the sound source and the listener. The HRTF is a transfer function in the frequency domain that describes the propagation of sound from the sound source to the listener's ear, and it is a characteristic function reflecting the frequency distortion caused by the head, ear lobes, and torso of the listener. Binaural synthesis uses this HRTF to reproduce, through headphones or earphones, a sound as if it had been recorded at the two ears of a dummy head imitating the shape of a human head. Accordingly, binaural synthesis causes the listener to experience a realistic stereo sound field, comparable to that of a studio recording environment.
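The HRTF-based rendering described above can be illustrated with a minimal, self-contained sketch: a mono source is convolved with a left-ear and a right-ear impulse response (the time-domain counterpart of the HRTF). The signal and HRIR values below are hypothetical placeholders, not measured data.

```python
# Minimal sketch of binaural synthesis: convolve a mono source with a
# per-ear head-related impulse response (HRIR). Values are hypothetical.

def convolve(signal, ir):
    """Direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

def binaural_synthesize(mono, hrir_left, hrir_right):
    """Render a mono source to a (left, right) pair using per-ear HRIRs."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

mono = [1.0, 0.5, 0.25]
# Hypothetical HRIRs: the right-ear response is attenuated and delayed by
# one sample, mimicking a source located to the listener's left.
hrir_l = [0.9, 0.1]
hrir_r = [0.0, 0.6, 0.1]
left, right = binaural_synthesize(mono, hrir_l, hrir_r)
```

The interaural level and time differences encoded in the two HRIRs are what give the listener the impression of a positioned source.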
(16) The first QMF analysis unit 210 transforms the HRTF data in a time domain into data in a frequency domain, and divides the HRTF data with respect to subbands suitable for a frequency band of an MPEG surround stream.
(17) The second QMF analysis unit 220 transforms the input first audio stream (input 1) in the time domain into a first audio stream in the frequency domain and divides the stream with respect to the subbands.
(18) The third QMF analysis unit 230 transforms the input second audio stream (input 2) in the time domain into a second audio stream in the frequency domain and divides the stream with respect to the subbands.
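The time-to-subband transform performed by the QMF analysis units can be sketched with a block DFT standing in for the real filterbank. This is only an illustration of the reshaping from time samples to per-frame subband samples; the actual MPEG Surround QMF bank uses a 64-band complex-modulated prototype filter, which is not reproduced here.

```python
import cmath

# Stand-in for QMF analysis: a block DFT splits each frame of num_bands
# time-domain samples into num_bands complex subband samples.

def analyze_subbands(signal, num_bands):
    """Return a list of frames; each frame holds num_bands complex subband samples."""
    frames = []
    for start in range(0, len(signal) - num_bands + 1, num_bands):
        block = signal[start:start + num_bands]
        frame = []
        for k in range(num_bands):
            acc = 0j
            for n, x in enumerate(block):
                acc += x * cmath.exp(-2j * cmath.pi * k * n / num_bands)
            frame.append(acc)
        frames.append(frame)
    return frames

signal = [1.0, 0.0, -1.0, 0.0] * 4   # a simple periodic test signal
frames = analyze_subbands(signal, 4)
```

For this test signal, the energy lands entirely in the non-DC bins of each frame, showing how a tonal component is isolated into particular subbands.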
(19) The subband filter unit 240 includes a band-pass filter and a subband filter. The subband filter unit 240 selects and filters, from the HRTF data windowed with respect to the subbands in the first QMF analysis unit 210, the pass bands that are important to recognition of a directivity effect and a spatial effect, and subband-filters the filtered HRTF data in detail with respect to the subbands of the input audio stream. The pass bands of the HRTF important to recognition of the directivity effect and the spatial effect span 100 Hz to 1.5 kHz, 100 Hz to 4 kHz, or 100 Hz to 8 kHz, and are selectively used according to the resources of the system. The resources of the system include, for example, the operation speed of a digital signal processor (DSP) and the memory capacity of the binaural decoder.
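Selecting one of those pass bands amounts to picking the subband indices whose frequency ranges overlap it. The sketch below assumes uniform subbands spanning 0 to fs/2, as in a 64-band QMF bank at a 44.1 kHz sampling rate; both assumptions are illustrative, not taken from the text.

```python
# Map a pass band in Hz (e.g. 100 Hz to 4 kHz) to the indices of the
# uniform subbands that overlap it. Assumes num_bands uniform subbands
# covering 0..fs/2 (hypothetical parameters for illustration).

def bands_for_range(f_low, f_high, fs, num_bands):
    """Return indices of uniform subbands overlapping [f_low, f_high]."""
    bandwidth = (fs / 2) / num_bands          # width of one subband in Hz
    return [k for k in range(num_bands)
            if (k + 1) * bandwidth > f_low and k * bandwidth < f_high]

selected = bands_for_range(100.0, 4000.0, 44100.0, 64)
```

Widening the pass band (e.g. to 8 kHz) selects more subbands and so costs more DSP cycles and memory, which is why the choice is tied to the resources of the system.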
(20) The spatial synthesis unit 250 decodes the first and second audio streams output from the second and third QMF analysis units 220 and 230, respectively, with respect to subbands, into streams of multi-channel audio data with respect to the subbands, by using spatial parameters such as the CTD, CLD, ICC and CPC.
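As one small piece of that spatial synthesis, a channel level difference (CLD) parameter determines how a downmixed subband sample is split between two output channels. The gain computation below is a simplified sketch of a one-to-two upmix that ignores the CTD, ICC, and CPC parameters.

```python
import math

# Sketch of CLD-driven one-to-two upmix gains: the two gains are power-
# normalized (g1^2 + g2^2 = 1) and their level ratio equals the CLD in dB.
# Simplified illustration; decorrelation via ICC is omitted.

def ott_gains(cld_db):
    """Gains (g1, g2) with g1^2 + g2^2 = 1 and 20*log10(g1/g2) = cld_db."""
    ratio = 10.0 ** (cld_db / 20.0)           # amplitude ratio g1/g2
    g2 = 1.0 / math.sqrt(1.0 + ratio * ratio)
    return ratio * g2, g2

g1, g2 = ott_gains(0.0)   # a CLD of 0 dB splits the energy equally
```

Applying such gain pairs per subband, per parameter band, reconstructs the multi-channel level relationships from the transmitted downmix.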
(21) The binaural synthesis unit 260 outputs first and second channel audio data with respect to the subbands, by applying the HRTF data windowed in the subband filter unit 240 to the streams of the multi-channel audio data with respect to the subbands output from the spatial synthesis unit 250.
(22) The first QMF synthesis unit 270 subband-synthesizes, with respect to the subbands, the first channel audio data that is output from the binaural synthesis unit 260.
(23) The second QMF synthesis unit 280 subband-synthesizes, with respect to the subbands, the second channel audio data that is output from the binaural synthesis unit 260.
(25) The binaural decoder 300 of
(26) That is, the functions and structures of first and second QMF analysis units 310 and 320, a subband filter unit 340, a spatial synthesis unit 350, a binaural synthesis unit 360, and first and second QMF synthesis units 370 and 380 may be the same, respectively, as the first and second QMF analysis units 210 and 220, the subband filter unit 240, the spatial synthesis unit 250, the binaural synthesis unit 260, and the first and second QMF synthesis units 270 and 280 of
(28) Referring to
(30) Referring to
(32) Referring to
(34) Referring to
(35) Multipliers 701 through 705 of the k-th band convolute an input stream of 5-channel audio data (CH1(k), CH2(k), CH3(k), CH4(k), CH5(k)) of the k-th band with a stream of 5-channel HRTF data (HRTF1(k), HRTF2(k), HRTF3(k), HRTF4(k), HRTF5(k)) of the k-th band.
(36) Multipliers 711 through 715 of the (k+1)-th band convolute an input stream of 5-channel audio data (CH1(k+1), CH2(k+1), CH3(k+1), CH4(k+1), CH5(k+1)) of the (k+1)-th band with a stream of 5-channel HRTF data (HRTF1(k+1), HRTF2(k+1), HRTF3(k+1), HRTF4(k+1), HRTF5(k+1)) of the (k+1)-th band.
(37) Multipliers 721 through 725 of the (k+2)-th band convolute an input stream of 5-channel audio data (CH1(k+2), CH2(k+2), CH3(k+2), CH4(k+2), CH5(k+2)) of the (k+2)-th band with a stream of 5-channel HRTF data (HRTF1(k+2), HRTF2(k+2), HRTF3(k+2), HRTF4(k+2), HRTF5(k+2)) of the (k+2)-th band. Since the (n−1)-th band is out of the subbands as illustrated in
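The per-band operation performed by these multipliers can be sketched as a per-channel, per-band multiplication in which only the selected subbands are weighted by HRTF data, while bands outside the selection pass through unchanged. All coefficient values below are hypothetical.

```python
# Sketch of per-band binaural weighting: within the selected subbands,
# each channel's subband sample is multiplied by that channel's HRTF
# coefficient for the band; unselected bands pass through unweighted.

def apply_hrtf_per_band(channels, hrtf, selected_bands):
    """channels[c][k] and hrtf[c][k] are per-channel, per-band values."""
    out = [row[:] for row in channels]        # copy; skipped bands pass through
    for c in range(len(channels)):
        for k in selected_bands:
            out[c][k] = channels[c][k] * hrtf[c][k]
    return out

channels = [[1.0, 2.0, 3.0] for _ in range(5)]   # 5 channels, 3 bands
hrtf = [[0.5, 0.5, 0.5] for _ in range(5)]       # hypothetical coefficients
out = apply_hrtf_per_band(channels, hrtf, selected_bands=[0, 1])
```

Skipping the top bands in this way is what saves the computation that full-band binaural filtering would require.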
(38) Downmixers 730, 740, 750, 760, and 770 downmix the convoluted streams of multi-channel audio data through an ordinary linear combination and output a result as left and right channel audio signals.
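The "ordinary linear combination" can be sketched for a single subband sample per channel. The 5-to-2 weights below follow the common ITU-style stereo downmix, with the center and surround channels attenuated by 1/sqrt(2); the exact weights used by the decoder are not specified in the text, so these are an assumption.

```python
import math

# Sketch of a 5-to-2 linear-combination downmix (one sample per channel).
# ITU-style weights assumed: C, Ls, Rs attenuated by 1/sqrt(2).

INV_SQRT2 = 1.0 / math.sqrt(2.0)

def downmix_5_to_2(l, r, c, ls, rs):
    """Downmix (L, R, C, Ls, Rs) samples to a stereo (left, right) pair."""
    left = l + INV_SQRT2 * c + INV_SQRT2 * ls
    right = r + INV_SQRT2 * c + INV_SQRT2 * rs
    return left, right

left, right = downmix_5_to_2(1.0, 0.0, 0.0, 0.0, 0.0)
```

Applied independently in each subband, this yields the 2-channel streams that the QMF synthesis units then convert back to the time domain.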
(39) The first downmixer 730 downmixes a stream of 5-channel audio data (CH1(0), CH2(0), CH3(0), CH4(0), CH5(0)) of the 0-th band into a first stream of 2-channel audio data.
(40) The second downmixer 740 downmixes a stream of 5-channel audio data (CH1(k), CH2(k), CH3(k), CH4(k), CH5(k)) of the k-th band, to which the HRTF of the k-th band has been applied by the k-th band multipliers 701 through 705, into a second stream of 2-channel audio data.
(41) The third downmixer 750 downmixes a stream of 5-channel audio data (CH1(k+1), CH2(k+1), CH3(k+1), CH4(k+1), CH5(k+1)) of the (k+1)-th band, to which the HRTF of the (k+1)-th band has been applied by the (k+1)-th band multipliers 711 through 715, into a third stream of 2-channel audio data.
(42) The fourth downmixer 760 downmixes a stream of 5-channel audio data (CH1(k+2), CH2(k+2), CH3(k+2), CH4(k+2), CH5(k+2)) of the (k+2)-th band, to which the HRTF of the (k+2)-th band has been applied by the (k+2)-th band multipliers 721 through 725, into a fourth stream of 2-channel audio data.
(43) The fifth downmixer 770 downmixes a stream of 5-channel audio data (CH1(n−1), CH2(n−1), CH3(n−1), CH4(n−1), CH5(n−1)) of the (n−1)-th band into a fifth stream of 2-channel audio data.
(44) As a result, the 2 channel audio data output from the downmixers 730, 740, 750, 760, and 770 are subband-synthesized to left and right audio channels, respectively, by the first and second QMF synthesis units 370 and 380 of
(46) Referring to
(47) The present general inventive concept can also be embodied as computer readable codes on a computer readable recording medium to perform the above-described method. The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and carrier waves (such as data transmission through the Internet). The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
(48) According to the present general inventive concept as described above, HRTF data is transformed into data in the frequency domain, and only the bands important to recognition of a directivity effect and a spatial effect are binaural-synthesized. In this way, a 3D MPEG surround service can be provided in a stereo environment or a mobile environment.
(49) Although a few embodiments of the present general inventive concept have been shown and described, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the appended claims and their equivalents.