SOUND IMAGE DIRECTION SENSE PROCESSING METHOD AND APPARATUS

20170223475 · 2017-08-03

    Abstract

    According to a sound image direction sense processing method and apparatus, a left-ear channel signal, a right-ear channel signal, and a centered channel signal of a sound source are obtained; whether a direction of the sound source is a front direction is determined according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal; and when the direction of the sound source is the front direction, at least one of front direction enhancing processing or rear direction weakening processing is performed separately on the left-ear channel signal and the right-ear channel signal. This enlarges the difference between the front direction sense and the rear direction sense of a sound image, improving the accuracy of determining the direction of a sound source.

    Claims

    1. A sound image direction sense processing method, comprising: obtaining, by a sound image direction sense processing apparatus, a left-ear channel signal, a right-ear channel signal, and a centered channel signal, wherein the left-ear channel signal is obtained by transmitting a sound source signal to a left-ear channel, the right-ear channel signal is obtained by transmitting the sound source signal to a right-ear channel, and the centered channel signal is obtained by transmitting the sound source signal to a centered channel, wherein the centered channel is located in a mid-vertical plane between the left-ear channel and the right-ear channel; determining, by the sound image direction sense processing apparatus, according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal, whether a direction of a sound source is a front direction, wherein the front direction is a direction that the centered channel faces; and when the direction of the sound source is the front direction, performing, by the sound image direction sense processing apparatus, front direction enhancing processing and/or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal.

    2. The method according to claim 1, wherein determining whether the direction of the sound source is the front direction comprises: obtaining a delay difference between the left-ear channel signal and the right-ear channel signal, a delay difference between the left-ear channel signal and the centered channel signal, and a delay difference between the right-ear channel signal and the centered channel signal; and determining, according to the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal, whether the direction of the sound source is the front direction.

    3. The method according to claim 2, wherein obtaining the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal comprises: obtaining a Fourier coefficient H_L(f) of the left-ear channel signal according to the left-ear channel signal; obtaining a Fourier coefficient H_R(f) of the right-ear channel signal according to the right-ear channel signal; obtaining a Fourier coefficient H_C(f) of the centered channel signal according to the centered channel signal; obtaining a maximum value of φ_LR(τ) according to φ_LR(τ) = [∫₀^x H_L(f)·H_R*(f)·e^(j2πfτ) df] / {[∫₀^x |H_L(f)|² df][∫₀^x |H_R(f)|² df]}^(1/2), and using a value of τ corresponding to the maximum value of φ_LR(τ) as the delay difference between the left-ear channel signal and the right-ear channel signal; obtaining a maximum value of φ_LC(τ) according to φ_LC(τ) = [∫₀^x H_L(f)·H_C*(f)·e^(j2πfτ) df] / {[∫₀^x |H_L(f)|² df][∫₀^x |H_C(f)|² df]}^(1/2), and using a value of τ corresponding to the maximum value of φ_LC(τ) as the delay difference between the left-ear channel signal and the centered channel signal; and obtaining a maximum value of φ_RC(τ) according to φ_RC(τ) = [∫₀^x H_R(f)·H_C*(f)·e^(j2πfτ) df] / {[∫₀^x |H_R(f)|² df][∫₀^x |H_C(f)|² df]}^(1/2), and using a value of τ corresponding to the maximum value of φ_RC(τ) as the delay difference between the right-ear channel signal and the centered channel signal; wherein H_R*(f) is the conjugate of H_R(f), H_C*(f) is the conjugate of H_C(f), j is the imaginary unit, [0, x] represents a frequency range, and −1 ms ≤ τ ≤ 1 ms.

    4. The method according to claim 2, wherein determining whether the direction of the sound source is the front direction comprises: when 0 ≤ c·ITD_LR/(2a) ≤ √2/2, determining that an incident angle of the sound source signal is arcsin(c·ITD_LR/(2a)), wherein: if |ITD_LC| > |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 0°, and is less than or equal to 45°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 135°, and is less than or equal to 180°; when −√2/2 ≤ c·ITD_LR/(2a) ≤ 0, determining that an incident angle of the sound source signal is arcsin(c·ITD_LR/(2a)), wherein: if |ITD_LC| > |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 315°, and is less than or equal to 360°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 180°, and is less than or equal to 225°; when √2/2 ≤ c·ITD_LR/(2a) ≤ 1, determining that an incident angle of the sound source signal is 45° − arcsin(c·ITD_RC/(2a)); or when −1 ≤ c·ITD_LR/(2a) ≤ −√2/2, determining that an incident angle of the sound source signal is arcsin(c·ITD_LC/(2a)) − 45°; wherein ITD_LR is the delay difference between the left-ear channel signal and the right-ear channel signal, ITD_RC is the delay difference between the right-ear channel signal and the centered channel signal, and ITD_LC is the delay difference between the left-ear channel signal and the centered channel signal, wherein c represents the speed of sound, and a represents half of the distance between the left-ear channel and the right-ear channel; and determining, according to the result that the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to 90°, or greater than or equal to 270° and less than or equal to 360°, that the direction of the sound source is the front direction; or determining, according to the result that the incident angle of the sound source signal is greater than 90° and less than 270°, that the direction of the sound source is a rear direction, wherein the rear direction is a direction opposite to the direction that the centered channel faces.

    5. The method according to claim 4, wherein before performing front direction enhancing processing and/or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, the method further comprises: determining that the incident angle of the sound source signal is greater than or equal to 0°, and is less than or equal to a first preset angle, or that the incident angle of the sound source signal is greater than or equal to a second preset angle, and is less than or equal to 360°, wherein the first preset angle is less than 90°, and the second preset angle is greater than 270°.

    6. The method according to claim 1, wherein performing front direction enhancing processing separately on the left-ear channel signal and the right-ear channel signal comprises: separately multiplying signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a first preset frequency band by a first gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the front direction enhancing processing, wherein the first gain coefficient is a value greater than 1, and an amplitude spectrum of a front head related transfer function (HRTF) corresponding to the first preset frequency band is greater than an amplitude spectrum of a rear HRTF corresponding to the first preset frequency band; and wherein performing rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal comprises: separately multiplying signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a second preset frequency band by a second gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the rear direction weakening processing, wherein the second gain coefficient is a positive value less than or equal to 1, and the second preset frequency band is a frequency band other than the first preset frequency band.

    7. The method according to claim 6, wherein before performing front direction enhancing processing and/or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, the method further comprises: obtaining an average value of an amplitude spectrum of a head related transfer function (HRTF) in a front horizontal plane of a head phantom, and an average value of an amplitude spectrum of an HRTF in a rear horizontal plane of the head phantom; performing a subtraction between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane, so as to obtain a difference between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane; obtaining, according to the difference between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane, an average value of a difference that is between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane and that is within the frequency range; and using a frequency band corresponding to a difference between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane that is greater than the average value of the difference that is between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane and that is within the frequency range as the first preset frequency band.
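    In signal-processing terms, the band-selection procedure of claim 7 reduces to thresholding the front-minus-rear HRTF amplitude difference at its own average. A minimal sketch, assuming the HRTF amplitude spectra are available as arrays of shape (number of directions, number of frequency bins); the function name and array layout are illustrative, not from the disclosure:

```python
import numpy as np

def first_preset_band(hrtf_front, hrtf_rear, freqs):
    # Average amplitude spectrum over all directions in the front and
    # rear horizontal planes, respectively.
    front_avg = np.mean(np.abs(hrtf_front), axis=0)
    rear_avg = np.mean(np.abs(hrtf_rear), axis=0)
    # Difference between the two averages, and its average over the
    # whole frequency range.
    diff = front_avg - rear_avg
    mask = diff > diff.mean()
    # Frequencies where the difference exceeds its own average form
    # the first preset frequency band.
    return freqs[mask], mask
```

    Frequencies outside the returned band would then form the second preset frequency band of claim 6.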

    8. A sound image direction sense processing apparatus, comprising a processor and a non-transitory computer-readable medium having processor-executable instructions stored thereon, wherein the processor-executable instructions, when executed by the processor, facilitate performance of the following: obtaining a left-ear channel signal, a right-ear channel signal, and a centered channel signal, wherein the left-ear channel signal is obtained by transmitting a sound source signal to a left-ear channel, the right-ear channel signal is obtained by transmitting the sound source signal to a right-ear channel, and the centered channel signal is obtained by transmitting the sound source signal to a centered channel, wherein the centered channel is located in a mid-vertical plane between the left-ear channel and the right-ear channel; determining, according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal, whether a direction of the sound source is a front direction, wherein the front direction is a direction that the centered channel faces; and when it is determined that the direction of the sound source is the front direction, performing front direction enhancing processing and/or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal.

    9. The apparatus according to claim 8, wherein determining whether the direction of the sound source is the front direction comprises: obtaining a delay difference between the left-ear channel signal and the right-ear channel signal, a delay difference between the left-ear channel signal and the centered channel signal, and a delay difference between the right-ear channel signal and the centered channel signal; and determining, according to the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal, whether the direction of the sound source is the front direction.

    10. The apparatus according to claim 9, wherein obtaining the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal comprises: obtaining a Fourier coefficient H_L(f) of the left-ear channel signal according to the left-ear channel signal; obtaining a Fourier coefficient H_R(f) of the right-ear channel signal according to the right-ear channel signal; obtaining a Fourier coefficient H_C(f) of the centered channel signal according to the centered channel signal; obtaining a maximum value of φ_LR(τ) according to φ_LR(τ) = [∫₀^x H_L(f)·H_R*(f)·e^(j2πfτ) df] / {[∫₀^x |H_L(f)|² df][∫₀^x |H_R(f)|² df]}^(1/2), and using a value of τ corresponding to the maximum value of φ_LR(τ) as the delay difference between the left-ear channel signal and the right-ear channel signal; obtaining a maximum value of φ_LC(τ) according to φ_LC(τ) = [∫₀^x H_L(f)·H_C*(f)·e^(j2πfτ) df] / {[∫₀^x |H_L(f)|² df][∫₀^x |H_C(f)|² df]}^(1/2), and using a value of τ corresponding to the maximum value of φ_LC(τ) as the delay difference between the left-ear channel signal and the centered channel signal; and obtaining a maximum value of φ_RC(τ) according to φ_RC(τ) = [∫₀^x H_R(f)·H_C*(f)·e^(j2πfτ) df] / {[∫₀^x |H_R(f)|² df][∫₀^x |H_C(f)|² df]}^(1/2), and using a value of τ corresponding to the maximum value of φ_RC(τ) as the delay difference between the right-ear channel signal and the centered channel signal; wherein H_R*(f) is the conjugate of H_R(f), H_C*(f) is the conjugate of H_C(f), j is the imaginary unit, [0, x] represents a frequency range, and −1 ms ≤ τ ≤ 1 ms.

    11. The apparatus according to claim 9, wherein determining whether the direction of the sound source is the front direction comprises: when 0 ≤ c·ITD_LR/(2a) ≤ √2/2, determining that an incident angle of the sound source signal is arcsin(c·ITD_LR/(2a)), wherein: if |ITD_LC| > |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 0°, and is less than or equal to 45°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 135°, and is less than or equal to 180°; when −√2/2 ≤ c·ITD_LR/(2a) ≤ 0, determining that an incident angle of the sound source signal is arcsin(c·ITD_LR/(2a)), wherein: if |ITD_LC| > |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 315°, and is less than or equal to 360°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 180°, and is less than or equal to 225°; when √2/2 ≤ c·ITD_LR/(2a) ≤ 1, determining that an incident angle of the sound source signal is 45° − arcsin(c·ITD_RC/(2a)); or when −1 ≤ c·ITD_LR/(2a) ≤ −√2/2, determining that an incident angle of the sound source signal is arcsin(c·ITD_LC/(2a)) − 45°; wherein ITD_LR is the delay difference between the left-ear channel signal and the right-ear channel signal, ITD_RC is the delay difference between the right-ear channel signal and the centered channel signal, and ITD_LC is the delay difference between the left-ear channel signal and the centered channel signal, wherein c represents the speed of sound, and a represents half of the distance between the left-ear channel and the right-ear channel; and determining, according to the result that the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to 90°, or greater than or equal to 270° and less than or equal to 360°, that the direction of the sound source is the front direction; or determining, according to the result that the incident angle of the sound source signal is greater than 90° and less than 270°, that the direction of the sound source is a rear direction, wherein the rear direction is a direction opposite to the direction that the centered channel faces.

    12. The apparatus according to claim 11, wherein the processor-executable instructions, when executed, are further configured to facilitate: before performing front direction enhancing processing and/or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, determining that the incident angle of the sound source signal is greater than or equal to 0°, and is less than or equal to a first preset angle, or that the incident angle of the sound source signal is greater than or equal to a second preset angle, and is less than or equal to 360°, wherein the first preset angle is less than 90°, and the second preset angle is greater than 270°.

    13. The apparatus according to claim 8, wherein performing front direction enhancing processing separately on the left-ear channel signal and the right-ear channel signal comprises: separately multiplying signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a first preset frequency band by a first gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the front direction enhancing processing, wherein the first gain coefficient is a value greater than 1, and an amplitude spectrum of a front head related transfer function (HRTF) corresponding to the first preset frequency band is greater than an amplitude spectrum of a rear HRTF corresponding to the first preset frequency band; and wherein performing rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal comprises: separately multiplying signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a second preset frequency band by a second gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the rear direction weakening processing, wherein the second gain coefficient is a positive value less than or equal to 1, and the second preset frequency band is a frequency band other than the first preset frequency band.

    14. The apparatus according to claim 13, wherein the processor-executable instructions, when executed, are further configured to facilitate: before performing the front direction enhancing processing and/or the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, obtaining an average value of an amplitude spectrum of a head related transfer function (HRTF) in a front horizontal plane of a head phantom, and an average value of an amplitude spectrum of an HRTF in a rear horizontal plane of the head phantom; performing a subtraction between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane, so as to obtain a difference between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane; obtaining, according to the difference between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane, an average value of a difference that is between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane and that is within the frequency range; and using a frequency band corresponding to a difference between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane that is greater than the average value of the difference that is between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane and that is within the 
frequency range as the first preset frequency band.

    Description

    BRIEF DESCRIPTION OF DRAWINGS

    [0039] To describe the technical solutions in the embodiments of the present disclosure more clearly, the following briefly describes the accompanying drawings for describing the embodiments. The accompanying drawings in the following description show some embodiments of the present disclosure, and persons of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.

    [0040] FIG. 1 is a flowchart of Embodiment 1 of a sound image direction sense processing method according to the present disclosure;

    [0041] FIG. 2 is a flowchart of Embodiment 2 of a sound image direction sense processing method according to the present disclosure;

    [0042] FIG. 3 is a schematic diagram of division of an incident angle of a sound source signal according to an embodiment of the present disclosure;

    [0043] FIG. 4 is a schematic structural diagram of Embodiment 1 of a sound image direction sense processing apparatus according to the present disclosure; and

    [0044] FIG. 5 is a schematic structural diagram of Embodiment 2 of a sound image direction sense processing apparatus according to the present disclosure.

    DESCRIPTION OF EMBODIMENTS

    [0045] To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the following clearly describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. The described embodiments are some but not all of the embodiments of the present disclosure. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.

    [0046] FIG. 1 is a flowchart of Embodiment 1 of a sound image direction sense processing method according to the present disclosure. As shown in FIG. 1, the method in this embodiment may include the following steps.

    [0047] S101. Obtain a left-ear channel signal, a right-ear channel signal, and a centered channel signal.

    [0048] The left-ear channel signal is a signal obtained by transmitting a sound source signal to a left-ear channel, the right-ear channel signal is a signal obtained by transmitting the sound source signal to a right-ear channel, and the centered channel signal is a signal obtained by transmitting the sound source signal to a centered channel. The centered channel is located in a mid-vertical plane between the left-ear channel and the right-ear channel.

    [0049] In this embodiment, a signal is sent by a sound source, and the signal is referred to as a sound source signal. Then, a left-ear channel signal, a right-ear channel signal, and a centered channel signal may be obtained. Specifically, a microphone may be disposed in a left ear to obtain the signal sent by the sound source, and an obtained signal is referred to as the left-ear channel signal; a microphone may be disposed in a right ear to obtain the signal sent by the sound source, and an obtained signal is referred to as the right-ear channel signal; and a microphone may be disposed in a mid-vertical plane between the left ear and the right ear to obtain the signal sent by the sound source, and an obtained signal is referred to as the centered channel signal. The microphone disposed in the mid-vertical plane between the left ear and the right ear may be disposed, for example, on a forehead or a nose bridge on a head of a person.

    [0050] S102. Determine, according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal, whether a direction of the sound source is a front direction.

    [0051] In this embodiment, whether the direction of the sound source is the front direction is determined according to the left-ear channel signal, the right-ear channel signal, and the newly added centered channel signal. The front direction is a direction that the centered channel faces; for example, it is the direction that the face of a user of this method faces. Compared with the prior art, a centered channel signal is additionally obtained, so that whether the direction of the sound source is the front direction may be determined according to the left-ear channel signal, the right-ear channel signal, and the added centered channel signal.

    [0052] S103. When the direction of the sound source is the front direction, perform at least one of front direction enhancing processing or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal.

    [0053] In this embodiment, when the direction of the sound source is the front direction, the front direction enhancing processing may be performed on both the left-ear channel signal and the right-ear channel signal; or the rear direction weakening processing may be performed on both the left-ear channel signal and the right-ear channel signal; or both the front direction enhancing processing and the rear direction weakening processing may be performed on each of the left-ear channel signal and the right-ear channel signal. After receiving a left-ear channel signal and a right-ear channel signal that undergo the foregoing processing, a listener forms a perception of the sound source, and this perception is referred to as a sound image. Processing in any one of the foregoing manners enlarges the difference between the front direction sense and the rear direction sense of the sound image, so that the listener can accurately recognize that the sound source is from the front direction.
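    As a concrete illustration of S103, the enhancing and weakening of claim 6 can both be applied in one pass over the spectrum. A minimal sketch; the band edges and gain values below are illustrative placeholders, since the disclosure only constrains the first gain to be greater than 1 and the second to be a positive value not greater than 1:

```python
import numpy as np

def enhance_front(left, right, fs, band=(2000.0, 6000.0), g_front=1.5, g_rear=0.8):
    """Front direction enhancing (gain > 1 inside the first preset band)
    and rear direction weakening (gain <= 1 elsewhere), applied
    separately to the left-ear and right-ear channel signals."""
    def process(x):
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        spec[in_band] *= g_front    # first preset band: enhance
        spec[~in_band] *= g_rear    # second preset band: weaken
        return np.fft.irfft(spec, n=len(x))
    return process(left), process(right)
```

    The same function is applied to each ear signal, matching the "separately on the left-ear channel signal and the right-ear channel signal" wording of the claims.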

    [0054] According to the sound image direction sense processing method provided in this embodiment of the present disclosure, a left-ear channel signal, a right-ear channel signal, and a centered channel signal of a sound source are obtained, where the centered channel is located in a mid-vertical plane between the left-ear channel and the right-ear channel; whether a direction of the sound source is a front direction is determined according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal, where the front direction is a direction that the centered channel faces; and when the direction of the sound source is the front direction, at least one of front direction enhancing processing or rear direction weakening processing is performed separately on the left-ear channel signal and the right-ear channel signal. Therefore, the difference between the front direction sense and the rear direction sense of a sound image is enlarged, so that a listener can accurately recognize that the sound source is from the front direction, thereby improving the accuracy of determining the direction of a sound source.

    [0055] FIG. 2 is a flowchart of Embodiment 2 of a sound image direction sense processing method according to the present disclosure. As shown in FIG. 2, the method in this embodiment may include the following steps.

    [0056] S201. Obtain a left-ear channel signal, a right-ear channel signal, and a centered channel signal.

    [0057] In this embodiment, for a specific implementation process of S201, refer to a specific implementation process of S101 in Embodiment 1 of the method in the present disclosure, and details are not described herein.

    [0058] S202. Obtain a delay difference between the left-ear channel signal and the right-ear channel signal, a delay difference between the left-ear channel signal and the centered channel signal, and a delay difference between the right-ear channel signal and the centered channel signal according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal.

    [0059] In this embodiment, the delay difference between the left-ear channel signal and the right-ear channel signal may be obtained according to the obtained left-ear channel signal and right-ear channel signal; the delay difference between the left-ear channel signal and the centered channel signal may be obtained according to the obtained left-ear channel signal and centered channel signal; and the delay difference between the right-ear channel signal and the centered channel signal may be obtained according to the obtained right-ear channel signal and centered channel signal.

    [0060] In a specific implementation manner, the foregoing S202 may include the following content: obtaining a Fourier coefficient H_L(f) of the left-ear channel signal according to the obtained left-ear channel signal, where H_L(f) is a function of f, and f is a frequency of the left-ear channel signal; obtaining a Fourier coefficient H_R(f) of the right-ear channel signal according to the obtained right-ear channel signal, where H_R(f) is a function of f, and f is a frequency; obtaining a Fourier coefficient H_C(f) of the centered channel signal according to the obtained centered channel signal, where H_C(f) is a function of f, and f is a frequency; and then obtaining the delay difference between the left-ear channel signal and the right-ear channel signal according to a formula (1), and specifically, obtaining a maximum value of φ_LR(τ) according to the formula (1), and using a value of τ corresponding to the maximum value of φ_LR(τ) as the delay difference between the left-ear channel signal and the right-ear channel signal.

    [0061] The formula (1) is

    [00023] $\varphi_{LR}(\tau) = \dfrac{\int_0^x H_L(f)\, H_R^*(f)\, e^{j2\pi f\tau}\, df}{\left\{ \left[ \int_0^x \left| H_L(f) \right|^2 df \right] \left[ \int_0^x \left| H_R(f) \right|^2 df \right] \right\}^{1/2}}$,

    where

    [0062] H*_R(f) is the complex conjugate of H_R(f), j is the imaginary unit, [0, x] represents a low frequency range, and −1 ms ≤ τ ≤ 1 ms.

    [0063] The delay difference between the left-ear channel signal and the centered channel signal may be further obtained according to a formula (2). Specifically, a maximum value of φ.sub.LC(τ) is obtained according to the formula (2), and a value of τ corresponding to the maximum value of φ.sub.LC(τ) is used as the delay difference between the left-ear channel signal and the centered channel signal.

    [0064] The formula (2) is

    [00024] $\varphi_{LC}(\tau) = \dfrac{\int_0^x H_L(f)\, H_C^*(f)\, e^{j2\pi f\tau}\, df}{\left\{ \left[ \int_0^x \left| H_L(f) \right|^2 df \right] \left[ \int_0^x \left| H_C(f) \right|^2 df \right] \right\}^{1/2}}$,

    where

    [0065] H*_C(f) is the complex conjugate of H_C(f), j is the imaginary unit, [0, x] represents a frequency range, and −1 ms ≤ τ ≤ 1 ms.

    [0066] The delay difference between the right-ear channel signal and the centered channel signal may be further obtained according to a formula (3). Specifically, a maximum value of φ.sub.RC(τ) is obtained according to the formula (3), and a value of τ corresponding to the maximum value of φ.sub.RC(τ) is used as the delay difference between the right-ear channel signal and the centered channel signal.

    [0067] The formula (3) is

    [00025] $\varphi_{RC}(\tau) = \dfrac{\int_0^x H_R(f)\, H_C^*(f)\, e^{j2\pi f\tau}\, df}{\left\{ \left[ \int_0^x \left| H_R(f) \right|^2 df \right] \left[ \int_0^x \left| H_C(f) \right|^2 df \right] \right\}^{1/2}}$,

    where

    [0068] H*_C(f) is the complex conjugate of H_C(f), j is the imaginary unit, [0, x] represents a low frequency range, and −1 ms ≤ τ ≤ 1 ms.
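    As an illustrative sketch of the delay-difference search in formulas (1) to (3), the following fragment estimates the delay between two discrete-time channel signals by maximizing a normalized cross-correlation over lags within −1 ms to 1 ms. The discrete signals, the sampling rate, and integration over the full band (rather than only the low range [0, x]) are simplifying assumptions, not part of the embodiment.

```python
import numpy as np

def delay_difference(sig_a, sig_b, fs, max_lag_ms=1.0):
    """Sketch of formulas (1)-(3): the lag tau maximizing the normalized
    cross-correlation of two channel signals sampled at rate fs."""
    n = len(sig_a)
    ha = np.fft.rfft(sig_a)           # Fourier coefficients H_A(f)
    hb = np.fft.rfft(sig_b)           # Fourier coefficients H_B(f)
    # normalization term {[int |H_A|^2 df][int |H_B|^2 df]}^(1/2)
    norm = np.sqrt(np.sum(np.abs(ha) ** 2) * np.sum(np.abs(hb) ** 2))
    # inverse FFT of the cross-spectrum H_A(f)*conj(H_B(f)) evaluates
    # int H_A H_B* e^(j 2 pi f tau) df on a grid of candidate lags tau
    corr = np.fft.irfft(ha * np.conj(hb), n=n)
    corr = np.concatenate((corr[-(n // 2):], corr[:n // 2]))  # center lag 0
    lags = np.arange(-(n // 2), n // 2) / fs
    mask = np.abs(lags) <= max_lag_ms / 1000.0  # restrict to -1 ms..1 ms
    return lags[mask][np.argmax(corr[mask] / norm)]
```

The same routine serves for all three channel pairs (left/right, left/centered, right/centered).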

    [0069] S203. Determine, according to the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal, whether a direction of the sound source is a front direction.

    [0070] In a specific implementation manner, S203 may include the following content.

    [00026] $\dfrac{c \times ITD_{LR}}{2a}$

    may be obtained according to the delay difference between the left-ear channel signal and the right-ear channel signal, where ITD_LR is the delay difference between the left-ear channel signal and the right-ear channel signal, c is the sound speed, and a is half of the distance between the left-ear channel and the right-ear channel.

    [0071] In a first case, when

    [00027] $0 \le \dfrac{c \times ITD_{LR}}{2a} \le \dfrac{\sqrt{2}}{2}$,

    it may be determined that an incident angle of the sound source signal is

    [00028] $\arcsin\!\left(\dfrac{c \times ITD_{LR}}{2a}\right)$,

    and then whether the absolute value of the delay difference ITD_LC between the left-ear channel signal and the centered channel signal is greater than or equal to the absolute value of the delay difference ITD_RC between the right-ear channel signal and the centered channel signal is determined. If |ITD_LC| ≥ |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to 45°, that is, the incident angle of the sound source signal belongs to the angle range [0°, 45°]; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 135° and less than or equal to 180°, that is, the incident angle of the sound source signal belongs to the angle range [135°, 180°].

    [0072] In a second case, when

    [00029] $-\dfrac{\sqrt{2}}{2} \le \dfrac{c \times ITD_{LR}}{2a} \le 0$,

    an incident angle of the sound source signal is

    [00030] $\arcsin\!\left(\dfrac{c \times ITD_{LR}}{2a}\right)$,

    and then whether |ITD_LC| is greater than or equal to |ITD_RC| is determined. If |ITD_LC| ≥ |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 315° and less than or equal to 360°, that is, the incident angle of the sound source signal belongs to the angle range [315°, 360°]; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 180° and less than or equal to 225°, that is, the incident angle of the sound source signal belongs to the angle range [180°, 225°].

    [0073] In a third case, when

    [00031] $\dfrac{\sqrt{2}}{2} \le \dfrac{c \times ITD_{LR}}{2a} \le 1$,

    an incident angle of the sound source signal is

    [00032] $45^\circ - \arcsin\!\left(\dfrac{c \times ITD_{RC}}{2a}\right)$.

    [0074] In a fourth case, when

    [00033] $-1 \le \dfrac{c \times ITD_{LR}}{2a} \le -\dfrac{\sqrt{2}}{2}$,

    an incident angle of the sound source signal is

    [00034] $\arcsin\!\left(\dfrac{c \times ITD_{LC}}{2a}\right) - 45^\circ$.

    [0075] After the incident angle of the sound source signal is determined, if the incident angle is greater than or equal to 0° and less than or equal to 90°, or is greater than or equal to 270° and less than or equal to 360°, it may be determined that the direction of the sound source is the front direction; or if the incident angle is greater than 90° and less than 270°, it is determined that the direction of the sound source is a rear direction, where the rear direction is the direction away from which the centered channel faces. For example, the rear direction is the direction away from which the face of a user who uses the sound image direction sense processing method according to this embodiment of the present disclosure faces. As shown in FIG. 3, in this embodiment, an incident angle of a sound source arriving parallel to the centered channel is 0°.
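    The case analysis of S203 may be sketched as follows. The mapping of each arcsin value into its stated angle range (for example, 180° − arcsin(·) for the range [135°, 180°], and 360° + arcsin(·) for [315°, 360°]) is an interpretation of the ranges given above, and the default sound speed and half-distance values are illustrative assumptions.

```python
import math

def incident_angle(itd_lr, itd_lc, itd_rc, c=340.0, a=0.0875):
    """Sketch of the four cases of S203: delay differences in seconds,
    sound speed c in m/s, a = half the left/right channel distance in m."""
    s = max(-1.0, min(1.0, c * itd_lr / (2 * a)))  # c*ITD_LR/(2a), clamped
    r22 = math.sqrt(2) / 2
    if 0 <= s <= r22:                              # first case
        base = math.degrees(math.asin(s))
        return base if abs(itd_lc) >= abs(itd_rc) else 180.0 - base
    if -r22 <= s <= 0:                             # second case
        base = math.degrees(math.asin(s))          # base is negative here
        return (360.0 + base) if abs(itd_lc) >= abs(itd_rc) else 180.0 - base
    if s > r22:                                    # third case
        return 45.0 - math.degrees(math.asin(c * itd_rc / (2 * a)))
    # fourth case: s < -sqrt(2)/2
    return math.degrees(math.asin(c * itd_lc / (2 * a))) - 45.0

def is_front(angle_deg):
    """Front direction per paragraph [0075]: [0 deg, 90 deg] or [270 deg, 360 deg]."""
    angle_deg %= 360.0
    return angle_deg <= 90.0 or angle_deg >= 270.0
```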

    [0076] S204. When the direction of the sound source is the front direction, perform at least one type of the following processing: front direction enhancing processing or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal.

    [0077] In this embodiment, when it is determined in the foregoing manner that the direction of the sound source is the front direction, the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing is performed separately on the left-ear channel signal and the right-ear channel signal. For a specific implementation process, refer to the related description in S103 of Embodiment 1 of the method in the present disclosure; details are not described herein again.

    [0078] Optionally, S204 may be specifically: when the direction of the sound source is the front direction, and it is further determined that the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to a first preset angle, or the incident angle of the sound source signal is greater than or equal to a second preset angle and less than or equal to 360°, performing the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, where the first preset angle is less than 90° and the second preset angle is greater than 270°. For example, the first preset angle may be 60° or 45°, and the second preset angle may be 300° or 315°.

    [0079] Optionally, the performing the front direction enhancing processing on the left-ear channel signal may include: multiplying a signal that is in the left-ear channel signal and whose frequency belongs to a first preset frequency band by a first gain coefficient, so as to obtain a left-ear channel signal that undergoes the front direction enhancing processing, where the first gain coefficient is a value greater than 1, and an amplitude spectrum of a front head related transfer function (HRTF) corresponding to the first preset frequency band is greater than an amplitude spectrum of a rear HRTF corresponding to the first preset frequency band. Optionally, a difference obtained by subtracting the amplitude spectrum of the rear HRTF corresponding to the first preset frequency band from the amplitude spectrum of the front HRTF corresponding to the first preset frequency band is greater than a preset value, where the preset value is a value greater than 0. Correspondingly, the front direction enhancing processing may be performed on the right-ear channel signal in a similar processing manner.

    [0080] Optionally, the performing the rear direction weakening processing on the left-ear channel signal may include: multiplying a signal that is in the left-ear channel signal and whose frequency belongs to a second preset frequency band by a second gain coefficient, so as to obtain a left-ear channel signal that undergoes the rear direction weakening processing, where the second gain coefficient is a value less than 1, and the second preset frequency band is a frequency band other than the first preset frequency band. Correspondingly, the rear direction weakening processing may be performed on the right-ear channel signal in a similar processing manner.
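    A minimal sketch of the two processing steps above: spectrum bins inside the first preset frequency band are multiplied by a first gain coefficient greater than 1 (front direction enhancing), and bins inside the second preset frequency band by a second gain coefficient less than 1 (rear direction weakening). The FFT-bin realization, the function name, and the loose handling of open/closed band endpoints are assumptions; the embodiment does not fix a particular filter implementation.

```python
import numpy as np

def process_channel(signal, fs, first_bands, second_bands, g1=2.0, g2=0.5):
    """Scale the first preset bands by g1 > 1 and the second preset
    bands by g2 < 1 on one channel signal sampled at rate fs."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def band_mask(bands):
        m = np.zeros_like(freqs, dtype=bool)
        for lo, hi in bands:
            m |= (freqs >= lo) & (freqs <= hi)
        return m

    spec[band_mask(first_bands)] *= g1    # front direction enhancing
    spec[band_mask(second_bands)] *= g2   # rear direction weakening
    return np.fft.irfft(spec, n=len(signal))
```

The same call is applied separately to the left-ear and right-ear channel signals.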

    [0081] Optionally, before the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing is performed separately on the left-ear channel signal and the right-ear channel signal, the foregoing first preset frequency band needs to be determined, and details are as follows.

    [0082] An average value of an amplitude spectrum of an HRTF in a front horizontal plane of a head phantom and an average value of an amplitude spectrum of an HRTF in a rear horizontal plane of the head phantom are obtained, where the head phantom is a head phantom to which the sound image direction sense processing method provided in this embodiment of the present disclosure is applied. An HRTF expression is H(θ, φ, f), where θ is an azimuth, φ is an elevation angle, and f is a frequency. Therefore, the average value of the amplitude spectrum of the HRTF in the front horizontal plane may be obtained by means of calculation according to ∫H(θ, φ, f)dθ, where a value of θ is [0°, 90°] and [270°, 360°], and a value of φ is 0°. The average value of the amplitude spectrum of the HRTF in the front horizontal plane is an f-related function. The average value of the amplitude spectrum of the HRTF in the rear horizontal plane of the head phantom may be further obtained by means of calculation according to ∫H(θ, φ, f)dθ, where the value of θ is [90°, 270°], and the value of φ is 0°. The average value of the amplitude spectrum of the HRTF in the rear horizontal plane is an f-related function.

    [0083] Then, a subtraction is performed between the average value of the amplitude spectrum of the HRTF in the front horizontal plane and the average value of the amplitude spectrum of the HRTF in the rear horizontal plane, so as to obtain a difference spectrum, that is, a front-rear difference corresponding to each frequency. Next, the average value of this difference spectrum within the frequency range is obtained, so as to obtain a specific value. Finally, the difference spectrum is compared, frequency by frequency, with this specific value, and a frequency band in which the difference spectrum is greater than the average value of the difference spectrum within the frequency range is used as the first preset frequency band.
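    The band-selection procedure of paragraphs [0082] and [0083] may be sketched as follows, assuming a precomputed table `hrtf_mag` of HRTF amplitude spectra |H(θ, 0°, f)| with one row per azimuth; the table, its layout, and the function name are illustrative assumptions, as a real system would use measured HRTFs of the head phantom.

```python
import numpy as np

def first_preset_band_mask(hrtf_mag, azimuths):
    """hrtf_mag: array of shape (n_azimuths, n_freqs) of HRTF amplitude
    spectra at elevation 0; azimuths: azimuth in degrees for each row.
    Returns True at frequencies belonging to the first preset band."""
    front = [i for i, az in enumerate(azimuths) if az <= 90 or az >= 270]
    rear = [i for i, az in enumerate(azimuths) if 90 < az < 270]
    front_avg = hrtf_mag[front].mean(axis=0)   # f-related front average
    rear_avg = hrtf_mag[rear].mean(axis=0)     # f-related rear average
    d = front_avg - rear_avg                   # front-rear difference spectrum
    return d > d.mean()   # keep bands where the difference exceeds its mean
```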

    [0084] In specific implementation, the first preset frequency band may include at least one of the following frequency bands: [3 kHz, 8 kHz], [10 kHz, 12 kHz], and [17 kHz, 20 kHz], and the second preset frequency band may include at least one of the following frequency bands: [0 kHz, 3 kHz), (8 kHz, 10 kHz), and (12 kHz, 17 kHz). Alternatively, the first preset frequency band may include at least one of the following frequency bands: [3 kHz, 8.1 kHz], [10 kHz, 12.5 kHz], and [17 kHz, 20 kHz], and the second preset frequency band may include at least one of the following frequency bands: [0 kHz, 3 kHz), (8.1 kHz, 10 kHz), and (12.5 kHz, 17 kHz). This embodiment of the present disclosure is not limited thereto.

    [0085] In a first application scenario, if the head phantom to which the sound image direction sense processing method provided in this embodiment of the present disclosure is applied is a Chinese artificial head phantom, the first preset frequency band may include [3 kHz, 8 kHz], [10 kHz, 12 kHz], and [17 kHz, 20 kHz], and the second preset frequency band may include [0 kHz, 3 kHz), (8 kHz, 10 kHz), and (12 kHz, 17 kHz). Specifically, a band-pass filter whose range is the first preset frequency band may be used, where a gain coefficient of the band-pass filter is a value greater than 1. Then, convolution is performed on the band-pass filter and the left-ear channel signal or the right-ear channel signal, so that a front direction of a signal whose frequency belongs to the first preset frequency band can be enhanced. A band-pass filter whose range is the second preset frequency band may be used, where a gain coefficient of the band-pass filter is a positive value less than or equal to 1. Then, convolution is performed on the band-pass filter and the left-ear channel signal or the right-ear channel signal, so that a rear direction of a signal whose frequency belongs to the second preset frequency band can be weakened.

    [0086] For example, sound image direction sense processing may be performed on the left-ear channel signal and the right-ear channel signal by using the following formulas.

    [0087] Processing described in a formula (4) is performed on the left-ear channel signal, and the formula (4) is, for example:

    [00035] $L' = M_1 \times (H_{low} \otimes L) + \sum_{i=1}^{K} M_{i+1} \times (H_{bandi} \otimes L)$,

    where ⊗ denotes convolution, L′ is the left-ear channel signal that undergoes the sound image direction sense processing, L is the left-ear channel signal before the sound image direction sense processing, H_low represents a low-pass filter whose cut-off frequency is F_1, M_1 is the gain coefficient of that low-pass filter, H_bandi represents a band-pass filter whose pass-band is [F_i, F_{i+1}], and M_{i+1} is the gain coefficient of that band-pass filter.
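    Formula (4) may be sketched with the filters realized as ideal FFT-bin gains, using the example cut-off frequencies F_1 to F_6 and gain coefficients M_1 to M_6 given in this embodiment for K = 5; the ideal-filter realization is an assumption, as the embodiment leaves the filter design open. Formula (5) is the identical operation applied to the right-ear channel signal.

```python
import numpy as np

F = [3e3, 8e3, 10e3, 12e3, 17e3, 20e3]   # F_1 .. F_6 (example values)
M = [1.0, 2.0, 0.5, 2.0, 0.5, 2.0]        # M_1 .. M_6 (example values)

def direction_sense_filter(x, fs):
    """Apply L' = M_1*(H_low conv L) + sum M_{i+1}*(H_bandi conv L) with
    ideal filters realized as FFT-bin gains (an illustrative assumption;
    bins above F_6 are simply zeroed here)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = np.zeros_like(freqs)
    gain[freqs < F[0]] = M[0]                      # low-pass H_low, gain M_1
    for i in range(len(F) - 1):                    # band-pass H_bandi
        gain[(freqs >= F[i]) & (freqs < F[i + 1])] = M[i + 1]
    return np.fft.irfft(spec * gain, n=len(x))
```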

    [0088] Processing described in a formula (5) is performed on the right-ear channel signal, and the formula (5) is, for example:

    [00036] $R' = M_1 \times (H_{low} \otimes R) + \sum_{i=1}^{K} M_{i+1} \times (H_{bandi} \otimes R)$,

    where R′ is the right-ear channel signal that undergoes the sound image direction sense processing, and R is the right-ear channel signal before the sound image direction sense processing.

    [0089] Optionally, for example, one low-pass filter and five band-pass filters may be used, and K is 5: F_1 = 3 kHz, F_2 = 8 kHz, F_3 = 10 kHz, F_4 = 12 kHz, F_5 = 17 kHz, and F_6 = 20 kHz; and correspondingly, M_1 = 1, M_2 = 2, M_3 = 0.5, M_4 = 2, M_5 = 0.5, and M_6 = 2. Therefore, a signal in the first preset frequency band [3 kHz, 8 kHz], [10 kHz, 12 kHz], and [17 kHz, 20 kHz] may be enhanced, that is, an amplitude of the signal in the first preset frequency band may be increased by 6 dB; and a signal in the second preset frequency band (8 kHz, 10 kHz) and (12 kHz, 17 kHz) may be weakened, that is, an amplitude of the signal in the second preset frequency band may be reduced by 6 dB.
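    The gain coefficients in these examples relate to the quoted amplitude changes in decibels through 20·log10(M); a quick check of the values used here:

```python
import math

def gain_db(m):
    # amplitude gain coefficient -> level change in dB
    return 20 * math.log10(m)

# gain 2 -> about +6 dB, gain 0.5 -> about -6 dB,
# gain 2.8 -> about +9 dB, gain 0.71 -> about -3 dB
```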

    [0090] Optionally, for example, one low-pass filter and seven band-pass filters may be used, and K is 7: F_1 = 3 kHz, F_2 = 5 kHz, F_3 = 8 kHz, F_4 = 10 kHz, F_5 = 12 kHz, F_6 = 15 kHz, F_7 = 17 kHz, and F_8 = 20 kHz; and correspondingly, M_1 = 1, M_2 = 1.8, M_3 = 2, M_4 = 0.5, M_5 = 2, M_6 = 0.8, M_7 = 0.5, and M_8 = 2. Therefore, a signal in the first preset frequency band [3 kHz, 8 kHz], [10 kHz, 12 kHz], and [17 kHz, 20 kHz] may be enhanced; and a signal in the second preset frequency band (8 kHz, 10 kHz) and (12 kHz, 17 kHz) may be weakened.

    [0091] Optionally, when the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to 45°, or when the incident angle of the sound source signal is greater than or equal to 315° and less than or equal to 360°, one low-pass filter and five band-pass filters may be used, and K is 5: F_1 = 3 kHz, F_2 = 8 kHz, F_3 = 10 kHz, F_4 = 12 kHz, F_5 = 17 kHz, and F_6 = 20 kHz; and correspondingly, M_1 = 1, M_2 = 2.8, M_3 = 0.5, M_4 = 1.4, M_5 = 0.5, and M_6 = 2. Therefore, a signal in the first preset frequency band [3 kHz, 8 kHz], [10 kHz, 12 kHz], and [17 kHz, 20 kHz] may be enhanced, that is, an amplitude of a signal in the frequency band [3 kHz, 8 kHz] may be increased by 9 dB, an amplitude of a signal in the frequency band [10 kHz, 12 kHz] may be increased by 3 dB, and an amplitude of a signal in the frequency band [17 kHz, 20 kHz] may be increased by 6 dB; and a signal in the second preset frequency band (8 kHz, 10 kHz) and (12 kHz, 17 kHz) may be weakened, that is, an amplitude of the signal in the second preset frequency band may be reduced by 6 dB. When the incident angle of the sound source signal is greater than or equal to 45° and less than or equal to 90°, or when the incident angle of the sound source signal is greater than or equal to 270° and less than or equal to 315°, one low-pass filter and five band-pass filters may be used, and K is 5: F_1 = 3 kHz, F_2 = 8 kHz, F_3 = 10 kHz, F_4 = 12 kHz, F_5 = 17 kHz, and F_6 = 20 kHz; and correspondingly, M_1 = 1, M_2 = 2.8, M_3 = 0.71, M_4 = 2, M_5 = 0.5, and M_6 = 2.8. Therefore, a signal in the first preset frequency band [3 kHz, 8 kHz], [10 kHz, 12 kHz], and [17 kHz, 20 kHz] may be enhanced, that is, an amplitude of a signal in the frequency band [3 kHz, 8 kHz] may be increased by 9 dB, an amplitude of a signal in the frequency band [10 kHz, 12 kHz] may be increased by 6 dB, and an amplitude of a signal in the frequency band [17 kHz, 20 kHz] may be increased by 9 dB; and a signal in the second preset frequency band (8 kHz, 10 kHz) and (12 kHz, 17 kHz) may be weakened, that is, an amplitude of a signal in the frequency band (8 kHz, 10 kHz) may be reduced by 3 dB, and an amplitude of a signal in the frequency band (12 kHz, 17 kHz) may be reduced by 6 dB.

    [0092] In a second application scenario, if the head phantom to which the sound image direction sense processing method provided in this embodiment of the present disclosure is applied is a KEMAR artificial head phantom, the first preset frequency band may include [3 kHz, 8.1 kHz], [10 kHz, 12.5 kHz], and [17 kHz, 20 kHz], and the second preset frequency band may include [0 kHz, 3 kHz), (8.1 kHz, 10 kHz), and (12.5 kHz, 17 kHz). Specifically, a band-pass filter whose range is the first preset frequency band may be used, where a gain coefficient of the band-pass filter is a value greater than 1. Then, convolution is performed on the band-pass filter and the left-ear channel signal or the right-ear channel signal, so that a front direction of a signal whose frequency belongs to the first preset frequency band can be enhanced. A band-pass filter whose range is the second preset frequency band may be used, where a gain coefficient of the band-pass filter is a positive value less than or equal to 1. Then, convolution is performed on the band-pass filter and the left-ear channel signal or the right-ear channel signal, so that a rear direction of a signal whose frequency belongs to the second preset frequency band can be weakened.

    [0093] Optionally, for example, one low-pass filter and five band-pass filters may be used, and K is 5: F_1 = 3 kHz, F_2 = 8.1 kHz, F_3 = 10 kHz, F_4 = 12.5 kHz, F_5 = 17 kHz, and F_6 = 20 kHz; and correspondingly, M_1 = 1, M_2 = 2, M_3 = 0.5, M_4 = 2, M_5 = 0.5, and M_6 = 2. Therefore, a signal in the first preset frequency band [3 kHz, 8.1 kHz], [10 kHz, 12.5 kHz], and [17 kHz, 20 kHz] may be enhanced, that is, an amplitude of the signal in the first preset frequency band may be increased by 6 dB; and a signal in the second preset frequency band (8.1 kHz, 10 kHz) and (12.5 kHz, 17 kHz) may be weakened, that is, an amplitude of the signal in the second preset frequency band may be reduced by 6 dB.

    [0094] According to the sound image direction sense processing method provided in this embodiment of the present disclosure, a left-ear channel signal, a right-ear channel signal, and a centered channel signal that are of a sound source are obtained; whether a direction of the sound source is a front direction is determined according to a delay difference between the left-ear channel signal and the right-ear channel signal, a delay difference between the left-ear channel signal and the centered channel signal, and a delay difference between the right-ear channel signal and the centered channel signal; and when the direction of the sound source is the front direction, at least one type of the following processing: front direction enhancing processing or rear direction weakening processing is performed separately on the left-ear channel signal and the right-ear channel signal. Therefore, a difference between front direction sense and rear direction sense of a sound image may be enlarged, so that a listener can accurately recognize that the sound source is from the front direction, thereby improving accuracy of determining a direction of a sound source.

    [0095] FIG. 4 is a schematic structural diagram of Embodiment 1 of a sound image direction sense processing apparatus according to the present disclosure. As shown in FIG. 4, the apparatus in this embodiment may include: an obtaining unit 11, a determining unit 12, and a processing unit 13. The obtaining unit 11 is configured to obtain a left-ear channel signal, a right-ear channel signal, and a centered channel signal, where the left-ear channel signal is a signal obtained by transmitting a sound source signal to a left-ear channel, the right-ear channel signal is a signal obtained by transmitting the sound source signal to a right-ear channel, and the centered channel signal is a signal obtained by transmitting the sound source signal to a centered channel. The centered channel is located in a mid-vertical plane between the left-ear channel and the right-ear channel. The determining unit 12 is configured to determine, according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal that are obtained by the obtaining unit 11, whether a direction of the sound source is a front direction, where the front direction is a direction that the centered channel faces. The processing unit 13 is configured to: when the determining unit 12 determines that the direction of the sound source is the front direction, perform at least one type of the following processing: front direction enhancing processing or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal.

    [0096] Optionally, the determining unit 12 is configured to: obtain a delay difference between the left-ear channel signal and the right-ear channel signal, a delay difference between the left-ear channel signal and the centered channel signal, and a delay difference between the right-ear channel signal and the centered channel signal according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal; and determine, according to the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal, whether the direction of the sound source is the front direction.

    [0097] Optionally, wherein the determining unit 12 is configured to obtain the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal comprises: the determining unit 12 is configured to: obtain a Fourier coefficient H_L(f) of the left-ear channel signal according to the left-ear channel signal; obtain a Fourier coefficient H_R(f) of the right-ear channel signal according to the right-ear channel signal; obtain a Fourier coefficient H_C(f) of the centered channel signal according to the centered channel signal;

    [0098] obtain a maximum value of φ.sub.LR(τ) according to

    [00037] $\varphi_{LR}(\tau) = \dfrac{\int_0^x H_L(f)\, H_R^*(f)\, e^{j2\pi f\tau}\, df}{\left\{ \left[ \int_0^x \left| H_L(f) \right|^2 df \right] \left[ \int_0^x \left| H_R(f) \right|^2 df \right] \right\}^{1/2}}$,

    and use a value of τ corresponding to the maximum value of φ.sub.LR(τ) as the delay difference between the left-ear channel signal and the right-ear channel signal;

    [0099] obtain a maximum value of φ.sub.LC(τ) according to

    [00038] $\varphi_{LC}(\tau) = \dfrac{\int_0^x H_L(f)\, H_C^*(f)\, e^{j2\pi f\tau}\, df}{\left\{ \left[ \int_0^x \left| H_L(f) \right|^2 df \right] \left[ \int_0^x \left| H_C(f) \right|^2 df \right] \right\}^{1/2}}$,

    and use a value of τ corresponding to the maximum value of φ.sub.LC(τ) as the delay difference between the left-ear channel signal and the centered channel signal; and

    [0100] obtain a maximum value of φ.sub.RC(τ) according to

    [00039] $\varphi_{RC}(\tau) = \dfrac{\int_0^x H_R(f)\, H_C^*(f)\, e^{j2\pi f\tau}\, df}{\left\{ \left[ \int_0^x \left| H_R(f) \right|^2 df \right] \left[ \int_0^x \left| H_C(f) \right|^2 df \right] \right\}^{1/2}}$,

    and use a value of τ corresponding to the maximum value of φ.sub.RC(τ) as the delay difference between the right-ear channel signal and the centered channel signal, where

    [0101] H*_R(f) and H_R(f) are conjugate, H*_C(f) and H_C(f) are conjugate, j is the imaginary unit, [0, x] represents a frequency range, and −1 ms ≤ τ ≤ 1 ms.

    [0102] Optionally, wherein the determining unit 12 is configured to determine, according to the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal, whether the direction of the sound source is the front direction comprises: the determining unit 12 is configured to:

    [0103] when

    [00040] $0 \le \dfrac{c \times ITD_{LR}}{2a} \le \dfrac{\sqrt{2}}{2}$,

    determine that an incident angle of the sound source signal is

    [00041] $\arcsin\!\left(\dfrac{c \times ITD_{LR}}{2a}\right)$,

    where if |ITD_LC| ≥ |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 0°, and is less than or equal to 45°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 135°, and is less than or equal to 180°;

    [0104] when

    [00042] $-\dfrac{\sqrt{2}}{2} \le \dfrac{c \times ITD_{LR}}{2a} \le 0$,

    determine that an incident angle of the sound source signal is

    [00043] $\arcsin\!\left(\dfrac{c \times ITD_{LR}}{2a}\right)$,

    where if |ITD_LC| ≥ |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 315°, and is less than or equal to 360°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 180°, and is less than or equal to 225°;

    [0105] when

    [00044] $\dfrac{\sqrt{2}}{2} \le \dfrac{c \times ITD_{LR}}{2a} \le 1$,

    determine that an incident angle of the sound source signal is

    [00045] $45^\circ - \arcsin\!\left(\dfrac{c \times ITD_{RC}}{2a}\right)$;

    or

    [0106] when

    [00046] $-1 \le \dfrac{c \times ITD_{LR}}{2a} \le -\dfrac{\sqrt{2}}{2}$,

    determine that an incident angle of the sound source signal is

    [00047] $\arcsin\!\left(\dfrac{c \times ITD_{LC}}{2a}\right) - 45^\circ$,

    where

    [0107] ITD_LR is the delay difference between the left-ear channel signal and the right-ear channel signal, ITD_RC is the delay difference between the right-ear channel signal and the centered channel signal, and ITD_LC is the delay difference between the left-ear channel signal and the centered channel signal, where c represents the sound speed, and a represents half of the distance between the left-ear channel and the right-ear channel; and

    [0108] determine, according to the result that the incident angle of the sound source signal is greater than or equal to 0°, and is less than or equal to 90°, or the incident angle of the sound source signal is greater than or equal to 270°, and is less than or equal to 360°, that the direction of the sound source is the front direction; or determine, according to the result that the incident angle of the sound source signal is greater than 90°, and is less than 270°, that the direction of the sound source is a rear direction, where the rear direction is a direction away from which the centered channel faces.

    [0109] Optionally, the processing unit 13 is configured to: when the determining unit 12 determines that the incident angle of the sound source signal is greater than or equal to 0°, and is less than or equal to a first preset angle, or the incident angle of the sound source signal is greater than or equal to a second preset angle, and is less than or equal to 360°, perform the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, where the first preset angle is less than 90°, and the second preset angle is greater than 270°.

    [0110] Optionally, wherein the processing unit 13 is configured to perform the front direction enhancing processing separately on the left-ear channel signal and the right-ear channel signal comprises: the processing unit 13 is configured to: separately multiply signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a first preset frequency band by a first gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the front direction enhancing processing, where the first gain coefficient is a value greater than 1, and an amplitude spectrum of a front head related transfer function (HRTF) corresponding to the first preset frequency band is greater than an amplitude spectrum of a rear HRTF corresponding to the first preset frequency band.

    [0111] In performing the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, the processing unit 13 is configured to: separately multiply signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a second preset frequency band by a second gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the rear direction weakening processing, where the second gain coefficient is a positive value less than or equal to 1, and the second preset frequency band is a frequency band other than the first preset frequency band.
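    A minimal frequency-domain sketch of both operations, assuming a single contiguous first preset frequency band and illustrative gain values (the method only requires the first gain coefficient to exceed 1 and the second to be a positive value of at most 1):

```python
import numpy as np

def apply_direction_processing(channel, fs, band_hz, g_front=1.5, g_rear=0.7):
    """Multiply FFT bins inside band_hz (the first preset frequency band,
    in Hz) by g_front > 1 (front direction enhancing) and all remaining
    bins (the second preset frequency band) by 0 < g_rear <= 1 (rear
    direction weakening); both gain values here are illustrative.
    """
    spectrum = np.fft.rfft(channel)
    freqs = np.fft.rfftfreq(len(channel), d=1.0 / fs)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    spectrum[in_band] *= g_front       # front direction enhancing
    spectrum[~in_band] *= g_rear       # rear direction weakening
    return np.fft.irfft(spectrum, n=len(channel))
```

    The same routine would be applied separately to the left-ear channel signal and the right-ear channel signal.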

    [0112] Optionally, the obtaining unit 11 is further configured to: before the processing unit 13 performs the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, obtain an average value of an amplitude spectrum of an HRTF in a front horizontal plane of a head phantom and an average value of an amplitude spectrum of an HRTF in a rear horizontal plane of the head phantom, where the head phantom is a head phantom to which the apparatus is applied; subtract the rear-plane average value from the front-plane average value, so as to obtain a front-rear difference between the two average values at each frequency; obtain an average value of the front-rear difference within the frequency range; and use, as the first preset frequency band, the frequency band in which the front-rear difference is greater than its average value within the frequency range.
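    The selection just described amounts to thresholding the front-minus-rear average HRTF amplitude difference by its own mean. A sketch, with the array layout (one row per measured direction, one column per frequency bin) assumed for illustration:

```python
import numpy as np

def first_preset_band_mask(front_mag, rear_mag):
    """front_mag / rear_mag: HRTF amplitude spectra measured in the front
    / rear horizontal plane of the head phantom (assumed layout:
    directions x frequency bins).  Returns a boolean mask that is True at
    frequency bins belonging to the first preset frequency band.
    """
    avg_front = front_mag.mean(axis=0)   # average over front directions
    avg_rear = rear_mag.mean(axis=0)     # average over rear directions
    diff = avg_front - avg_rear          # front-rear difference per bin
    return diff > diff.mean()            # bins above the average difference
```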

    [0113] The apparatus in this embodiment may be configured to execute the technical solutions in the foregoing method embodiments of the present disclosure, and implementation principles and technical effects thereof are similar and are not described herein.

    [0114] FIG. 5 is a schematic structural diagram of Embodiment 2 of a sound image direction sense processing apparatus according to the present disclosure. As shown in FIG. 5, the apparatus in this embodiment may include: a first sensor 21, a second sensor 22, a third sensor 23, a processor 24, and a memory 25. The memory 25 is configured to store code for executing a sound image direction sense processing method, and the memory 25 may include a non-volatile memory. The processor 24 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement this embodiment of the present disclosure. The first sensor 21, the second sensor 22, and the third sensor 23 are sensors configured to collect sound, for example, microphones. The first sensor 21 may be disposed, for example, in a left ear of a user; the second sensor 22 may be disposed, for example, in a right ear of the user; and the third sensor 23 may be disposed on a nose bridge of the user. The processor 24 is configured to invoke the code to execute the following operations.

    [0115] The first sensor 21 is configured to obtain a left-ear channel signal, where the left-ear channel signal is a signal obtained by transmitting a sound source signal to a left-ear channel.

    [0116] The second sensor 22 is configured to obtain a right-ear channel signal, where the right-ear channel signal is a signal obtained by transmitting the sound source signal to a right-ear channel.

    [0117] The third sensor 23 is configured to obtain a centered channel signal, where the centered channel signal is a signal obtained by transmitting the sound source signal to a centered channel, and the centered channel is located in a mid-vertical plane between the left-ear channel and the right-ear channel.

    [0118] The processor 24 is configured to: determine, according to the left-ear channel signal obtained by the first sensor 21, the right-ear channel signal obtained by the second sensor 22, and the centered channel signal obtained by the third sensor 23, whether a direction of the sound source is a front direction, where the front direction is a direction that the centered channel faces; and when the direction of the sound source is the front direction, perform at least one type of the following processing: front direction enhancing processing or rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal.

    [0119] Optionally, in determining, according to the left-ear channel signal obtained by the first sensor 21, the right-ear channel signal obtained by the second sensor 22, and the centered channel signal obtained by the third sensor 23, whether the direction of the sound source is the front direction, the processor 24 is configured to: obtain a delay difference between the left-ear channel signal and the right-ear channel signal, a delay difference between the left-ear channel signal and the centered channel signal, and a delay difference between the right-ear channel signal and the centered channel signal according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal; and determine, according to the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal, whether the direction of the sound source is the front direction.

    [0120] Optionally, in obtaining the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal, the processor 24 is configured to: obtain a Fourier coefficient H_L(f) of the left-ear channel signal according to the left-ear channel signal; obtain a Fourier coefficient H_R(f) of the right-ear channel signal according to the right-ear channel signal; obtain a Fourier coefficient H_C(f) of the centered channel signal according to the centered channel signal;

    [0121] obtain a maximum value of φ_LR(τ) according to

    [00048] φ_LR(τ) = [∫₀ˣ H_L(f)·H_R*(f)·e^(j2πfτ) df] / {[∫₀ˣ |H_L(f)|² df]·[∫₀ˣ |H_R(f)|² df]}^(1/2),

    and use a value of τ corresponding to the maximum value of φ_LR(τ) as the delay difference between the left-ear channel signal and the right-ear channel signal;

    [0122] obtain a maximum value of φ_LC(τ) according to

    [00049] φ_LC(τ) = [∫₀ˣ H_L(f)·H_C*(f)·e^(j2πfτ) df] / {[∫₀ˣ |H_L(f)|² df]·[∫₀ˣ |H_C(f)|² df]}^(1/2),

    and use a value of τ corresponding to the maximum value of φ_LC(τ) as the delay difference between the left-ear channel signal and the centered channel signal; and

    [0123] obtain a maximum value of φ_RC(τ) according to

    [00050] φ_RC(τ) = [∫₀ˣ H_R(f)·H_C*(f)·e^(j2πfτ) df] / {[∫₀ˣ |H_R(f)|² df]·[∫₀ˣ |H_C(f)|² df]}^(1/2),

    and use a value of τ corresponding to the maximum value of φ_RC(τ) as the delay difference between the right-ear channel signal and the centered channel signal, where

    [0124] H_R*(f) is the complex conjugate of H_R(f), H_C*(f) is the complex conjugate of H_C(f), j is the imaginary unit, [0, x] represents a frequency range, and −1 ms ≤ τ ≤ 1 ms.
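    A discrete-time sketch of this delay estimation: the cross-spectrum (for example H_L(f)·H_R*(f)) is inverse-transformed to evaluate φ(τ) at integer-sample lags, normalized by the channel energies (equivalent, by Parseval's theorem, to the integrals of |H(f)|²), and the lag maximizing φ within ±1 ms is returned. Function and variable names are illustrative, not taken from the disclosure.

```python
import numpy as np

def delay_difference(x, y, fs, max_lag_s=1e-3):
    """Estimate the delay of x relative to y, in seconds, by maximizing
    the normalized cross-correlation phi(tau) over -1 ms <= tau <= 1 ms."""
    n = len(x) + len(y)                    # zero-pad against circular wrap
    cross = np.fft.irfft(np.fft.rfft(x, n) * np.conj(np.fft.rfft(y, n)), n)
    norm = np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))  # energy normalization
    half = n // 2
    phi = np.concatenate((cross[-half:], cross[:half + 1])) / norm
    lags = np.arange(-half, half + 1) / fs  # candidate tau values (s)
    valid = np.abs(lags) <= max_lag_s       # restrict to +-1 ms
    return lags[valid][np.argmax(phi[valid])]
```

    Applying the routine pairwise to the left-ear, right-ear, and centered channel signals yields ITD_LR, ITD_LC, and ITD_RC.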

    [0125] Optionally, in determining, according to the delay difference between the left-ear channel signal and the right-ear channel signal, the delay difference between the left-ear channel signal and the centered channel signal, and the delay difference between the right-ear channel signal and the centered channel signal, whether the direction of the sound source is the front direction, the processor 24 is configured to:

    [0126] when

    [00051] 0 ≤ (c × ITD_LR)/(2a) ≤ √2/2,

    determine that an incident angle of the sound source signal is

    [00052] arcsin((c × ITD_LR)/(2a)),

    where if |ITD_LC| > |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to 45°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 135° and less than or equal to 180°;

    [0127] when

    [00053] −√2/2 ≤ (c × ITD_LR)/(2a) ≤ 0,

    determine that an incident angle of the sound source signal is

    [00054] arcsin((c × ITD_LR)/(2a)),

    where if |ITD_LC| > |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 315° and less than or equal to 360°; or if |ITD_LC| < |ITD_RC|, the incident angle of the sound source signal is greater than or equal to 180° and less than or equal to 225°;

    [0128] when

    [00055] √2/2 ≤ (c × ITD_LR)/(2a) ≤ 1,

    determine that an incident angle of the sound source signal is

    [00056] 45° − arcsin((c × ITD_RC)/(2a));

    or

    [0129] when

    [00057] −1 ≤ (c × ITD_LR)/(2a) ≤ −√2/2,

    determine that an incident angle of the sound source signal is

    [00058] arcsin((c × ITD_LC)/(2a)) − 45°,

    where

    [0130] ITD_LR is the delay difference between the left-ear channel signal and the right-ear channel signal, ITD_RC is the delay difference between the right-ear channel signal and the centered channel signal, and ITD_LC is the delay difference between the left-ear channel signal and the centered channel signal, where c represents the speed of sound, and a represents half of the distance between the left-ear channel and the right-ear channel; and

    [0131] determine that the direction of the sound source is the front direction when the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to 90°, or is greater than or equal to 270° and less than or equal to 360°; or determine that the direction of the sound source is a rear direction when the incident angle of the sound source signal is greater than 90° and less than 270°, where the rear direction is the direction away from which the centered channel faces.
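    The four-case mapping above can be sketched directly. The speed of sound value (343 m/s) and the behavior when |ITD_LC| exactly equals |ITD_RC| are illustrative assumptions not fixed by the text.

```python
import math

def incident_angle_deg(itd_lr, itd_lc, itd_rc, a, c=343.0):
    """Map the three delay differences (seconds) to an incident angle in
    degrees per the four cases above; a is half the distance between the
    left-ear and right-ear channels (m), c the speed of sound (assumed)."""
    s = c * itd_lr / (2 * a)
    r = math.sqrt(2) / 2
    if 0 <= s <= r:
        base = math.degrees(math.asin(s))        # in [0, 45] degrees
        return base if abs(itd_lc) > abs(itd_rc) else 180.0 - base
    if -r <= s < 0:
        base = math.degrees(math.asin(s))        # in [-45, 0) degrees
        return 360.0 + base if abs(itd_lc) > abs(itd_rc) else 180.0 - base
    if r < s <= 1:
        return 45.0 - math.degrees(math.asin(c * itd_rc / (2 * a)))
    return math.degrees(math.asin(c * itd_lc / (2 * a))) - 45.0  # -1 <= s < -r

def is_front(angle_deg):
    """Front direction: 0..90 or 270..360 degrees; otherwise rear."""
    return 0 <= angle_deg <= 90 or 270 <= angle_deg <= 360
```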

    [0132] Optionally, in performing the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, the processor 24 is configured to: when the incident angle of the sound source signal is greater than or equal to 0° and less than or equal to a first preset angle, or is greater than or equal to a second preset angle and less than or equal to 360°, perform the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, where the first preset angle is less than 90°, and the second preset angle is greater than 270°.

    [0133] Optionally, in performing the front direction enhancing processing separately on the left-ear channel signal and the right-ear channel signal, the processor 24 is configured to: separately multiply signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a first preset frequency band by a first gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the front direction enhancing processing, where the first gain coefficient is a value greater than 1, and an amplitude spectrum of a front HRTF corresponding to the first preset frequency band is greater than an amplitude spectrum of a rear HRTF corresponding to the first preset frequency band.

    [0134] In performing the rear direction weakening processing separately on the left-ear channel signal and the right-ear channel signal, the processor 24 is configured to: separately multiply signals that are in the left-ear channel signal and the right-ear channel signal and whose frequency belongs to a second preset frequency band by a second gain coefficient, so as to obtain a left-ear channel signal and a right-ear channel signal that undergo the rear direction weakening processing, where the second gain coefficient is a positive value less than or equal to 1, and the second preset frequency band is a frequency band other than the first preset frequency band.

    [0135] Optionally, the processor 24 is further configured to: before the at least one type of the following processing: the front direction enhancing processing or the rear direction weakening processing is performed separately on the left-ear channel signal and the right-ear channel signal, obtain an average value of an amplitude spectrum of an HRTF in a front horizontal plane of a head phantom and an average value of an amplitude spectrum of an HRTF in a rear horizontal plane of the head phantom, where the head phantom is a head phantom to which the apparatus is applied; subtract the rear-plane average value from the front-plane average value, so as to obtain a front-rear difference between the two average values at each frequency; obtain an average value of the front-rear difference within the frequency range; and use, as the first preset frequency band, the frequency band in which the front-rear difference is greater than its average value within the frequency range.

    [0136] The apparatus in this embodiment may be configured to execute the technical solutions in the foregoing method embodiments of the present disclosure, and implementation principles and technical effects thereof are similar and are not described herein.

    [0137] Persons of ordinary skill in the art may understand that all or some of the steps of the method embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium. When the program runs, the steps of the method embodiments are performed. The foregoing storage medium includes: any medium that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

    [0138] Finally, it should be noted that the foregoing embodiments are merely intended for describing the technical solutions of the present disclosure, but not for limiting the present disclosure. Although the present disclosure is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still make modifications to the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all technical features thereof, without departing from the scope of the technical solutions of the embodiments of the present disclosure. Additionally, statements made herein characterizing the invention refer to an embodiment of the invention and not necessarily all embodiments.