Procedure and mechanism for controlling and using voice communication
09542957 · 2017-01-10
Assignee
Inventors
CPC classification
H04B3/20
ELECTRICITY
H04M3/568
ELECTRICITY
G10L2021/02165
PHYSICS
H04R2410/07
ELECTRICITY
H04R2201/107
ELECTRICITY
International classification
G10L21/00
PHYSICS
H04M3/56
ELECTRICITY
H04B3/20
ELECTRICITY
Abstract
In a method and system for controlling voice communication of a first person with at least a second person via a communication network, a first microphone receives and converts vocal utterances from the first person to a voice signal. A first processor generates a transmission signal by processing the voice signal. A transmitter sends the transmission signal to a receiver. The receiver generates a listening signal by processing the received signal and transmits the listening signal to a speaker. The speaker converts the listening signal to an acoustic signal to be perceived by the first person. In this method a second processor generates the listening signal from the received signal by branching the voice signal and adding the branched voice signal to the received signal. The branched voice signal may be subjected to variable attenuation and/or amplification before being added to the received signal.
Claims
1. A method for controlling voice communication of a first person with at least a second person via a communication network comprising: receiving a voice signal from a first microphone, which converts vocal utterances from the first person to the voice signal; generating a transmission signal by processing the voice signal; transmitting the transmission signal to the communication network; receiving a received signal from the communication network; generating a listening signal by processing the received signal; and transmitting the listening signal to a speaker that converts the listening signal to an acoustic signal to be perceived by the first person, wherein the processing of the received signal to generate the listening signal comprises: branching the voice signal and adding the branched voice signal to the received signal; branching the transmission signal to create a branched transmission signal; subjecting the branched transmission signal to a variable echo compensation to generate an echo compensation signal matching an anticipated echo of the transmission signal contained in the received signal; and subtracting the echo compensation signal from the received signal to generate the listening signal.
2. The method of claim 1 also comprising subjecting the branched voice signal to at least one of variable attenuation and amplification before adding the branched voice signal to the received signal.
3. The method of claim 1 also comprising: receiving a general ambient signal from a second microphone that is arranged in an environment where the first person is located, and exhibits different sound acceptance characteristics than the first microphone; and processing of the voice signal to generate the transmission signal by subtracting the general ambient signal from the voice signal.
4. The method of claim 3 also comprising subjecting the general ambient signal to variable attenuation before subtracting the general ambient signal from the voice signal.
5. The method of claim 3 wherein the first microphone has a first sound acceptance direction and the second microphone has a second sound acceptance direction, the second sound acceptance direction being different from the first sound acceptance direction.
6. The method of claim 3 wherein the processing of the received signal to generate the listening signal comprises: branching the general ambient signal; and subtracting the branched general ambient signal from the received signal.
7. The method of claim 6 also comprising subjecting the branched general ambient signal to variable attenuation before subtracting the branched general ambient signal from the received signal.
8. The method of claim 3 wherein the voice signal is generated and the general ambient signal is received at essentially a same location.
9. The method of claim 8 wherein the essentially same location is near a mouth of the first person, the first microphone has a first sound acceptance direction and the second microphone has a second sound acceptance direction, the second sound acceptance direction being different from the first sound acceptance direction.
10. The method of claim 1 also comprising receiving a specific ambient signal from a third microphone, which is near an ear of the first person and is closer to the first person's ear than the second microphone, and wherein the processing of the received signal to generate the listening signal comprises subtracting an ambient signal from the received signal.
11. The method of claim 10 wherein the ambient signal is subjected to variable attenuation before the subtraction.
12. The method of claim 10 also comprising generating the listening signal through the speaker and the ambient signal through the third microphone at essentially a same location, wherein the speaker and the third microphone both have a same sound acceptance direction.
13. The method of claim 10 also comprising processing the received signal separately for each ear of the first person.
14. The method of claim 13 wherein: the speaker comprises a first speaker assigned to a first ear of the first person, and a second speaker assigned to a second ear of the first person; the listening signal comprises a first listening signal which is emitted at the first speaker and a second listening signal which is emitted at the second speaker; the third microphone comprises a first third microphone and a second third microphone; and the specific ambient signal comprises a first specific ambient signal generated by the first third microphone, and a second specific ambient signal generated by the second third microphone.
15. The method of claim 14 wherein the first specific ambient signal is variably attenuated and subtracted from the received signal to generate the first listening signal.
16. The method of claim 14 wherein the second specific ambient signal is variably attenuated and subtracted from the received signal to generate the second listening signal.
17. A communications system comprising a first microphone configured to receive vocal utterances from a first person and convert those utterances to a voice signal; a first processor connected to the first microphone, the first processor configured to receive the voice signal and generate a transmission signal by processing the voice signal; a transmitter connected to the first processor, the transmitter configured to transmit the transmission signal via a network; a receiver configured to receive a signal input from the network and output a received signal based on the received signal input from the network; a second processor connected to the receiver, the second processor configured to receive the received signal and generate a listening signal by processing the received signal; and a speaker that is connected to the second processor, the speaker configured to receive the listening signal and convert the listening signal to an acoustic signal to be perceived by the first person, wherein the second processor is configured such that the processing of the received signal by the second processor to generate the listening signal comprises: adding the voice signal branched to the second processor to the received signal to form the listening signal such that the vocal utterances from the first person are includable within the acoustic signal; branching the transmission signal to create a branched transmission signal; subjecting the branched transmission signal to a variable echo compensation to generate an echo compensation signal matching an anticipated echo of the transmission signal contained in the received signal; and subtracting the echo compensation signal from the received signal to generate the listening signal.
18. The communication system of claim 17 also comprising a speaking/listening unit that contains the speaker, the speaking/listening unit selected from the group consisting of a radiotelephone helmet, a headset, a concealed headset, an earphone, a hearing aid device and a speaker phone.
19. A communication device comprising: a first microphone configured to receive vocal utterances from a user and convert those utterances to a voice signal; a first processor connected to the first microphone, the first processor configured to receive the voice signal and generate a transmission signal by processing the voice signal; a transmitter connected to the first processor, the transmitter configured to transmit the transmission signal to a network; a receiver configured to receive an input signal from the network and output a received signal; a second processor connected to the receiver, the second processor configured to receive the received signal and generate a listening signal by processing the received signal, the processing of the received signal comprising: adding the voice signal to the received signal such that vocal utterances from the user are obtainable from the generated listening signal, branching the transmission signal to create a branched transmission signal; subjecting the branched transmission signal to a variable echo compensation to generate an echo compensation signal matching an anticipated echo of the transmission signal contained in the received signal, and subtracting the echo compensation signal from the received signal to generate the listening signal; and a speaker connected to the second processor, the speaker configured to receive the listening signal and convert the listening signal to an acoustic signal to output the acoustic signal such that the acoustic signal is perceivable by the user such that the vocal utterances are hearable by the user via the acoustic signal.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1)-(5) [Brief descriptions of the individual drawings are not reproduced in this text.]
(6) The figures are schematic and are not necessarily true to scale. The drawings and descriptions of them are only intended to be exemplary demonstrations of the principle of the invention, and they should not limit it in any way.
DESCRIPTION OF THE PREFERRED EMBODIMENT
(7) A first exemplary embodiment of this invention is illustrated in
(8) On the transmitting end 102, the voice signal S.sub.M received at the microphone input 110 is passed through a branching point 126 described further below, and afterwards is fed to an input a of an adder 120. Along with the input a, the adder 120 also has a negative (inverted) input b. This means the signal present at the negative input b is inverted before an addition, i.e., the phase is shifted by 180°. An adder with a negative input can also be considered a subtractor. The negative input b of the adder 120 is connected to an output of an attenuator 122.
(9) The attenuator 122 receives as its input the ambient signal S.sub.N received at the microphone input 112, and subjects it to an attenuation function G(f). G(f) is a frequency-dependent attenuation function G(f)=A.sub.x*E(f), where E(f) represents an audio-frequency-dependent equalization function (an equalizer, or frequency-response predistortion) that can also be programmable, and A.sub.x represents an attenuation that is constant over frequency and configurable by at least one variable x. G(f) thus combines frequency-response predistortion with a constant attenuation; overall, individual frequency ranges can also be amplified (i.e., given negative attenuation). Applying the attenuation function G(f) to the input signal, e.g., the ambient signal S.sub.N, can improve the intelligibility of speech and compensate for the room conditions. The characteristics of the attenuation function G(f) can be influenced by the control signal S.sub.c1, which can be fed in from the outside. As a result, an attenuated ambient signal S.sub.NG(f) is available at the output of the attenuator 122.
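As a rough illustration of such an attenuation function G(f)=A.sub.x*E(f), the following NumPy sketch applies a constant gain together with a band-wise equalization in the frequency domain. The function and parameter names, and the piecewise-linear band layout, are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def apply_attenuation(signal, sample_rate, a_x, eq_gains, eq_freqs):
    """Apply G(f) = A_x * E(f): a constant, configurable attenuation A_x
    combined with a frequency-dependent equalization E(f).
    eq_freqs/eq_gains define E(f) as a piecewise-linear gain curve."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    e_f = np.interp(freqs, eq_freqs, eq_gains)  # programmable equalizer E(f)
    return np.fft.irfft(spectrum * e_f * a_x, n=len(signal))
```

With a flat E(f) of 1 and A.sub.x of 0.5, this simply halves the signal; a non-flat gain curve would additionally shape the frequency response, which is where a control signal such as S.sub.c1 could intervene.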
(10) The attenuated ambient signal is passed through a branching point 128 described further below, and afterwards is fed to the negative input b of the adder 120. After the addition of the inputs a, b in the adder 120, an environment-compensated voice signal S.sub.M-S.sub.NG(f) is present at its output x, which is then subjected to an Automatic Gain Control (AGC) 124, and fed to the transmission signal output 114 as the transmission signal S.sub.out after being passed through a branching point 130 described further below. According to the above description, the transmission signal S.sub.out can be expressed with the following formula:
S.sub.out=AGC(S.sub.M-S.sub.NG(f))
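A minimal sketch of this transmit-side processing, assuming a very simple block-wise AGC that normalizes one block to a target RMS level (a real AGC adapts its gain smoothly over time) and reducing G(f) to a constant gain g:

```python
import numpy as np

def agc(signal, target_rms=0.1, eps=1e-12):
    """Toy automatic gain control: scale one block to a target RMS level."""
    rms = np.sqrt(np.mean(signal ** 2))
    return signal * (target_rms / (rms + eps))

def transmit_path(s_m, s_n, g):
    """S_out = AGC(S_M - S_N * G(f)), with G(f) reduced to a constant gain g."""
    return agc(s_m - g * s_n)
```

If the ambient microphone picks up exactly the noise component contained in the voice signal, the subtraction leaves only the speech before the gain control is applied.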
(11) The transmission signal output 114 is also an interface with a communication network (not shown in detail here) to transmit the transmission signal S.sub.out.
(12) On the receiving end 104, the received signal S.sub.in received from the communication network via the received signal input 116 is processed in three adders 140, 144, and 148, and then fed to the speaker output 118 as the listening signal S.sub.H, as described in more detail below. After being processed in the three adders 140, 144, and 148, the received signal S.sub.in is fed to the equalizer E(f), whose output is fed to the speaker output 118. The equalizer can be designed to be ear-specific, so that a user-specific hearing impairment, e.g., of a person wearing a hearing aid, or another type of hearing impairment (loss of hearing sensitivity in higher frequency ranges due to age or after an accident, chronic hearing damage from listening to music too loudly as a child, etc.) can be compensated with this equalizer to improve the intelligibility of speech. To compensate a user-specific hearing impairment, the equalizer function E(f) can be defined by measuring the hearing spectrum of the user, called calibrating for short. The calibration can be conducted in the same way as adjusting a hearing aid. Alternatively, preset frequency responses/frequency-response curves are conceivable, from which the user can select at least one.
(13) First, the received signal S.sub.in received at the received signal input 116 is fed to a first input a of the adder 140. The adder 140 has two positive inputs a, b. The second input b of the adder 140 is connected to an output signal of an attenuator 142.
(14) The attenuator 142 receives the microphone signal S.sub.M tapped (branched) at the branching point 126 on the transmitting end 102, and subjects it to an attenuation factor R.sub.1 that can be influenced by a control signal S.sub.c2 that can be fed in from the outside. In other words, an attenuated voice signal S.sub.MR.sub.1 is present at the output of the attenuator 142.
(15) The attenuated voice signal is fed to the second input b of the adder 140, and added to the received signal S.sub.in present at the first input a. There is then an addition signal S.sub.in+S.sub.MR.sub.1 present at the output x of the adder 140.
(16) The addition signal is fed to the first input a of the next adder 144. The second input b of the adder 144 is a negative input that is connected to an output of an echo compensator (EC) 146.
(17) The echo compensator 146 receives the transmission signal S.sub.out tapped (branched) at the branching point 130 on the transmitting end 102, and processes it so that an echo compensation signal S.sub.EC output as the result corresponds to an anticipated echo of the transmission signal S.sub.out in the received signal S.sub.in. To do this the echo compensator 146 subjects the tapped transmission signal S.sub.out to a preset delay and attenuation, as is already known by itself in the art.
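Under the assumption stated here, namely that the echo estimate is just a preset delay plus a preset attenuation of the tapped transmission signal, the echo compensation can be sketched as follows (the names are illustrative; practical echo cancellers typically use adaptive filters instead of fixed parameters):

```python
import numpy as np

def echo_compensation(s_out, delay_samples, attenuation):
    """EC(S_out): model the anticipated echo of the transmission signal
    as a preset delay plus a preset attenuation."""
    echo = np.zeros_like(s_out)
    if delay_samples < len(s_out):
        echo[delay_samples:] = attenuation * s_out[:len(s_out) - delay_samples]
    return echo

def remove_echo(s_in, s_out, delay_samples, attenuation):
    """Adder 144: subtract the echo-compensation signal from the received signal."""
    return s_in - echo_compensation(s_out, delay_samples, attenuation)
```

When the delay and attenuation match the actual echo path, the subtraction cancels the echo component of the received signal completely.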
(18) The echo compensation signal S.sub.EC output from the echo compensator 146 is fed to the negative input b of the adder 144, and subtracted from the addition signal present at the positive input a. Accordingly, there is an echo-compensated addition signal S.sub.in+S.sub.MR.sub.1-EC(S.sub.out) present at the output x of the adder 144.
(19) The echo-compensated addition signal is fed to the input a of the last adder 148. The second input b of the adder 148 is again a negative input that is connected to an output of another attenuator 150.
(20) The attenuator 150 receives the attenuated ambient signal S.sub.NG(f) tapped at the branching point 128 on the transmitting end 102, and subjects it to an attenuation factor R.sub.2. The attenuation factor R.sub.2 can be influenced by a control signal S.sub.c3 that can be fed in from the outside. The now twice-attenuated ambient signal S.sub.NG(f)R.sub.2 is fed to the negative input b of the adder 148, and subtracted from the echo-compensated addition signal present at the positive input a. Therefore, a signal is present at the output x of the adder 148 that can then optionally be fed to an equalizer E.sub.ind(f) that is customized to the hearing of the user/headset wearer to balance out any hearing impairment on the part of the user. The output of the optional equalizer, or the output x of the adder 148, is then fed as the listening signal S.sub.H to the speaker output 118. According to the above description, the listening signal S.sub.H can be expressed with the following formula:
(21) S.sub.H=E.sub.ind(f)(S.sub.in+S.sub.MR.sub.1-S.sub.NG(f)R.sub.2-EC(S.sub.out))
(22) where, without balancing out the user's hearing impairment, the equalizer function E.sub.ind(f) is set to 1.
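The complete receive-side combination of paragraphs (13) through (20) can be sketched as a single function. For brevity, the attenuations are reduced to constant factors, the already-attenuated ambient signal S.sub.NG(f) and the echo-compensation signal EC(S.sub.out) are passed in precomputed, and the per-user equalizer is an optional callable (identity when absent); all names are illustrative:

```python
import numpy as np

def listening_signal(s_in, s_m, s_n_g, ec, r1, r2, e_ind=None):
    """S_H = E_ind(f) * (S_in + S_M*R_1 - S_N*G(f)*R_2 - EC(S_out)).
    s_n_g is the pre-attenuated ambient signal S_N*G(f); e_ind is an
    optional per-user equalizer callable."""
    s_h = s_in + r1 * s_m - r2 * s_n_g - ec   # adders 140, 148 and 144
    return e_ind(s_h) if e_ind is not None else s_h
```

The three adders of the block diagram collapse here into one arithmetic expression; in hardware they would be separate summing stages fed by the attenuators 142 and 150 and the echo compensator 146.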
(23)
(24) According to the diagram in
(25) On the receiving end 204, a received signal S.sub.in is received from the communication network via the received signal input 216, whereupon the received signal S.sub.in, in contrast to the first exemplary embodiment, is a stereo received signal, including a left and a right channel, and the received signal input 216 is therefore also designed as a stereo input.
(26) The stereo received signal S.sub.in is first fed to an adder 240, which differs from the adder 140 in the first exemplary embodiment in that it has a stereo input ab, an addition input c, and a stereo output xy. The addition takes place in such a way that the signal present at the addition input c is added to both channels of the stereo signal present at the stereo input ab. As described, the received signal S.sub.in received via the received signal input 216 on the receiving end 204 is present at the stereo input ab. As in the first exemplary embodiment, the attenuated voice signal, attenuated through the attenuator 142 by the attenuation factor R.sub.1 that can be influenced by the control signal S.sub.c2, is present at the addition input c of the adder 240. Thus, an output signal S.sub.in+S.sub.MR.sub.1 is present at the output xy of the adder 240, which is fed to a stereo input ab of another adder 244.
(27) The adder 244 differs from the adder 144 of the first exemplary embodiment only in its stereo design. Thus, along with its stereo input ab, it also has a negative input c and a stereo output xy. As in the first exemplary embodiment, the echo compensation signal S.sub.EC=EC(S.sub.out) generated by the echo compensator 146 is fed to the negative input c of the adder 244. In contrast with the first exemplary embodiment, the characteristics of the echo compensator 146 can here also be influenced by another control signal S.sub.c4. Thus, there is an echo-compensated addition signal S.sub.in+S.sub.MR.sub.1-EC(S.sub.out) at the output xy of the adder 244, which is fed to a stereo input ab of a splitter 252.
(28) The splitter 252 separates the stereo received signal present at the input ab into separate mono outputs l and r, which are then processed in separate signal paths. There is a processing path emanating from output l for a left listening channel, and a processing path emanating from output r for a right listening channel.
(29) In addition, along with the speaker output 118 that carries a left listening signal S.sub.H,l for the speaker 18, which is considered the left speaker 18 here, the receiving end 204 of the signal processing block 200 in this exemplary embodiment also has another speaker output 229, where a right listening signal S.sub.H,r is present for a right speaker 29. A left ear sound microphone 21 and a right ear sound microphone 23 are also included. A signal generated by the left ear sound microphone 21 is received in the signal processing block at the left microphone input 221 as a left ear signal S.sub.N,l, and fed to an attenuator 254. The attenuator 254 provides an attenuation function G.sub.l(f), whose characteristics can be influenced by a control signal S.sub.c5. Likewise, a signal generated by the right ear sound microphone 23 is received at a right microphone input 223 as the right ear signal S.sub.N,r, and fed to an attenuator 255, where it is subjected to an attenuation function G.sub.r(f) with characteristics that can be influenced by another control signal S.sub.c6. To differentiate them from the general ambient signal S.sub.N, the left and right ear signals S.sub.N,l, S.sub.N,r can also be considered the left and right specific ambient signals S.sub.N,l, S.sub.N,r. The microphone input 112 can also be considered the general ambient microphone input 112 or the general ambient signal input 112; the left and right microphone inputs 221, 223 can also be considered the left/right ear signal inputs 221, 223, the left/right specific microphone inputs 221, 223, or the left/right specific ambient signal inputs 221, 223.
(30) If the received signal S.sub.in is considered a combined signal with the parts S.sub.in,l, S.sub.in,r for the left and right channel, then according to the description above, a left echo-compensated addition signal S.sub.in,l+S.sub.MR.sub.1-EC(S.sub.out) is present at the left output l of the splitter 252, and a right echo-compensated addition signal S.sub.in,r+S.sub.MR.sub.1-EC(S.sub.out) is present at the right output r of the splitter 252. The left echo-compensated addition signal is then fed to the first input a of the adder 148, which corresponds to the adder 148 of the first exemplary embodiment. Likewise, the right echo-compensated addition signal is fed to a first input a of another adder 249, which matches the adder 148 in design.
(31) On the left side, the left ear signal S.sub.N,lG.sub.l(f) attenuated by the attenuator 254 is now fed to the negative input b of the adder 148, and as described in the first exemplary embodiment, the output signal of the adder 148 is optionally fed to a custom equalizer for the left ear, E.sub.ind,l(f), which balances out any hearing impairment of the left ear. The output of the optional equalizer, or the output x of the adder 148, is then fed to the speaker output 118 as the (here: left) listening signal S.sub.H,l. Similarly, on the right side, the right ear signal S.sub.N,rG.sub.r(f) attenuated by the attenuator 255 is fed to the negative input b of the adder 249, and a signal present at the output x of the adder 249 is optionally fed to a custom equalizer for the right ear, E.sub.ind,r(f), which balances out any hearing impairment of the right ear. The output of the optional equalizer, or the output x of the adder 249, is then fed as the right listening signal S.sub.H,r to the speaker output 229.
(32) As can be seen in the above description, the left listening signal S.sub.H,l can be expressed with the following formula:
(33) S.sub.H,l=E.sub.ind,l(f)(S.sub.in,l+S.sub.MR.sub.1-S.sub.N,lG.sub.l(f)-EC(S.sub.out))
(34) and the right listening signal S.sub.H,r can be expressed with the following formula:
(35) S.sub.H,r=E.sub.ind,r(f)(S.sub.in,r+S.sub.MR.sub.1-S.sub.N,rG.sub.r(f)-EC(S.sub.out))
(36) where, without balancing out the hearing impairment of the left and/or right ear, the functions E.sub.ind,l(f) and/or E.sub.ind,r(f) are set to 1. Balancing out the hearing impairment of the left and/or right ear provides for improved localization of a noise source, like a car, for example, which improves the comprehensibility of speech.
(37) As indicated by the dash-dotted lines in
(38) In the first as well as the second exemplary embodiment, each of the elements shown in the signal processing block 100, 200 can be interpreted as components (circuitry, wiring, solder points, etc.) of a physically realized circuit arrangement or as a processing step of a signal processing procedure.
(39)
(40) In detail, according to the diagram in
(41) A microphone housing 350 is located in the vicinity of the headset 330, either on a wall or on a desk or the like, for example. The microphone housing 350 houses the ambient microphone 12 (see also
(42) The speaker wire 318, the microphone wire 310, and the ambient signal wire 312 all terminate in a switch box 370. More precisely, the microphone cable with the ambient signal wire 354 is connected to a cable connector 372 of the switch box 370, and the headphone cable with the microphone wire (also considered the voice wire) 310 and the speaker wire 318 is connected to a cable connector 374 of the switch box 370. In addition, a conference connection cable 380 with a transmitting wire and a receiving wire (neither shown in detail here) is connected to a cable connector 376 of the switch box 370.
(43) As shown in
(44) In addition, the switch box 370 has three control dials 378 that generate control signals S.sub.c1, S.sub.c2, and S.sub.c3 upon being rotated or based on their position. The control signals S.sub.c1, S.sub.c2, and S.sub.c3 are routed to the signal processing block 100 via terminals that are not shown in detail here.
(45) The signal processing block 100 with its inputs and outputs 110, 112, 114, 116, and 118, the voice microphone 10, the ambient microphone 12, the speaker 18, and the signals S.sub.M, S.sub.H, S.sub.N, S.sub.in, S.sub.out, S.sub.c1 through S.sub.c3 fully correspond in meaning, design, and effect to the diagrams and descriptions with relation to the first exemplary embodiment as per
(46) As can be seen in
S.sub.out=AGC(S.sub.M-S.sub.NG(f))
(47) and the listening signal S.sub.H according to the following formula:
S.sub.H=S.sub.in+S.sub.MR.sub.1-S.sub.NR.sub.2G(f)-EC(S.sub.out).
(48) In other words, the voice signal S.sub.M recorded via the voice microphone 10 is processed into the transmission signal S.sub.out by subtracting from it the ambient signal S.sub.N, generated by the ambient microphone 12 and attenuated with a suitable attenuation function G(f), and finally subjecting the result to an Automatic Gain Control (AGC). On the receiving end, the received signal S.sub.in is processed into the listening signal S.sub.H by adding the voice signal S.sub.M, weighted with a suitable attenuation or amplification factor R.sub.1, to the received signal S.sub.in, and removing the suitably attenuated ambient signal S.sub.N. The echo compensation is designed in such a way that the transmission signal S.sub.out from this end is subtracted from the received signal S.sub.in after an appropriate delay and attenuation, in order to suppress any echo of this end's transmission signal S.sub.out contained in the received signal S.sub.in.
(49) This provides the user or wearer of the headset 330 with an acceptable auditory impression of his or her own voice, even in a loud environment. In the process, the ambient noise and his or her own voice can be attenuated differently (the voice can also be amplified) based on the situation, so the ambient noise does not have to be completely muted for the wearer. Otherwise, the wearer can use the control dial to attenuate the ambient noise to the extent that it essentially doesn't distract from the conversation.
(50) The headset 330 according to this exemplary embodiment can be used for a variety of applications, like according to the description above for teleconferencing or in a conference system with a variety of participants. However, the application is not limited to this; rather, it also includes applications on a headset for cellular phones or a radio, for the workstation of a simultaneous interpreter, a sport commentator in a stadium or another sports venue, a journalist or correspondent in a loud environment or comparable situations, a speaker/translator booth, a broadcast vehicle, a switchboard, etc.
(51) The earpiece 332 can be noise isolating, and an earmuff can be included at the pressure piece 335 or in place of it. In this case, the described arrangement is also suitable for use in a very loud environment like a helicopter or aircraft cockpit, construction equipment or the like, in loud industrial environments, in nightclubs, etc.
(52) In a variation, a speaker can be included at the second ear, so a single-channel received signal S.sub.in can be heard the same in both ears, or a stereo received signal S.sub.in can be divided among the two ears/speakers after being processed as described.
(53) In place of cable connections 340, 360, 380, wireless connections like Bluetooth, infrared, ultrasound, or other wireless standards can be used.
(54) The switch box 370 can include an arrangement of multiple signal processing blocks 100 to process signals from a variety of conference participants. Here, the received and transmission signal terminals 114, 116 can be connected to a conference control module that can be considered a communication network.
(55) As a fourth exemplary embodiment of this invention,
(56) The headset 400 has an earpiece 430 with a housing 432 and an ear adapter 434, whereupon the housing 432 holds the speaker 18, and whereupon the ear adapter 434 is designed to insert into the ear canal of the ear of the person wearing the headset 400. An air duct 436 stretches from the speaker 18 in the housing 432 through the ear adapter 434 so the sound waves emitted by the speaker 18 can be transmitted unobstructed to the ear canal of the wearer.
(57) A microphone arm 450 can swivel via a hinge 440 connected to the housing 432 of the earpiece 430. The microphone arm 450 has a microphone mount 452 and an arm 454 that connects the microphone mount 452 to the hinge 440. The microphone mount 452 holds the voice microphone 10 and the ambient microphone 12. The wall of the microphone mount 452 features perforations or cut-outs 452a, 452b that make it easier for sound to reach the voice microphone 10 or the ambient microphone 12. The voice microphone 10 and the ambient microphone 12 are designed as a double-microphone unit with opposite directional characteristics (i.e., opposite sound acceptance directions). The perforations 452a, 452b are positioned at least approximately along a continuation of the sound acceptance directions of the microphones 10, 12, and aid their directivity. The sound acceptance direction of the voice microphone 10 and the associated perforations 452a face the anticipated mouth position of the wearer of the headset 400, while the sound acceptance direction of the ambient microphone 12 and the associated perforations 452b face the opposite direction. This arrangement also ensures that the voice microphone 10 favorably captures the voice sound of a wearer of the headset 400 (naturally including ambient noise), while the ambient microphone 12 captures the ambient sound, with the voice sound of the wearer being specifically faded out or shielded from this microphone.
(58) The headset also has a rear earpiece 460 and a connecting piece 470. The connecting piece 470 connects the rear earpiece 460 with the earpiece 430. The connecting piece 470 and the rear earpiece 460 are designed so the rear earpiece 460 can be worn comfortably behind the ear of the wearer, while the connecting piece 470 stretches above the ear or rests against a top edge of the ear when the earpiece 430 is placed in the wearer's ear. Incidentally, without limiting their universality, the earpiece 430, the connecting piece 470, and the rear earpiece 460 are designed as one piece.
(59) The rear earpiece 460 includes a switch module 480, which has an antenna block 482, a control signal block 484, and the signal processing block 100. The antenna block 482 is designed and equipped to send and receive signals via a radio interface with a receiver like a cell phone or other device mentioned above. A radio connection from the antenna block 482 to a receiver is represented by a dashed line and labeled KOM.
(60) The signal processing block 100 is shown in detail in
(61) As shown in
(62) The processing of the voice signal S.sub.M into the transmission signal S.sub.out and the processing of the received signal S.sub.in into the listening signal S.sub.H correspond to the processing procedures described for the first and the third exemplary embodiments, to which reference is made in this respect.
(63) As a fifth exemplary embodiment of this invention,
(64) The stereo headset 500 includes a left listening unit 530, a right listening unit 540, and a voice unit 550. The voice unit 550 includes a housing 552, which holds the control board 560. The control board 560 bears the signal processing block 200. Microphones 10, 12, 21, 23 and speakers 18, 29 (see also
(65) The left listening unit 530 includes an earpiece 532 that can be inserted into the (in this design, left) ear canal of the person wearing the stereo headset 500, and a grip 534 formed in one piece with the earpiece 532, which can be grasped from the outside when the earpiece 532 is inserted in the ear canal. The left listening unit 530 houses the (left) speaker 18 and the left ear sound microphone 21. A left earpiece cable 536 stretches between a grommet-like extension of the grip 534 on the left listening unit 530 and a strain relief 554 on the voice unit 550. The left earpiece cable 536 includes a speaker wire that connects the left speaker 18 with the speaker terminal 118 on the signal processing block 200, and a microphone wire that connects the left ear sound microphone 21 with the microphone terminal 221 on the signal processing block 200, so a left ear signal S.sub.N,l generated by the left ear sound microphone 21 can be fed to the microphone input 221 of the signal processing block 200, and a left listening signal S.sub.H,l generated by the signal processing block 200 can be fed from the left speaker output 118 to the left speaker 18.
(66) Likewise, the right listening unit 540 includes an earpiece 542 and a grip 544, and the right speaker 29 and the right ear sound microphone 23 are housed in the right listening unit 540. A right earpiece cable 546 stretches between a grommet-like extension of the grip 544 on the right listening unit 540 and a strain relief 554 on the voice unit 550. The right earpiece cable 546 includes a speaker wire that connects the right speaker 29 with the speaker terminal 229 on the signal processing block 200, and a microphone wire that connects the right ear sound microphone 23 with the microphone terminal 223 on the signal processing block 200, so a right ear signal S.sub.N,r generated by the right ear sound microphone 23 can be fed to the microphone input 223 of the signal processing block 200, and a right listening signal S.sub.H,r generated by the signal processing block 200 can be fed from the right speaker output 229 to the right speaker 29. The left earpiece cable 536 and the right earpiece cable 546 are gathered in a bundling ring 539 that surrounds the cables 536, 546 tightly but still allows movement.
(67) On the control board 560 inside the voice unit 550, the voice microphone 10 and the ambient microphone 12 are fastened such that the voice microphone 10 is located near housing cut-outs or perforations 552a at the top end of the housing 552 and the general ambient microphone 12 is located near housing cut-outs or perforations 552b at the bottom end of the housing 552. These microphones 10, 12 are arranged such that their sound acceptance directions point toward the respective perforations 552a, 552b. This arrangement also ensures that the voice microphone 10 favorably captures the voice sound of a wearer of the headset 500 (including ambient noise, naturally), while the ambient microphone 12 captures the ambient sound, but the voice sound of the wearer is specifically faded out. The voice microphone 10 is connected directly to the microphone input 110 of the signal processing block 200 via a wire, and the ambient microphone 12 is connected directly to the microphone input 112 of the signal processing block 200 via a wire so a voice signal S.sub.M generated by the voice microphone 10 is fed to the microphone input 110, and the general ambient signal S.sub.N is fed to the microphone input 112.
(68) A connection cable 570 is fed into the voice unit 550 via a strain relief 555 on the voice unit 550. The connection cable 570 has a single-wire output line connected to the transmission signal output 114 of the signal processing block 200, and a two-wire received signal line connected to the stereo received signal terminal 216 of the signal processing block 200. The connection cable 570 ends in a plug 572 that, without limiting its generality, is a four-pin jack plug. A four-pin jack plug is very common for stereo headsets, and it can be wired with a left input signal at the tip, a right input signal at the contact ring directly next to the tip, an output signal at the second or third contact ring, and a ground at the remaining contact ring. This allows the connection cable 570 to exchange the two-channel received signal S.sub.in and the transmission signal S.sub.out with a receiver (not shown in detail here), as described for the second exemplary embodiment.
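The contact assignment just described can be captured in a small lookup table. The following is only an illustrative sketch: the name `PLUG_572_WIRING` and the contact labels are assumptions, and since the description leaves open whether the output signal sits on the second or the third contact, one of the two permissible variants is fixed here.

```python
# Hypothetical mapping of the four contacts of plug 572 to signals.
# The output signal may alternatively sit on the third contact, with
# ground on the second; this sketch fixes one of the two variants.
PLUG_572_WIRING = {
    "tip":    "left received signal S_in,l",
    "ring_1": "right received signal S_in,r",
    "ring_2": "transmission signal S_out",
    "sleeve": "ground",
}

def signal_at(contact: str) -> str:
    """Look up which signal a given plug contact carries."""
    return PLUG_572_WIRING[contact]
```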
(69) As shown in
(70) Design and functionality of the signal processing block 200, the microphones 10, 12, 21, 23 and the speakers 18, 29, as well as the effects that they can achieve were shown in
S.sub.out=AGC(S.sub.M-S.sub.N·G(f))
(71) and the received signal S.sub.in is processed into the left and right listening signals S.sub.H,l and S.sub.H,r, where the processing can be expressed by the following formulas:
S.sub.H,l=S.sub.in,l+S.sub.M·R.sub.1-EC(S.sub.out)-S.sub.N,l·G.sub.l(f)
and
S.sub.H,r=S.sub.in,r+S.sub.M·R.sub.1-EC(S.sub.out)-S.sub.N,r·G.sub.r(f)
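The formulas above can be sketched numerically. This is a minimal illustration under stated assumptions, not the patented implementation: the realization of the attenuation function G(f) as an FIR filter, the block-RMS AGC, and the fixed delayed-copy model of the echo compensator EC are all choices made solely to obtain a runnable example, and the function names are invented.

```python
import numpy as np

def transmit_path(s_m, s_n, g_fir, target_rms=0.1, eps=1e-9):
    """Sketch of S_out = AGC(S_M - S_N * G(f)) for one block of samples."""
    # Shape the ambient signal with G(f), modelled here as an FIR filter
    # (attenuator 122), then subtract it from the voice signal (adder 120).
    ambient = np.convolve(s_n, g_fir)[: len(s_m)]
    diff = s_m - ambient
    # Simple block AGC (124): scale the block toward a target RMS level.
    rms = np.sqrt(np.mean(diff ** 2)) + eps
    return diff * (target_rms / rms)

def listening_path(s_in, s_m, s_out, s_n_ear, r1, g_fir,
                   echo_delay=32, echo_gain=0.2):
    """Sketch of S_H = S_in + S_M*R_1 - EC(S_out) - S_N,ear*G(f),
    one channel (left or right)."""
    n = len(s_in)
    # Sidetone: branched voice signal, attenuated by R_1 (attenuator 142).
    sidetone = r1 * s_m[:n]
    # Echo compensation EC (146), modelled as a fixed delayed, attenuated
    # copy of the transmission signal for illustration only.
    echo_est = np.zeros(n)
    echo_est[echo_delay:] = echo_gain * s_out[: n - echo_delay]
    # Specific ambient (ear) signal shaped by G_l(f)/G_r(f) (254/255).
    ambient = np.convolve(s_n_ear, g_fir)[:n]
    return s_in + sidetone - echo_est - ambient
```

Because the AGC normalizes each block, the transmission signal comes out at the target level regardless of the input level; and if the received signal consisted only of the modelled echo, the compensator would cancel it completely.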
(72) This invention was described and illustrated in the drawings above using preferred exemplary embodiments. However, it must be noted that this invention is defined solely by the independent patent claims, and the above exemplary embodiments, variations, and refinements serve only as illustrative examples. Not all of the elements described above are necessary for practicing this invention, to the extent that they are not contained as mandatory features in at least one independent claim. Instead of being variable, one or all of the attenuators and the echo compensators can have fixed preset characteristics. Input amplifiers can be assigned to the signal inputs, and output amplifiers to the listening signal outputs.
(73) For the purposes of this invention, the signal processing blocks 100, 200 each correspond to a procedure or a mechanism for controlling voice communication of a first person with at least a second person via a communication network; the transmitting ends 102, 202 each correspond to the step of generating a transmission signal by processing a voice signal; the receiving ends 104, 204 each correspond to the step of generating a listening signal by processing the received signal; the ambient signal S.sub.N corresponds to a general ambient signal; the left and right ear signals S.sub.N,l and S.sub.N,r correspond to a specific ambient signal; the voice microphone 10 corresponds to a first microphone; the microphone input 110 corresponds to the step of receiving a voice signal; the transmission signal output 114 corresponds to the step of transmitting the transmission signal to the communication network; the received signal inputs 116, 216 correspond to the step of receiving a received signal from the communication network; the speaker outputs 118, 229 correspond to the step of transmitting the listening signal to a speaker; the branching points 126, 128, 130 correspond to the step of branching; the adders 120, 140, 144, 240, 244, 148, 249 correspond to the step of adding signals (or subtracting signals if a signal input of the adder is negative); the attenuators 122, 142, 150, 254, 255 correspond to the step of subjecting a signal to attenuation or amplification; the echo compensator 146 corresponds to the step of subjecting a signal to echo compensation; the microphone input 112 corresponds to the step of receiving a general ambient signal; the (general) ambient microphone 12 corresponds to a second microphone; the ear sound microphones 21, 23 correspond to a third microphone; the microphone inputs 221, 223 correspond to the step of receiving a specific ambient signal; and the control signals represent a variability of attenuation, amplification, or delay properties.
(74) In additional variations of this invention not shown in the drawings, to compensate for the ambient signal in the listening signal, the ambient signal (general ambient signal) S.sub.N in
(75) The characteristics of the invention described in reference to the illustrated embodiments can also be present in other embodiments of the invention, unless otherwise indicated or intrinsically prohibited for technical reasons.
LIST OF REFERENCE NUMBERS AND SYMBOLS
(76)
10 Voice microphone
12 (General) ambient microphone
18 Speaker, single or left
21 Ear sound microphone (specific ambient microphone), left
23 Ear sound microphone (specific ambient microphone), right
29 Speaker, right
100, 200 Signal processing block
102, 202 Transmitting end
104, 204 Receiving end
110, 112, 221, 223 Microphone inputs
114 Transmission signal output
116, 216 Received signal input
118, 229 Speaker outputs
120, 144, 148, 249 Adder, subtracting
122 Attenuator G(f)
124 Automatic Gain Control (AGC)
140, 240 Adder, adding
142 Attenuator R.sub.1
146 Echo compensator
150 Attenuator R.sub.2
252 Splitter (SPLT)
254 Attenuator, left channel G.sub.l(f)
255 Attenuator, right channel G.sub.r(f)
310 Voice signal wire
312 Ambient signal wire
318 Listening signal wire
330 Headset
332 Earpiece
334 Headband
335 Pressure piece
336 Microphone arm
337 Microphone mount
338 Windscreen/pop filter
339 Strain relief
340 Headphone cable
350 Microphone housing
352 Cable bushing
360 Microphone cable
370 Switch box
372-376 Cable connectors
378 Control dial
380 Conference connection cable
400 Headset
410 Voice microphone wire
412 Ambient microphone wire
418 Speaker wire
430 Earpiece
432 Housing
434 Ear adapter
436 Air duct
440 Hinge
450 Microphone arm
452 Microphone mount
452a, 452b Perforation
454 Arm
460 Rear earpiece
470 Connecting piece
480 Switch module
482 Antenna block
484 Control signal block
490 Button panel
492 Control signal wire
500 Headset
530 Listening unit, left
532 Earpiece
534 Grip
536 Earpiece cable, left
539 Bundling ring
540 Listening unit, right
542 Earpiece
544 Grip
546 Earpiece cable, right
550 Voice unit
552 Housing
552a, 552b Perforation
554, 555 Cable bushing (strain relief)
556, 557 Button
558 Adjustment wheel
560 Control board
570 Connection cable
572 Plug
a, b, ab Signal inputs
x Signal output
AGC Automatic Gain Control
EC Echo Cancellation
G(f), G.sub.l(f), G.sub.r(f) Attenuation function
R.sub.1, R.sub.2 Attenuation values
S.sub.c, S.sub.c1-S.sub.c6 Control signals
S.sub.EC Echo compensation signal
S.sub.H Listening signal
S.sub.H,l Listening signal, left
S.sub.H,r Listening signal, right
S.sub.in Received signal
S.sub.M Voice signal
S.sub.N (General) ambient signal
S.sub.N,l Ear signal (specific ambient signal), left
S.sub.N,r Ear signal (specific ambient signal), right
S.sub.out Transmission signal
(77) The above list of reference numbers and symbols is an integral part of the description.