Active reduction of noise using synchronization signals
10347235 · 2019-07-09
Inventors
CPC classification
G10K2210/108 (PHYSICS)
G10K11/17881 (PHYSICS)
G10K11/17885 (PHYSICS)
G10K11/17815 (PHYSICS)
G10K11/17837 (PHYSICS)
G10K2210/3216 (PHYSICS)
International classification
Abstract
Method and system for active reduction of a predefined audio acoustic signal (AAAS), also referred to as noise, in a quiet zone, without interfering with undefined acoustic noise signals either within or outside the quiet zone, by generating an accurate antiphase AAAS signal. The accuracy of the generated antiphase AAAS is obtained by employing a unique synchronization signal (SYNC), which is generated and combined with the predefined AAAS. The combined signal is transmitted electrically (referred to as the electric channel) to a processing quieting component. Simultaneously, the generated SYNC signal is acoustically broadcast near the predefined AAAS and merges with it. A microphone in the quiet zone receives the merged acoustic signals that arrive via the air (referred to as the acoustical channel), and a receiver in the quieting component receives the combined electrical AAAS and SYNC signal, which arrives at the quiet zone by wire or wirelessly. In the quieting component the SYNC is detected in both the electrical and acoustical channels; the detected SYNC signals, together with the electrically received AAAS signal, are used to calculate the timing and momentary amplitude for generating an accurate acoustic antiphase AAAS signal that cancels the acoustic predefined AAAS. Continuously and periodically updating the SYNC signal enables dynamic evaluation of acoustical environmental distortions that might appear due to echo, reverberation, non-linear frequency response, or other distortion mechanisms.
Claims
1. A method comprising: acquiring noise from a noise source; receiving a digitized version of the acquired noise; generating a synchronization signal; digitally combining the synchronization signal with the digitized version of the acquired noise; acoustically broadcasting the synchronization signal by a loudspeaker positioned in close proximity to the noise source and being directed towards the predefined zone, such that the broadcasted synchronization signal and the noise are acoustically combined; acquiring, using a microphone positioned at the predefined zone: a) the acoustically-combined noise and broadcasted synchronization signal, and b) ambient noise at the predefined zone; separating the broadcasted synchronization signal from the acquired (a) and (b); calculating an antiphase signal based on: c) the digitally-combined synchronization signal and digitized version of the noise, d) the acquired acoustically-combined noise and broadcasted synchronization signal, and e) the separated broadcasted synchronization signal; and acoustically broadcasting the antiphase signal using a loudspeaker, so as to substantially attenuate the noise as heard at the predefined zone.
2. The method according to claim 1, wherein said acquisition of the noise from the noise source is performed using a microphone positioned close to the noise source.
3. The method according to claim 1, wherein the calculation of the antiphase signal comprises calculating a distortion of an acoustical path between the noise source and the predefined zone, based on differences between the acquired synchronization signal and the generated synchronization signal.
4. The method according to claim 1, wherein: the synchronization signal comprises consecutive packages separated by predefined time intervals; each of the packages comprises a series of wave cycles that have a same amplitude; and each of the packages has a constant audio frequency.
5. The method according to claim 1, wherein the synchronization signal comprises consecutive packages, and wherein each of the packages contains at least one of: a digitally-coded definition of a beginning of the respective package; a digitally-coded counter that is indicative of the position of the respective package among the consecutive packages; and digitally-coded information on an audio frequency of the respective package.
6. The method according to claim 5, further comprising: calculating an exact moment to acoustically broadcast the antiphase signal, based on a delay between the acoustic broadcast of the synchronization signal, and the acquisition of (a).
7. The method according to claim 6, wherein the delay is determined according to the digitally-coded definition of the beginning of the respective package.
8. The method according to claim 1, wherein the broadcasted synchronization signal has a lower amplitude than the noise.
9. The method according to claim 1, wherein said separation of the broadcasted synchronization signal from the acquired (a) and (b) is performed using a narrow band pass filter centered at an audio frequency of the synchronization signal.
10. The method according to claim 1, further comprising a step of calibration, before the noise is present, by generating white noise and performing the steps of claim 1 based on the white noise in lieu of the noise.
11. A system comprising a processor that is configured to cause execution of the following steps: acquire noise from a noise source; receive a digitized version of the acquired noise; generate a synchronization signal; digitally combine the synchronization signal with the digitized version of the acquired noise; acoustically broadcast the synchronization signal by a loudspeaker positioned in close proximity to the noise source and being directed towards the predefined zone, such that the broadcasted synchronization signal and the noise are acoustically combined; acquire, using a microphone positioned at the predefined zone: a) the acoustically-combined noise and broadcasted synchronization signal, and b) ambient noise at the predefined zone; separate the broadcasted synchronization signal from the acquired (a) and (b); calculate an antiphase signal based on: c) the digitally-combined synchronization signal and digitized version of the noise, d) the acquired acoustically-combined noise and broadcasted synchronization signal, and e) the separated broadcasted synchronization signal; and acoustically broadcast the antiphase signal using a loudspeaker, so as to substantially attenuate the noise as heard at the predefined zone.
12. The system according to claim 11, wherein said acquisition of the noise from the noise source is performed using a microphone positioned close to the noise source.
13. The system according to claim 11, wherein the calculation of the antiphase signal comprises calculating a distortion of an acoustical path between the noise source and the predefined zone, based on differences between the acquired synchronization signal and the generated synchronization signal.
14. The system according to claim 11, wherein: the synchronization signal comprises consecutive packages separated by predefined time intervals; each of the packages comprises a series of wave cycles that have a same amplitude; and each of the packages has a constant audio frequency.
15. The system according to claim 11, wherein the synchronization signal comprises consecutive packages, and wherein each of the packages contains at least one of: a digitally-coded definition of a beginning of the respective package; a digitally-coded counter that is indicative of the position of the respective package among the consecutive packages; and digitally-coded information on an audio frequency of the respective package.
16. The system according to claim 15, wherein said processor is further configured to cause execution of the following step: calculate an exact moment to acoustically broadcast the antiphase signal, based on a delay between the acoustic broadcast of the synchronization signal, and the acquisition of (a).
17. The system according to claim 16, wherein the delay is determined according to the digitally-coded definition of the beginning of the respective package.
18. The system according to claim 11, wherein the broadcasted synchronization signal has a lower amplitude than the noise.
19. The system according to claim 11, wherein said separation of the broadcasted synchronization signal from the acquired (a) and (b) is performed using a narrow band pass filter centered at an audio frequency of the synchronization signal.
20. The system according to claim 11, wherein said processor is further configured to cause calibration, before the noise is present, by generating white noise and performing the steps of claim 1 based on the white noise in lieu of the noise.
Description
A BRIEF DESCRIPTION OF THE DRAWINGS
(1) In order to better understand the present invention, and appreciate its practical applications, the following figures & drawings are provided and referenced hereafter. It should be noted that the figures are given as examples only and in no way limit the scope of the invention. Like components are denoted by like reference numerals.
DETAILED DESCRIPTION OF AN EMBODIMENT OF THE INVENTION
(14) The method and system of the present invention are based on generating an antiphase signal which is synchronized to the predefined noise, by using dedicated synchronization signals, referred to in the present text as SYNC. The SYNC signals are electrically generated (38), and then acoustically emitted through the air while being combined with the predefined noise acoustic signal (AAS). Both the predefined noise and the acoustical SYNC (84), among other acoustic sounds that travel through the air, are received at the quiet zone, where the SYNC signal is detected. Simultaneously, the SYNC signal is electrically combined with the acquired predefined noise signal (41), and electrically transmitted to the quiet zone, where again the SYNC signal is detected. The SYNC signal detected at each of the two channels synchronizes an antiphase generator to the original predefined noise, to create a quiet zone(s) by acoustical interference.
(16) Reference is now made to the various components that comprise the three major component units (30), (40) and (50) of the system of the present invention, presented in a block diagram in
(17) The audio Multiplexing and Broadcasting component (30) is typically a commercially available amplifying system, that, in the context of the present invention, comprises:
(18) (1) A signal mixing box (34) which combines individual electrical audio-derived signals inputs (35, 36, 37 shown in
(19) (2) An optional microphone (32);
(20) (3) An audio power amplifier (33);
(21) (4) A loudspeaker(s) (80 or 81) shown in
(22) The synchronization and transmitting component (40) comprises:
(23) (1) a digital signal processor, referred to as DSP1 (42);
(24) (2) a wired or wireless transmitter (43);
(25) The quieting component (50) comprises:
(26) (1) A microphone, referred to as Emic, designated in the figures as: (62), preferably located at the edge of the quiet zone (63);
(27) (2) An optional second microphone, referred to as Imic, designated in the figures as: (70), which is located in the quiet zone (63) preferably in its approximate center;
(28) (3) A transducer (a digitizer which is an analog to digital converter) (58);
(29) (4) A wire or a wireless receiver (52), that corresponds to the transmitter (43);
(30) (5) A digital signal processor, referred to as: DSP2 (54);
(31) (6) A transducer (a digital to analog converter) (88);
(32) (7) An audio amplifier (60);
(33) (8) A loudspeaker used as a quieting loudspeaker (82) that broadcasts the antiphase AAAS.
(34) With the exception of the microphone Emic (62), the quieting loudspeaker (82), and the optional second microphone Imic (70), the subcomponents comprising the quieting component (50) do not necessarily have to be located within or close to the quiet zone (63).
(35) In cases where more than a single quiet zone (63) is desired, each of the zones has to contain the following: a microphone Emic (62); a quieting loudspeaker (82); and, optionally, also a microphone Imic (70).
(36) The mode of operation of the system (10) for the active reduction of the predefined AAAS is now described. The mode of operation of the system (10) is simultaneously applicable to more than a single quiet zone.
(37) The precision of the matching in time and in amplitude between the AAAS and the antiphase AAAS in the quiet zone is achieved by using a unique synchronization signal that is merged with the AAAS acoustic and electric signals. The synchronization signals are interchangeably referred to as SYNC. The SYNC has two major tasks: 1) to precisely time the antiphase generator; and 2) to assist in evaluating the acoustical channel's distortion.
(38) For describing the system's (10) mode of operation, as illustrated in
(39) As Illustrated in
(40) Definitions related to the SYNC signal(s) (38), illustrated in
(41) The SYNC generating system employs two clock mechanisms: 1) a high-resolution (e.g. 10 microseconds, not limited) Real Time Clock, used to accurately mark system events, referred to as RTC; and 2) a low-resolution (e.g. 10 milliseconds, not limited) free cyclic counter with 10 states (not limited), referred to as the Generated Sequential Counter.
(42) A SYNC signal has the following properties, as shown in
(43) 1) Constant amplitude (551) is the value used as a reference for resolving signal attenuation (552, 554);
(44) 2) Constant interval (561) is the time elapsed between two consecutive SYNC packages (a repeat rate of about 50 Hz, not limited). This rate ensures a frequent update of the calculation. A constant rate is also used to minimize the effort of searching for the SYNC signal in the data stream;
3) A single cycle (or a few more; not limited) of a constant frequency, thus called a SYNC cycle (562) (e.g. about 18 KHz; a cycle of about 55 microseconds, not limited).
(45) A few SYNC cycles are present during the SYNC period (563), approximately 500 microseconds (not limited), per each time interval. This constant frequency is used for detection of the SYNC signal. Nevertheless, the constant frequency may vary among the SYNC intervals, to enable dynamic calibration of the acoustic channel's acoustic and electric response over the frequency spectrum.
(46) When the amplitude of a SYNC cycle is zero, the binary translation is referred to as binary 0; when the amplitude of the SYNC cycle is non-zero, the binary translation is referred to as binary 1. This allows data to be coded over the SYNC signal. Other methods of modulating the SYNC may be used as well.
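By way of a non-limiting illustration, the on-off cycle coding described above may be sketched as follows. The cycle length, amplitude, and detection threshold below are illustrative assumptions only, not values prescribed by the invention:

```python
import math

def ook_sync_cycles(bits, samples_per_cycle=16, amplitude=1.0):
    """Encode bits over consecutive SYNC cycles (one bit per cycle):
    a constant-amplitude sine cycle -> binary 1, a silent cycle -> binary 0."""
    out = []
    for bit in bits:
        for n in range(samples_per_cycle):
            phase = 2 * math.pi * n / samples_per_cycle
            out.append(amplitude * math.sin(phase) if bit else 0.0)
    return out

def ook_decode(samples, samples_per_cycle=16, threshold=0.25):
    """Recover bits by thresholding the peak amplitude of each cycle,
    relative to the known constant amplitude (551)."""
    bits = []
    for i in range(0, len(samples), samples_per_cycle):
        cycle = samples[i:i + samples_per_cycle]
        bits.append(1 if max(abs(s) for s in cycle) > threshold else 0)
    return bits
```

Round-tripping a bit pattern through these two functions mirrors the generation and binary translation of a SYNC package.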
(47) Each SYNC package (450) carries the following digitally-coded fields:
(48) 1) a predefined Start Of Frame pattern (451) referred to as SOF, that well defines the beginning of the package's data;
(49) 2) a Generated Sequence Mark (452), referred to as GSM, which is a copy of the Generated Sequential Counter at the moment the SYNC signal was originally generated for the specific package;
(50) 3) additional digital information (453), such as the SYNC frequency value and instruction codes to activate parts of the quieting system, upon request, need, demand, or future plans.
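As a non-limiting sketch, a SYNC package with the three fields above can be unpacked from a decoded bit stream as follows. The SOF bit pattern and the GSM field width are hypothetical choices for illustration; the invention does not fix them:

```python
from dataclasses import dataclass

SOF = [1, 0, 1, 1, 0, 1]   # hypothetical Start-Of-Frame bit pattern (451)
GSM_BITS = 4                # enough bits for the 10-state sequential counter

@dataclass
class SyncPackage:
    gsm: int          # Generated Sequence Mark (452): copy of the cyclic counter
    info: list        # additional digitally-coded information bits (453)

def parse_sync_package(bits):
    """Locate the SOF in a decoded bit stream and unpack the GSM and info bits.
    Returns (package, index just after the SOF) or (None, -1) if no SOF is
    found; the returned index corresponds to the recorded SYNC moment (454)."""
    for i in range(len(bits) - len(SOF) - GSM_BITS + 1):
        if bits[i:i + len(SOF)] == SOF:
            start = i + len(SOF)
            gsm_bits = bits[start:start + GSM_BITS]
            gsm = int("".join(map(str, gsm_bits)), 2)
            return SyncPackage(gsm=gsm, info=bits[start + GSM_BITS:]), start
    return None, -1
```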
(51) Focus is now turned to the SYNC signal flow description:
(53) Typically, the combined electrical signal (41) flows through the transmitter and the receiver as a transmitted signal. The transmitted signal, abbreviated TEAAS+TESYNC and designated (39), is received at the quiet zone almost immediately as the QEAAS+QESYNC signal (78). The term QEAAS+QESYNC refers to the electrically received audio part (QEAAS) and the electrically received SYNC part (QESYNC) in the quiet zone. The predefined AAAS+ASYNC acoustic signal (84) is slower, and arrives at the quiet zone after the channel's delay (570). This is the precise time at which the antiphase AAAS+ASYNC (86) is broadcast.
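Since the electrical channel is effectively instantaneous while the acoustic channel is not, the channel delay (570) can be resolved by comparing, per GSM value, the SYNC moments recorded on the two channels. A minimal, non-limiting sketch (the timestamps below are hypothetical RTC readings in seconds):

```python
def acoustic_channel_delay(sync_moments_electric, sync_moments_acoustic):
    """Given RTC timestamps of the SYNC moment, keyed by GSM counter value,
    for the electric and acoustic channels, return the per-package channel
    delay in seconds. Only GSM values seen on both channels are used."""
    return {gsm: sync_moments_acoustic[gsm] - sync_moments_electric[gsm]
            for gsm in sync_moments_electric
            if gsm in sync_moments_acoustic}
```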
(54) Focus is now turned to the digital binary data identification:
(55) Separating the SYNC package (450) from the combined signal starts by identifying single cycles. This is done by using a narrow band pass filter centered at the SYNC frequency (562). The filter is active during the SYNC time period (563) within the SYNC time interval (561). When the filter output crosses a certain amplitude level relative to the SYNC constant amplitude (551), binary data of 1 and 0 can be interpreted within this period. After the binary data is identified, a data structure can be created, as illustrated in
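A software stand-in for this narrow band pass detection is single-bin (Goertzel-style) tone-energy measurement per cycle-long window. The frequencies, window length, and threshold below are illustrative assumptions, not claimed values:

```python
import math

def tone_energy(samples, freq, sample_rate):
    """Single-bin energy at `freq`: correlate the window against sine and
    cosine at the SYNC frequency, acting as a narrow band pass filter."""
    n = len(samples)
    s_i = sum(x * math.sin(2 * math.pi * freq * k / sample_rate)
              for k, x in enumerate(samples))
    s_q = sum(x * math.cos(2 * math.pi * freq * k / sample_rate)
              for k, x in enumerate(samples))
    return (s_i ** 2 + s_q ** 2) / n

def detect_sync_bits(samples, freq, sample_rate, samples_per_cycle, threshold):
    """Interpret each cycle-long window as binary 1 if its energy at the SYNC
    frequency exceeds the threshold, otherwise binary 0."""
    return [1 if tone_energy(samples[i:i + samples_per_cycle],
                             freq, sample_rate) > threshold else 0
            for i in range(0, len(samples) - samples_per_cycle + 1,
                           samples_per_cycle)]
```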
(56) The system copies the moment of detecting the end of the SOF (451). This moment is recorded from the RTC and is used to precisely generate the antiphase. This moment is defined in the present text as the SYNC moment (454) as shown in
(57) Separating the predefined AAAS from the combined signal is done by eliminating the SYNC package (450) from the combined signal by using a narrow band stop filter during the SYNC time period (563), or by other means.
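One crude, non-limiting stand-in for the narrow band stop filter is to estimate the SYNC tone's in-phase and quadrature amplitudes over the SYNC period and subtract the reconstructed tone, leaving the AAAS component. The window is assumed to span whole SYNC cycles:

```python
import math

def remove_sync_tone(samples, freq, sample_rate):
    """Estimate the SYNC tone's in-phase/quadrature amplitudes over a window
    spanning whole cycles and subtract the reconstructed tone (a simple
    band-stop operation), leaving the non-SYNC content of the window."""
    n = len(samples)
    w = [2 * math.pi * freq * k / sample_rate for k in range(n)]
    si = 2 / n * sum(x * math.sin(t) for x, t in zip(samples, w))
    sq = 2 / n * sum(x * math.cos(t) for x, t in zip(samples, w))
    return [x - si * math.sin(t) - sq * math.cos(t)
            for x, t in zip(samples, w)]
```

Applied to a window containing only the SYNC tone, the residual is essentially zero; applied to the combined signal, it passes the AAAS through.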
(58) The SYNC moment at each of the two received channels (the acoustical and electrical) is resolved, and attached to the corresponding block, as shown in
(59) In order to find and define the acoustical channel's distortion and to generate the antiphase AAAS, the system, its algorithm illustrated in
(60) (1) Calibration of the secondary paths state. This is an off-line initial calibration state, performed during system installation in as sterile (undisturbed) an environment as possible, i.e. the predefined noise is not active and, as far as possible, neither is any other noise. In this state, the acoustic channel's distortion is calculated by generating white noise and a SYNC signal from the loudspeakers, which are then received by the microphones. This state intends to resolve the system's secondary paths, marked S1(z).
(2) Validation of the secondary paths estimation. This is an off-line fine calibration state, used to validate the initial calibration, also performed in as sterile an environment as possible. The system tries to attenuate the SYNC signals only (no AAAS) with the previously calculated FIR, while using the estimated secondary path, marked S^(z). If the attenuation does not succeed, then the system tries to calibrate again with a higher FIR order.
(3) On-line state, called the Idle State. This state intends to resolve the primary path distortion, while the system is already installed and working; the SYNC signal has a relatively low amplitude, yet the SNR (of the SYNC signal relative to the received signal (72) at the quiet zone) is above a certain minimum level. In this state, the SYNC component of the combined predefined AAAS+ASYNC signal (84) is used to adapt the distortion function's parameters, referred to as P1(z), i.e. the system employs its FxLMS mechanism to find the FIR parameters W(z) that minimize the SYNC component of the combined signal. The idea is that the same filter shall likely attenuate the predefined AAAS component of the combined signal as well. The system uses this FIR to generate the antiphase AAAS signal. When the SNR degrades, or when the SYNC signal is not detected, the system moves to the Busy state.
(4) On-line state, called the Busy State, where the system is already installed and working, and the acoustic channel's distortion W(z) is known from the previous states. The SNR (of the SYNC signal relative to the received signal (72) at the quiet zone) is low, so the system uses the last known FIR to generate the antiphase AAAS signal. Additionally, the system increases the SYNC signal amplitude to regain the minimal required SNR, and thus moves back to the Idle state.
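The FxLMS adaptation invoked by the states above can be sketched as a toy loop. This is a minimal illustration only: the reference is a sine standing in for the SYNC/AAAS, the primary path P(z) is reduced to a pure gain, and the secondary path S(z) and its estimate S^(z) are taken as identity, so the filtered reference equals the reference. None of these simplifications reflect a real installation:

```python
import math

def fir(taps, hist):
    """FIR filter output; hist[0] is the newest sample."""
    return sum(t * h for t, h in zip(taps, hist))

def fxlms_demo(n_steps=4000, n_taps=8, mu=0.01):
    """Toy FxLMS loop. Returns the mean |error| over the first and the last
    200 steps, showing the convergence of the adaptive FIR W(z)."""
    w = [0.0] * n_taps                # adaptive FIR taps W(z)
    x_hist = [0.0] * n_taps           # reference history, newest first
    errors = []
    for k in range(n_steps):
        x = math.sin(2 * math.pi * k / 32)   # reference sample
        x_hist = [x] + x_hist[:-1]
        d = 0.8 * x                          # noise at the error mic via P(z)
        y = fir(w, x_hist)                   # antiphase output via W(z)
        e = d - y                            # residual at the error microphone
        # FxLMS update: the filtered-x equals x_hist since S^(z) is identity
        w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]
        errors.append(abs(e))
    early = sum(errors[:200]) / 200
    late = sum(errors[-200:]) / 200
    return early, late
```

In the Idle state the same update would run on the SYNC residual; in the Busy state the taps are frozen at their last values.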
(61) While off-line, i.e. while the system is not yet in use, it needs to undergo a calibration procedure of the secondary paths, marked S1(z) in
(62) The calibration procedure continues in the fine calibration state, described earlier, in order to validate the calibration. The validation is done where a well-defined SYNC signal (38) is generated by DSP2, broadcast by loudspeaker (82), and received at the quiet zone by microphone (62), as described earlier. Several frequencies, e.g. on the MEL scale, are deployed. At the quiet zone, DSP2, as the FxLMS controller regarded in
(63) In the Idle state, the SYNC signal is transmitted at relatively low amplitude, while the antiphase AAAS signal is generated to interfere with the predefined AAAS as received at the quiet zone. The FIR parameters, W(z), are continuously updated by using the FxLMS mechanism to minimize the residual of the ASYNC (83) by its antiphase. In this on-line state, the predefined AAAS flows through the filter whose parameters are defined by the SYNC signal, thus generating an antiphase both to the predefined AAAS and to the SYNC. When no SYNC is detected by DSP2, or an SNR degradation (of the SYNC relative to the received signal) is observed (by means of SYNC cancelation), the updating is put on hold, and the system moves to the Busy state. The system re-enters the Idle state when the SNR rises beyond a certain threshold again.
(64) In the Busy state, the SYNC signal is transmitted at relatively low amplitude. In this state the system generates the antiphase by using the acoustic channel's distortion parameters W(z), as most recently calculated.
(65) The current FIR parameters are used for the active noise cancelation.
(66) Focus is now turned to the flow of the SYNC signal along with the predefined AAAS, until the antiphase is precisely generated:
(67) The predefined AAAS is digitally acquired into the system, thus converted to electrical signals. This is done by positioning a microphone (32) as close as possible to the noise source (90) as shown in
(68) The electrically converted noise signals, referred to as EAAS, are integrated in the mixing box (34) with the SYNC signal (38). The integrated signals are amplified by amplifier (33). The integrated electrically converted signals are referred to as EAAS+ESYNC (41).
(69) As mentioned earlier, the SYNC signal (38) generated by DSP1 (42) at the SYNC and transmitting component (40) is converted to an acoustic signal, referred to as ASYNC (83). ASYNC (83) is amplified by an audio amplifier (33) and broadcast in the air by, for example but not limited to, a dedicated loudspeaker (81) as shown in
(70) While AAAS+ASYNC (84) leaves the Multiplexing and broadcasting component (30), with negligible time difference the combined signal EAAS+ESYNC (41) is forwarded to the transmitting component (43), which transmits it either by wire or wirelessly toward a corresponding receiver (52) in the quieting component (50).
(71) The electrically transmitted signal TEAAS+TESYNC (39) is a combination of the audio information electrically transmitted AAAS, referred to as TEAAS, and the SYNC information electrically transmitted, referred to as TESYNC.
(72) The electrical channel is robust; thus, the data at the receiver's output (78) is received exactly as the data at the transmitter's input (39), with no loss, no further distortion, and negligible delay.
(73) In the quieting component (50) the receiver (52) forwards the integrated signals, referred as QEAAS+QESYNC (78), to DSP2 (54).
(74) DSP2 (54) executes a separation algorithm whose input is the combined signal QEAAS+QESYNC (78) and its output are two separate signals: QEAAS and QESYNC.
(75) At this point DSP2 (54) saves the following in its memory:
(76) 1) GSM (452) as it appeared in the QESYNC package, as shown in
(77) 2) RTT, which is the accurate time at which the specific QESYNC (78) package was received by DSP2;
(78) 3) QEAAS data (453) as shown in
(79) The three elements together are referred to as an Eblock. DSP2 (54) stores the Eblock in its memory.
(80) In the quieting component (50) the microphone EMIC (62), positioned at the edge of the quiet zone (63), acquires the acoustical signal at the quiet zone vicinity. This signal is comprised of the AAAS+ASYNC (84) signal, distorted by the acoustic channel, and also of the surrounding voices in the quiet zone vicinity, referred to as QAAS signal (94) shown in
(81) The acquired integrated signals, referred to as QAAAS+QAAS+QASYNC (72), are forwarded to DSP2 (54).
(82) DSP2 (54) executes a separation algorithm whose input is the combined signal QAAAS+QAAS+QASYNC (72). This is the same separation algorithm as was previously described for QEAAS and QESYNC, processed on the combined signal QEAAS+QESYNC (78) coming from the receiver (52). At this point its output is two separate signals: QAAAS+QAAS and QASYNC.
(83) At this point DSP2 (54) saves the following in its memory:
(84) 1) GSM (452) as it appears in the QASYNC package, as shown in
(85) 2) RTT, which is the accurate time at which the specific QASYNC (72) package was received by DSP2;
(86) 3) QAAAS+QAAS data (453), as shown in
(87) The three elements together are referred to as an Ablock. DSP2 (54) stores the Ablock in its memory.
(88) DSP2 (54) executes a correlation algorithm as follows: DSP2 takes the GSM written in the most recent Ablock and searches the memory for an Eblock having the same GSM. This is in order to locate two corresponding blocks that represent the same interval, but with a delay.
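The Ablock-to-Eblock correlation reduces to a lookup by GSM over the stored blocks. A minimal, non-limiting sketch, representing blocks as plain dictionaries with hypothetical field names (`gsm`, `rtt`, `data`):

```python
def find_matching_eblock(ablock_gsm, eblocks):
    """Search the stored Eblocks for the one whose GSM equals the most recent
    Ablock's GSM, i.e. the electrically received block for the same SYNC
    interval. Searches newest-first; returns None if no match is stored."""
    for blk in reversed(eblocks):
        if blk["gsm"] == ablock_gsm:
            return blk
    return None
```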
(89) DSP2 then extracts QEAAS data from Eblock.
(90) DSP2 uses the recent acoustical channel's RTT, in order to time the antiphase generator with Eblock's data, as shown in
(91) DSP2 (54) continuously calculates the acoustic channel's response to the repetitive SYNC signal, as described earlier in Idle state.
(92) Since the Eblock is stored in the memory enough time before DSP2 needs it for its calculations, and since the FIR filter, represented as W(z) in
(93) After the signal is converted back to analog by the DAC converter (88) and amplified (56), it is forwarded to the loudspeaker (82). This signal has the precisely calculated delay (as was previously explained), i.e. the antiphase signal is broadcast at just the appropriate moment with respect to the incoming AAAS+ASYNC (84) acoustic signal as heard at the edge of the quiet zone, as shown in
(94) The process that was described above is repeated sequentially for every block, i.e. for each SYNC interval (561) shown in
(95) The acoustic antiphase wave AAAS+ASYNC (86), generated by DSP2 (54) and broadcast by the quieting loudspeaker (82), precisely matches in time and momentary antiphase amplitude the AAAS+ASYNC (84) as heard at the quiet zone's edge (63). The two acoustic waves interfere with each other, thus significantly reducing the AAAS signal(s) (91) in the quiet zone.
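The sensitivity of this destructive interference to timing and amplitude matching can be demonstrated numerically. A non-limiting sketch with a unit sine standing in for the AAAS; the gain and phase error parameters are hypothetical:

```python
import math

def residual_after_cancellation(n=64, gain_err=0.0, phase_err=0.0):
    """Peak residual when an antiphase copy (with optional gain and phase
    errors) is added to a unit sine. Perfect matching of timing and momentary
    amplitude yields a near-zero residual; errors leave audible residue."""
    res = [math.sin(2 * math.pi * k / n)
           - (1 + gain_err) * math.sin(2 * math.pi * k / n + phase_err)
           for k in range(n)]
    return max(abs(r) for r in res)
```

A phase error of 0.3 rad (about 2.7 microseconds at 18 KHz) already leaves a residual of roughly 30% of the original amplitude, which is why the SYNC-based timing is central to the method.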
(96) Optionally, in order to further reduce the residual AAAS inside the quiet zone (63), an additional microphone, marked (70) in
(97) Since the broadcasting of the matched antiphase AAAS in the quiet zone depends on the predefined AAAS as received by microphone Emic (62) at the quiet zone's edge, it is possible to vary the quiet zone's location according to the user's desire or constraints (i.e. dynamically changing the quiet zone's location within the area). The location change is done by moving the microphone Emic (62), the antiphase quieting loudspeaker (82), and the optional microphone Imic (70), if in use, to a (new) desired quiet zone location.
(98) The precise timing and momentary amplitude of the antiphase AAAS+ASYNC (86) broadcast by the quieting loudspeaker (82) against the predefined AAAS+ASYNC (84) broadcast by loudspeaker (80, 81) as shown in
(99) The present invention ensures that the listeners will not be disturbed by the presence of the SYNC signals in the air: according to
(100) As presented in
(101) Furthermore, optionally, as illustrated in
(102) The generation of the antiphase acoustic signal, which is based on the previously acquired electrical acoustic signal, enables cancellation of the predefined audio noise signals only, in the quiet zone, without interfering with other surrounding and in-zone audio signals.
(103) Utilizing an antiphase acoustic signal based on the pre-acquired electrical acoustic signal significantly improves the predefined AAAS attenuation at the high end of the audio frequency spectrum, where the prior art is limited.
(104) The repetitive updating of the antiphase acoustic signal in the quiet zone, in time and momentary amplitude, ensures updating of the antiphase signal according to changes in the environment, such as the relative location of the components or listeners in the quiet zone.
(105) It should be clear that the description of the embodiments and attached Figures set forth in this specification serves only for a better understanding of the invention, without limiting its scope.
(106) It should also be clear that a person skilled in the art, after reading the present specification could make adjustments or amendments to the attached Figures and above described embodiments that would still be covered by the present invention.