SYSTEM AND METHOD FOR ACTIVE REDUCTION OF A PREDEFINED AUDIO ACOUSTIC NOISE BY USING SYNCHRONIZATION SIGNALS

20180158445 · 2018-06-07

    Abstract

    Method and system for active reduction of a predefined audio acoustic signal (AAAS), also referred to as noise, in a quiet zone, without interfering with undefined acoustic noise signals within or outside the quiet zone, by generating an accurate antiphase AAAS signal. The accuracy of the generated antiphase AAAS is obtained by employing a unique synchronization signal (SYNC), which is generated and combined with the predefined AAAS. The combined signal is transmitted electrically (referred to as the electric channel) to a processing quieting component. Simultaneously, the generated SYNC signal is acoustically broadcast near the predefined AAAS and merges with it. A microphone in the quiet zone receives the merged acoustic signals that arrive via the air (referred to as the acoustical channel), and a receiver in the quieting component receives the combined electrical AAAS and SYNC signal that arrives, by wire or wirelessly, at the quiet zone. In the quieting component the SYNC is detected from both the electrical and acoustical channels; the detected SYNC signals, together with the electrically received AAAS signal, are used to calculate the timing and momentary amplitude for generating an accurate acoustic antiphase AAAS signal that cancels the acoustic predefined AAAS. Continuously and periodically updating the SYNC signal enables dynamic evaluation of acoustical environmental distortions that may appear due to echo, reverberation, non-linear frequency response, or other distortion mechanisms.

    Claims

    1-13. (canceled)

    14. A method comprising: acquiring noise from a noise source; receiving a digitized version of the acquired noise; generating a synchronization signal; digitally combining the synchronization signal with the digitized version of the acquired noise; acoustically broadcasting the synchronization signal by a loudspeaker positioned in close proximity to the noise source and being directed towards a predefined zone, such that the broadcasted synchronization signal and the noise are acoustically combined; acquiring, using a microphone positioned at the predefined zone: a) the acoustically-combined noise and broadcasted synchronization signal, and b) ambient noise at the predefined zone; separating the broadcasted synchronization signal from the acquired (a) and (b); calculating an antiphase signal based on: c) the digitally-combined synchronization signal and digitized version of the noise, d) the acquired acoustically-combined noise and broadcasted synchronization signal, and e) the separated broadcasted synchronization signal; and acoustically broadcasting the antiphase signal using a loudspeaker, so as to substantially attenuate the noise as heard at the predefined zone.

    15. The method according to claim 14, wherein said acquisition of the noise from the noise source is performed using a microphone positioned close to the noise source.

    16. The method according to claim 14, wherein the calculation of the antiphase signal comprises calculating a distortion of an acoustical path between the noise source and the predefined zone, based on differences between the acquired synchronization signal and the generated synchronization signal.

    17. The method according to claim 14, wherein: the synchronization signal comprises consecutive packages separated by predefined time intervals; each of the packages comprises a series of wave cycles that have a same amplitude; and each of the packages has a constant audio frequency.

    18. The method according to claim 14, wherein the synchronization signal comprises consecutive packages, and wherein each of the packages contains at least one of: a digitally-coded definition of a beginning of the respective package; a digitally-coded counter that is indicative of the position of the respective package among the consecutive packages; and digitally-coded information on an audio frequency of the respective package.

    19. The method according to claim 18, further comprising: calculating an exact moment to acoustically broadcast the antiphase signal, based on a delay between the acoustic broadcast of the synchronization signal, and the acquisition of (a).

    20. The method according to claim 19, wherein the delay is determined according to the digitally-coded definition of the beginning of the respective package.

    21. The method according to claim 14, wherein the broadcasted synchronization signal has a lower amplitude than the noise.

    22. The method according to claim 14, wherein said separation of the broadcasted synchronization signal from the acquired (a) and (b) is performed using a narrow band pass filter centered at an audio frequency of the synchronization signal.

    23. The method according to claim 14, further comprising a step of calibration, before the noise is present, by generating white noise and performing the steps of claim 14 based on the white noise in lieu of the noise.

    24. A system comprising a processor that is configured to cause execution of the following steps: acquire noise from a noise source; receive a digitized version of the acquired noise; generate a synchronization signal; digitally combine the synchronization signal with the digitized version of the acquired noise; acoustically broadcast the synchronization signal by a loudspeaker positioned in close proximity to the noise source and being directed towards a predefined zone, such that the broadcasted synchronization signal and the noise are acoustically combined; acquire, using a microphone positioned at the predefined zone: a) the acoustically-combined noise and broadcasted synchronization signal, and b) ambient noise at the predefined zone; separate the broadcasted synchronization signal from the acquired (a) and (b); calculate an antiphase signal based on: c) the digitally-combined synchronization signal and digitized version of the noise, d) the acquired acoustically-combined noise and broadcasted synchronization signal, and e) the separated broadcasted synchronization signal; and acoustically broadcast the antiphase signal using a loudspeaker, so as to substantially attenuate the noise as heard at the predefined zone.

    25. The system according to claim 24, wherein said acquisition of the noise from the noise source is performed using a microphone positioned close to the noise source.

    26. The system according to claim 24, wherein the calculation of the antiphase signal comprises calculating a distortion of an acoustical path between the noise source and the predefined zone, based on differences between the acquired synchronization signal and the generated synchronization signal.

    27. The system according to claim 24, wherein: the synchronization signal comprises consecutive packages separated by predefined time intervals; each of the packages comprises a series of wave cycles that have a same amplitude; and each of the packages has a constant audio frequency.

    28. The system according to claim 24, wherein the synchronization signal comprises consecutive packages, and wherein each of the packages contains at least one of: a digitally-coded definition of a beginning of the respective package; a digitally-coded counter that is indicative of the position of the respective package among the consecutive packages; and digitally-coded information on an audio frequency of the respective package.

    29. The system according to claim 28, wherein said processor is further configured to cause execution of the following step: calculate an exact moment to acoustically broadcast the antiphase signal, based on a delay between the acoustic broadcast of the synchronization signal, and the acquisition of (a).

    30. The system according to claim 29, wherein the delay is determined according to the digitally-coded definition of the beginning of the respective package.

    31. The system according to claim 24, wherein the broadcasted synchronization signal has a lower amplitude than the noise.

    32. The system according to claim 24, wherein said separation of the broadcasted synchronization signal from the acquired (a) and (b) is performed using a narrow band pass filter centered at an audio frequency of the synchronization signal.

    33. The system according to claim 24, wherein said processor is further configured to cause calibration, before the noise is present, by generating white noise and performing the steps of claim 24 based on the white noise in lieu of the noise.

    Description

    A BRIEF DESCRIPTION OF THE DRAWINGS

    [0083] In order to better understand the present invention, and appreciate its practical applications, the following figures & drawings are provided and referenced hereafter. It should be noted that the figures are given as examples only and in no way limit the scope of the invention. Like components are denoted by like reference numerals.

    [0084] FIG. 1 schematically illustrates a typical case in which the predefined AAAS is emitted directly from the noise source.

    [0085] FIG. 2 schematically illustrates a particular case where the predefined AAAS is emitted indirectly from a commercial amplifying system in which a loudspeaker is used as the noise source.

    [0086] FIG. 3 schematically illustrates the merging of electrical SYNC signal converted to acoustical SYNC signal, with predefined AAAS, where the predefined AAAS is emitted directly from the noise source.

    [0087] FIG. 4 schematically illustrates the merging of the electrical SYNC signal, converted to an acoustical SYNC signal, with the predefined AAAS, where the predefined AAAS is emitted from an amplifying system.

    [0088] FIG. 5 is a block diagram that illustrates the major components of the method and system of the present invention, for active reduction of a predefined AAAS and their employment mode relative to each other.

    [0089] FIG. 6 is a detailed schematic presentation of an embodiment of the system of the present invention, where the predefined AAAS is acquired by the multiplexing and broadcasting component in either configuration shown in FIG. 1 or FIG. 2.

    [0090] FIG. 7 is a functional block diagram that illustrates major signal flow paths between the major components (illustrated in FIG. 5) of the system of the present invention, with emphasis on the SYNC.

    [0091] FIG. 8 schematically illustrates a basic structure of a typical SYNC package.

    [0092] FIG. 9 schematically illustrates the physical characteristic of a typical SYNC.

    [0093] FIG. 10 is a graphical illustration of the major signals propagation throughout the system within a time interval.

    [0094] FIG. 11 illustrates the algorithmic process that the system of the present invention employs, considering the acoustical domain and the electrical domain.

    DETAILED DESCRIPTION OF AN EMBODIMENT OF THE INVENTION

    [0095] FIG. 5 illustrates schematically the major components of a system and method (10) for active reduction of an audio acoustic noise signal of the present invention, and their employment mode relative to each other. The figure illustrates the three major components of the system: 1) an audio Multiplexing and Broadcasting component (30); 2) a synchronization and transmitting component (40); and 3) a quieting component (50). A detailed explanation of the three major components of the system (10) is given in FIG. 6. The structure and usage of the synchronization signal, referred to as the SYNC signal, is given further on in the text, as well as an analysis of the SYNC employment algorithm.

    [0096] The method and system of the present invention are based on generating an antiphase signal which is synchronized to the predefined noise by using dedicated synchronization signals, referred to in the present text as SYNC. The SYNC signals are electrically generated (38), and then acoustically emitted through the air while being combined with the predefined noise acoustic signal (AAS). Both the predefined noise and the acoustical SYNC (84), among other acoustic sounds that travel through the air, are received at the quiet zone, where the SYNC signal is detected. Simultaneously, the SYNC signal is electrically combined with the acquired predefined noise signal (41), and electrically transmitted to the quiet zone, where again the SYNC signal is detected. The SYNC signal detected at each of the two channels synchronizes an antiphase generator to the original predefined noise, to create a quiet zone(s) by acoustical interference.

    [0097] FIG. 6 is a schematic graphical illustration of embodiments of the employment of system (10) for the active reduction of the predefined audio acoustic noise (91).

    [0098] Reference is presently made to the various components that comprise the three major component units (30), (40) and (50) of the system of the present invention, presented in a block diagram in FIG. 5:

    [0099] The audio Multiplexing and Broadcasting component (30) is typically a commercially available amplifying system, that, in the context of the present invention, comprises:

    (1) A signal mixing box (34) which combines individual electrical audio-derived signal inputs (35, 36, 37 shown in FIG. 2 and FIG. 4). The mixing box has a reserved input for the SYNC signal, which is routed to (at least) one electrical output component;
    (2) An optional microphone (32);
    (3) An audio power amplifier (33);
    (4) A loudspeaker(s) (80 or 81) shown in FIG. 3 and FIG. 4;

    [0100] The synchronization and transmitting component (40) comprises:

    (1) a digital signal processor, referred to as DSP1 (42);
    (2) a wired or wireless transmitter (43);

    [0101] The quieting component (50) comprises:

    (1) A microphone, referred to as Emic, designated in the figures as: (62), preferably located at the edge of the quiet zone (63);
    (2) An optional second microphone, referred to as Imic, designated in the figures as: (70), which is located in the quiet zone (63) preferably in its approximate center;
    (3) A transducer (a digitizer, i.e. an analog-to-digital converter) (58);
    (4) A wired or wireless receiver (52), that corresponds to the transmitter (43);
    (5) A digital signal processor, referred to as: DSP2 (54);
    (6) A transducer (a digital-to-analog converter) (88);
    (7) An audio amplifier (60);
    (8) A loudspeaker used as a quieting loudspeaker (82) that broadcasts the antiphase AAAS.

    [0102] Except for the microphone Emic (62), the quieting loudspeaker (82), and the optional second microphone Imic (70), the subcomponents comprising the quieting component (50) do not necessarily have to be located within or close to the quiet zone (63).

    [0103] In cases where more than a single quiet zone (63) is desired, each of the zones has to contain the following: a microphone Emic (62); a quieting loudspeaker (82); and, optionally, also a microphone Imic (70).

    [0104] Presently, the mode of operation of the system (10) of the present invention for the active reduction of the predefined AAAS is described. The mode of operation of the system (10) is simultaneously applicable to more than a single quiet zone.

    [0105] The precision of the matching in time and in amplitude between the AAAS and the antiphase AAAS in the quiet zone is achieved by using a unique synchronization signal that is merged with the AAAS acoustic and electric signals. The synchronization signals are interchangeably referred to as SYNC. The SYNC has two major tasks: 1) to precisely time the antiphase generator; and 2) to assist in evaluating the acoustical channel's distortion. FIG. 7 shows the functional diagram of the system.

    [0106] To describe the system's (10) mode of operation, as illustrated in FIG. 6, focus is first turned to explaining the SYNC (38) signal's characterization, processing and routing. FIG. 7 is also referred to, in order to explain the functional use of the SYNC.

    [0107] As illustrated in FIG. 6, the SYNC signal (38) is generated by DSP1 (42), which resides in the synchronization and transmitting component (40). It is transmitted toward the mixing box (34) that resides in the audio multiplexing and broadcasting component (30). The SYNC has a physical characterization that carries specific information, as described in the context of FIG. 8 and FIG. 9 hereafter.

    [0108] Definitions related to the SYNC signal(s) (38), illustrated in FIG. 8 and FIG. 9, are presently presented:

    [0109] The SYNC generating system employs two clock mechanisms: 1) a high-resolution (e.g. 10 microseconds, not limited) Real Time Clock, referred to as RTC, that is used to accurately mark system events; and 2) a low-resolution (e.g. 10 milliseconds, not limited) free cyclic counter with 10 states (not limited), referred to as the Generated Sequential Counter.
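    As a concrete illustration, the two clock mechanisms can be modeled as follows. This is a minimal sketch: the class and attribute names are illustrative, and the tick and period values are the example figures quoted above (10 microseconds, 10 milliseconds, 10 states).

```python
class SyncClocks:
    """Sketch of the two clock mechanisms: a high-resolution Real Time
    Clock (RTC) and the low-resolution Generated Sequential Counter."""

    RTC_TICK_US = 10          # RTC resolution: 10 microseconds (example)
    GSC_PERIOD_US = 10_000    # counter advances every 10 milliseconds
    GSC_STATES = 10           # free cyclic counter with 10 states

    def __init__(self):
        self.rtc_ticks = 0    # monotonically increasing event clock

    def advance(self, microseconds):
        # Advance the RTC by a whole number of ticks.
        self.rtc_ticks += microseconds // self.RTC_TICK_US

    @property
    def rtc_us(self):
        # Current RTC reading in microseconds, used to mark system events.
        return self.rtc_ticks * self.RTC_TICK_US

    @property
    def gsc(self):
        # The Generated Sequential Counter is derived from elapsed time
        # and wraps freely around its 10 states.
        return (self.rtc_us // self.GSC_PERIOD_US) % self.GSC_STATES
```

    An event time tag would be read from `rtc_us`, while `gsc` supplies the value copied into each SYNC package's GSM field.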

    [0110] A SYNC signal has the following properties, as shown in FIG. 9:

    1) Constant amplitude (551), which is the value used as a reference for resolving signal attenuation (552, 554);
    2) Constant interval (561), which is the time elapsed between two consecutive SYNC packages (repeat rate of about 50 Hz, not limited). This rate ensures a frequent update of the calculation. A constant rate is also used to minimize the effort of searching for the SYNC signal in the data stream;
    3) A single cycle (or a few more, not limited) of a constant frequency, thus called a SYNC cycle (562) (e.g. about 18 kHz; a cycle of about 55 microseconds, not limited).

    [0111] A few SYNC cycles are present during the SYNC period (563), of approximately 500 microseconds (not limited), per each time interval. This constant frequency is used for detection of the SYNC signal. Nevertheless, the constant frequency may vary among the SYNC intervals, to enable dynamic calibration of the acoustic channel's acoustic and electric response over the frequency spectrum.
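    Using the example figures above (about a 50 Hz repeat rate, a SYNC period of about 500 microseconds, and an 18 kHz SYNC frequency), one interval of the SYNC waveform can be sketched as follows; the sample rate and amplitude are assumed values, not taken from the patent:

```python
import numpy as np

FS = 96_000            # sample rate in Hz (assumed)
SYNC_FREQ = 18_000     # SYNC cycle frequency (562), example value
PERIOD_S = 500e-6      # SYNC period (563): ~500 microseconds of cycles
INTERVAL_S = 1 / 50    # constant interval (561): ~50 Hz repeat rate

def sync_interval(amplitude=1.0):
    """Return one SYNC interval of samples: a short constant-frequency
    burst followed by silence until the next package."""
    n_interval = int(FS * INTERVAL_S)
    n_burst = int(FS * PERIOD_S)
    t = np.arange(n_burst) / FS
    out = np.zeros(n_interval)
    out[:n_burst] = amplitude * np.sin(2 * np.pi * SYNC_FREQ * t)
    return out
```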

    [0112] When the amplitude of a SYNC cycle is zero, the binary translation is referred to as binary 0; when the amplitude of the SYNC cycle is non-zero, the binary translation is referred to as binary 1. This allows data to be coded over the SYNC signal. Other methods of modulating the SYNC may be used as well.

    [0113] FIG. 8 schematically illustrates a typical SYNC package (450), which is the information carried by the SYNC signal within the SYNC period (563). A SYNC package contains, but is not limited to, the following data, by digital binary coding:

    1) a predefined Start Of Frame pattern (451), referred to as SOF, that well defines the beginning of the package's data;
    2) a Generated Sequence Mark (452), referred to as GSM, which is a copy of the Generated Sequential Counter at the moment the SYNC signal was originally generated for the specific package;
    3) additional digital information (453), such as the SYNC frequency value and instruction codes to activate parts of the quieting system, upon request/need/demand/future plans.
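    The package layout can be sketched as a small data structure. The SOF bit pattern and the field widths below are illustrative assumptions; the patent does not fix them:

```python
from dataclasses import dataclass

SOF_PATTERN = [1, 0, 1, 1, 0, 1]   # hypothetical Start Of Frame pattern

@dataclass
class SyncPackage:
    """Sketch of a SYNC package (450): SOF (451), GSM (452), data (453)."""
    gsm: int    # copy of the Generated Sequential Counter (0..9)
    data: int   # e.g. a SYNC frequency code or instruction codes

    def to_bits(self):
        # 4 bits of GSM and 8 bits of data, most significant bit first.
        gsm_bits = [(self.gsm >> i) & 1 for i in range(3, -1, -1)]
        data_bits = [(self.data >> i) & 1 for i in range(7, -1, -1)]
        return SOF_PATTERN + gsm_bits + data_bits

def parse_bits(bits):
    """Locate the SOF in a bit stream and decode the GSM and data."""
    n = len(SOF_PATTERN)
    for i in range(len(bits) - n - 12 + 1):
        if bits[i:i + n] == SOF_PATTERN:
            j = i + n
            gsm = int("".join(map(str, bits[j:j + 4])), 2)
            data = int("".join(map(str, bits[j + 4:j + 12])), 2)
            return gsm, data
    return None
```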

    [0114] Focus is now turned to the SYNC signal flow description:

    [0115] FIG. 10 illustrates an example of employing a SYNC package (450) over AAAS, and demonstrates the signal(s) flow in a system where AAAS source (marked 91 at FIG. 3 and at FIG. 4) propagates to the quiet zone (63) and arrives after delay (570).

    [0116] Typically, the combined electrical signal (41) flows through the transmitter and the receiver as a transmitted signal. The transmitted signal, abbreviated as TEAAS+TESYNC and designated (39), is received at the quiet zone relatively immediately as the QEAAS+QESYNC signal (78). The term QEAAS+QESYNC refers to the electrically received audio part (QEAAS) and the electrically received SYNC part (QESYNC) in the quiet zone. The predefined AAAS+ASYNC acoustic signal (84) is slower, and arrives at the quiet zone after the channel's delay (570). This is the precise time at which the antiphase AAAS+ASYNC (86) is broadcast.

    [0117] Focus is now turned to the digital binary data identification:

    [0118] Separating the SYNC package (450) from the combined signal starts by identifying single cycles. This is done by using a narrow band pass filter centered at the SYNC frequency (562). The filter is active during the SYNC time period (563) within the SYNC time interval (561). When the filter's output crosses a certain amplitude level relative to the SYNC constant amplitude (551), binary data of 1 and 0 can be interpreted within this period. After the binary data is identified, a data structure can be created, as illustrated in FIG. 8: the SOF (451) may be considered as, but not limited to, a unique predefined binary pattern used to identify the start of the next frame, enabling accumulation of binary bits to create the GSM (452) and the data (453).
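    The band-pass detection and thresholding steps can be sketched as follows, with the Goertzel algorithm standing in for the narrow band pass filter (an assumed implementation choice); the per-symbol window length and threshold are illustrative:

```python
import numpy as np

def goertzel_power(samples, fs, freq):
    """Signal power at `freq` (Goertzel algorithm), acting as a narrow
    band pass measurement centered at the SYNC frequency."""
    n = len(samples)
    k = round(n * freq / fs)
    coeff = 2 * np.cos(2 * np.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def detect_bits(signal, fs, freq, window, threshold):
    """Interpret each window as binary 1 (SYNC tone present, i.e. power
    above `threshold`) or binary 0 (tone absent)."""
    bits = []
    for start in range(0, len(signal) - window + 1, window):
        power = goertzel_power(signal[start:start + window], fs, freq)
        bits.append(1 if power > threshold else 0)
    return bits
```

    For a tone of amplitude A filling an N-sample window on an exact frequency bin, the measured power is about (A·N/2)², so a threshold well below that value, chosen relative to the SYNC constant amplitude (551), separates the two states.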

    [0119] The system copies the moment of detecting the end of the SOF (451). This moment is recorded from the RTC and is used to precisely generate the antiphase. This moment is defined in the present text as the SYNC moment (454) as shown in FIG. 8.

    [0120] Separating the predefined AAAS from the combined signal is done by eliminating the SYNC package (450) from the combined signal by using a narrow band stop filter during the SYNC time period (563), or by other means.
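    As one example of the "other means" mentioned above, the SYNC period can simply be gated out of each interval in the time domain; the short fade that follows the gap is an assumed smoothing choice, not from the patent:

```python
import numpy as np

def remove_sync(combined, interval_len, burst_len, fade=4):
    """Zero out the SYNC period at the start of each SYNC interval,
    leaving (approximately) the predefined AAAS samples."""
    out = combined.astype(float).copy()
    for start in range(0, len(out), interval_len):
        end = min(start + burst_len, len(out))
        out[start:end] = 0.0
        # Short linear fade-in after the gap to avoid an audible click.
        ramp_end = min(end + fade, len(out))
        out[end:ramp_end] *= np.linspace(0.0, 1.0, ramp_end - end,
                                         endpoint=False)
    return out
```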

    [0121] The SYNC moment at each of the two received channels (the acoustical and the electrical) is resolved and attached to the corresponding block, as shown in FIG. 10 (see the identification of GTT and RTT). The attaching action is called Time Tagging. The SYNC moment of each of the channels is called the Received Time Tag, abbreviated as RTT. Since the transition through the electrical channel is fast, it is reasonable to assume that the Generated Time Tag (GTT) is almost equal to the RTT of the electrical channel.

    [0122] In order to find and define the acoustical channel's distortion and to generate the antiphase AAAS, the system, its algorithm illustrated in FIG. 11, logically changes its state among the following four states:

    (1) Calibration of the secondary paths state. This is an off-line initial calibration state, performed during system installation and in as sterile (undisturbed) an environment as possible, i.e. with no predefined noise active and, as much as possible, no other noise. In this state, the acoustic channel's distortion is calculated by generating white noise and a SYNC signal from the loudspeakers, which are then received by the microphones. This state is intended to resolve the system's secondary paths, marked S1(z).
    (2) Validation of the secondary paths estimation. This is an off-line fine calibration state, used to validate the initial calibration, also performed in as sterile an environment as possible. The system tries to attenuate SYNC signals only (no AAAS) with the previously calculated FIR, while using the estimated secondary path, marked S(z). If the attenuation does not succeed, then the system tries to calibrate again with a higher FIR order.
    (3) An on-line state, called the Idle state. This state is intended to resolve the primary path distortion while the system is already installed and working; the SYNC signal has a relatively low amplitude and still the SNR (SYNC signal relative to the received signal (72) at the quiet zone) is above a certain minimum level. In this state, the SYNC signal component of the combined predefined AAAS+ASYNC signal (84) is used to adapt the distortion function's parameters, referred to as P1(z), i.e. the system employs its FxLMS mechanism to find the FIR parameters W(z) that minimize the SYNC component of the combined signal. The idea is that the same filter shall likely attenuate the predefined AAAS component of the combined signal. The system uses this FIR to generate the antiphase AAAS signal. When the SNR degrades, or when the SYNC signal is not detected, the system moves to the Busy state.
    (4) An on-line state, called the Busy state, where the system is already installed and working, and the acoustic channel's distortion W(z) is known from the previous states. The SNR (SYNC signal relative to the received signal (72) at the quiet zone) is low, so the system uses the last known FIR to generate the antiphase AAAS signal. Additionally, the system increases the SYNC signal amplitude to regain the minimal required SNR, and thus moves back to the Idle state.
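    The four states and their transitions can be sketched as a small state machine; the condition flag names are assumptions, not terms from the patent:

```python
from enum import Enum, auto

class State(Enum):
    CALIBRATION = auto()   # off-line: resolve secondary paths S1(z)
    VALIDATION = auto()    # off-line: validate by attenuating SYNC only
    IDLE = auto()          # on-line, SNR ok: adapt W(z) via FxLMS
    BUSY = auto()          # on-line, SNR low: use the last known FIR

def next_state(state, calibrated=False, validated=False, snr_ok=False):
    """Return the next system state given the current conditions."""
    if state is State.CALIBRATION:
        return State.VALIDATION if calibrated else State.CALIBRATION
    if state is State.VALIDATION:
        # On failure the system re-calibrates with a higher FIR order.
        return State.IDLE if validated else State.CALIBRATION
    # IDLE and BUSY toggle on the SYNC-to-received-signal SNR; in BUSY
    # the SYNC amplitude is raised to regain the SNR and return to IDLE.
    return State.IDLE if snr_ok else State.BUSY
```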

    [0123] While off-line, i.e. while the system is not yet in use, it needs to undergo a calibration procedure for the secondary paths, marked S1(z) in FIG. 11: DSP2 generates white noise through the quieting loudspeaker (82), instead of the antiphase AAAS+ASYNC (86), which is received by the microphone (62) at the quiet zone. Then DSP1 and DSP2, respectively, analyze the received signals and produce the secondary acoustical channel's response to audio frequencies.

    [0124] The calibration procedure continues in the fine calibration state, described earlier, in order to validate the calibration. The validation is done with a well-defined SYNC signal (38) generated by DSP2, broadcast by the loudspeaker (82), and received at the quiet zone by the microphone (62), as described earlier. Several frequencies, e.g. on the MEL scale, are deployed. At the quiet zone, DSP2, acting as the FxLMS controller shown in FIG. 11, updates the model of the acoustical channel W(z) (e.g. based on an FIR filter) by employing the FxLMS mechanism, where the broadcasted signals are known and expected. The signal to minimize is QAAS+QASYNC (72). When the minimization process reaches a required level, it means that the difference between a received signal and the system's output on the quieting loudspeaker (82) is minimal; thus, the filter has estimated the channel with high fidelity.
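    A minimal FxLMS adaptation loop of the kind referred to above can be sketched as follows. The tap count, step size and the toy secondary-path estimate are illustrative assumptions; a real deployment would use the S1(z) estimate obtained in the calibration state:

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=16, mu=0.01):
    """Adapt FIR weights W(z) so that the anti-signal, after passing
    through the (estimated) secondary path s_hat, cancels d(n).
    x : reference signal (here, the known SYNC signal)
    d : signal measured at the error microphone
    Returns the adapted weights and the residual-error history."""
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)       # reference history for W(z)
    fx_buf = np.zeros(n_taps)      # filtered-reference history
    y_buf = np.zeros(len(s_hat))   # anti-signal history through s_hat
    s_buf = np.zeros(len(s_hat))   # reference history for s_hat
    err = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y_buf = np.roll(y_buf, 1); y_buf[0] = w @ x_buf   # anti-signal
        e = d[n] - s_hat @ y_buf          # residual at the microphone
        s_buf = np.roll(s_buf, 1); s_buf[0] = x[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ s_buf
        w += mu * e * fx_buf              # LMS update on the filtered x
        err[n] = e
    return w, err
```

    Minimizing the SYNC residual this way yields the W(z) that, per the Idle-state description above, is then applied to the predefined AAAS as well.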

    [0125] In the Idle state, the SYNC signal is transmitted at relatively low amplitude, while the antiphase AAAS signal is generated to interfere with the predefined AAAS as received at the quiet zone. The FIR parameters, W(z), are continuously updated by using the FxLMS mechanism to minimize the residual of the ASYNC (83) against its antiphase. In this on-line state, the predefined AAAS flows through the filter whose parameters are defined by the SYNC signal, thus generating an antiphase both to the predefined AAAS and to the SYNC. When no SYNC is detected by DSP2, or when SNR degradation (of the SYNC relative to the received signal) is observed (by means of SYNC cancelation), the updating is halted and the system moves to the Busy state. The system shall re-enter the Idle state when the SNR rises above a certain threshold again.

    [0126] In the Busy state, the SYNC signal is transmitted at relatively low amplitude. In this state the system generates the antiphase by using the acoustic channel's distortion parameters W(z), as recently calculated.

    [0127] The current FIR parameters are used for the active noise cancelation.

    [0128] Focus is now turned to the flow of the SYNC signal along with the predefined AAAS, until the antiphase is precisely generated:

    [0129] The predefined AAAS is digitally acquired into the system, i.e. converted to electrical signals. This is done by positioning a microphone (32) as close as possible to the noise source (90), as shown in FIG. 3, or directly from an electronic system, as shown in FIG. 4. In either case, the acquired predefined AAAS is referred to as EAAS.

    [0130] The electrically converted noise signals, referred to as EAAS, are integrated in the mixing box (34) with the SYNC signal (38). The integrated signals are amplified by the amplifier (33). The integrated electrically converted signals are referred to as EAAS+ESYNC (41).

    [0131] As mentioned earlier, the SYNC signal (38), generated by DSP1 (42) at the SYNC and transmitting component (40), is converted to an acoustic signal, referred to as ASYNC (83). The SYNC signal is amplified by the audio amplifier (33) and broadcast into the air by either, but not limited to, a dedicated loudspeaker (81), as shown in FIG. 3, or a general (commonly used) audio system's loudspeaker (80), as shown in FIG. 4. In both cases (shown in the figures) the acoustic signal ASYNC (83) and the AAS (91) merge in the air. The merged signals are referred to as AAAS+ASYNC (84). On the way to the microphone Emic (62) in the quiet zone, the merged signals (84) are distorted by P1(z), as shown in FIG. 11. The merged signals (84) are the ones that the signal from the quieting loudspeaker (82) cancels.

    [0132] While the AAAS+ASYNC (84) leaves the Multiplexing and broadcasting component (30), the combined signal EAAS+ESYNC (41) is forwarded, with negligible time difference, to the transmitting component (43), which transmits it either by wire or wirelessly toward a corresponding receiver (52) in the quieting component (50).

    [0133] The electrically transmitted signal TEAAS+TESYNC (39) is a combination of the electrically transmitted audio information of the AAAS, referred to as TEAAS, and the electrically transmitted SYNC information, referred to as TESYNC.

    [0134] The electrical channel is robust; thus, the data at the receiver's output (78) is received exactly as the data at the transmitter's input (39), with no loss, no further distortion, and negligible delay.

    [0135] In the quieting component (50) the receiver (52) forwards the integrated signals, referred to as QEAAS+QESYNC (78), to DSP2 (54).

    [0136] DSP2 (54) executes a separation algorithm whose input is the combined signal QEAAS+QESYNC (78) and whose outputs are two separate signals: QEAAS and QESYNC.

    [0137] At this point DSP2 (54) saves the following in its memory:

    1) the GSM (452) as it appeared in the QESYNC package, as shown in FIG. 8;
    2) the RTT, which is the accurate time at which the specific QESYNC (78) package was received by DSP2;
    3) the QEAAS data (453), as shown in FIG. 8.

    [0138] The three elements together are referred to as an Eblock. DSP2 (54) stores the Eblock in its memory.

    [0139] In the quieting component (50) the microphone Emic (62), positioned at the edge of the quiet zone (63), acquires the acoustical signal in the quiet zone's vicinity. This signal is comprised of the AAAS+ASYNC (84) signal, distorted by the acoustic channel, and also of the surrounding voices in the quiet zone's vicinity, referred to as the QAAS signal (94) shown in FIG. 6. In FIG. 11, which describes the algorithm deployed in this invention, the SYNC signal is represented as SYNC(n); the undesired noise is represented as x(n); and the surrounding voices QAAS are represented as y(n), which may be slightly distorted due to residual noises.

    [0140] The acquired integrated signals, referred to as QAAS+QAAS+QASYNC (72), are forwarded to DSP2 (54).

    [0141] DSP2 (54) executes a separation algorithm whose input is the combined signal QAAS+QAAS+QASYNC (72). This is the same separation algorithm as was previously described for QEAAS and QESYNC, processed on the combined signal QEAAS+QESYNC (78) coming from the receiver (52). At this point its outputs are two separate signals: QAAS+QAAS and QASYNC.

    [0142] At this point DSP2 (54) saves the following in its memory:

    1) the GSM (452) as it appears in the QASYNC package, as shown in FIG. 8;
    2) the RTT, which is the accurate time at which the specific QASYNC (72) package was received by DSP2;
    3) the QAAS+QAAS data (453), as shown in FIG. 8.

    [0143] The three elements together are referred to as an Ablock. DSP2 (54) stores the Ablock in its memory.

    [0144] DSP2 (54) executes a correlation algorithm as follows: DSP2 takes the GSM written in the most recent Ablock and searches the memory for an Eblock having the same GSM. This is done in order to locate two corresponding blocks that represent the same interval, but with a delay.
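    The correlation step can be sketched as a lookup keyed on the GSM; the dictionary layout of the Eblocks and Ablocks is illustrative, not the patent's actual memory format:

```python
def find_matching_eblock(ablock, eblocks):
    """Return the stored Eblock carrying the same GSM as `ablock`, i.e.
    the electrical-channel record of the same SYNC interval; the
    acoustic-channel delay is the difference of the two RTTs."""
    for eblock in reversed(eblocks):   # search most recent blocks first
        if eblock["gsm"] == ablock["gsm"]:
            return eblock
    return None
```

    Because the GSM is a freely wrapping 10-state counter, searching backward from the most recent Eblock keeps the match within the correct wrap-around period.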

    [0145] DSP2 then extracts the QEAAS data from the Eblock.

    [0146] DSP2 uses the most recent acoustical channel's RTT in order to time the antiphase generator with the Eblock's data, as shown in FIG. 7.

    [0147] DSP2 (54) continuously calculates the acoustic channel's response to the repetitive SYNC signal, as described earlier for the Idle state. Since the Eblock is stored in the memory long enough before DSP2 needs it for its calculations; since the FIR filter, represented as W(z) in FIG. 11, is adaptive; since the secondary channel path S1(z) is known; and since the precise moment at which DSP2 must transmit the antiphase is known; it is possible to accurately and precisely generate the acoustical antiphase AAAS.
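
For illustration of the adaptive FIR filter W(z), a minimal single-channel LMS adaptation is sketched below. This is a generic LMS sketch, not the claimed algorithm; the tap count and step size are arbitrary illustrative values. When the error vanishes, W(z) has learned the channel from the reference input to the microphone, so filtering the reference and negating it yields an antiphase estimate.

```python
import numpy as np

def lms_adapt(x, d, taps=8, mu=0.02):
    """One-pass LMS adaptation of an FIR filter (illustrative parameters).

    x -- reference input (here, the electrically received signal)
    d -- desired signal (here, the signal observed at the quiet-zone edge)
    Returns the adapted weights and the error sequence.
    """
    w = np.zeros(taps)
    err = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-taps+1]
        y = w @ window                          # filter output
        err[n] = d[n] - y                       # residual at the microphone
        w += 2 * mu * err[n] * window           # LMS weight update
    return w, err
```

In a full active-noise-control system the update is typically filtered through the known secondary path S1(z) (the FxLMS variant); that refinement is omitted here for brevity.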

    [0148] After the signal is converted back to analog by the DAC converter (88) and amplified (56), it is forwarded to the loudspeaker (82). This signal has the precisely calculated delay (as previously explained), i.e. the antiphase signal is broadcasted at exactly the right moment relative to the incoming AAAS+ASYNC (84) acoustic signal as heard at the edge of the quiet zone, as shown in FIG. 6.

    [0149] The process described above is repeated sequentially for every block, i.e. for each SYNC interval (561) shown in FIG. 9, thus ensuring sound continuity and also compensating for physical variations that may occur, such as relative movement, reverberations and frequency response variations.

    [0150] The acoustic antiphase wave AAAS+ASYNC (86), generated by DSP2 (54) and broadcasted by the quieting loudspeaker (82), precisely matches in timing and momentary antiphase amplitude the AAAS+ASYNC (84) as heard at the quiet zone's edge (63). The two acoustic waves interfere with each other, thus significantly reducing the AAAS signal(s) (91) in the quiet zone.
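
The destructive interference can be illustrated numerically (illustrative numbers only: a 50 Hz tone at a 1 kHz sample rate): a perfectly timed, amplitude-matched antiphase cancels the tone entirely, while even a one-sample timing error leaves a clearly audible residual, which is why the precise timing described above matters.

```python
import numpy as np

def residual_after_cancellation(wave, antiphase):
    """Sum the incoming wave and the broadcast antiphase at the quiet zone."""
    return wave + antiphase

t = np.arange(0, 1, 1 / 1000)              # 1 s at 1 kHz (illustrative)
aaas = np.sin(2 * np.pi * 50 * t)          # the predefined noise tone
quiet = residual_after_cancellation(aaas, -aaas)          # perfect match
late = residual_after_cancellation(aaas, -np.roll(aaas, 1))  # 1-sample error
```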

    [0151] Optionally, in order to further reduce the residual AAAS inside the quiet zone (63), an additional microphone, marked (70) in FIG. 6, may be used. This microphone is located in the quiet zone, preferably near its center, and receives the residual predefined AAAS that originates from incomplete coherence between the incoming predefined AAAS and the generated antiphase AAAS.

    [0152] Since the broadcasting of the matched antiphase AAAS in the quiet zone depends on the predefined AAAS as received by microphone EMIC (62) at the quiet zone's edge, it is possible to vary the quiet zone's location according to the user's desires or constraints (i.e. to dynamically change the quiet zone's location within the area). The location is changed by moving the microphone EMIC (62), the antiphase quieting loudspeaker (82), and the optional microphone IMIC (70), if in use, to the new desired quiet zone location.

    [0153] The precise timing and momentary amplitude of the antiphase AAAS+ASYNC (86) broadcasted by the quieting loudspeaker (82) against the predefined AAAS+ASYNC (84) broadcasted by the loudspeakers (80, 81), as shown in FIG. 6, provide a quiet zone (63) where the QAAS (94) can still be heard (QAAS are sounds such as, but not limited to, speaking and/or conversing near or at the quiet zone) while the predefined AAAS is not heard inside.

    [0154] The present invention ensures that the listeners are not disturbed by the presence of the SYNC signals in the air: according to FIG. 9, the amplitude of the broadcasted synchronization signal (551) is substantially smaller than the audio amplitude of the predefined AAAS (553); thus, the SYNC signals are not heard by the listeners. Additionally, the SYNC signal amplitude is controlled by DSP2, as described earlier, by moving between the system states Idle and Busy. This SYNC structure does not disturb human hearing, while not distorting the predefined AAAS outside of the quiet zone or the QAAS within the quiet zone.
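
A minimal sketch of the state-dependent SYNC level control could look as follows. This is purely illustrative: the -30 dB margin and the full-level Idle probe are assumed figures for the example, not values taken from the specification.

```python
def sync_gain(state, aaas_rms, ratio_db=-30.0):
    """Choose a SYNC broadcast amplitude well below the programme level.

    Illustrative sketch: while Busy, the SYNC level tracks the AAAS RMS at a
    fixed margin (ratio_db is an assumed figure) so it stays inaudible under
    the programme; while Idle, with no programme to cancel, the SYNC is sent
    at a reference probe level for channel calibration.
    """
    if state == "idle":
        return 1.0                                 # reference probe level
    return aaas_rms * 10 ** (ratio_db / 20.0)      # e.g. -30 dB below AAAS
```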

    [0155] As presented in FIG. 8, each SYNC package (450) includes a well-defined GSM (452) which is associated with the time at which the SYNC was generated. As illustrated in FIG. 10, the GSM time tag enables DSP2 (54) to uniquely identify the specific package that was earlier extracted from QEAAS+QESYNC (78), according to the GSM time tag most recently extracted from QAAAS+QAAS+QASYNC (72). The identification ensures reliable and complete correlation of the audio signal between the electrically-stored signal, which is used to build the antiphase signal, and the incoming acoustic signal at the quiet zone.

    [0156] Furthermore, optionally, as illustrated in FIG. 8, the SYNC signal may include additional data (453) such as, but not limited to, instruction codes to activate parts of the quieting system upon request, need, demand, or future plans, and/or other data.

    [0157] The generation of the antiphase acoustic signal, based on the previously acquired electrical acoustic signal, enables cancellation of the predefined audio noise signals only, in the quiet zone, without interfering with other surrounding or in-zone audio signals.

    [0158] Building the antiphase acoustic signal from the pre-acquired electrical acoustic signal significantly improves the predefined AAAS attenuation at the high end of the audio frequency spectrum, where prior-art systems are limited.

    [0159] The repetitive updating of the antiphase acoustic signal in the quiet zone, in timing and momentary amplitude, ensures that the antiphase signal is updated according to changes in the environment, such as the relative locations of the components or of the listeners in the quiet zone.

    [0160] It should be clear that the description of the embodiments and attached Figures set forth in this specification serves only for a better understanding of the invention, without limiting its scope.

    [0161] It should also be clear that a person skilled in the art, after reading the present specification, could make adjustments or amendments to the attached Figures and the above-described embodiments that would still be covered by the present invention.