NOISE SOURCE VISUALIZATION DATA ACCUMULATION AND DISPLAY DEVICE, METHOD, AND ACOUSTIC CAMERA SYSTEM

20170337938 · 2017-11-23

    Abstract

    A noise source visualization data accumulation and display device is provided, wherein two or more acoustic data are generated by beamforming acoustic signals acquired at different moments by using a plurality of microphone arrays, and thereafter one selected among the two or more acoustic data, or acoustic data processed therefrom, is mapped to one optical image to be displayed.

    Claims

    1. A noise source visualization data accumulation and display device, wherein two or more acoustic data (D1 and D2) are generated by beamforming acoustic signals acquired at different moments by using a plurality of microphone arrays, and wherein thereafter one selected among the two or more acoustic data, or acoustic data (M3) processed therefrom, is mapped to one optical image to be displayed.

    2. A noise source visualization data accumulation and display method, comprising: a step (S10) of providing an acoustic and image signal acquiring means (100) configured to include acoustic sensors (10) disposed to be spaced apart on a curve or a plane at a regular interval to sense an acoustic signal of a noise source, an acoustic signal acquiring unit (20) converting the acoustic signal received from the acoustic sensors (10) into a digital signal and transmitting the digital signal to a central processing unit (40), and a pick-up lens (30) picking up an optical image of the noise source; an initial signal acquiring step (S20) in which the acoustic and image signal acquiring means (100) acquires the acoustic signal and the image of the noise source during a first time frame (T1); an initial analysis step (S30) in which the central processing unit (40) calculates beam power (P.sub.ij) of each point based on the acoustic signal acquired during the first time frame (T1) to generate first acoustic data and generates image data based on a signal of the pick-up lens (30); an initial expression step (S40) in which a display unit (50) coordinates the first acoustic data and the image data calculated by the central processing unit (40) and overlays the first acoustic data and the image data to visually express them; an accumulation signal acquiring step (S50) in which the acoustic signal acquiring unit (20) acquires the acoustic signal of the noise source during a second time frame (T2) which is temporally later than the first time frame (T1); an accumulation signal analyzing step (S60) in which the central processing unit (40) calculates the beam power (P.sub.ij) of each point based on the acoustic signal acquired during the second time frame (T2) to generate accumulated acoustic data; and an accumulation expression step (S70) in which the display unit (50) overlays and maps, onto the image data, an acoustic matrix (M3) calculated by using the second acoustic data (D2) and the initial acoustic data (D1), or the second acoustic data (D2) and the initial acoustic data (D1) themselves, to visually express the data mapped to the image data.

    3. The noise source visualization data accumulation and display method of claim 2, wherein in the initial analysis step (S30) or the accumulation signal analyzing step (S60), when a value calculated by using a difference value of at least two beam power values selected among beam power (P.sub.ij) values of each point calculated based on the acoustic signal during one time frame is larger than a predetermined value, the central processing unit (40) treats the value as the effective acoustic data to map the value to the image data and overlay and display the value mapped to the image data or make the value as a triggering signal of data storing.

    4. The noise source visualization data accumulation and display method of claim 3, wherein in the initial analysis step (S30) or the accumulation signal analyzing step (S60), when a difference of a maximum value (P.sub.max) and a minimum value (P.sub.min) among the beam power (P.sub.ij) values is larger than a predetermined reference value (ΔP1) or a difference of the maximum value (P.sub.max) and an average value (P.sub.mean) is larger than a predetermined reference value (ΔP2), the central processing unit (40) treats the value as the effective acoustic data to map and overlay and display the value to the image data or make the value as the triggering signal of the data storing.

    5. The noise source visualization data accumulation and display method of claim 4, wherein when a standard deviation value of the beam power (P.sub.ij) values of each point calculated based on the acoustic signal acquired during one time frame is larger than a predetermined reference, the central processing unit (40) determines that effective noise is generated and treats the effective noise as the effective acoustic data to map the value to the image data and overlay and display the value mapped to the image data or make the value as the triggering signal of the data storing.

    6. An acoustic camera system for a noise source visualization data accumulation and display, comprising: acoustic sensors (10) disposed to be spaced apart on a curve or a plane at a regular interval to sense an acoustic signal of a noise source; an acoustic signal acquiring unit (20) converting the acoustic signal received from the acoustic sensors (10) into a digital signal and transmitting the digital signal to a central processing unit (40); a pick-up lens (30) picking up an optical image of the noise source; a central processing unit (40) calculating the beam power (P.sub.ij) of each point based on the acoustic signal acquired during a time frame to generate acoustic data and generate image data based on a signal of the pick-up lens (30); and a display unit (50) coordinating the acoustic data and the image data calculated by the central processing unit (40) and overlaying the acoustic data and the image data to visually express them, wherein the central processing unit (40) generates at least two or more sound visualization images by beamforming the acoustic signals acquired at different moments by using the acoustic sensors (10) and the acoustic signal acquiring unit (20), and thereafter maps an acoustic matrix (M) calculated by using accumulated acoustic data (D2) and initial acoustic data (D1), or the accumulated acoustic data (D2) and the initial acoustic data (D1) themselves, onto one optical image and displays the acoustic matrix (M) mapped onto the optical image.

    7. The acoustic camera system of claim 6, wherein the central processing unit (40) generates at least two or more acoustic data by beamforming the acoustic signals acquired at different moments, and thereafter normalizes the generated acoustic data and overlays and maps the normalized acoustic data onto the optical image to display the acoustic data mapped onto the optical image.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0016] FIGS. 1A to 1D are explanatory diagrams of a beamforming concept.

    [0017] FIGS. 2A and 2B are configuration diagrams of a source noise visualization data accumulation display acoustic camera system.

    [0018] FIG. 3 is a flowchart of a noise source visualization data accumulation display method.

    [0019] FIGS. 4A (overall level: 45.1 dB) and 4B (overall level: 57.7 dB) illustrate two acoustic data (beam power levels) measured and analyzed at the same position at different times.

    [0020] FIGS. 5A and 5B illustrate acoustic data frames acquired by subtracting an average value from the beam power level value (a matrix element value, P.sub.ij) of acoustic data displayed in a matrix.

    [0021] FIG. 5C illustrates an acoustic data frame in which respective matrix element values of the acoustic data frames of FIGS. 5A and 5B are averaged and illustrated.

    [0022] FIG. 6A illustrates an acoustic and image data mapping image before noise source visualization data is accumulatively displayed.

    [0023] FIG. 6B illustrates an acoustic and image mapping image in which noise source visualization data is accumulatively displayed in one image according to the present invention.

    [0024] FIG. 7 illustrates a diagram showing values of each time frame according to the present invention.

    DETAILED DESCRIPTION OF THE INVENTION

    [0025] FIGS. 1A and 1B are explanatory diagrams of a beamforming concept. A beamforming technique may be described through the equations given below. y.sub.i(t) represents a signal measured by an i-th microphone and z(t) represents a beam output. When the respective signals measured by M microphones are each given a time delay corresponding to a virtual sound source direction, multiplied by a sensor-specific weight, and then added, beam power may be acquired. When the directions of an actual sound source and the virtual sound source coincide with each other, the signals are amplified. By such a method, the position of a noise source may be estimated. The beam output may be expressed by the equation given below.

    [00001] b(t) = Σ.sub.j p.sub.j(t − Δt.sub.j)

    [0026] When a virtual noise source is present at a predetermined position, the size of the noise source may be expressed by the equation given below.

    [00002] s(t) = Σ.sub.j (1/r.sub.j) e.sup.i[kr.sub.j − ω(t + Δt.sub.j)]
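    The delay-and-sum relation above can be sketched in code. The following is a minimal illustration, not taken from the patent: the function name `delay_and_sum` is hypothetical, steering delays Δt.sub.j are rounded to whole samples, and uniform sensor weights 1/M are assumed where the text only says "a weight according to a sensor".

```python
import numpy as np

def delay_and_sum(signals, delays, fs, weights=None):
    """Delay-and-sum beam output: b(t) = sum_j w_j * p_j(t - dt_j).

    signals : (M, N) array of microphone signals p_j sampled at rate fs
    delays  : length-M steering delays dt_j (seconds) toward the virtual source
    weights : optional length-M sensor weights w_j (uniform 1/M if omitted)
    """
    M, N = signals.shape
    if weights is None:
        weights = np.ones(M) / M
    shifts = np.round(np.asarray(delays) * fs).astype(int)  # dt_j in samples
    out = np.zeros(N)
    for j in range(M):
        # p_j(t - dt_j): shift the j-th signal later by dt_j, then accumulate
        out += weights[j] * np.roll(signals[j], shifts[j])
    return out
```

    When the steering delays match the true propagation delays, the shifted pulses align and the beam output is amplified, which is how the source direction is found by scanning candidate delay sets.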

    [0029] As illustrated in FIGS. 2 to 6, the noise source visualization data accumulation display method of the present invention is configured to include a step of providing an acoustic and image signal acquiring means 100 (S10), an initial signal acquiring step (S20), an initial analysis step (S30), an initial expression step (S40), an accumulation signal acquiring step (S50), an accumulation signal analyzing step (S60), and an accumulation expression step (S70).

    [0030] <Steps S10 and S20>

    [0031] As illustrated in FIGS. 2A, 2B, and 3, in the step of providing the acoustic and image signal acquiring means 100 (S10), provided is an apparatus configured to include MEMS acoustic sensors 10 disposed to be spaced apart on a curve or a plane at a regular interval to sense an acoustic signal of a noise source, an acoustic signal acquiring unit 20 converting the acoustic signal received from the MEMS acoustic sensors 10 into a digital signal and transmitting the digital signal to a central processing unit 40, and a pick-up lens 30 picking up an optical image of the noise source.

    [0032] Herein, in association with the MEMS acoustic sensors 10, a micro electro mechanical system (MEMS) is a technology that integrates a micromechanical component having a micrometer size together with an electronic circuit by applying a semiconductor manufacturing process. A MEMS microphone measures the mechanical deformation of a thin film, caused by the pressure applied to the thin film, through a change in capacitance between electrodes mounted in the thin-film sensor, and has the same operating principle as a general condenser microphone. Since the MEMS microphone converts the analog signal directly into a digital pulse density modulation (PDM) signal by using an ADC, it has the advantage that the separate, expensive ADC measurement device required when measuring with an analog sensor is not needed.

    [0033] In the initial signal acquiring step (S20), the acoustic and image signal acquiring means 100 acquires the acoustic signal and the image of the noise source during a first time frame T1. In practice, the acoustic signal acquiring unit 20 may measure the analog signal at a continuous time interval without a pause period (only a time section recognized as effective data by the analysis and determination of the central processing unit is displayed and stored later).

    [0034] <Steps S30 and S40>

    [0035] In an exemplary embodiment (FIGS. 4A and 6A), in a first acoustic data generating step, the central processing unit 40 generates a beam power level matrix for each point of the noise source illustrated in FIG. 4A. In the initial analysis step (S30), the central processing unit 40 calculates beam power P.sub.ij of each point based on the acoustic signal acquired during the first time frame T1 to generate first acoustic data and generate image data based on a signal of the pick-up lens 30.

    [0036] Herein, the acoustic data may be the beam power (P.sub.ij) level of each point itself or a matrix type acoustic numerical value generated based on the beam power (P.sub.ij) level of each point. The matrix type acoustic numerical value generated based on the beam power (P.sub.ij) level may be, for example, a value acquired by subtracting an average value from the beam power (P.sub.ij) level of each point as illustrated in FIGS. 5A and 5B. Alternatively, the matrix type acoustic numerical value may be a value normalized by using a maximum value, the average value, an overall level, or the like.
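    The two processed forms of the beam power matrix mentioned above can be sketched as follows. This is an illustrative sketch with hypothetical function names; the max-based scaling is one assumed choice among the normalization bases the text lists (maximum, average, overall level).

```python
import numpy as np

def mean_subtracted(P):
    # FIGS. 5A/5B style: each matrix element P_ij minus the frame average
    return P - P.mean()

def normalized(P):
    # One possible normalization: divide by the frame maximum (assumption)
    return P / P.max()
```
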

    [0037] In an exemplary embodiment (FIG. 6A-T1), in the initial expression step (S40), a first image of FIG. 6A is expressed. In the initial expression step (S40), a display unit 50 coordinates the first acoustic data and image data calculated by the central processing unit 40 and overlays the first acoustic data and image data to visually express the first acoustic data and image data.

    [0038] <Step S50>

    [0039] In an exemplary embodiment, in the accumulation signal acquiring step (S50), the MEMS acoustic sensors 10 and the acoustic signal acquiring unit 20 acquire the acoustic signal of the noise source during a second time frame T2 which is temporally later than the first time frame T1. In practice, the acoustic signal acquiring unit 20 will measure the analog signal at a continuous time interval without a pause period (only a time section recognized as effective data by the analysis and determination of the central processing unit is displayed or stored later).

    [0040] <Step S60>

    [0041] In an exemplary embodiment (FIGS. 4B and 6A-T2), in the accumulation signal analyzing step (S60), the central processing unit 40 generates the acoustic matrix illustrated in FIG. 4B.

    [0042] The central processing unit 40 calculates the beam power P.sub.ij of each point based on the acoustic signal acquired during the second time frame T2 to generate accumulated acoustic data. Herein, the acoustic data may be the beam power (P.sub.ij) level of each point itself or a matrix type acoustic numerical value generated based on the beam power (P.sub.ij) level of each point, for example, a value acquired by subtracting an average value from the beam power (P.sub.ij) level of each point as illustrated in FIGS. 5A and 5B, or a value normalized by using a maximum value, the average value, an overall level, or the like.

    [0043] <Step S70>

    [0044] In an exemplary embodiment (FIGS. 5C and 6B), as illustrated in FIG. 6B, in the accumulation expression step (S70), the display unit 50 maps and displays, in one optical image, the second acoustic data D2 and the initial acoustic data D1, or the second to fifth acoustic data D2, D3, D4, and D5 and the initial acoustic data D1.

    [0045] Herein, as illustrated in FIG. 5C, the central processing unit 40 may generate an acoustic matrix M3 calculated by using the second acoustic data D2 and the initial acoustic data D1 and map and display the generated acoustic matrix M3 in the optical image. In FIG. 5C, the acoustic matrix M3 calculated by using the second acoustic data D2 and the initial acoustic data D1 represents M.sub.ij acquired by averaging the data of FIGS. 5A and 5B (in this case, the calculation means an averaging operation).
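    The element-wise averaging that produces M.sub.ij in FIG. 5C can be sketched as below; the function name is hypothetical and the two-frame average is the specific calculation the paragraph describes.

```python
import numpy as np

def accumulate_matrix(D1, D2):
    # M3_ij = (D1_ij + D2_ij) / 2, the averaging operation of FIG. 5C
    return (np.asarray(D1, dtype=float) + np.asarray(D2, dtype=float)) / 2.0
```
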

    [0046] Thereafter, steps S50, S60, and S70 are repeated, for example, one to ten times, and in some cases even more times, to display noise source generation degrees at different time zones in one screen.

    [0047] <Determination of Effective Data and Triggering>

    [0048] In the initial analysis step (S30) or the accumulation signal analyzing step (S60), when a value calculated by using a difference value of at least two beam power values selected among beam power (P.sub.ij) values of each point calculated based on the acoustic signal during one time frame is larger than a predetermined value, the central processing unit 40 treats the value as the effective acoustic data to map the value to the image data and overlay and display the value mapped to the image data or make the value as a triggering signal of data storing.

    [0049] In the initial analysis step (S30) or the accumulation signal analyzing step (S60), when a difference of a maximum value P.sub.max and a minimum value P.sub.min among the beam power (P.sub.ij) values is larger than a predetermined reference value ΔP1 or a difference of the maximum value P.sub.max and an average value P.sub.mean is larger than a predetermined reference value ΔP2, the central processing unit 40 treats the value as the effective acoustic data to map and overlay and display the value to the image data or make the value as the triggering signal of the data storing. When a standard deviation value of the beam power (P.sub.ij) values of each point calculated based on the acoustic signal acquired during one time frame is larger than a predetermined reference, the central processing unit 40 determines that effective noise is generated and treats the effective noise as the effective acoustic data to map the value to the image data and overlay and display the value mapped to the image data or make the value as the triggering signal of the data storing.
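    The three effective-data criteria above (max−min, max−mean, standard deviation, each against its own reference value) can be sketched as a single predicate. This is an assumed combination with hypothetical names; the patent does not state whether the criteria are combined or applied individually.

```python
import numpy as np

def is_effective(P, dP1, dP2, dP3):
    """Return True when the beam power matrix P indicates effective noise:
    max-min > dP1, or max-mean > dP2, or standard deviation > dP3.
    (Combining the criteria with 'or' is an assumption.)"""
    P = np.asarray(P, dtype=float)
    return bool((P.max() - P.min() > dP1)
                or (P.max() - P.mean() > dP2)
                or (P.std() > dP3))
```

    A True result would then trigger mapping and overlay of the frame onto the image data, or act as the data-storing trigger signal.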

    [0050] As illustrated in FIGS. 2 to 6, the noise source visualization data accumulation display acoustic camera system of the present invention is configured to include the MEMS acoustic sensors 10, the acoustic signal acquiring unit 20, the pick-up lens 30, the central processing unit 40, and the display unit 50. As illustrated in FIG. 2B, the MEMS acoustic sensors 10 are disposed on a curve or a plane (not illustrated) to be spaced apart from each other at a regular interval to sense the acoustic signal of the noise source. The acoustic signal acquiring unit 20 converts the acoustic signals received from the MEMS acoustic sensors 10 into digital signals and transmits the digital signals to the central processing unit 40.

    [0051] The pick-up lens 30 picks up an optical image of the noise source. The central processing unit 40 calculates the beam power P.sub.ij of each point based on the acoustic signal acquired during a time frame to generate acoustic data and generate image data based on a signal of the pick-up lens 30. The display unit 50 coordinates the acoustic data and the image data calculated by the central processing unit 40 and overlays the acoustic data and the image data to visually express them.

    [0052] In this case, the central processing unit 40 generates at least two or more sound visualization images by beamforming the acoustic signals acquired at different moments by using the MEMS acoustic sensors 10 and the acoustic signal acquiring unit 20 and thereafter, maps the generated sound visualization images onto one optical image acquired from the pick-up lens 30 and displays the sound visualization images mapped to the optical image. The central processing unit 40 maps at least two or more acoustic data by beamforming the acoustic signals acquired at different moments and thereafter, normalizes the generated acoustic data and maps the normalized acoustic data onto the optical image to accumulate and display the acoustic data mapped onto the optical image.
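    The normalize-and-overlay behavior described above can be roughly sketched as follows, assuming a grayscale optical image and acoustic matrices of the same shape. The alpha-blending scheme and function name are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def overlay_accumulated(image, matrices, alpha=0.5):
    """Normalize each acoustic matrix by its maximum, average them into one
    accumulated map, and alpha-blend that map onto the optical image."""
    acc = np.mean([np.asarray(M, dtype=float) / np.max(M) for M in matrices],
                  axis=0)
    return (1 - alpha) * np.asarray(image, dtype=float) + alpha * acc
```
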

    [0053] The present invention has been described in association with the above-mentioned preferred embodiment, but the scope of the present invention is not limited to the embodiment; the scope of the present invention is determined by the appended claims and includes various modifications and transformations falling within a range equivalent to the present invention.

    [0054] Reference numerals disclosed in the appended claims are used merely to assist appreciation of the present invention; the reference numerals do not influence the interpretation of the claims, and the claims should not be construed narrowly on the basis of the disclosed reference numerals.