METHOD FOR DETERMINING ABNORMAL ACOUSTIC SOURCE AND AI ACOUSTIC IMAGE CAMERA
20220381606 · 2022-12-01
Inventors
- Young-Key KIM (Daejeon, KR)
- In-Keon KIM (Daejeon, KR)
- Wook-Jin JEONG (Daejeon, KR)
- Jung-Seop KIM (Daejeon, KR)
CPC classification
G06V10/12
PHYSICS
G06V20/52
PHYSICS
H04N5/272
ELECTRICITY
International classification
G06V10/12
PHYSICS
G06V20/52
PHYSICS
Abstract
Disclosed is an AI acoustic camera including: an acoustic source localizing unit that generates position-specific acoustic level data by determining a position of an acoustic source; an AI acoustic analysis unit that recognizes a type of acoustic source estimated to be an abnormal acoustic source by extracting a regenerated time domain acoustic signal for the acoustic source at the determined position and by AI-learning and recognizing an acoustic feature image of the extracted time domain acoustic signal; an object recognition unit that recognizes a type of object positioned at the acoustic source through image analysis of an area in which the acoustic source is recognized to be positioned; and a determination unit that determines the acoustic source to be a true acoustic source when the type of acoustic source and the type of object have commonality.
Claims
1. A method for determining an abnormal acoustic source using acoustic source localizing and AI acoustic classification, comprising: an acoustic source localizing step of calculating a level of an acoustic source for each position based on acoustic data acquired by a plurality of acoustic sensor arrays; a candidate acoustic source time domain acoustic signal extraction step of extracting a regenerated time domain acoustic signal of a position at which the acoustic source is estimated to be present, based on the level of the acoustic source for each position; an acoustic feature image generation step of generating a color feature image by extracting a feature of the time domain acoustic signal of the candidate acoustic source; an AI acoustic classification step of recognizing the acoustic feature image and performing acoustic classification for the candidate acoustic source by using a pre-learned AI acoustic classification means; and an abnormal acoustic source determination step of determining the acoustic source as the abnormal acoustic source when the acoustic classification for the candidate acoustic source belongs to a predefined monitoring target range.
2. The method for determining the abnormal acoustic source of claim 1, further comprising: an object image classification step of determining a type of object located at the candidate acoustic source by video analysis of a candidate acoustic source coordinate or an adjacent position, wherein, in the abnormal acoustic source determination step, when the acoustic classification and the type of object are both included in a predetermined monitoring target range, the acoustic source is determined as an abnormal acoustic source and an alarm signal is generated.
3. The method for determining the abnormal acoustic source of claim 1, wherein the acoustic source localizing step includes an acoustic data acquisition step of acquiring, by an acoustic data acquisition unit, acoustic data through an acoustic sensor array composed of a plurality of acoustic sensors, and a position-specific acoustic level calculation step of calculating, by an acoustic calculation unit of an acoustic processing unit, a position-specific acoustic level in a direction of the acoustic sensor array; the candidate acoustic source time domain acoustic signal extraction step includes an abnormal acoustic source candidate selection step of selecting, by an abnormal acoustic source candidate selection unit, one position as a local area representative position in at least one local area formed by grouping positions having acoustic levels exceeding a predetermined level, and a regenerated time-axial acoustic signal extraction step of extracting, by the acoustic processing unit, a regenerated time-axial acoustic signal of a local area representative position belonging to an abnormal acoustic source candidate local area; the acoustic feature image generation step generates, by an acoustic feature image generation unit, a color image based on data obtained by feature extraction or conversion of the time-axial acoustic signal of the local area representative position; the AI acoustic classification step recognizes, by an AI acoustic analysis unit, the feature image to classify it as one of pre-learned acoustic scenes; and the abnormal acoustic source determination step determines, by a determination unit, the candidate local area or the local area representative position as the abnormal acoustic source when the classification of the acoustic scene coincides with, or has relevance to, a predefined abnormal acoustic source sensing target.
4. The method for determining the abnormal acoustic source of claim 3, further comprising: an object recognition step of recognizing, by an object recognition unit, a type of object located in the abnormal acoustic source candidate local area based on a video image of an area adjacent to the abnormal acoustic source candidate local area(s) or the abnormal acoustic source candidate position(s), wherein, when the classification of the acoustic scene determined by the AI acoustic analysis unit, the type of object recognized by the object recognition unit, and the predefined abnormal acoustic source sensing target range all match one another, the determination unit determines the candidate local area or the candidate position as the abnormal acoustic source.
5. The method for determining the abnormal acoustic source of claim 3, further comprising: an alarm and transmission step of generating, by the determination unit, an alarm signal when the abnormal acoustic source candidate local area or the abnormal acoustic source candidate position is determined as the abnormal acoustic source, and of transmitting, by a transmission unit, to the server, optical acoustic image information in which an optical video image is overlapped with an acoustic field visualizing image generated by the acoustic calculation unit.
6. An AI acoustic image camera comprising: an acoustic data acquisition unit that acquires acoustic data through an acoustic sensor array composed of a plurality of acoustic sensors; an acoustic calculation unit of an acoustic processing unit that calculates a position-specific acoustic level in a direction of the acoustic sensor array; an abnormal acoustic source candidate selection unit that selects one position as a local area representative position in at least one local area formed by grouping positions having acoustic levels exceeding a predetermined level; an acoustic signal extraction unit of the acoustic processing unit that extracts a regenerated time-axial acoustic signal of a local area representative position belonging to an abnormal acoustic source candidate local area; an acoustic feature image generation unit that generates a color image based on data obtained by feature extraction or conversion of the time-axial acoustic signal of the local area representative position; an AI acoustic analysis unit that recognizes the feature image to classify it as one of pre-learned acoustic scenes; and a determination unit that determines the candidate local area or the local area representative position as the abnormal acoustic source when the classification of the acoustic scene coincides with, or has relevance to, a predefined abnormal acoustic source sensing target.
7. The AI acoustic image camera of claim 6, further comprising: an object recognition unit that recognizes a type of object located in the abnormal acoustic source candidate local area based on a video image of an area adjacent to the abnormal acoustic source candidate local area(s) or the abnormal acoustic source candidate position(s), wherein, when the classification of the acoustic scene determined by the AI acoustic analysis unit, the type of object recognized by the object recognition unit, and the predefined abnormal acoustic source sensing target range all match one another, the determination unit determines the candidate local area or the candidate position as the abnormal acoustic source.
8. The AI acoustic image camera of claim 6, wherein the determination unit generates an alarm signal when the abnormal acoustic source candidate local area or the abnormal acoustic source candidate position is determined as the abnormal acoustic source, the camera further comprising: a transmission unit that transmits, to the server, optical acoustic image information in which an optical video image is overlapped with an acoustic field visualizing image generated by the acoustic calculation unit.
9. The AI acoustic image camera of claim 6, wherein the acoustic feature image is a spectrogram.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The above and other aspects, features and other advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION
[0026] Hereinafter, a method for determining an abnormal acoustic source and an AI acoustic image camera according to an embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.
[0027] In the present disclosure, an acoustic source includes an ultrasonic acoustic source that belongs to a range of 20 kHz to 100 kHz.
[0028] The method for determining the abnormal acoustic source according to an embodiment of the present disclosure optionally includes an acoustic source localizing step, a candidate acoustic source time domain acoustic signal extraction step, an acoustic feature image generation step, an AI acoustic classification step, and an abnormal acoustic source determination step.
[0029] As illustrated in
[0030] As illustrated in
[0031] a) Localizing of Acoustic Source and Selection of Abnormal Acoustic Source Candidate Local Area
[0032] Localizing of Acoustic Source
[0033] First, in the acoustic source localizing step, a level of an acoustic source for each position is calculated based on acoustic data acquired by a plurality of acoustic sensor arrays.
[0034] Specifically, in the acoustic data acquisition step (S10), the acoustic data acquisition unit 10 acquires acoustic data through an acoustic sensor array 11 configured by a plurality of acoustic sensors.
[0035] Next, in the position-specific acoustic level calculation step (S20), the acoustic calculation unit 21 of the acoustic processing unit 20 calculates a position-specific acoustic level in a direction of the acoustic sensor array. Specifically, the acoustic level is position-specific beam power.
[0036] In an embodiment, delay distances between the sensors and the virtual plane positions are calculated from the sensor coordinates and the virtual plane coordinates, a time delay correction derived from the delay distances is applied to each acoustic wave signal, and the delay-corrected signals are summed to generate an acoustic source value for each virtual plane position. A beam power level is then calculated for each of the generated acoustic source values.
[0037] The contents about acoustic source localizing (acoustic field visualizing) disclosed in U.S. Pat. No. 10,945,705 B2 (Portable ultrasonic facilities diagnosis device) and Korean Patent No. 10-1976756 (Portable ultrasonic image facilities diagnosis device including electronic means for visualizing radiated ultrasonic waves), registered by the applicants of the present disclosure, are incorporated into the specification of the present disclosure.
[0039] The localizing of the acoustic source is performed by a delay-and-sum beamforming method that detects, for each sensor included in the sensor array, the time delay between the signal collected by that sensor and a candidate source position, and thereby estimates the generation position of the acoustic source in front of the sensor array.
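The delay-distance calculation and delay-and-sum scan over virtual plane positions described above can be sketched as follows. This is a minimal illustration under stated assumptions: NumPy, integer-sample delay correction via circular shift, and names such as `beam_power_map` that are mine, not from the patent.

```python
import numpy as np

def beam_power_map(signals, sensor_xyz, grid_xyz, fs, c=343.0):
    """Delay-and-sum beam power for each virtual-plane scan position.

    signals    : (M, N) array, one time signal per microphone
    sensor_xyz : (M, 3) sensor coordinates in metres
    grid_xyz   : (P, 3) virtual-plane (scan) coordinates in metres
    fs         : sampling rate in Hz; c : speed of sound in m/s
    Returns a length-P array of beam power levels.
    """
    M, N = signals.shape
    power = np.zeros(len(grid_xyz))
    for p, g in enumerate(grid_xyz):
        # delay distance from each sensor to this virtual position
        dist = np.linalg.norm(sensor_xyz - g, axis=1)
        delays = np.round((dist - dist.min()) / c * fs).astype(int)
        # time-delay-correct each channel, then sum (delay-and-sum);
        # np.roll wraps around, which is acceptable for this sketch
        summed = np.zeros(N)
        for m in range(M):
            summed += np.roll(signals[m], -delays[m])
        summed /= M
        power[p] = np.mean(summed ** 2)  # beam power at this position
    return power
```

In practice the scan positions form a 2-D virtual plane in front of the array and the resulting power map is color-mapped into the acoustic field visualizing image.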
[0040] Selection of Abnormal Acoustic Source Candidate
[0041] In the abnormal acoustic source candidate selection step (S30), the abnormal acoustic source candidate selection unit 23 selects one position as a local area representative position (e.g., the local maximum position) in at least one local area (abnormal acoustic source candidate local area) formed by grouping positions having acoustic levels exceeding a predetermined (or predefined) level.
[0042] As illustrated in
[0043] For example, the first local area representative position is preferably the position where the beam power level is maximum in the first local area; this representative position may lie in the red region forming the central portion of the first local area. A representative position is selected in the same manner for the second local area.
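The grouping of over-threshold positions into local areas and the selection of each area's maximum-level position as its representative (step S30) might look like the sketch below. The 4-neighbourhood grouping rule and the name `candidate_representatives` are illustrative assumptions; the patent does not specify a grouping algorithm.

```python
import numpy as np

def candidate_representatives(level_map, threshold):
    """Group over-threshold grid points into local areas and return the
    peak (maximum-level) position of each area as its representative."""
    mask = level_map > threshold
    seen = np.zeros_like(mask, dtype=bool)
    reps = []
    H, W = level_map.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                # flood-fill one connected local area (4-neighbourhood)
                stack, area = [(i, j)], []
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    area.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                # representative = local-maximum position of the area
                reps.append(max(area, key=lambda pos: level_map[pos]))
    return reps
```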
[0044] b) Extraction of Time Domain Acoustic Signal
[0045] A time domain acoustic signal and a time-axis acoustic signal refer, with the same meaning, to an acoustic signal expressed over time. The vertical axis represents time and the horizontal axis represents the amplitude of the acoustic signal.
[0046] Next, in the candidate acoustic source time domain acoustic signal extraction step, a regenerated (time domain beamformed) time domain acoustic signal of a position at which the acoustic source is estimated to be present is extracted based on the level of the acoustic source for each position. In an embodiment, this position may be a representative position, or a maximum level position of the local area.
[0047] In the time-axial acoustic signal extraction step (S40), the acoustic processing unit 20 extracts a time-axial acoustic signal (time signal, time domain beamformed time-axial acoustic signal) of a local area representative position belonging to an abnormal acoustic source candidate local area.
[0048] In the present disclosure, the “regenerated time domain acoustic signal of a position” means a time-axis reference acoustic signal generated by an acoustic method or a beamforming method that reconstructs an acoustic source at a specific position (or in a specific direction) using a plurality of acoustic sensors.
[0049] As illustrated in
[0050] In the time-axial acoustic signal extraction step (S40), an acoustic signal located at the representative position is extracted (selected) from the acoustic signals of each position regenerated by the time domain beamforming in the position-specific acoustic level calculation step (S20), that is, the acoustic source localizing step.
[0052] The acoustic pressure signal reaching the microphones is

p(t) = [p_1(t), p_2(t), ..., p_M(t)]^T
[0053] The scan vector (delay times) for each position and each time is

w(θ) = [τ_1(θ), τ_2(θ), ..., τ_M(θ)]^T
[0054] The delay-and-sum beamforming output signal, that is, the regenerated time domain acoustic signal, is

z(t, θ) = (1/M) Σ_{m=1}^{M} p_m(t + τ_m(θ))
[0055] where M represents the number of microphone channels and θ represents the incident angle of the acoustic source.
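Using the notation defined here (M channels, incident angle θ, per-channel scan delays), the regenerated time domain signal at one scan position can be sketched with integer-sample delays as follows; `delay_sum_signal` is an illustrative name and the circular-shift delay is a simplifying assumption.

```python
import numpy as np

def delay_sum_signal(p, delays_samples):
    """Regenerated time domain signal z(t) = (1/M) * sum_m p_m(t + tau_m),
    with tau_m given in whole samples.

    p              : (M, N) array of per-microphone time signals
    delays_samples : length-M integer array of scan delays tau_m
    """
    M, N = p.shape
    z = np.zeros(N)
    for m in range(M):
        z += np.roll(p[m], -delays_samples[m])  # advance channel m by tau_m
    return z / M
```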
[0057] c) Generation of Acoustic Feature Image
[0058] As illustrated in
[0059] As illustrated in
[0061] The acoustic feature image generation unit 50 may image at least one feature parameter selected from a Discrete Wavelet Transform (DWT), a multi-resolution Short-Time Fourier Transform (STFT), a mel filterbank, log mel filterbank energies (filterbank energies with a log applied), a mel-frequency filterbank conversion, and a multi-resolution log-mel spectrogram obtained through log conversion, to generate input and learning data.
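One of the listed feature parameters, a log-magnitude STFT spectrogram scaled to an 8-bit image, could be generated along these lines. This is a sketch assuming NumPy; the window size, hop, and normalization are my choices, as the patent fixes none of them, and a log-mel variant would follow the same pattern with a mel filterbank applied before the log.

```python
import numpy as np

def log_spectrogram_image(x, n_fft=256, hop=128):
    """Log-magnitude STFT spectrogram of signal x, scaled to 0..255 so it
    can be colour-mapped into a feature image for the AI analysis unit."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)
    logspec = 20.0 * np.log10(spec + 1e-10)                 # dB scale
    rng = logspec.max() - logspec.min()
    if rng == 0:
        return np.zeros(logspec.shape, dtype=np.uint8)
    img = (logspec - logspec.min()) / rng * 255.0
    return np.rint(img).astype(np.uint8)
```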
[0062] d) AI Acoustic Classification
[0063] In the AI acoustic classification step, the acoustic feature image is recognized and the acoustic classification for the candidate acoustic source is performed by using a pre-learned AI acoustic classification means.
[0064] In the acoustic classification step (S60), the AI acoustic analysis unit 60 recognizes the feature image and classifies it as one of the pre-learned acoustic scenes. For example, the AI acoustic analysis unit 60 may perform the acoustic classification for the candidate acoustic source by a convolutional neural network (CNN) trained using acoustic feature images.
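The patent's classifier is a pre-trained CNN. As a deliberately simplified stand-in that shows only the interface (feature image in, acoustic scene label out), a nearest-template classifier might look like the following; the scene names and the template-matching rule are illustrative assumptions, not the patent's method.

```python
import numpy as np

# illustrative acoustic scene labels (not taken from the patent)
SCENES = ["gas_leakage", "arc_discharge", "mechanical_friction", "background"]

def classify_scene(feature_img, templates):
    """Stand-in for the pre-learned AI classifier: return the scene whose
    template image is closest (L2 distance) to the input feature image."""
    flat = feature_img.astype(float).ravel()
    dists = [np.linalg.norm(flat - t.astype(float).ravel()) for t in templates]
    return SCENES[int(np.argmin(dists))]
```

A real implementation would replace the distance computation with a CNN forward pass over the feature image, returning the highest-probability scene.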
[0065] e) Object Recognition
[0066] In the object image classification step, a type of object located at the candidate acoustic source is determined by video analysis of a candidate acoustic source coordinate or adjacent position. In the abnormal acoustic source determination step, when the acoustic classification and the type of object are included in a predetermined monitoring target range, the acoustic source is determined as an abnormal acoustic source and an alarm signal is generated.
[0067] In the object recognition step (S70), the object recognition unit 70 recognizes a type of object located in the abnormal acoustic source candidate local area based on the video image in an adjacent area of the abnormal acoustic source candidate local area(s) or the abnormal acoustic source candidate position(s).
[0068] For example, the object recognition unit 70 includes a convolutional neural network (CNN) pre-trained on images of facilities, environments, and humans, and may be an AI means that receives a video image of an area adjacent to the abnormal acoustic source candidate position(s) to determine an object type (e.g., facility, human, pipe, motor, machine, transformer, or power line).
[0069] f) Determination
[0070] In the abnormal acoustic source determination step, when the acoustic classification for the candidate acoustic source belongs to a predefined monitoring target range, the acoustic source is determined as the abnormal acoustic source.
[0071] As illustrated in
[0072] In the case of including the object recognition step (S70) by the object recognition unit 70, when the classification of the acoustic scene determined by the AI acoustic analysis unit 60, the type (feature) of object recognized by the object recognition unit 70, and the predefined abnormal acoustic source sensing target range all are matched with each other (e.g., the acoustic scene is gas leakage, the object image is a gas pipe, and the sensing target is gas-related facilities), the determination unit 80 determines the candidate local area or the candidate position as the abnormal acoustic source.
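The determination logic, combining the acoustic scene classification, the recognized object type, and the predefined monitoring target range, can be sketched as below. The monitoring-target table entries are illustrative values in the spirit of the gas-leakage example above, not values from the patent.

```python
# illustrative monitoring-target table: acoustic scene -> matching object types
MONITORING_TARGETS = {
    "gas_leakage": {"gas_pipe", "pipe", "valve"},
    "arc_discharge": {"transformer", "power_line"},
}

def determine_abnormal(acoustic_class, object_type=None):
    """True abnormal source iff the classified scene is a monitoring target
    and, when object recognition is used, the object type matches it."""
    targets = MONITORING_TARGETS.get(acoustic_class)
    if targets is None:
        return False          # scene outside the monitoring target range
    if object_type is None:
        return True           # acoustic-only determination (claim 1)
    return object_type in targets  # scene/object commonality (claims 2, 4)
```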
[0073] g) Alarm and Transmission
[0074] As illustrated in
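The optical acoustic image transmitted in this step overlaps an optical video frame with the acoustic field visualizing image generated by the acoustic calculation unit. A minimal alpha-blending sketch, assuming NumPy and 8-bit RGB frames (the blending method is my assumption; the patent only specifies that the two images are overlapped):

```python
import numpy as np

def overlay_acoustic_field(optical_rgb, field_rgb, field_alpha):
    """Alpha-blend the acoustic field visualizing image over the optical
    video frame to form the transmitted optical acoustic image.

    optical_rgb : (H, W, 3) uint8 optical frame
    field_rgb   : (H, W, 3) uint8 colour-mapped acoustic field image
    field_alpha : (H, W) per-pixel opacity in 0..1 (e.g. scaled beam power)
    """
    a = field_alpha[..., None].astype(float)  # broadcast opacity over RGB
    out = optical_rgb.astype(float) * (1.0 - a) + field_rgb.astype(float) * a
    return out.clip(0, 255).astype(np.uint8)
```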
[0075] While the present disclosure has been described in connection with the preferred embodiments above, the scope of the present disclosure is not limited to these embodiments; the scope of the present disclosure is defined by the appended claims and includes various changes and modifications belonging to a scope equivalent to the present disclosure.
[0076] The reference numerals recited in the following claims are intended simply to assist the understanding of the present disclosure and should not affect the interpretation of the scope of the present disclosure; the scope of the present disclosure should not be construed more narrowly on account of the recited reference numerals.