Method for predicting clamp force using convolutional neural network

11530957 · 2022-12-20

Abstract

A method for predicting a clamp force using a convolutional neural network includes: generating a cepstrum image by a signal processing analysis apparatus; extracting a characteristic image by multiplying pixels of the generated cepstrum image by a predetermined weight value obtained through artificial intelligence learning; extracting, as a representative image, the largest pixel from the extracted characteristic image; synthesizing an image from the extracted representative image information; and predicting a clamp force by comparing the synthesized image with a predetermined value.

Claims

1. A method for predicting a clamp force using a convolutional neural network, the method comprising: generating, by a signal processing analysis apparatus, a cepstrum image containing frequency change information as pixel values and processed from data measured from a component; extracting a characteristic image by multiplying pixels of the generated cepstrum image by a predetermined weight value obtained through artificial intelligence learning; extracting, as a representative image, a largest pixel from the extracted characteristic image; synthesizing an image by synthesizing the extracted representative image by a dense layer; predicting a clamp force by comparing a similarity of the synthesized image with a predetermined value; indicating the predicted clamp force having a highest similarity; and selecting a bolt based on the predicted clamp force.

2. The method of claim 1, wherein an Adam optimization for optimizing the predetermined weight value is applied between the synthesizing of the image and the predicting of the clamp force.

3. The method of claim 2, wherein a loss function is applied to the Adam optimization.

4. The method of claim 1, wherein in the extracting of the characteristic image, a convolution filter is applied.

5. The method of claim 1, wherein in the extracting of the representative image, a pooling filter is applied.

6. The method of claim 4, wherein at least two convolution filters are continuously applied.

7. The method of claim 6, wherein the extracted representative image is subjected to the extracting of the characteristic image at least once.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a flowchart according to an exemplary embodiment of the present disclosure.

(2) FIG. 2 is a configuration diagram of a convolutional neural network according to an exemplary embodiment of the present disclosure.

(3) FIG. 3 is a configuration diagram of a filter according to an exemplary embodiment of the present disclosure.

DESCRIPTION OF SPECIFIC EMBODIMENTS

(4) The present disclosure may have various modifications and various embodiments and specific embodiments will be illustrated in the drawings and described in detail in the detailed description. However, this does not limit the present disclosure to specific embodiments, and it should be understood that the present disclosure covers all the modifications, equivalents and replacements included within the idea and technical scope of the present disclosure.

(5) In describing each drawing, reference numerals refer to like elements.

(6) Terms such as first, second, and the like are used for describing various constituent elements, but the constituent elements are not limited by these terms. The terms are used only to distinguish one constituent element from another.

(7) The term ‘and/or’ includes a combination of a plurality of associated disclosed items or any one of the plurality of associated disclosed items.

(8) If it is not contrarily defined, all terms used herein including technological or scientific terms have the same meanings as those generally understood by a person with ordinary skill in the art.

(9) Terms which are defined in a generally used dictionary should be interpreted to have the same meaning as the meaning in the context of the related arts, and are not interpreted as an ideal meaning or excessively formal meanings unless clearly defined in the present application.

(10) A flow of a method for predicting a clamp force using a convolutional neural network according to an exemplary embodiment of the present disclosure will be described.

(11) FIG. 1 is a flowchart according to an exemplary embodiment of the present disclosure, FIG. 2 is a configuration diagram of a convolutional neural network according to an exemplary embodiment of the present disclosure and FIG. 3 is a configuration diagram of a filter according to an exemplary embodiment of the present disclosure.

(12) First, in a cepstrum image generating step (S1), a cepstrum image is generated by a signal processing analysis apparatus (S20).

(13) The generated cepstrum image contains, as pixel values, frequency change information that varies as the clamp force of the acquired signal increases.
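(The text does not specify how the cepstrum itself is computed. As a hedged illustration only, the common real-cepstrum definition, the inverse transform of the log-magnitude spectrum, can be sketched as follows; a naive DFT is used so the example stays self-contained, whereas a real system would use an FFT library and the patent's own signal processing apparatus.)

```python
import cmath
import math

def real_cepstrum(signal):
    """Real cepstrum: inverse DFT of the log-magnitude spectrum.

    Naive O(n^2) DFT for illustration only; not the patent's apparatus.
    """
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    log_mag = [math.log(abs(s) + 1e-12) for s in spectrum]  # avoid log(0)
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# An impulse has a flat spectrum, so its cepstrum is (near) zero.
ceps = real_cepstrum([1.0, 0.0, 0.0, 0.0])
```

Stacking such cepstra computed over successive time windows would yield a two-dimensional image of the kind described here.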

(14) The cepstrum image may be obtained from the signal processing analysis apparatus (S10).

(15) The learning cepstrum image (S30) may be obtained from the generated cepstrum image.

(16) Next, in a characteristic image extracting step (S2), a characteristic image is extracted by multiplying the pixels of the generated cepstrum image (S30) by a predetermined weight value obtained through artificial intelligence learning.

(17) At this time, in the characteristic image extracting step (S2), a convolution filter (S40) may be applied.

(18) The convolution filter is a small-sized filter with weight values that are multiplied by the image pixels.

(19) More specifically, the convolution filter is one of the layers of the convolutional neural network and is used to extract features from the input image.

(20) A value of a matrix of the convolution filter in each deep learning is called a mask.

(21) Through repetitive learning with the convolution filter, the value of the mask is changed to a value appropriate for distinguishing classes, thereby improving the accuracy of learning.

(22) When the cepstrum image is input to the convolution filter, the filter is moved over the image, and the characteristic image is extracted by multiplying the pixels by the predetermined weight values.
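(As a minimal sketch of this step, not the patent's implementation, a 3×3 convolution filter can be slid over the image with unit stride and zero padding, so that the output keeps the input size, matching the behavior described later for the 599×32 image.)

```python
def convolve2d(image, kernel):
    """Apply a 3x3 convolution filter with zero padding and unit stride,
    so the output has the same size as the input."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in range(-1, 2):
                for dj in range(-1, 2):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += image[ii][jj] * kernel[di + 1][dj + 1]
            out[i][j] = acc
    return out

# Identity kernel: weight 1 at the center, 0 elsewhere, leaves pixels unchanged.
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
result = convolve2d(img, identity)
```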

(23) The extracted characteristic image is input to a pooling filter (S50).

(24) The pooling filter (S50) extracts only representative values among the pixel information.

(25) More specifically, the pooling filter (S50) is used to derive the most significant value of the feature map derived through convolution and is used to reduce the size of the image.

(26) When the extracted characteristic image passes through the pooling filter (S50), only the representative information is stored, and the size of the image is reduced.
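(A hedged sketch of this step as 2×2 max pooling, which keeps only the largest pixel in each region and halves both dimensions; any odd trailing row or column is assumed dropped.)

```python
def max_pool2x2(image):
    """Divide the image into 2x2 regions and keep the largest pixel in
    each region, halving both dimensions."""
    h, w = len(image) // 2, len(image[0]) // 2
    return [[max(image[2 * i][2 * j], image[2 * i][2 * j + 1],
                 image[2 * i + 1][2 * j], image[2 * i + 1][2 * j + 1])
             for j in range(w)] for i in range(h)]

img = [[1, 3, 2, 4],
       [5, 6, 7, 8],
       [9, 2, 1, 0],
       [3, 4, 5, 6]]
pooled = max_pool2x2(img)   # a 2x2 image of region maxima
```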

(27) At least two convolution filters may be continuously applied.

(28) Next, in a representative image extracting step (S3), the largest pixel is extracted from the extracted characteristic image.

(29) At this time, in the representative image extracting step (S3), a pooling filter may be applied.

(30) The extracted representative image may be subjected to the characteristic image extracting step at least once again.

(31) Next, in an image synthesizing step (S4), a synthesis image is generated by synthesizing the extracted representative image information.

(32) Finally, in a clamp force predicting step (S5), the predicted clamp force is indicated by comparing the synthesized image with a predetermined value (S80).

(33) At this time, the information constituted by the representative values is finally synthesized by the dense layer, and the predicted value is indicated as a clamp force having the highest probability.

(34) The predicted value is indicated as a clamp force-specific probability vector (S70).
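(A minimal sketch of a dense layer followed by softmax, which produces a probability vector over clamp-force classes that sums to 1; the feature values, weights, and class count here are hypothetical placeholders, not values from the patent.)

```python
import math

def dense_softmax(features, weights, biases):
    """Dense (fully connected) layer followed by softmax.

    weights[c] is the weight vector for class c; the output sums to 1.
    """
    logits = [sum(f * w for f, w in zip(features, w_c)) + b
              for w_c, b in zip(weights, biases)]
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical: 3 features, 2 clamp-force classes.
probs = dense_softmax([0.5, 1.0, -0.5],
                      [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                      [0.0, 0.0])
```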

(35) On the other hand, Adam optimization (S62) may be applied for optimizing the weight value between the image synthesizing step (S4) and the clamp force predicting step (S5).

(36) Since the minimum value is derived using an Adam optimizer, which is faster than the conventional gradient descent method, it is appropriate for use in a real-time measurement system.

(37) At this time, a loss function (S61) may be applied to the Adam optimization (S62).

(38) In other words, the Adam optimization technique helps in optimization of the convolution filter.

(39) A filter with sufficient optimization has a weight value to effectively extract the characteristics of the image.
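(A hedged sketch of one Adam update step, using the standard Adam formulas with bias-corrected first and second moment estimates; the hyperparameter values are the commonly used defaults, not values specified in the patent, and the toy loss function is illustrative only.)

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update over a list of weights; returns (w, m, v).

    m, v: running first and second moment estimates; t: step count (from 1).
    """
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]   # bias correction
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    w = [wi - lr * mh / (math.sqrt(vh) + eps)
         for wi, mh, vh in zip(w, m_hat, v_hat)]
    return w, m, v

# Toy loss f(w) = w^2 with gradient 2w: the weight is driven toward 0.
w, m, v = [1.0], [0.0], [0.0]
for t in range(1, 101):
    w, m, v = adam_step(w, [2.0 * w[0]], m, v, t, lr=0.1)
```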

(40) A configuration of the filter according to an exemplary embodiment of the present disclosure will be described below in detail with reference to FIG. 3.

(41) A cepstrum image 100 is acquired as data measured from a component, and at this time, the size of the image is a size of 599×32 pixels.

(42) The convolution filter 200 is applied to the entire image while a filter having a size of 3×3 moves to the right or down by one space to extract a characteristic of the cepstrum change.

(43) More specifically, a first convolution filter 210 and a second convolution filter 220 may be continuously applied as the convolution filter 200.

(44) The size of a first characteristic image 310 extracted from the convolution filter 200 is maintained at a size of 599×32 pixels.

(45) The first characteristic image 310 is input to a first pooling filter 410.

(46) The pooling filter 400 may sequentially include a first pooling filter 410 and a second pooling filter 420.

(47) When the first characteristic image 310 subjected to the convolution enters the pooling filter 400, all regions of the image are divided into 2×2 regions, and the largest pixel value in each region is extracted.

(48) This value is a first representative image 510.

(49) That is, since only one representative value is obtained from four values as illustrated in the drawing, the horizontal and vertical sizes of the image are reduced by half.

(50) The first representative image 510 is again derived as a second characteristic image 320 through the third convolution filter 230 and the fourth convolution filter 240.

(51) The second characteristic image 320 passes through the second pooling filter 420 again to obtain a second representative image 520.
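(The image sizes through the two conv–conv–pool blocks can be traced as follows; since 599 is odd, integer (floor) division at each pooling stage is an assumption not stated in the text.)

```python
def trace_shapes(h, w, blocks=2):
    """Trace image sizes through repeated [conv -> conv -> 2x2 pool] blocks.

    3x3 convolutions with zero padding keep the size; 2x2 pooling halves
    each dimension (floor division assumed for odd sizes).
    """
    shapes = [(h, w)]
    for _ in range(blocks):
        h, w = h // 2, w // 2
        shapes.append((h, w))
    return shapes

# 599x32 input -> first pooled 299x16 -> second pooled 149x8 (assumed).
shapes = trace_shapes(599, 32)
```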

(52) A synthesis image 600 is obtained by synthesizing the image information in the dense layer (S60), which measures the similarity with each class among 31 classes from 50 kN to 81 kN, and the sum of the similarities becomes 1.

(53) In the clamp force predicting step (S5), the class having the highest similarity among the measured similarities is confirmed as a determined clamp force.
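(A minimal sketch of this selection step; the 1 kN class spacing used below, giving 31 classes starting at 50 kN, is an assumption, since the text gives only the range and the class count.)

```python
def pick_clamp_force(similarities, classes_kn):
    """Return the clamp-force class (kN) with the highest similarity."""
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    return classes_kn[best]

# Hypothetical: 31 classes assumed at 1 kN steps from 50 kN.
classes_kn = list(range(50, 81))
sims = [0.0] * 31
sims[5] = 0.9          # peak similarity at the 6th class
force = pick_clamp_force(sims, classes_kn)
```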

(54) As such, the first convolution filter 210 and the second convolution filter 220 may be doubly used as the convolution filter 200.

(55) As such, a multi-filter structure may be applied, in which the characteristic images obtained by the first convolution filter 210 and the second convolution filter 220 are provided to the first pooling filter 410, and the characteristic images obtained by the third convolution filter 230 and the fourth convolution filter 240 are provided to the second pooling filter 420.

(56) In other words, a multi-filter structure may be applied using a triple-filter structure that combines the double convolution filter with the pooling filter.

(57) In this way, deep characteristics may be extracted by stacking the convolution filters in multiple layers.

(58) In addition, important information may be stored in a smaller image by using the pooling filter at least once, thereby improving the speed and accuracy of real-time prediction.

(59) The method for predicting the clamp force using the convolutional neural network according to the exemplary embodiment of the present disclosure can be constituted by a cepstrum image generating step (S31) of generating a cepstrum image from the signal processing analysis apparatus and a predicting step (S80) of indicating a predicted clamp force by comparing the generated cepstrum image with a predetermined value.