METHOD OF MANAGING SYSTEM HEALTH
20230161653 · 2023-05-25
Assignee
Inventors
- Nack Woo KIM (Daejeon, KR)
- Byung Tak LEE (Daejeon, KR)
- Jun Gi LEE (Daejeon, KR)
- Hyun Yong LEE (Daejeon, KR)
CPC classification
G06F18/2132
PHYSICS
International classification
G06F18/2132
PHYSICS
Abstract
A method of managing system health is provided. The method includes calculating a reconstruction missing value with respect to the second domain data, determining a degree of degradation of the system on the basis of the reconstruction missing value, predicting a second remaining useful life (RUL) prediction value ỹ of the system on the basis of the second domain data, based on a result of the determination of the degree of degradation, optimizing a degradation compensation function on the basis of a distribution of a first RUL prediction value y of the system predicted based on the first domain data in a pre-learning process of the diagnosis model, and predicting a final RUL prediction value ỹ′ obtained by compensating for the second RUL prediction value ỹ, by using the optimized degradation compensation function.
Claims
1. A method of managing health of a system by using a diagnosis model pre-learned based on multi-domain data including first and second domain data, the method comprising: calculating a reconstruction missing value with respect to the second domain data; determining a degree of degradation of the system on the basis of the reconstruction missing value; predicting a second remaining useful life (RUL) prediction value ỹ of the system on the basis of the second domain data, based on a result of the determination of the degree of degradation; optimizing a degradation compensation function on the basis of a distribution of a first RUL prediction value y of the system predicted based on the first domain data in a pre-learning process of the diagnosis model; and predicting a final RUL prediction value ỹ′ obtained by compensating for the second RUL prediction value ỹ, by using the optimized degradation compensation function.
2. The method of claim 1, further comprising, before the calculating the reconstruction missing value, preprocessing the second domain data, wherein the preprocessing the second domain data comprises: dividing a size of the second domain data into a batch size; and connecting the second domain data divided into the batch size.
3. The method of claim 1, wherein the diagnosis model comprises an encoder network and a decoder network connected to an output of the encoder network, and the calculating the reconstruction missing value comprises: extracting a dimensionality-reduced second latent variable from the second domain data by using the encoder network; reconstructing the second latent variable to have the same data dimensionality as a data dimensionality of the second domain data by using the decoder network; and calculating the reconstruction missing value representing a difference between the second domain data and output data reconstructed from the second latent variable by using the decoder network.
4. The method of claim 3, wherein the encoder network is a neural network pre-learned not to classify the first and second domain data into different domains, and the extracting the dimensionality-reduced second latent variable comprises extracting the second latent variable, including a common characteristic of the first and second domain data, from the second domain data by using the pre-learned encoder network.
5. The method of claim 1, wherein the determining the degree of degradation of the system comprises: extracting an anomaly index on the basis of the reconstruction missing value; comparing the anomaly index with a first threshold value to determine the occurrence or not of anomaly of the system; and when it is determined that the anomaly of the system occurs, comparing a second threshold value with a ratio of the number of anomaly notification samples to the number of normalcy notification samples to determine the occurrence or not of anomaly associated with the degree of degradation of the system.
6. The method of claim 5, wherein the diagnosis model comprises an encoder network configured to extract a dimensionality-reduced second latent variable from the second domain data and a decoder network configured to reconstruct the second latent variable, and the number of normalcy notification samples is the number of output values recognized as normalcy among output values of the decoder network, and the number of anomaly notification samples is the number of output values recognized as anomaly among the output values of the decoder network.
7. The method of claim 5, wherein the anomaly index is a value obtained by normalizing the reconstruction missing value.
8. The method of claim 5, wherein the anomaly index is a representative value representing a maximum value, a minimum value, or an average value of the reconstruction missing value.
9. The method of claim 1, wherein the optimizing the degradation compensation function comprises mapping a second degradation model, representing a distribution of the second RUL prediction value ỹ, to a first degradation model representing a distribution of the first RUL prediction value y.
10. The method of claim 1, wherein the predicting the final RUL prediction value ỹ′ comprises: calculating a first threshold time by using a first degradation model function representing a distribution of the first RUL prediction value y; calculating a second threshold time by using a second degradation model function representing a distribution of the second RUL prediction value ỹ; mapping the second threshold time to the first threshold time by using a difference time between the first threshold time and the second threshold time; and outputting a compensated second RUL prediction value ỹ′ on the basis of the second threshold time mapped to the first threshold time.
11. An apparatus for managing system health, the apparatus comprising: a processor and a storage device configured to store the diagnosis model executed by the processor, wherein the diagnosis model comprises: an encoder network configured to extract a second feature vector from second domain data; a decoder network connected to the encoder network and configured to predict a degree of degradation of the system by using data reconstructed from the second feature vector; and a regression network connected to the encoder network and configured to start a remaining useful life (RUL) prediction process on the basis of a result of predicting the degree of degradation of the system, and the regression network predicts a final RUL prediction value ỹ′ by using a first RUL prediction value y of the system predicted based on a first feature vector extracted from the first domain data in a pre-learning process and a second RUL prediction value ỹ of the system predicted based on the second feature vector.
12. The apparatus of claim 11, wherein the diagnosis model further comprises a domain discrimination network connected to the encoder network, and the domain discrimination network is implemented as an adversarial neural network pre-learned not to classify the first and second domain data into different domains.
13. The apparatus of claim 11, wherein the decoder network predicts a degree of degradation of the system on the basis of a reconstruction missing value representing a difference between the second domain data and the reconstructed data.
14. The apparatus of claim 13, wherein the decoder network normalizes the reconstruction missing value or determines the occurrence or not of anomaly of the system on the basis of an anomaly index representing a maximum value, a minimum value, or an average value of the reconstruction missing value, and when anomaly of the system is determined, predicts the degree of degradation of the system on the basis of a ratio of the number of anomaly notification samples to the number of normalcy notification samples, and the number of normalcy notification samples is the number of output values recognized as normalcy among output values of the decoder network, and the number of anomaly notification samples is the number of output values recognized as anomaly among the output values of the decoder network.
15. The apparatus of claim 14, wherein, when a ratio of the number of anomaly notification samples to the number of normalcy notification samples is greater than a threshold value, the regression network starts the RUL prediction process for calculating the final RUL prediction value ỹ′ on the basis of a result of the prediction of the degree of degradation of the system.
16. The apparatus of claim 11, wherein the regression network compensates for the second RUL prediction value ỹ by using a degradation compensation function of mapping a distribution of the second RUL prediction value ỹ to a distribution of the first RUL prediction value y and predicts the final RUL prediction value ỹ′, obtained by compensating for the second RUL prediction value ỹ, as an RUL of the system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0038] The following explanation of the present invention is merely an embodiment for structural or functional explanation, so the scope of the present invention should not be construed as limited to the embodiments described herein. That is, since the embodiments may be implemented in several forms without departing from the characteristics thereof, the above-described embodiments are not limited by any of the details of the foregoing description, unless otherwise specified, but rather should be construed broadly within the scope defined in the appended claims. Therefore, various changes and modifications that fall within the scope of the claims, or equivalents of such scope, are intended to be embraced by the appended claims.
[0039] In the following description, the technical terms are used only to explain specific exemplary embodiments and are not intended to limit the present invention. Terms of a singular form may include plural forms unless the context indicates otherwise. The meaning of ‘comprise’, ‘include’, or ‘have’ specifies a property, a region, a fixed number, a step, a process, an element, and/or a component but does not exclude other properties, regions, fixed numbers, steps, processes, elements, and/or components.
[0041] Referring to
[0042] To this end, the apparatus 500 for managing the health of the system according to an embodiment of the present invention may be a computing device configured to include a processor 100, a memory (or storage device) 200, an input device 300, and an output device 400.
[0043] The processor 100 may be a device including at least one central processing unit (CPU) and/or at least one graphics processing unit (GPU), which control(s) and manage(s) an operation of the apparatus 500 for managing the health of the system according to an embodiment of the present invention.
[0044] The processor 100 may execute or control a plurality of software modules for managing the health of the system. Here, the plurality of software modules may include a data collector 110, a data preprocessor 120, an RUL diagnosis model 130, a state output unit 140, a database (DB) 150, and a model update unit 160.
[0045] The storage device 200 including a memory may temporarily or permanently store intermediate data and/or resultant data processed by the processor 100, or may temporarily or permanently store intermediate data and/or resultant data processed by the plurality of software modules executed by the processor 100.
[0046] The storage device 200 including the memory may provide an execution space of each of the plurality of software modules and may temporarily or permanently store an algorithm, a program code, and an instruction for executing the plurality of software modules. The storage device 200 may include, for example, a volatile memory, a non-volatile memory, and a hard disk.
[0047] The input device 300 may be a device which transfers a user input to the processor 100 and may be a display device including a key input device or a key input function.
[0048] The output device 400 may be a device for outputting intermediate data and/or resultant data processed by the plurality of software modules or the processor 100, and for example, may be a display device including a display function.
[0049] Hereinafter, software modules executed by the processor 100 will be described.
[0050] The data collector 110 may collect multi-domain data from the database 150 stored in the storage device 200 and may provide the collected multi-domain data to the data preprocessor 120.
[0051] For example, the data collector 110 may periodically generate a query on the basis of a user input received through the input device 300 and may collect the multi-domain data from the database 150 by using the periodically generated query.
[0052] In another embodiment, the data collector 110 may aperiodically generate the query, namely, may generate a specific query at a specific time, and may collect the multi-domain data from the database 150 by using the specific query.
[0053] In another embodiment, the data collector 110 may collect the multi-domain data from a cloud database 150 by using a query generated at an arbitrary time.
[0054] The data preprocessor 120 may preprocess the multi-domain data input from the data collector 110, input the preprocessed multi-domain data to the RUL diagnosis model 130, and store the preprocessed multi-domain data in the database 150 again.
[0055] In an embodiment, preprocessing of the multi-domain data may be a process of cleansing the multi-domain data. Data cleansing may be a process of removing, filling, or interpolating a missing value and/or an outlier associated with the multi-domain data, or a process of replacing the missing value and/or the outlier with another value.
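As a minimal sketch (not part of the claimed method), the cleansing described above might be implemented as follows; the linear interpolation of missing values and the 3-sigma clipping rule for outliers are assumptions chosen for illustration:

```python
import numpy as np

def cleanse(series):
    """Fill missing values (NaN) by linear interpolation between valid
    neighbors, then bound outliers by clipping to an assumed 3-sigma band."""
    x = np.asarray(series, dtype=float)
    idx = np.arange(len(x))
    nan = np.isnan(x)
    # interpolate each missing sample from the surrounding valid samples
    x[nan] = np.interp(idx[nan], idx[~nan], x[~nan])
    mu, sd = x.mean(), x.std()
    return np.clip(x, mu - 3 * sd, mu + 3 * sd)
```

Replacing a missing value with another value, as the paragraph also allows, would simply substitute a constant for the interpolation step.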
[0056] In another embodiment, preprocessing of the multi-domain data may be a process of normalizing the multi-domain data.
[0057] In another embodiment, preprocessing of the multi-domain data may include a process of dividing each domain data constituting the multi-domain data on the basis of a batch size and/or a process of connecting pieces of domain data divided based on the batch size.
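The batch-based division and connection described above can be sketched as follows; dropping the remainder that does not fill a full batch is an assumption of this sketch, not something the text specifies:

```python
import numpy as np

def batch_and_connect(domain_data, batch_size):
    """Divide domain data into equal batches (the remainder that does not
    fill a batch is dropped -- an assumed policy) and re-connect the
    batches into one contiguous array."""
    n_full = (len(domain_data) // batch_size) * batch_size
    batches = [domain_data[i:i + batch_size]
               for i in range(0, n_full, batch_size)]
    connected = np.concatenate(batches, axis=0)
    return batches, connected
```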
[0058] The RUL diagnosis model 130 may be an artificial neural network learned based on a learning algorithm executed by the processor 100 and may collectively process an anomaly diagnosis process, a health diagnosis process, an RUL prediction process, and a domain matching process to provide a processing result of each process to the state output unit 140.
[0059] The state output unit 140 may convert the processing result, provided from the RUL diagnosis model 130, into visual information and may output the visual information to the output device 400. In this case, the output device 400 may be a display device which displays the visual information.
[0060] The model update unit 160 may store versions of the RUL diagnosis model 130 that differ based on an update time or the number of updates (a learning time or the number of learnings) in the storage device 200 and may read a specific version (for example, the most recently updated RUL diagnosis model 130).
[0062] Referring to
[0063] Moreover, the present invention may collectively process an anomaly diagnosis process through the decoder network, a multi-domain data matching process through a domain discrimination network, and an RUL prediction process through a regression network, in order to diagnose an anomaly of a system and predict an RUL of the system in an environment where multi-domain data is provided.
[0064] To this end, the RUL diagnosis model 130 according to an embodiment of the present invention may include an encoder network 131, a decoder network 133, a regression network 135, and a domain discrimination network 137.
[0065] Before the elements 131, 133, 135, and 137 are described, note that multi-domain data described herein may denote all kinds of data which are collected under operation conditions and operation test environments of different systems.
[0066] For example, in a wind power plant system, multi-domain data may include operation condition data and operation sensing data obtained in a wind power plant A and operation condition data and operation sensing data obtained in a wind power plant B.
[0067] In a solar power plant system, multi-domain data may include operation condition data and operation sensing data obtained in a solar power plant A and operation condition data and operation sensing data obtained in a solar power plant B.
[0068] In home appliances, multi-domain data may include operation condition data and operation sensing data obtained in a home appliance A and operation condition data and operation sensing data obtained in a home appliance B.
[0069] Encoder Network 131
[0070] The encoder network 131 may be an artificial neural network which encodes multi-domain data collected from the database 150 or the cloud database 200 to reduce a dimensionality of the multi-domain data.
[0071] The encoder network 131 may reduce a dimensionality of the multi-domain data to generate a dimensionality-reduced feature vector. The feature vector may be a vector value which represents an intrinsic characteristic of each domain data, included in the multi-domain data, in a vector space and may be referred to as a ‘latent variable’. Unless specially described, ‘feature vector’ and ‘latent variable’ may be regarded as the same term herein.
[0072] In order to extract a feature vector (or a latent variable) from each domain data, the encoder network 131 may be implemented as, for example, a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), or a combination thereof.
[0073] Decoder Network 133
[0074] The decoder network 133 may be an artificial neural network which decodes output data (a feature vector or a latent variable) of the encoder network 131 to reconstruct data having the same data dimensionality as the multi-domain data which is the input data.
[0075] A neural network connecting the encoder network 131 to the decoder network 133 may be referred to as an autoencoder. In order to construct such an autoencoder, the decoder network may be implemented as, for example, a DNN, a CNN, an RNN, or a combination thereof.
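The encoder/decoder dimensionality flow of such an autoencoder can be sketched with a single linear layer on each side; the layer sizes, random (untrained) weights, and tanh activation are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_autoencoder(d_in, d_latent):
    """One-layer encoder/decoder pair; weights are random because this
    sketch only illustrates the dimensionality flow, not learning."""
    w_enc = rng.normal(0.0, 0.1, (d_in, d_latent))
    w_dec = rng.normal(0.0, 0.1, (d_latent, d_in))
    encode = lambda x: np.tanh(x @ w_enc)   # dimensionality reduction
    decode = lambda z: z @ w_dec            # reconstruction back to d_in
    return encode, decode

encode, decode = make_autoencoder(d_in=8, d_latent=2)
x = rng.normal(size=(5, 8))   # 5 samples of 8-dimensional domain data
z = encode(x)                 # latent variable, shape (5, 2)
x_hat = decode(z)             # reconstructed data, shape (5, 8)
```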
[0076] Regression Network 135
[0077] The regression network 135 may be an artificial neural network which predicts an RUL of a system by using output data (a latent variable) of the encoder network 131 as an input.
[0078] In order to predict the RUL of the system, the regression network 135 may be implemented as, for example, a DNN, a CNN, an RNN, or a combination thereof.
[0079] Domain Discrimination Network 137
[0080] The domain discrimination network 137 may be an artificial neural network which matches output data (a feature vector or a latent variable) of the encoder network 131. The multi-domain data may be matched through matching of the output data.
[0081] For matching of the multi-domain data, the domain discrimination network 137 may be implemented as, for example, an adversarial neural network such as a domain-adversarial neural network (DANN) or a generative adversarial network (GAN).
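The defining mechanism of a DANN-style discriminator is the gradient reversal layer mentioned later in the text. A minimal sketch of that layer (framework-free, with the scaling factor `lam` as an assumed hyperparameter) is:

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies the incoming gradient by
    -lam in the backward pass.  Minimizing the discriminator's
    classification miss therefore simultaneously pushes the encoder to
    make the two domains indistinguishable."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_output):
        return -self.lam * np.asarray(grad_output)
```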
[0083] When it is assumed that multi-domain data includes first domain data corresponding to a first domain and second domain data corresponding to a second domain which differs from the first domain, as illustrated in a left region of
[0084] On the other hand, as illustrated in a right region of
[0085] That is, the domain discrimination network 137 learned according to an embodiment of the present invention may be learned to regard a multi-domain as one same domain, so that the first latent variables and the second latent variables are not clearly differentiated from each other.
[0086] When the first domain data includes right answer data (a label) and the second domain data does not include right answer data, the diagnosis model whose domain discrimination network 137 is learned to regard a multi-domain as one same domain may make predictions for the second domain data, which includes no right answer data, on the basis of the first domain data including the right answer data.
[0088] Learning of the RUL diagnosis model 130 described below is described as being performed in an environment which provides multi-domain data including first and second domain data associated with two different domains. However, this is merely to help understanding and is not intended to limit the RUL diagnosis model 130 according to the present invention to being learned in an environment which provides two pieces of domain data. Therefore, the learning method of the RUL diagnosis model according to the present invention may be applied even in an environment which provides multi-domain data including three or more pieces of domain data associated with three or more different domains.
[0089] Referring to
[0090] Subsequently, in step S420, the preprocessor 120 may preprocess the multi-domain data. In an embodiment, preprocessing may include a processing process of dividing the multi-domain data based on a predetermined batch size and/or a processing process of connecting pieces of multi-domain data divided based on the batch size.
[0091] Subsequently, in step S430, in order for the encoder network 131 and the domain discrimination network 137 to process a domain matching process, the RUL diagnosis model 130 may be learned to minimize classification miss (a classification missing value) occurring in a process of classifying the preprocessed multi-domain data (i.e., the preprocessed first and second domain data) by using a processor (100 of
[0092] In an embodiment, the encoder network 131 may extract a first latent variable of the preprocessed first domain data and a second latent variable of the preprocessed second domain data. Subsequently, the processor (100 of
[0093] The domain discrimination network 137 may be implemented as an adversarial neural network, and thus, even when learning is performed to minimize classification miss, the domain discrimination network 137 may be learned not to classify the first and second latent variables into different domains.
[0094] To this end, a classifier, a domain discriminator, and a gradient reversal layer connecting the encoder network to the domain discriminator may be additionally provided in the domain discrimination network 137, and weight learning in the neural network may be performed based on backpropagation from an output terminal to an input terminal.
[0095] Subsequently, in step S440, in order for the encoder network 131 and the regression network 135 to process an RUL prediction process, the RUL diagnosis model 130 may be learned so that the processor (100 of
[0096] In an embodiment, the encoder network 131 may extract the first latent variable of the preprocessed first domain data including the right answer data. Subsequently, the regression network 135 may output RUL prediction data predicted based on the extracted first latent variable. Subsequently, the processor (100 of
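The loss minimization in step S440 can be sketched as gradient descent on a mean-squared prediction miss; the linear regression head and the synthetic latent variables and labels below are assumptions standing in for the real encoder output and right answer data:

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.normal(size=(64, 2))            # stand-in latent variables from the encoder
y = Z @ np.array([3.0, -2.0]) + 1.0     # synthetic right-answer RUL labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    err = Z @ w + b - y                 # prediction miss per sample
    w -= lr * Z.T @ err / len(y)        # gradient step on the weights
    b -= lr * err.mean()                # gradient step on the bias

mse = np.mean((Z @ w + b - y) ** 2)     # prediction missing value after learning
```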
[0097] Subsequently, in step S450, in order for the encoder network 131 and the decoder network 133 to process a reconstruction process of the first domain data, the processor (100 of
[0098] In an embodiment, the encoder network 131 may extract a dimensionality-reduced first latent variable from the preprocessed first domain data. Subsequently, the decoder network 133 may output reconstructed data obtained by reconstructing the first latent variable to have the same dimensionality as that of the first domain data. Subsequently, the processor (100 of
[0099] Here, in a learning process (S450) for performing the reconstruction process, the parameter of the encoder network 131 may not be learned, and only the parameter of the decoder network 133 may be learned. That is, only a model parameter of the decoder network 133 may be learned in a state where a model parameter of the encoder network 131 is fixed not to be learned. This may be for not affecting a learning result of each of the domain discrimination network 137 and the regression network 135.
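The frozen-encoder training of step S450 can be sketched with one-layer linear networks (an assumption for brevity): only the decoder parameter receives gradient updates while the encoder parameter stays fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 6))            # stand-in preprocessed first domain data
W_enc = rng.normal(0.0, 0.1, (6, 2))    # encoder parameter: frozen, never updated
W_dec = rng.normal(0.0, 0.1, (2, 6))    # decoder parameter: the only one learned

def recon_loss():
    """Mean-squared reconstruction missing value."""
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = recon_loss()
lr = 0.1
for _ in range(300):
    Z = X @ W_enc                       # latent variable (encoder is fixed)
    err = Z @ W_dec - X                 # reconstruction miss
    W_dec -= lr * Z.T @ err / len(X)    # gradient step on the decoder only

loss_after = recon_loss()
```

Because the latent dimensionality is smaller than the input dimensionality, the loss decreases but does not reach zero, mirroring the dimensionality reduction described in the text.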
[0101] Referring to
[0102] A main element for performing steps described below may be the processor 100 illustrated in
[0103] First, in step S511, a process of preprocessing multi-domain data may be performed. For example, preprocessing may include a processing process of dividing the multi-domain data based on a predetermined batch size and/or a processing process of connecting pieces of multi-domain data divided based on the batch size.
[0104] Subsequently, in step S512, in order to perform a test of the RUL diagnosis model 130, preprocessed second domain data including no right answer data (right answer label) may be input to the RUL diagnosis model 130 learned based on the learning method of
[0105] Furthermore, the domain matching process may be performed by the domain discrimination network 137, which is learned such that it is difficult to differentiate first domain data from second domain data. As described above, because the domain discrimination network 137 is implemented as an adversarial neural network, even when learning is performed to minimize classification miss, reverse weight learning may be performed so as not to differentiate pieces of multi-domain input data. Through such a learning process, the learning-completed encoder network 131 may output a latent variable which includes a common characteristic of the first domain data and the second domain data.
[0106] Subsequently, in steps S513 to S516, a health diagnosis process may be performed.
[0107] First, in step S513, a process of calculating reconstruction miss (a reconstruction missing value) of the second domain data may be performed.
[0108] In order to calculate the reconstruction miss (the reconstruction missing value), the encoder network 131 pre-learned through the learning process of
[0109] The decoder network 133 pre-learned based on the first domain data through the learning process of
[0110] Subsequently, in step S514, a process of extracting an anomaly index of a second domain corresponding to the second domain data may be performed based on the calculated reconstruction miss (reconstruction missing value).
[0111] In an embodiment, the anomaly index may be a value obtained by normalizing reconstruction missing values.
[0112] In an embodiment, the anomaly index may be a representative value such as a maximum value, a minimum value, or an average value of reconstruction missing values selected within a certain period.
[0113] In an embodiment, the anomaly index may be a distance value from a center of a normal cluster obtained by clustering the reconstruction missing values.
[0114] In an embodiment, the anomaly index may be a distance value from an outermost portion of the normal cluster.
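The anomaly-index variants listed in the embodiments above can be sketched in one helper; using the mean of the reconstruction missing values as the normal-cluster center is an assumption of this sketch (the text leaves the clustering method open):

```python
import numpy as np

def anomaly_index(recon_miss, mode="mean"):
    """Anomaly-index variants from the text: 'norm' normalizes the
    reconstruction missing values; 'max'/'min'/'mean' return a
    representative value; 'cluster' returns per-sample distances from
    the mean, an assumed stand-in for the normal-cluster center."""
    r = np.asarray(recon_miss, dtype=float)
    if mode == "norm":
        return (r - r.mean()) / (r.std() + 1e-12)
    if mode == "max":
        return r.max()
    if mode == "min":
        return r.min()
    if mode == "cluster":
        return np.abs(r - r.mean())
    return r.mean()
```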
[0115] Subsequently, in step S515, when an anomaly index (or an absolute value of an anomaly index) which is an output value of the decoder network 133 is greater than a ‘threshold value_1’, the decoder network 133 or the processor 100 may generate ‘anomaly notification’ corresponding to a corresponding output value, and when the anomaly index (or the absolute value of the anomaly index) is less than or equal to the ‘threshold value_1’, the decoder network 133 or the processor 100 may generate ‘normalcy notification’ corresponding to a corresponding output value.
[0116] Subsequently, in step S516, when a ratio of the number of anomaly notification samples to the number of normalcy notification samples occurring within a certain time interval Δω is greater than a ‘threshold value_2’, the decoder network 133 or the processor 100 may generate ‘system degradation anomaly notification’, and RUL processes (S518 to S520) may be performed. On the other hand, when a ratio of the number of anomaly notification samples to the number of normalcy notification samples is less than or equal to the ‘threshold value_2’, the decoder network 133 or the processor 100 may generate ‘system degradation normalcy notification’. Here, a ratio of the number of anomaly notification samples to the number of normalcy notification samples may be an absolute value obtained by dividing the number of anomaly notification samples by the number of normalcy notification samples. The number of normalcy notification samples may be the number of output values of the decoder network 133 recognized as a normalcy value through a process (S515), and the number of anomaly notification samples may be the number of output values of the decoder network 133 recognized as an anomaly value through a process (S515).
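Steps S515 and S516 can be sketched together as follows; the concrete threshold values passed in are hypothetical, and the guard against an empty normalcy count is an assumption the text does not address:

```python
def degradation_notification(anomaly_indices, threshold_1, threshold_2):
    """Step S515: label each sample by comparing |anomaly index| with
    'threshold value_1'.  Step S516: compare the ratio of anomaly to
    normalcy notification samples in the window with 'threshold value_2'."""
    n_anomaly = sum(1 for a in anomaly_indices if abs(a) > threshold_1)
    n_normalcy = len(anomaly_indices) - n_anomaly
    ratio = n_anomaly / max(n_normalcy, 1)   # guard against division by zero
    if ratio > threshold_2:
        return "system degradation anomaly notification"
    return "system degradation normalcy notification"
```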
[0117] Steps S515 and S516 will be described below in more detail with reference to
[0118] As illustrated in
[0119] When a ratio of the number of output samples, which are greater than the ‘threshold value_1’ and are recognized as an anomaly value, to the number of output samples recognized as a normalcy value within a predetermined certain time interval Δω is greater than a ‘threshold value_2’, ‘system degradation anomaly notification’ may occur. At this time, a manager of the system may adjust the ‘threshold value_1’ or the ‘threshold value_2’ to adjust how frequently ‘anomaly notification’ or ‘system degradation anomaly notification’ occurs, and the occurrence of notification may be controlled so that notification occurs only when the accumulated number of occurrences of ‘anomaly notification’ or ‘system degradation anomaly notification’ is greater than or equal to a reference number of times.
[0120] In the present invention, the ‘threshold value_2’ may represent the ratio of the number of anomaly notification samples to the number of normalcy notification samples within Δω, and thus, may be expressed as a slope value in a function (an anomaly index function) representing an anomaly index.
[0121] In the present invention, the time at which ‘system degradation anomaly notification’ starts may be referred to as a ‘threshold time’ on an x-axis representing a time, and the point at which ‘system degradation anomaly notification’ starts may be referred to as a ‘threshold point’ on a y-axis representing an ‘anomaly index’.
[0122] Hereinafter, a processing process of RUL processes (S518 to S520) will be described in more detail with reference to
[0123] First, in step S518, the regression network 135 may primarily predict an RUL prediction value ỹ of a system on the basis of second domain data.
[0124] Subsequently, in step S519, a process of optimizing a degradation compensation function ƒ(ỹ; θ) of mapping a degradation model of a second domain corresponding to the second domain data to a degradation model of a first domain corresponding to first domain data may be performed. Here, the degradation compensation function ƒ(ỹ; θ) may be a compensation function of approximating ỹ to y, wherein y may be a right answer value included in the first domain data or an RUL prediction value predicted based on the first domain data. Also, the process of optimizing the degradation compensation function ƒ(ỹ; θ) may be a process of learning a parameter θ.
[0125] Subsequently, in step S520, the regression network 135 may compensate for the RUL prediction value ỹ on the basis of the degradation compensation function ƒ(ỹ; θ) which is optimized (or learned) in step S519 and may output the compensated RUL prediction value ỹ′. The optimized (or learned) degradation compensation function ƒ(ỹ; θ) may finally output a degradation-compensated prediction value ỹ′ with respect to an input ỹ.
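One way to realize steps S519 and S520 is sketched below. The affine form of the compensation function and the synthetic biased predictions are assumptions; the text only requires that the parameter θ be learned so that ƒ(ỹ; θ) approximates ỹ to y.

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.linspace(100.0, 0.0, 50)                      # first-domain RUL values (assumed)
y_tilde = 0.8 * y + 5.0 + rng.normal(0.0, 1.0, 50)   # biased second-domain predictions (assumed)

# S519: learn theta of f(y_tilde; theta) = theta[0] + theta[1] * y_tilde
# by least squares, so that f maps y_tilde close to y
A = np.stack([np.ones_like(y_tilde), y_tilde], axis=1)
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# S520: output the degradation-compensated prediction value y_tilde'
y_comp = A @ theta
mse_raw = np.mean((y_tilde - y) ** 2)
mse_comp = np.mean((y_comp - y) ** 2)
```

Least squares is used here because the compensation function was assumed affine; a nonlinear ƒ would instead be fitted by iterative optimization of θ.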
[0126] Based on the learning method of
[0127] Furthermore, even though the encoder network 131 is trained to perform the domain matching process, a distribution of a first latent variable extracted from the first domain data used in the learning process of the encoder network 131 may not be completely equal to a distribution of a second latent variable extracted from the second domain data used in a real test process.
[0128] As described above, when the distribution of the first latent variable is not completely equal to the distribution of the second latent variable, as illustrated in the accompanying drawings, an RUL prediction error may occur.
[0129] In order to decrease the RUL prediction error, in the present invention, a first threshold time t.sub.1 may be calculated from a degradation model function (or an RUL prediction function) representing a degradation model_1 predicted based on the first domain data, and a second threshold time t.sub.2 may be calculated from a degradation model function (or an RUL prediction function) representing a degradation model_2 predicted based on the second domain data. Then, by using a difference time Δt between the first threshold time t.sub.1 and the second threshold time t.sub.2, the second threshold time t.sub.2 of the degradation model_2 may be mapped to the first threshold time t.sub.1 of the degradation model_1. Subsequently, an RUL prediction process may be performed from a second threshold time t.sub.2′ mapped to the first threshold time t.sub.1 of the degradation model_1. Accordingly, an RUL prediction error based on a distribution difference (or a threshold time difference) of RUL prediction values between multi-domains may be considerably reduced.
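The threshold-time matching described above might be sketched as follows, assuming the degradation models are available as anomaly-index functions of time; the function names and the grid-based crossing search are hypothetical, introduced only for illustration.

```python
import numpy as np

def first_crossing(model, threshold, t):
    """Threshold time: the first time in grid `t` at which the degradation
    model function crosses the threshold point on the anomaly-index axis."""
    values = model(t)
    above = np.flatnonzero(values >= threshold)
    return t[above[0]]

def match_threshold_times(model_1, model_2, threshold, t):
    """Map the second domain's threshold time t2 onto the first domain's t1
    by shifting the second degradation model by the difference time dt."""
    t1 = first_crossing(model_1, threshold, t)
    t2 = first_crossing(model_2, threshold, t)
    dt = t2 - t1
    # t2' = t2 - dt coincides with t1; the shifted model realizes this mapping.
    shifted_model_2 = lambda tau: model_2(tau + dt)
    return dt, shifted_model_2
```

After the shift, RUL prediction can proceed from the mapped threshold time t.sub.2′, with both domains' degradation curves aligned at the same starting point.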
[0130] Moreover, in order to further increase the accuracy of RUL prediction, an RUL prediction value may be corrected based on the degradation compensation function ƒ({tilde over (y)}; θ). In this case, the degradation compensation function ƒ({tilde over (y)}; θ) may be a function which has {tilde over (y)} (an RUL prediction value based on the second domain data) as an input and has y (a right answer value of the first domain data) as an output and may be calculated by optimizing the parameter θ.
[0131] That is, the primarily predicted RUL prediction value {tilde over (y)} may be corrected to the degradation-compensated prediction value {tilde over (y)}′ through the degradation compensation function ƒ({tilde over (y)}; θ). Accordingly, an RUL prediction error caused by a domain variation may be considerably reduced.
[0132] The degradation compensation function ƒ({tilde over (y)}; θ) may be configured in various forms, as illustrated in the accompanying drawings.
[0133] In an embodiment, a degradation model and a degradation compensation function may be configured as a linear combination of linear functions. In another embodiment, a degradation model and a degradation compensation function may be configured with a quadratic function or a cubic function. In another embodiment, a degradation model and a degradation compensation function may be configured with an exponential function or a logarithmic function.
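As one illustration of fitting such candidate forms, the polynomial cases (linear, quadratic, cubic) can be fitted by least squares. The helper name and the use of `np.polyfit` are assumptions made for this sketch, and in practice the form of the compensation function would be selected on validation data rather than on training residuals alone.

```python
import numpy as np

def fit_compensation(y_tilde, y, degree):
    """Fit a polynomial degradation compensation function f(y~; theta) of the
    given degree (1 = linear, 2 = quadratic, 3 = cubic) by least squares.
    Returns the fitted function, its coefficients theta, and the mean squared
    residual on the fitting data."""
    theta = np.polyfit(y_tilde, y, degree)
    f = lambda yt: np.polyval(theta, yt)
    mse = float(np.mean((f(y_tilde) - y) ** 2))
    return f, theta, mse
```

Exponential or logarithmic forms would instead require a nonlinear optimizer, but the role of θ is the same: parameters learned so that ƒ maps {tilde over (y)} onto y.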
[0134] As described above, the RUL prediction process according to an embodiment of the present invention may greatly reduce a prediction error through threshold time matching between the pieces of multi-domain data and parameter compensation based on a degradation model, and at the same time, an RUL prediction diagnosis based on the second domain data may be performed more precisely.
[0135] First, according to the embodiments of the present invention, an anomaly diagnosis network model may be added to a health diagnosis system, and thus, an anomaly diagnosis, a health diagnosis, and an RUL diagnosis of a system may be simultaneously performed.
[0136] Second, an adversarial artificial neural network may be added to the health diagnosis system, and thus, the receptivity of multi-domain data may increase.
[0137] Third, based on an anomaly index value in each domain, the accuracy of RUL prediction may be enhanced through the application of degradation compensation and matching between the threshold times at which an abnormal state persists in each domain.
[0138] It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the inventions. Thus, it is intended that the present invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.