SYSTEM AND METHOD FOR STATE ESTIMATION IN A NOISY MACHINE-LEARNING ENVIRONMENT

20230385606 · 2023-11-30

Assignee

Inventors

CPC classification

International classification

Abstract

A system and method for estimating a system state. The method includes making a first measurement and a second measurement of a value of a characteristic of a state of a system. The method includes constructing first filter measurement and time estimates after the second measurement coinciding with the first measurement including corresponding covariance matrices describing an accuracy of the first filter measurement and time estimates. The method includes constructing second filter measurement and time estimates coinciding with the second measurement including corresponding covariance matrices describing an accuracy of the second filter measurement and time estimates. The method includes constructing a smoothing estimate from the first and second filter measurement estimates. The method includes constructing a first prediction estimate that provides a forecast of a value of the characteristic of the state of the system including a first prediction covariance matrix describing an accuracy of the first prediction estimate.

Claims

1. A method, comprising: making a first measurement of a value of a characteristic of a state of a system; making a second measurement of a value of said characteristic of said state of said system after said first measurement; constructing a first filter measurement estimate after said second measurement coinciding with said first measurement including a first filter measurement covariance matrix describing an accuracy of said first filter measurement estimate; constructing a first filter time estimate after said first filter measurement estimate including a first filter time covariance matrix describing an accuracy of said first filter time estimate employing a dynamic model of said state of said system; constructing a second filter measurement estimate after said first filter time estimate coinciding with said second measurement including a second filter measurement covariance matrix describing an accuracy of said second filter measurement estimate; constructing a second filter time estimate after said second filter measurement estimate including a second filter time covariance matrix describing an accuracy of said second filter time estimate employing said dynamic model of said state of said system; constructing a smoothing estimate from said first filter measurement estimate and said second filter measurement estimate; and constructing a first prediction estimate after said smoothing estimate that provides a forecast of a value of said characteristic of said state of said system including a first prediction covariance matrix describing an accuracy of said first prediction estimate employing said dynamic model of said state of said system.

2. The method as recited in claim 1 further comprising constructing a second prediction estimate after said first prediction estimate that provides another forecast of a value of said characteristic of said state of said system including a second prediction covariance matrix describing an accuracy of said second prediction estimate employing said dynamic model of said state of said system.

3. The method as recited in claim 1 further comprising constructing a plurality of prediction estimates that provides a corresponding plurality of forecasts of a value of said characteristic of said state of said system including a corresponding plurality of prediction covariance matrices describing an accuracy of said plurality of prediction estimates employing said dynamic model of said state of said system.

4. The method as recited in claim 1 wherein constructing said smoothing estimate comprises sweeping backward recursively from said second filter measurement estimate to said first filter measurement estimate.

5. The method as recited in claim 1 further comprising altering said state of said system based on said first prediction estimate.

6. The method as recited in claim 1 wherein said constructing said first filter measurement estimate, said first filter time estimate, said second filter measurement estimate and said second filter time estimate are performed by a Kalman filter.

7. The method as recited in claim 1 further comprising reporting said state of said system based on said first prediction estimate.

8. The method as recited in claim 1 wherein said first measurement comprises a plurality of independent measurements characterized by a diagonal measurement covariance matrix.

9. The method as recited in claim 1 wherein said dynamic model comprises a linear dynamic model with constant coefficients.

10. The method as recited in claim 1 wherein said dynamic model comprises a matrix with coefficients that describes a temporal evolution of said state of said system.

11. An apparatus operable to construct a state of a system, comprising: processing circuitry coupled to a memory, configured to: make a first measurement of a value of a characteristic of said state of said system; make a second measurement of a value of said characteristic of said state of said system after said first measurement; construct a first filter measurement estimate after said second measurement coinciding with said first measurement including a first filter measurement covariance matrix describing an accuracy of said first filter measurement estimate; construct a first filter time estimate after said first filter measurement estimate including a first filter time covariance matrix describing an accuracy of said first filter time estimate employing a dynamic model of said state of said system; construct a second filter measurement estimate after said first filter time estimate coinciding with said second measurement including a second filter measurement covariance matrix describing an accuracy of said second filter measurement estimate; construct a second filter time estimate after said second filter measurement estimate including a second filter time covariance matrix describing an accuracy of said second filter time estimate employing said dynamic model of said state of said system; construct a smoothing estimate from said first filter measurement estimate and said second filter measurement estimate; and construct a first prediction estimate after said smoothing estimate that provides a forecast of a value of said characteristic of said state of said system including a first prediction covariance matrix describing an accuracy of said first prediction estimate employing said dynamic model of said state of said system.

12. The apparatus as recited in claim 11 wherein said processing circuitry is further configured to construct a second prediction estimate after said first prediction estimate that provides another forecast of a value of said characteristic of said state of said system including a second prediction covariance matrix describing an accuracy of said second prediction estimate employing said dynamic model of said state of said system.

13. The apparatus as recited in claim 11 wherein said processing circuitry is further configured to construct a plurality of prediction estimates that provides a corresponding plurality of forecasts of a value of said characteristic of said state of said system including a corresponding plurality of prediction covariance matrices describing an accuracy of said plurality of prediction estimates employing said dynamic model of said state of said system.

14. The apparatus as recited in claim 11 wherein said processing circuitry is configured to construct said smoothing estimate by sweeping backward recursively from said second filter measurement estimate to said first filter measurement estimate.

15. The apparatus as recited in claim 11 wherein said processing circuitry is further configured to alter said state of said system based on said first prediction estimate.

16. The apparatus as recited in claim 11 wherein said processing circuitry is configured to construct said first filter measurement estimate, said first filter time estimate, said second filter measurement estimate and said second filter time estimate with a Kalman filter.

17. The apparatus as recited in claim 11 wherein said processing circuitry is further configured to report said state of said system based on said first prediction estimate.

18. The apparatus as recited in claim 11 wherein said first measurement comprises a plurality of independent measurements characterized by a diagonal measurement covariance matrix.

19. The apparatus as recited in claim 11 wherein said dynamic model comprises a linear dynamic model with constant coefficients.

20. The apparatus as recited in claim 11 wherein said dynamic model comprises a matrix with coefficients that describes a temporal evolution of said state of said system.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0042] For a more complete understanding of the present disclosure, reference is now made to the following detailed description taken in conjunction with the accompanying drawings, in which:

[0043] FIG. 1 illustrates a graphical representation comparing forecasting performance (Symmetric Mean Absolute Percentage Error) of machine learning and statistical methods for known machine learning processes;

[0044] FIG. 2 illustrates a process for smoothing over fixed time intervals;

[0045] FIG. 3 illustrates a graphical representation of filtering, smoothing, and prediction performance for analytics;

[0046] FIGS. 4A and 4B illustrate performance examples of temperature tracking and vibration filtering in a noisy measurement environment;

[0047] FIG. 5 illustrates a flow diagram of an embodiment of a method of estimating the state of a system; and,

[0048] FIG. 6 illustrates a block diagram of an embodiment of an apparatus for estimating the state of a system.

[0049] Corresponding numerals and symbols in the different figures generally refer to corresponding parts unless otherwise indicated and, in the interest of brevity, may not be described after the first instance.

DETAILED DESCRIPTION

[0050] The making and using of exemplary embodiments of the disclosed invention are discussed in detail below. It should be appreciated, however, that the general embodiments are provided to illustrate the inventive concepts that can be embodied in a wide variety of specific contexts, and the specific embodiments are merely illustrative of specific ways to make and use the systems, subsystems, and modules for estimating the state of a system in a real-time, noisy measurement, machine-learning environment. While the principles will be described in the environment of a linear system in a real-time machine-learning environment, any environment such as a nonlinear system, or a non-real-time machine-learning environment, is within the broad scope of the disclosed principles and claims.

[0051] Intelligent prediction, the system introduced herein, uniquely combines the three forms of optimal estimation (filtering, smoothing, and predicting) to provide utility for predictive and prescriptive analytics with applications to real-time sensor data. The results of this systematic approach outperform the best of the current statistical methods and, as such, outperform machine learning methods.

[0052] To perform predictive and prescriptive analytics in real time and avoid the pitfalls of machine learning, which relies on artificial neural networks, the system architecture introduced herein is based on combining three types of estimation: filtering, smoothing, and predicting. This combination is an approach new to forecasting and, it is believed, has not previously been considered with regard to machine learning, as illustrated in Table 1.

[0053] FIG. 1, to which reference is now made, illustrates on the vertical axis a symmetric mean absolute percentage error for forecasting performance of machine learning and statistical methods for known machine learning processes. The first part of the combined three-part system is an implementation of a discrete-time Kalman filter. The filtering form of optimal estimation is when an estimate coincides with the last measurement point.

[0054] With the following definitions:
[0055] x_k: (n×1) state vector at time t_k
[0056] ϕ_k: (n×n) state transition matrix
[0057] w_k: (n×1) process white noise, w_k ~ (0, Q_k)
[0058] Q_k: the covariance of the process noise
[0059] z_k: (m×1) measurement vector at time t_k
[0060] H_k: (m×n) measurement matrix
[0061] v_k: (m×1) measurement white noise, v_k ~ (0, R_k)
[0062] R_k: the covariance of the measurement noise,
the dynamic process is described by


x_{k+1} = ϕ_k x_k + w_k,


measurements are described by


z_k = H_k x_k + v_k,

and initial conditions given by


x̂_0^− = E[x_0]


P_0^− = E[(x_0 − x̂_0^−)(x_0 − x̂_0^−)^T].

The discrete-time Kalman filter recursive equations are given by


K_k = P_k^− H_k^T (H_k P_k^− H_k^T + R_k)^{−1}    (filtering gain)


x̂_k = x̂_k^− + K_k (z_k − H_k x̂_k^−)    (state measurement estimate)


P_k = (I − K_k H_k) P_k^− (I − K_k H_k)^T + K_k R_k K_k^T    (state measurement covariance)


x̂_{k+1}^− = ϕ_k x̂_k    (state time estimate at the next time step t_{k+1})


P_{k+1}^− = ϕ_k P_k ϕ_k^T + Q_k    (state time covariance, also at the next time step)
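As an illustrative sketch (not part of the claimed subject matter), the five recursive equations reduce, for a scalar state and scalar measurement, to the following; the names phi, q, h, and r stand in for ϕ_k, Q_k, H_k, and R_k, and a real implementation would use matrices:

```python
# Illustrative scalar (n = m = 1) version of the five recursive
# Kalman-filter equations; phi, q, h, r stand in for phi_k, Q_k, H_k, R_k.

def kalman_filter(zs, phi, q, h, r, x0, p0):
    """Filter measurements zs; return a list of (x_hat_k, P_k) per step."""
    x_prior, p_prior = x0, p0          # x_hat_0^-, P_0^-
    estimates = []
    for z in zs:
        # Filtering gain K_k
        k_gain = p_prior * h / (h * p_prior * h + r)
        # State measurement estimate (a posteriori)
        x_post = x_prior + k_gain * (z - h * x_prior)
        # State measurement covariance (Joseph form, as in the equations)
        p_post = ((1 - k_gain * h) * p_prior * (1 - k_gain * h)
                  + k_gain * r * k_gain)
        estimates.append((x_post, p_post))
        # State time estimate and covariance at the next step t_{k+1}
        x_prior = phi * x_post
        p_prior = phi * p_post * phi + q
    return estimates

# Example: track a roughly constant signal through noisy measurements
est = kalman_filter([1.1, 0.9, 1.05, 0.98], phi=1.0, q=0.001,
                    h=1.0, r=0.1, x0=0.0, p0=1.0)
```

With each measurement absorbed, the covariance P_k shrinks toward a steady state while the estimate settles near the underlying signal.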

[0063] It is worth mentioning that C. F. van Loan's method is employed to compute ϕ_k and Q_k. As previously mentioned, the Kalman filter has numerous applications for guidance, navigation, and control of aerospace vehicles, e.g., aircraft, spacecraft, rockets, and missiles. However, the filter will be combined, as introduced herein, with smoothing and predicting with applications to (possibly real-time) predictive and prescriptive analytics.
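Van Loan's method is cited but not reproduced in the source; the following is a sketch of the standard block-matrix-exponential formulation, which, given a continuous-time system matrix A and process-noise spectral density Qc, yields the discrete ϕ_k and Q_k for a step dt. The truncated-Taylor `expm` below is adequate only for small, well-scaled arguments; production code would use `scipy.linalg.expm`.

```python
# Van Loan discretization sketch: form M = dt * [[-A, Qc], [0, A^T]],
# exponentiate, and read phi and Q out of the blocks of exp(M).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(M, terms=30):
    """Matrix exponential via truncated Taylor series (small matrices only)."""
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, M)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

def van_loan(A, Qc, dt):
    n = len(A)
    # Block matrix dt * [[-A, Qc], [0, A^T]]
    M = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            M[i][j] = -A[i][j] * dt
            M[i][j + n] = Qc[i][j] * dt
            M[i + n][j + n] = A[j][i] * dt
    E = expm(M)
    # phi is the transpose of the lower-right block; Q = phi @ (upper-right)
    phi = [[E[j + n][i + n] for j in range(n)] for i in range(n)]
    B12 = [[E[i][j + n] for j in range(n)] for i in range(n)]
    return phi, mat_mul(phi, B12)

# Example: constant-velocity model gives phi = [[1, dt], [0, 1]] and
# Q = q * [[dt^3/3, dt^2/2], [dt^2/2, dt]]
phi, Qk = van_loan([[0.0, 1.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]], 1.0)
```

For the constant-velocity example with dt = 1 and q = 1, the blocks of exp(M) can be computed exactly (M is nilpotent), reproducing the well-known closed-form ϕ and Q.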

[0064] The second part of the three-part system is an implementation of discrete fixed-interval smoothing. The smoothing form of optimal estimation is when an estimate falls within a span of measurement points. For the proposed system, the time interval of the measurements is fixed (hence the name) and optimal estimates of the (saved) states x̂_k are obtained.

[0065] With initial conditions given by the last a posteriori estimate and covariance from the filter


x_T^s = x̂_T


P_T^s = P_T,

the smoother sweeps backward recursively


C_k = P_k ϕ_k^T (P_{k+1}^−)^{−1}    (smoothing gain)


x_k^s = x̂_k + C_k (x_{k+1}^s − ϕ_k x̂_k)    (state smoothing estimate)


P_k^s = P_k + C_k (P_{k+1}^s − P_{k+1}^−) C_k^T    (state smoothing covariance)
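The backward sweep can be sketched for a scalar state as follows (illustrative only; the inputs xs_f and ps_f are the saved a posteriori filter estimates x̂_k and covariances P_k, and phi and q stand in for ϕ_k and Q_k):

```python
# Illustrative scalar version of the fixed-interval (RTS-style) backward
# smoothing sweep, initialized from the last a posteriori filter estimate.

def rts_smoother(xs_f, ps_f, phi, q):
    """Sweep backward from the last filter estimate; return (x_s, P_s)."""
    xs_s = xs_f[:]                      # x_T^s = x_hat_T
    ps_s = ps_f[:]                      # P_T^s = P_T
    for k in range(len(xs_f) - 2, -1, -1):
        p_prior = phi * ps_f[k] * phi + q          # P_{k+1}^-
        c = ps_f[k] * phi / p_prior                # smoothing gain C_k
        xs_s[k] = xs_f[k] + c * (xs_s[k + 1] - phi * xs_f[k])
        ps_s[k] = ps_f[k] + c * (ps_s[k + 1] - p_prior) * c
    return xs_s, ps_s
```

Because each smoothed point reuses information from later measurements, the smoothed covariances are never larger than the corresponding filtered ones.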

[0066] The third part of the three-part system is an implementation of a predictor. The predicting form of optimal estimation is when an estimate falls beyond the last measurement point. The equations for the predictor are identical to the filter with the following three exceptions: [0067] (1) The initial conditions are given by the last a posteriori estimate and covariance of the smoother:


x̂_0^− = x_T^s


P_0^− = P_T^s,

[0068] (2) The covariance of the measurement noise R_k is set to a large value, rendering the measurements worthless, because no measurements are available during prediction.

[0069] (3) As such, z_k is fixed to the value of the last measurement.
The predictor propagates forward for the forecast period of interest. These predictions may be at various points in the future over various time periods. For example, one analytics application might predict temperature one minute into the future and/or five minutes into the future. Also, there may be temperature predictions at hourly or daily rates. These example combinations would be dependent on the application, of course.
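A minimal sketch of the predictor, continuing the scalar notation: the filter recursion is reused with r set arbitrarily large and z frozen at the last measurement, so the gain collapses toward zero and the dynamic model alone carries the forecast while the covariance grows by q each step.

```python
# Illustrative scalar sketch of the predictor: filter equations with a huge
# measurement-noise covariance (no real measurements) and z frozen at the
# last measured value.

def predict_forward(x_s, p_s, phi, q, steps, z_last, h=1.0, r_huge=1e12):
    """Initialize from the smoother's last estimate; return (x, P) forecasts."""
    x_prior, p_prior = x_s, p_s
    forecasts = []
    for _ in range(steps):
        k_gain = p_prior * h / (h * p_prior * h + r_huge)   # effectively 0
        x_post = x_prior + k_gain * (z_last - h * x_prior)  # z_last ~ ignored
        p_post = ((1 - k_gain * h) * p_prior * (1 - k_gain * h)
                  + k_gain * r_huge * k_gain)
        x_prior = phi * x_post
        p_prior = phi * p_post * phi + q                    # uncertainty grows
        forecasts.append((x_prior, p_prior))
    return forecasts
```

With phi = 1, the forecast holds the last smoothed value while its covariance widens, reflecting the absence of new measurements.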

[0070] Those skilled in the art know how to model dynamic processes and measurements described by x_{k+1} = ϕ_k x_k + w_k and z_k = H_k x_k + v_k, respectively. Thus, once initialized with x̂_0^− and P_0^−, the five-step Kalman filter iterates recursively until the set of data to be filtered is exhausted, resulting in state estimates x̂_k and state covariances P_k.

[0071] Upon saving the state estimates x̂_k and the state covariances P_k, and properly initializing x_T^s and P_T^s with the last entries x̂(T_f) and P(T_f), the three-step smoother iterates recursively with a backward sweep to an earlier time point, as illustrated in FIG. 2 (fixed-interval smoothing), until all states and covariances are consumed. At this point, the system is prepared for predictive and prescriptive analytics.

[0072] The last state estimate and state covariance of the smoother are used to initialize the predictor. The predictor runs just like the five-step Kalman filter with two exceptions: (i) the covariance of the measurement noise R_k is set to an arbitrarily large value to indicate the measurements are worthless, because there are not any, and (ii) the measurement z_k is fixed to its final value, because that is the last piece of information available.

[0073] An application of the disclosed three-part system's implementation is shown in FIG. 3 illustrating an example of filtering, smoothing, and predicting for analytics where the future position of a moving object is predicted. In FIG. 3, the actual position of an object is depicted on the vertical axis with a dashed line, the filtered position is represented by dots, the smoothed position is denoted by a solid line, and the predicted position is shown with “x” marking the spots. Measurements are filtered and smoothed up until 50 seconds, at which time, predictions are made 30 seconds into the future.

[0074] Referring again to FIG. 1, if the performance of the three-part system is better than that of ETS, then its performance is better than the rest. In this section, details are presented showing that filtering with prediction is better than ETS. When combined with smoothing, performance improves further and forms the basis of the accurate prediction shown in FIG. 3.

[0075] Data used to perform the analysis was gathered from a live, operating Horizontal Pump System (HPS). Filtering and predicting was tested with eighteen data sets. Measurements were taken every hour so that one forecasting period represents one hour of elapsed time. The data measures various components of the HPS including bearing and winding temperatures in the motor, pump vibration and suction pressure, overall system health, and other attributes of the system.

[0076] Each data set includes noise which may vary with time. The different sources of data provide a mixture of different characteristics such as seasonality, trends, impulses, and randomness. For example, temperature data is affected by the day/night (diurnal) cycle which creates a (short) seasonal characteristic. Vibration data, however, is not affected by the day/night cycle and is not seasonal but does contain a significant portion of randomness.

[0077] A missed prediction occurs when an observed measurement exceeds a threshold value but no forecast predicted the exception. Any forecast that predicted the exception within twelve periods leading up to the exception was not considered, because such a short forecast is not useful. A prediction strategy should produce as few missed predictions as possible.
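Under this definition, the missed-prediction rate can be sketched as below. This is a hypothetical helper, not taken from the source: the representation of forecasts as a mapping from period index to forecast value, the function name, and the handling of the twelve-period exclusion window are all our assumptions.

```python
# Sketch of the missed-prediction metric described above: an "exception" is
# a measurement that crosses the threshold; it counts as missed unless some
# forecast more than `exclude` periods earlier predicted a crossing.
# `forecasts` maps period index -> forecast value made for that period
# (a hypothetical representation chosen for this sketch).

def missed_prediction_rate(measurements, forecasts, threshold, exclude=12):
    exceptions = [t for t, m in enumerate(measurements) if m > threshold]
    if not exceptions:
        return 0.0
    missed = 0
    for t in exceptions:
        # Forecasts inside the exclusion window are too late to be useful.
        predicted = any(forecasts.get(s, float("-inf")) > threshold
                        for s in range(0, t - exclude))
        if not predicted:
            missed += 1
    return 100.0 * missed / len(exceptions)
```

For example, a threshold crossing at period 19 counts as predicted if a forecast made at period 3 exceeded the threshold, but not if the only such forecast was at period 15 (inside the twelve-period window).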

[0078] Turning now to Table 2 (below), illustrated are temperature and vibration results after filtering and predicting, showing sensitivities, forecast lengths, and average percentages of missed predictions.

TABLE 2

                              Forecast Length    Average % Missed Predictions
Strategy          Sensitivity Periods (Days)     Low Noise    High Noise
ETS               High        24 (1)             18.92%       43.19%
Filter/Predictor  High        24 (1)              0.00%        0.00%
ETS               Medium      24 (1)             16.67%       42.94%
Filter/Predictor  Medium      24 (1)              0.00%        0.00%
ETS               Low         24 (1)             61.17%       62.05%
Filter/Predictor  Low         24 (1)             24.96%       13.89%
ETS               High        336 (14)            0.00%        5.56%
Filter/Predictor  High        336 (14)            0.00%        0.00%
ETS               Medium      336 (14)            0.00%        2.78%
Filter/Predictor  Medium      336 (14)            0.00%        0.00%
ETS               Low         336 (14)            0.00%        5.56%
Filter/Predictor  Low         336 (14)            0.00%        0.00%
Table 2 compares each strategy (ETS versus filter/predictor) over 24 periods (1 day) and 336 periods (14 days). The filter sensitivity column refers to how closely the signal is being tracked. For instance, temperature changes slowly over time, so the filter/predictor combination is set to high sensitivity to track the slowly changing signal, whereas vibration, which contains high-frequency noise, is set to low sensitivity, as illustrated in FIGS. 4A and 4B showing performance examples of temperature tracking and vibration filtering in a noisy measurement environment. The Average % Missed Predictions column in Table 2 shows the probability that, given the data has crossed a critical threshold, the corresponding strategy failed to predict the event. A lower percentage indicates better performance for this metric. In each scenario (low noise/high noise), the filter/predictor strategy with a high or medium sensitivity correctly predicted every event. Even in the case of low sensitivity, the filter/predictor strategy outperformed the ETS strategy.

[0079] The filtering, smoothing, and predicting process introduced herein outperforms the ETS strategy, which was the basis of performance assessment over machine learning strategies as shown in Table 1. These results appear to be independent of the sensitivity setting (low, medium, or high). Therefore, in general, a practitioner could use the filter/predictor strategy to avoid missed predictions. Furthermore, with the inclusion of smoothing, these results are improved upon, as shown in FIG. 3.

[0080] Turning now to FIG. 5, illustrated is a flow diagram of an embodiment of a method 500 of estimating a state of a system. The method 500 may be employable to estimate a state of a system in a machine learning and/or noisy measurement environment. The method 500 is operable on a processor, such as a microprocessor, coupled to a memory containing instructions that, when executed by the processor, are operative to perform the functions described herein. The method 500 begins at a start step or module 505.

[0081] At a step or module 510, a first estimate of a state of a system is constructed at a first time including a first covariance matrix describing an accuracy of the first estimate.

[0082] At a step or module 520, a second estimate of the state of said system is constructed at a second time, after the first time, including a second covariance matrix describing an accuracy of the second estimate employing a dynamic model of the state of the system; the dynamic model comprises a matrix with coefficients that describes a temporal evolution of the state of the system.

[0083] At a step or module 530, a value of a characteristic of the state of the system is measured at the second time. Measuring the value of the characteristic can include making a plurality of independent measurements characterized by a diagonal measurement covariance matrix. At a step or module 540, the second estimate of the state of the system and the second covariance matrix are adjusted based on the value of the characteristic.

[0084] At a step or module 550, a third estimate of the state of the system is constructed at a third time, before the second time, including a third covariance matrix describing an accuracy of the third estimate employing the dynamic model of the state of the system.

[0085] At a step or module 560, a fourth estimate of the state of the system is constructed at a fourth time, after the second time, from the second estimate. In some embodiments, the fourth time is on a different time scale from the first, second and third times.

[0086] At a step or module 570, the dynamic model is altered in response to the value of the characteristic.

[0087] At a step or module 580, the state of the system is reported based on the fourth estimate.

[0088] At a step or module 590, a fifth estimate of the state of the system is constructed at a fifth time, after the second time, from the second estimate.

[0089] In certain embodiments, the dynamic model is a linear dynamic model with constant coefficients. In an embodiment, constructing the first estimate and constructing the second estimate are performed by a Kalman filter.

[0090] At a step or module 595, the state of the system is altered based on the fourth estimate.

[0091] The method 500 terminates at an end step or module 598.

[0092] The impact on implementation of predictive analysis of the processes introduced herein cannot be overstated. Whereas machine learning approaches are directly dependent on a large and fully populated training corpus, purely statistical approaches, such as ETS and the novel filter/predictor strategy introduced herein, learn directly from the real-time signal without additional data or knowledge imposed. Based upon the findings indicated in Table 1, the established ETS approach already performs better than the more widely used machine learning techniques. The improvements and advantages of the process introduced herein over ETS (shown in Table 2) only solidify the merits of the new approach.

[0093] In short, an advantage of the novel filtering, smoothing, and predicting process is that it does not require a priori knowledge, as machine learning techniques do. Because the system combines the optimal estimation techniques of filtering, smoothing, and predicting, there are no dependencies on artificial neural nets and their (shallow, greedy, brittle, and opaque) shortcomings.

[0094] Turning now to FIG. 6, illustrated is a block diagram of an embodiment of an apparatus 600 for estimating the state of a system in a machine learning environment. The apparatus 600 is configured to perform functions described hereinabove of constructing the estimate of the state of the system. The apparatus 600 includes a processor (or processing circuitry) 610, a memory 620 and a communication interface 630 such as a graphical user interface.

[0095] The functionality of the apparatus 600 may be provided by the processor 610 executing instructions stored on a computer-readable medium, such as the memory 620 shown in FIG. 6. Alternative embodiments of the apparatus 600 may include additional components (such as the interfaces, devices and circuits) beyond those shown in FIG. 6 that may be responsible for providing certain aspects of the device's functionality, including any of the functionality to support the solution described herein.

[0096] The processor 610 (or processors), which may be implemented with one or a plurality of processing devices, performs functions associated with its operation including, without limitation, performing the operations of estimating the state of a system, computing covariance matrices, and estimating a future state of the system. The processor 610 may be of any type suitable to the local application environment, and may include one or more of general-purpose computers, special purpose computers, microprocessors, digital signal processors (“DSPs”), field-programmable gate arrays (“FPGAs”), application-specific integrated circuits (“ASICs”), and processors based on a multi-core processor architecture, as non-limiting examples.

[0097] The processor 610 may include, without limitation, application processing circuitry. In some embodiments, the application processing circuitry may be on separate chipsets. In alternative embodiments, part or all of the application processing circuitry may be combined into one chipset, and other application circuitry may be on a separate chipset. In still alternative embodiments, part or all of the application processing circuitry may be on the same chipset, and other application processing circuitry may be on a separate chipset. In yet other alternative embodiments, part or all of the application processing circuitry may be combined in the same chipset.

[0098] The memory 620 (or memories) may be one or more memories and of any type suitable to the local application environment, and may be implemented using any suitable volatile or nonvolatile data storage technology such as a semiconductor-based memory device, a magnetic memory device and system, an optical memory device and system, fixed memory and removable memory. The programs stored in the memory 620 may include program instructions or computer program code that, when executed by an associated processor, enable the respective apparatus 600 to perform its intended tasks. Of course, the memory 620 may form a data buffer for data transmitted to and from the same. Exemplary embodiments of the system, subsystems, and modules as described herein may be implemented, at least in part, by computer software executable by the processor 610, or by hardware, or by combinations thereof.

[0099] The communication interface 630 modulates information for transmission by the respective apparatus 600 to another apparatus. The communication interface 630 is also configured to receive information from another processor for further processing. The communication interface 630 can support duplex operation with the other processor or apparatus.

[0100] In summary, the inventions disclosed herein combine three techniques of optimal estimation of the state of a system. The three techniques include filtering, smoothing, and predicting processes, and can be performed, without limitation, in a machine learning and/or a noisy measurement environment.

[0101] The filtering portion of optimal estimation is performed to construct a first estimate of a state vector x_k at a time point t_k that coincides with a measurement of a value of a characteristic of the state of the system at the time point t_k. The filtering process employs a covariance matrix that describes the accuracy of the first estimate of the state vector x_k at the time point t_k. A second estimate of the state vector x_{k+1} at the time point t_{k+1} is then constructed by propagating the state of the system forward to a second time point t_{k+1}, the second time point being after the first time point. The propagating forward employs a dynamic model of the state of the system to produce the estimate of the state vector x_{k+1} at the second time point t_{k+1}. Constructing the first estimate of the state vector x_k and constructing the second estimate of the state vector x_{k+1} can be performed by employing a Kalman filter.

[0102] The dynamic model can employ a matrix with coefficients that describes the temporal evolution of the state of the system. In certain embodiments, the dynamic model is a linear dynamic model with constant coefficients.

[0103] A value of a characteristic of the state of the system x_{k+1} is measured at the second time point t_{k+1}. The second estimate of the state of the system and the second covariance matrix are adjusted based on the measured value of the characteristic at the second time point t_{k+1}.

[0104] Measuring the value of the characteristic can include making a plurality of independent measurements characterized by a diagonal measurement covariance matrix.

[0105] The smoothing portion of optimal estimation is performed by constructing a third state estimate for a time point that is earlier than the time point t_{k+1}. The earlier time point can fall within or before a span of current measurement points, e.g., between or before the time points t_k and t_{k+1}.

[0106] The predicting portion then propagates the state estimate forward for a forecast period of interest. The last state estimate and state covariance of the smoother can be used to initialize the predicting. The predictions may be at various time points in the future and over various time scales that are after the second time point. The measurement noise covariance R_k can be set to an arbitrarily large value to accommodate the inherent absence of a state measurement at a future time point. The initial conditions for the prediction can be taken as the last a posteriori state estimate and the covariance of the smoother.

[0107] As described above, the exemplary embodiments provide both a method and corresponding apparatus consisting of various modules providing functionality for performing the steps of the method. The modules may be implemented as hardware (embodied in one or more chips including an integrated circuit such as an application specific integrated circuit), or may be implemented as software or firmware for execution by a processor. In particular, in the case of firmware or software, the exemplary embodiments can be provided as a computer program product including a computer readable storage medium embodying computer program code (i.e., software or firmware) thereon for execution by the computer processor. The computer readable storage medium may be non-transitory (e.g., magnetic disks; optical disks; read only memory; flash memory devices; phase-change memory) or transitory (e.g., electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.). The coupling of a processor and other components is typically through one or more busses or bridges (also termed bus controllers). The storage device and signals carrying digital traffic respectively represent one or more non-transitory or transitory computer readable storage media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device such as a controller.

[0108] Although the embodiments and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope thereof as defined by the appended claims. For example, many of the features and functions discussed above can be implemented in software, hardware, or firmware, or a combination thereof. Also, many of the features, functions, and steps of operating the same may be reordered, omitted, added, etc., and still fall within the broad scope of the various embodiments.

[0109] Moreover, the scope of the various embodiments is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized as well. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.