The Kalman filter produces estimates of the true values of measurements and their associated calculated quantities by predicting a value, estimating the uncertainty of that prediction, and computing a weighted average of the predicted value and the measured value. The most weight is given to the value with the least uncertainty. The estimates produced by the method tend to be closer to the true values than the raw measurements, because the weighted average has a smaller estimated uncertainty than either of the values that went into it.
The Kalman filter has two distinct phases: predict and update. The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the current a priori prediction is combined with current observation information to refine the state estimate. This improved estimate is termed the a posteriori state estimate. Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction steps performed. Likewise, if multiple independent observations are available at the same time, multiple update steps may be performed (typically with different observation matrices H(k)).
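As a sketch of this alternation, consider a minimal scalar filter (all names and numbers below are illustrative, not part of the original text): predict runs every step, while update runs only when an observation arrived.

```python
# Sketch of the predict/update alternation for a scalar random-constant model
# (state-transition A = 1, observation H = 1, no control input).

def predict(x, p, q):
    # A priori estimate: state unchanged, uncertainty grows by process noise q.
    return x, p + q

def update(x, p, z, r):
    # A posteriori estimate: blend the prediction with measurement z.
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0                          # initial state estimate and covariance
measurements = [1.2, None, 0.9, 1.1]     # None = observation unavailable
for z in measurements:
    x, p = predict(x, p, q=0.01)
    if z is not None:                    # skip the update when no observation
        x, p = update(x, p, z, r=0.1)
```

Note how the covariance p keeps growing through prediction-only steps and shrinks each time an observation is folded in.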
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman filter. This means specifying the following matrices:
Fk, the state-transition model; Hk, the observation model; Qk, the covariance of the process noise; Rk, the covariance of the observation noise; and, sometimes, Bk, the control-input model, for each time-step.
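For concreteness, here is one possible choice of these matrices for a hypothetical constant-velocity model (position plus velocity, with only the position measured); the values are made up for illustration and are not the only valid choice.

```python
import numpy as np

# Hypothetical constant-velocity model with time-step dt (illustrative values).
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])    # state-transition model: position += velocity*dt
H = np.array([[1.0, 0.0]])    # observation model: only position is measured
Q = 0.01 * np.eye(2)          # covariance of the process noise
R = np.array([[0.25]])        # covariance of the observation noise
B = np.array([[0.0],
              [0.0]])         # control-input model (unused in this example)
```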
First of all, we introduce a discrete-time controlled process. The system can be described by a linear stochastic difference equation:
X(k) = A X(k-1) + B U(k) + W(k)
together with a measurement equation:
Z(k) = H X(k) + V(k)
In these two formulas, X(k) is the system state at time k and U(k) is the control input at time k. A and B are system parameters; for a multivariable system they are matrices. Z(k) is the measured value at time k, and H is the measurement-system parameter; for a multi-measurement system, H is a matrix. W(k) and V(k) represent the process noise and the measurement noise, respectively. They are assumed to be white Gaussian noise with covariances Q and R (here we assume these do not change as the system state changes).
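These two equations can be simulated directly. The following sketch uses a scalar system with made-up parameter values (A = 1, no control input), drawing W(k) and V(k) from Gaussians with variances Q and R:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, H = 1.0, 0.0, 1.0        # scalar system parameters (illustrative values)
Q, R = 0.01, 0.25              # process / measurement noise variances

x = 2.0                        # true initial state (illustrative)
states, measurements = [], []
for k in range(50):
    # X(k) = A X(k-1) + B U(k) + W(k), with W(k) ~ N(0, Q)
    x = A * x + B * 0.0 + rng.normal(0.0, np.sqrt(Q))
    # Z(k) = H X(k) + V(k), with V(k) ~ N(0, R)
    z = H * x + rng.normal(0.0, np.sqrt(R))
    states.append(x)
    measurements.append(z)
```

The filter's job is to recover the `states` sequence given only the `measurements` sequence and the model parameters.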
When the above conditions are met (a linear stochastic system whose process and measurement noise are white Gaussian), the Kalman filter is the optimal information processor.
First we use the system's process model to predict the next state of the system. Given the state at time k-1, the model predicts the state at time k:
X(k|k-1) = A X(k-1|k-1) + B U(k)    (1)
In formula (1), X(k|k-1) is the result predicted from the previous state, X(k-1|k-1) is the optimal result for the previous state, and U(k) is the control input for the present state; if there is no control input, it can be 0.
At this point the state prediction has been updated, but the covariance corresponding to X(k|k-1) has not yet been updated. Denoting the covariance by P:
P(k|k-1) = A P(k-1|k-1) A' + Q    (2)
In formula (2), P(k|k-1) is the covariance corresponding to X(k|k-1), P(k-1|k-1) is the covariance corresponding to X(k-1|k-1), A' denotes the transpose of A, and Q is the covariance of the process noise. Formulas (1) and (2) are the first two of the five Kalman filter formulas: the prediction of the system.
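Formulas (1) and (2) translate line for line into code. A minimal numpy sketch (the function name and the example matrices are illustrative assumptions, not from the original text):

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    # (1) X(k|k-1) = A X(k-1|k-1) + B U(k)
    x_pred = A @ x + B @ u
    # (2) P(k|k-1) = A P(k-1|k-1) A' + Q
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

# Tiny usage: constant-velocity state [position; velocity], no control input.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.zeros((2, 1)); u = np.zeros((1, 1))
Q = 0.1 * np.eye(2)
x0 = np.array([[0.0], [1.0]]); P0 = np.eye(2)
x_pred, P_pred = kf_predict(x0, P0, A, B, u, Q)
```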
Now that we have the prediction of the current state, we collect the measurement of the current state. Combining the predicted value with the measured value, we obtain the optimal estimate X(k|k) of the current state k:
X(k|k) = X(k|k-1) + Kg(k) (Z(k) - H X(k|k-1))    (3)
where Kg(k) is the Kalman gain:
Kg(k) = P(k|k-1) H' / (H P(k|k-1) H' + R)    (4)
At this point we have the optimal estimate X(k|k) of the state at time k. However, to keep the Kalman filter running until the process ends, we also need to update the covariance of X(k|k) at time k:
P(k|k) = (I - Kg(k) H) P(k|k-1)    (5)
where I is the identity matrix; for a single-state, single-measurement system, I = 1. When the system enters state k+1, P(k|k) plays the role of P(k-1|k-1) in formula (2). In this way the algorithm runs recursively.
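The update phase, formulas (3)-(5), can be sketched in the same style (the function name is illustrative; in the matrix case the division in formula (4) becomes a matrix inverse):

```python
import numpy as np

def kf_update(x_pred, P_pred, z, H, R):
    # (4) Kg(k) = P(k|k-1) H' (H P(k|k-1) H' + R)^-1
    Kg = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # (3) X(k|k) = X(k|k-1) + Kg(k) (Z(k) - H X(k|k-1))
    x = x_pred + Kg @ (z - H @ x_pred)
    # (5) P(k|k) = (I - Kg(k) H) P(k|k-1)
    P = (np.eye(P_pred.shape[0]) - Kg @ H) @ P_pred
    return x, P

# Tiny 1x1 usage: prior 0 with variance 1, measurement 1 with variance 1.
x, P = kf_update(np.array([[0.0]]), np.array([[1.0]]),
                 np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]]))
```

With equal prior and measurement uncertainty the gain is 0.5, so the estimate lands halfway between prediction and measurement, and the covariance halves.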
This is essentially the principle of the Kalman filter; formulas (1) to (5) are its five basic formulas. From these five formulas, a computer program is easy to implement.
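Chaining all five formulas gives a complete filter. The sketch below estimates a constant scalar from noisy measurements using 1x1 matrices; the true value, noise levels, and random seed are all made-up illustration values.

```python
import numpy as np

# The five formulas as one recursion, estimating a constant scalar.
A = np.eye(1); H = np.eye(1)
Q = np.array([[1e-5]]); R = np.array([[0.1]])

rng = np.random.default_rng(1)
zs = 1.0 + rng.normal(0.0, np.sqrt(R[0, 0]), size=100)  # noisy measurements

x = np.zeros((1, 1)); P = np.eye(1)                     # initial guesses
for z in zs:
    x = A @ x                                           # (1) no control input
    P = A @ P @ A.T + Q                                 # (2)
    Kg = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # (4)
    x = x + Kg @ (np.array([[z]]) - H @ x)              # (3)
    P = (np.eye(1) - Kg @ H) @ P                        # (5)
```

After a hundred steps the estimate settles near the true value and the covariance P shrinks toward a small steady-state value, illustrating the recursion described above.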