Global Positioning Systems, Inertial Navigation, and Integration, Mohinder S. Grewal, Lawrence R. Weill, Angus P. Andrews. Copyright © 2001 John Wiley & Sons, Inc. Print ISBN 0-471-35032-X; Electronic ISBN 0-471-20071-9.

8 Kalman Filter Engineering

We now consider the following practical aspects of Kalman filtering applications:

1. how performance of the Kalman filter can degrade due to computer roundoff errors, and alternative implementation methods with better robustness against roundoff;
2. how to determine computer memory, word length, and throughput requirements for implementing Kalman filters in computers;
3. ways to implement real-time monitoring and analysis of filter performance;
4. the Schmidt–Kalman suboptimal filter, designed for reducing computer requirements;
5. covariance analysis, which uses the Riccati equations for performance-based predictive design of sensor systems; and
6. Kalman filter architectures for GPS/INS integration.

8.1 MORE STABLE IMPLEMENTATION METHODS

8.1.1 Effects of Computer Roundoff

Computer roundoff limits the precision of numerical representation in the implementation of Kalman filters. It has been shown to cause severe degradation of filter performance in many applications, and alternative implementations of the Kalman filter equations (the Riccati equations, in particular) have been shown to improve robustness against roundoff errors.

Computer roundoff for floating-point arithmetic is often characterized by a single parameter ε_roundoff, which is the largest number such that

    1 + ε_roundoff ≡ 1    (8.1)

in machine precision. The following example, due to Dyer and McReynolds [32], shows how a problem that is well conditioned, as posed, can be made ill-conditioned by the filter implementation.

Example 8.1 Let I_n denote the n × n identity matrix. Consider the filtering problem with measurement sensitivity matrix

    H = [ 1   1   1
          1   1   1 + δ ]

and covariance matrices P0 = I_3 and R = δ² I_2, where δ² < ε_roundoff but δ > ε_roundoff.
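The conditioning claim in Example 8.1 can be checked numerically. The following Python/NumPy sketch is not from the original text; the value δ = 1e-9 is my own choice, made so that δ² < ε_roundoff < δ in IEEE double precision:

```python
import numpy as np

delta = 1e-9  # chosen so delta**2 < machine epsilon (~2.2e-16) < delta
H = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0 + delta]])
P0 = np.eye(3)            # initial state covariance
R = delta**2 * np.eye(2)  # measurement noise covariance

# Innovations covariance used in the observational update
S = H @ P0 @ H.T + R

# H is unambiguously rank 2, but the delta**2 terms that would keep S
# nonsingular are lost to roundoff, so S is numerically singular.
print(np.linalg.matrix_rank(H))  # 2
print(np.linalg.matrix_rank(S))  # 1
```

The same computation carried out in exact arithmetic would give S full rank, which is precisely the point of the example: the implementation, not the problem, destroys the conditioning.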
In this case, although H clearly has rank 2 in machine precision, the product HP0H^T with roundoff will equal

    HP0H^T = [ 3       3 + δ
               3 + δ   3 + 2δ ],

which is singular. The result is unchanged when R is added to HP0H^T. In this case, then, the filter observational update fails because the matrix HP0H^T + R is not invertible.

8.1.2 Alternative Implementations

The covariance correction process (observational update) in the solution of the Riccati equation was found to be the dominant source of numerical instability in the Kalman filter implementation, with the more common symptoms of failure being asymmetry of the covariance matrix (easily fixed) or, worse by far, negative terms on its diagonal. These implementation problems could be avoided for some problems by using more precision, but they were eventually solved for most applications by using alternatives to the covariance matrix P as the dependent variable in the covariance correction equation. However, each of these methods required a compatible method for covariance prediction. Table 8.1 lists several of these compatible implementation methods for improving the numerical stability of Kalman filters.

TABLE 8.1 Compatible Methods for Implementing the Riccati Equation

    Covariance Matrix Format          Corrector Method            Predictor Method
    Symmetric nonnegative definite    Kalman [71], Joseph [19]    Kalman [71]
    Square Cholesky factor C          Potter [100, 8]             Kalman [71], C_{k+1} = Φ_k C_k
    Triangular Cholesky factor C      Carlson [20]                Kailath–Schmidt^a
    Triangular Cholesky factor C      Morf–Kailath combined [93]
    Modified Cholesky factors U, D    Bierman [10]                Thornton [116]

    ^a From unpublished sources.

Figure 8.1 illustrates how these methods perform on the ill-conditioned problem of Example 8.1 as the conditioning parameter δ → 0. For this particular test case, using 64-bit floating-point precision (52-bit mantissa), the accuracy of the Carlson [20] and Bierman [10] implementations degrades more gracefully than the others as δ → ε, the machine precision limit. The Carlson and Bierman solutions still maintain about nine digits (≈ 30 bits) of accuracy at δ ≈ ε, when the other methods have essentially no bits of accuracy in the computed solution.

Fig. 8.1 Degradation of numerical solutions with problem conditioning.

These results, by themselves, do not prove the general superiority of the Carlson and Bierman solutions for the Riccati equation. Relative performance of alternative implementation methods may depend upon details of the specific application, and for many applications the standard Kalman filter implementation will suffice. For many other applications, it has been found sufficient to constrain the covariance matrix to remain symmetric.

8.1.3 Serial Measurement Processing

It is shown in [73] that it is more efficient to process the components of a measurement vector serially, one component at a time, than to process them as a vector. This may seem counterintuitive, but it is true even if its implementation requires a transformation of measurement variables to make the associated measurement noise covariance R a diagonal matrix (i.e., with noise uncorrelated from one component to another).

8.1.3.1 Measurement Decorrelation If the covariance matrix R of measurement noise is not a diagonal matrix, then it can be made so by UDU^T decomposition (Eq. B.22) and changing the measurement variables:

    R_corr   = U_R D_R U_R^T,              (8.2)
    R_decorr = D_R (a diagonal matrix),    (8.3)
    z_decorr = U_R^{-1} z_corr,            (8.4)
    H_decorr = U_R^{-1} H_corr,            (8.5)

where R_corr is the nondiagonal (i.e., correlated component-to-component) measurement noise covariance matrix, and the new decorrelated measurement vector z_decorr has a diagonal measurement noise covariance matrix R_decorr and measurement sensitivity matrix H_decorr.
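As an illustration, Eqs. 8.2–8.5 can be sketched in Python with NumPy. The function names (`udu`, `decorrelate`) and the backward-substitution loop are my own, not the book's code; the sketch assumes R is symmetric positive definite:

```python
import numpy as np

def udu(R):
    """UDU^T factorization: R = U @ diag(d) @ U.T,
    with U unit upper triangular and d the diagonal of D."""
    n = R.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # Diagonal element: R[j,j] = d[j] + sum_{k>j} U[j,k]^2 d[k]
        d[j] = R[j, j] - np.sum(U[j, j+1:] ** 2 * d[j+1:])
        for i in range(j - 1, -1, -1):
            # Off-diagonal: R[i,j] = U[i,j] d[j] + sum_{k>j} U[i,k] d[k] U[j,k]
            U[i, j] = (R[i, j] - np.sum(U[i, j+1:] * d[j+1:] * U[j, j+1:])) / d[j]
    return U, d

def decorrelate(R_corr, z_corr, H_corr):
    """Change of measurement variables making the noise covariance
    diagonal, as in Eqs. 8.2-8.5."""
    U, d = udu(R_corr)
    # Apply U_R^{-1} by triangular solve instead of forming the inverse
    z_decorr = np.linalg.solve(U, z_corr)
    H_decorr = np.linalg.solve(U, H_corr)
    return np.diag(d), z_decorr, H_decorr
```

Because U is unit triangular and invertible, the transformed noise U_R^{-1} v has covariance U_R^{-1} R_corr U_R^{-T} = D_R, which is the diagonal matrix the serial update in the next subsection requires.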
8.1.3.2 Serial Processing of Decorrelated Measurements The components of z_decorr can now be processed one component at a time, using the corresponding row of H_decorr as its measurement sensitivity matrix and the corresponding diagonal element of R_decorr as its measurement noise variance. A MATLAB implementation of this procedure is listed in Table 8.2, where the final line is a "symmetrizing" procedure designed to improve robustness.

TABLE 8.2 MATLAB Implementation of Serial Measurement Update

    x = xk[-];
    P = Pk[-];
    for j = 1:l,
        z = zk(j);
        H = Hk(j,:);
        R = Rdecorr(j,j);
        K = P*H'/(H*P*H' + R);
        x = x + K*(z - H*x);
        P = P - K*H*P;
    end;
    xk[+] = x;
    Pk[+] = (P + P')/2;

8.1.4 Joseph Stabilized Implementation

This implementation of the Kalman filter is given in [19], where it is demonstrated that numerical stability of the solution to the Riccati equation can be improved by rearranging the standard formulas for the measurement update into the following format (given here for scalar measurements):

    z̄ = R^{-1/2} z,                            (8.6)
    H̄ = R^{-1/2} H,                            (8.7)
    K̄ = (H̄ P H̄^T + 1)^{-1} P H̄^T,             (8.8)
    P ← (I - K̄H̄) P (I - K̄H̄)^T + K̄ K̄^T.       (8.9)

These equations would replace those for K and P within the loop in Table 8.2. The Joseph stabilized implementation, and refinements in [10] and [46] (mostly taking advantage of partial results and the redundancy due to symmetry), are implemented in the MATLAB files Joseph.m, Josephb.m, and Josephdv.m, respectively, on the accompanying diskette.

8.1.5 Factorization Methods

8.1.5.1 Historical Background Robust implementation methods were introduced first for the covariance correction (measurement update), observed to be the principal source of numerical instability. In [100, 8], the idea of using a Cholesky factor (defined in Section B.8.1) of the covariance matrix P as the dependent variable in the Riccati equation was introduced. Carlson [20] discovered a more robust method using triangular Cholesky factors, which have zeros either above or below their main diagonals. Bierman [10] extended ...
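To tie the serial update of Table 8.2 together with the Joseph stabilized correction of Eqs. 8.6–8.9, here is a Python/NumPy rendering of the scalar-measurement loop. This is my own sketch, not code from the book's diskette; the function name and the `joseph` flag are assumptions made for illustration:

```python
import numpy as np

def serial_update(x, P, z, H, Rdiag, joseph=False):
    """Observational update processing a decorrelated measurement
    vector one scalar component at a time.

    x, P   : prior state estimate and covariance
    z      : measurement vector with diagonal noise covariance
    H      : measurement sensitivity matrix (one row per component)
    Rdiag  : measurement noise variances (diagonal of R_decorr)
    joseph : if True, use the Joseph stabilized form (Eqs. 8.6-8.9)
    """
    n = len(x)
    I = np.eye(n)
    for j in range(len(z)):
        h = H[j]      # row for this measurement component
        r = Rdiag[j]  # scalar noise variance
        if joseph:
            # Scale the measurement so its noise variance is 1 (Eqs. 8.6-8.7)
            hbar = h / np.sqrt(r)
            zbar = z[j] / np.sqrt(r)
            K = P @ hbar / (hbar @ P @ hbar + 1.0)       # Eq. 8.8
            x = x + K * (zbar - hbar @ x)
            IKH = I - np.outer(K, hbar)
            P = IKH @ P @ IKH.T + np.outer(K, K)         # Eq. 8.9
        else:
            # Conventional form, as in the loop of Table 8.2
            K = P @ h / (h @ P @ h + r)
            x = x + K * (z[j] - h @ x)
            P = P - np.outer(K, h @ P)
    return x, (P + P.T) / 2  # symmetrize, as in the last line of Table 8.2
```

In exact arithmetic the two branches produce identical results; the Joseph form simply trades extra arithmetic for guaranteed symmetry and nonnegativity of the updated covariance under roundoff.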