
Chapter 17
Real-Time Online Processing of Hyperspectral Imagery for Target Detection and Discrimination

Qian Du, Mississippi State University

Contents
17.1 Introduction
17.2 Real-Time Implementation
     17.2.1 BIP Format
     17.2.2 BIL Format
     17.2.3 BSQ Format
17.3 Computer Simulation
17.4 Practical Considerations
     17.4.1 Algorithm Simplification Using R^{-1}
     17.4.2 Algorithm Implementation with Matrix Inversion
     17.4.3 Unsupervised Processing
17.5 Application to Other Techniques
17.6 Summary
Acknowledgment
References

Hyperspectral imaging is a new technology in remote sensing. It acquires hundreds of images in very narrow spectral bands (normally 10 nm wide) for the same area on the Earth. Because of the higher spectral resolution and the resulting contiguous spectral signatures, hyperspectral image data can identify surface materials more accurately than multispectral data, and are particularly useful in national defense related applications. The major challenge of hyperspectral imaging is how to take full advantage of the rich spectral information while efficiently handling the vast data volume.
In some cases, such as national disaster assessment, law enforcement activities, and military applications, real-time data processing is indispensable for quickly processing data and providing the information for immediate response. In this chapter, we present a real-time online processing technique using hyperspectral imagery for the purpose of target detection and discrimination. This technique is developed for our proposed algorithm, called the constrained linear discriminant analysis (CLDA) approach. However, it is applicable to quite a few target detection algorithms employing matched filters. The implementation scheme is also developed for different remote sensing data formats, such as band interleaved by pixel (BIP), band interleaved by line (BIL), and band sequential (BSQ).

© 2008 by Taylor & Francis Group, LLC

17.1 Introduction

We have developed the constrained linear discriminant analysis (CLDA) algorithm for hyperspectral image classification [1, 2]. In CLDA, the original high-dimensional data are projected onto a low-dimensional space, as done by Fisher's LDA, but different classes are forced to be along different directions in this low-dimensional space. Thus all classes are expected to be better separated, and the classification is achieved simultaneously with the CLDA transform. The transformation matrix in CLDA maximizes the ratio of interclass distance to intraclass distance while satisfying the constraint that the means of different classes are aligned with different directions; it can be constructed by using an orthogonal subspace projection (OSP) method [3] coupled with a data whitening process. The experimental results in [1, 2] demonstrated that the CLDA algorithm can provide more accurate classification results than other popular methods in hyperspectral image processing, such as the OSP classifier [3] and the constrained energy minimization (CEM) operator [4].
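The OSP method referenced above builds a projector onto the subspace orthogonal to a set of undesired signatures. A minimal NumPy sketch of that projector (the matrix sizes and random signatures are illustrative assumptions, not data from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

L, c = 20, 4                  # L spectral bands, c class signatures (illustrative sizes)
U = rng.normal(size=(L, c))   # columns: undesired class signatures

# Orthogonal subspace projector annihilating the columns of U:
#   P_perp = I - U (U^T U)^{-1} U^T
P_perp = np.eye(L) - U @ np.linalg.inv(U.T @ U) @ U.T

# Every column of U is mapped to (numerically) zero, so a matched filter
# built on P_perp suppresses the undesired signatures.
print(np.allclose(P_perp @ U, 0))
```

A projector is idempotent, so applying `P_perp` twice changes nothing; this is a quick sanity check when implementing it.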
It is particularly useful for detecting and discriminating small man-made targets with similar spectral signatures.

Assume that there are c classes and the k-th class contains N_k patterns. Let N = N_1 + N_2 + ... + N_c be the total number of pixels. The j-th pattern in the k-th class, denoted by x_j^k = [x_{1j}, x_{2j}, ..., x_{Lj}]^T, is an L-dimensional pixel vector (L is the number of spectral bands, i.e., the data dimensionality). Let \mu_k = \frac{1}{N_k}\sum_{j=1}^{N_k} x_j^k be the mean of the k-th class. Define J(F) to be the ratio of the interclass distance to the intraclass distance after a linear transformation F, which is given by

J(F) = \frac{\frac{2}{c(c-1)} \sum_{i=1}^{c-1}\sum_{j=i+1}^{c} \|F(\mu_i) - F(\mu_j)\|^2}{\frac{1}{N} \sum_{k=1}^{c}\sum_{j=1}^{N_k} \|F(x_j^k) - F(\mu_k)\|^2}    (17.1)

and

F(x) = (W_{L \times c})^T x = [w_1, w_2, \cdots, w_c]^T x    (17.2)

The optimal linear transformation F^* is the one that maximizes J(F) subject to t_k = F(\mu_k) for all k, where t_k = (0 \cdots 0\,1\,0 \cdots 0)^T is a c x 1 unit column vector with a one in the k-th component and zeros elsewhere. F^* can be determined by

w_i^* = \hat{\mu}_i^T P_{\hat{U}_i}^{\perp}    (17.3)

where

P_{\hat{U}_i}^{\perp} = I - \hat{U}_i (\hat{U}_i^T \hat{U}_i)^{-1} \hat{U}_i^T    (17.4)

with \hat{U}_i = [\hat{\mu}_1 \cdots \hat{\mu}_j \cdots \hat{\mu}_c]_{j \neq i} and I the identity matrix. The 'hat' operator specifies the whitened data, i.e., \hat{x} = P_w x, where P_w is the data whitening operator.

Let S denote the entire class signature matrix, i.e., the matrix of the c class means. It was proved in [2] that the CLDA-based classifier using Eqs. (17.3)-(17.4) can be equivalently expressed as

P_k^T = [0 \cdots 0\,1\,0 \cdots 0] (S^T \Sigma^{-1} S)^{-1} S^T \Sigma^{-1}    (17.5)

for classifying the k-th class in S, where \Sigma is the sample covariance matrix.

17.2 Real-Time Implementation

In our research, we assume that an image is acquired from left to right and from top to bottom. Three real-time processing fashions will be discussed to fit the three remote sensing data formats: pixel-by-pixel processing for the BIP format, line-by-line processing for the BIL format, and band-by-band processing for the BSQ format.
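The classifier in Eq. (17.5) can be sketched in NumPy as follows; the signature matrix and covariance are synthetic stand-ins, and the check verifies the CLDA constraint, namely that applying the filter bank to the class means reproduces the unit vectors t_k:

```python
import numpy as np

rng = np.random.default_rng(1)

L, c = 30, 3                      # bands, classes (illustrative sizes)
S = rng.normal(size=(L, c))       # class signature matrix: column k is the mean of class k
A = rng.normal(size=(L, L))
Sigma = A @ A.T + np.eye(L)       # positive-definite stand-in for the sample covariance

Sigma_inv = np.linalg.inv(Sigma)

# Filter bank: row k is the classifier for class k,
#   W = (S^T Sigma^{-1} S)^{-1} S^T Sigma^{-1}
W = np.linalg.inv(S.T @ Sigma_inv @ S) @ S.T @ Sigma_inv

# Constraint check: W S = I_c, i.e. filter k responds with 1 to the
# k-th class mean and with 0 to every other class mean.
print(np.allclose(W @ S, np.eye(c)))
```

This algebraic identity is what lets the classifier both detect class k and suppress the other c-1 classes in a single linear operation.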
In the pixel-by-pixel fashion, a pixel vector is processed right after it is received, and the analysis result is generated within an acceptable delay; in the line-by-line fashion, a line of pixel vectors is processed after the entire line is received; in the band-by-band fashion, a band is processed after it is received.

In order to implement the CLDA algorithm in real time, Eq. (17.5) is used. The major advantage of using Eq. (17.5) instead of Eqs. (17.3) and (17.4) is the simplicity of real-time implementation, since the data whitening process is avoided. The key then becomes the adaptation of \Sigma^{-1}, the inverse sample covariance matrix. In other words, \Sigma^{-1} at time t can be quickly calculated by updating the previous \Sigma^{-1} at time t-1 using the data received at time t, without recalculating \Sigma and \Sigma^{-1} completely. As a result, the intermediate data analysis result (e.g., target detection) is available in support of decision-making even before the entire data set is received; and when the entire data set is received, the final data analysis result is completed (within a reasonable delay).

17.2.1 BIP Format

This format is easy to handle because a pixel vector of size L x 1 is received continuously. It fits well a spectral-analysis-based algorithm, such as CLDA.

Let the sample correlation matrix R be defined as R = \frac{1}{N}\sum_{i=1}^{N} x_i x_i^T, which can be related to \Sigma and the sample mean \mu by

\Sigma = R - \mu \mu^T    (17.6)

Using the data matrix X, Eq. (17.6) can be written as N\Sigma = X X^T - N \mu \mu^T. If \tilde{\Sigma} denotes N\Sigma, \tilde{R} denotes NR, and \tilde{\mu} denotes N\mu, then

\tilde{\Sigma} = \tilde{R} - \frac{1}{N} \tilde{\mu} \tilde{\mu}^T    (17.7)

Suppose that at time t we receive the pixel vector x_t. The data matrix X_t, including all the pixels received up to time t, is X_t = [x_1, x_2, \cdots, x_t] with N_t pixel vectors. The sample mean, sample correlation, and covariance matrices at time t are denoted by \mu_t, R_t, and \Sigma_t, respectively. Then Eq.
(17.7) becomes

\tilde{\Sigma}_t = \tilde{R}_t - \frac{1}{N_t} \tilde{\mu}_t \tilde{\mu}_t^T    (17.8)

The following Woodbury formula can be used to update \tilde{\Sigma}_t^{-1}:

(A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}    (17.9)

where A and C are two positive-definite matrices, and the sizes of the matrices A, B, C, and D allow the operation (A + BCD). It should be noted that Eq. (17.9) is stated for the most general case; A, B, C, and D can be reduced to vectors or scalars as long as Eq. (17.9) remains applicable. Comparing Eq. (17.8) with Eq. (17.9), with A = \tilde{R}_t, B = \tilde{\mu}_t, C = -1/N_t, and D = \tilde{\mu}_t^T, \tilde{\Sigma}_t^{-1} can be calculated using the variables at time (t-1) as

\tilde{\Sigma}_t^{-1} = \tilde{R}_t^{-1} + \tilde{R}_t^{-1} \tilde{\mu}_t (N_t - \tilde{\mu}_t^T \tilde{R}_t^{-1} \tilde{\mu}_t)^{-1} \tilde{\mu}_t^T \tilde{R}_t^{-1}    (17.10)

The \tilde{\mu}_t can be updated by

\tilde{\mu}_t = \tilde{\mu}_{t-1} + x_t    (17.11)

Since \tilde{R}_t and \tilde{R}_{t-1} are related as

\tilde{R}_t = \tilde{R}_{t-1} + x_t x_t^T    (17.12)

\tilde{R}_t^{-1} in Eq. (17.10) can be updated by using the Woodbury formula again:

\tilde{R}_t^{-1} = \tilde{R}_{t-1}^{-1} - \tilde{R}_{t-1}^{-1} x_t (1 + x_t^T \tilde{R}_{t-1}^{-1} x_t)^{-1} x_t^T \tilde{R}_{t-1}^{-1}    (17.13)

Note that (1 + x_t^T \tilde{R}_{t-1}^{-1} x_t) in Eq. (17.13) and (N_t - \tilde{\mu}_t^T \tilde{R}_t^{-1} \tilde{\mu}_t) in Eq. (17.10) are scalars. This means no matrix inversion is involved in each adaptation.

In summary, the real-time CLDA algorithm includes the following steps:

- Use Eq. (17.13) to update the inverse sample correlation matrix \tilde{R}_t^{-1} at time t.
- Use Eq. (17.11) to update the sample mean \tilde{\mu}_t.
- Use Eq. (17.10) to update the inverse sample covariance matrix \tilde{\Sigma}_t^{-1}.
- Use Eq. (17.5) to generate the CLDA result.

17.2.2 BIL Format

If the data are in BIL format, we can simply wait for all the pixels in a line to be received. Let M be the total number of pixels in each line; M pixel vectors can be constructed by sorting the received data. Assuming the data processing is carried out line by line, from left to right and top to bottom in an image, the line received at time t forms a data matrix Y_t = [x_{t1} x_{t2} \cdots x_{tM}]. Assume that the number of lines received up to time t is K_t; then Eq.
(17.10) remains almost the same:

\tilde{\Sigma}_t^{-1} = \tilde{R}_t^{-1} + \tilde{R}_t^{-1} \tilde{\mu}_t (K_t M - \tilde{\mu}_t^T \tilde{R}_t^{-1} \tilde{\mu}_t)^{-1} \tilde{\mu}_t^T \tilde{R}_t^{-1}    (17.14)

Eq. (17.11) becomes

\tilde{\mu}_t = \tilde{\mu}_{t-1} + \sum_{i=1}^{M} x_{ti}    (17.15)

and Eq. (17.13) becomes

\tilde{R}_t^{-1} = \tilde{R}_{t-1}^{-1} - \tilde{R}_{t-1}^{-1} Y_t (I_{M \times M} + Y_t^T \tilde{R}_{t-1}^{-1} Y_t)^{-1} Y_t^T \tilde{R}_{t-1}^{-1}    (17.16)

where I_{M \times M} is an M x M identity matrix. Note that (I_{M \times M} + Y_t^T \tilde{R}_{t-1}^{-1} Y_t) in Eq. (17.16) is an M x M matrix. This means a matrix inversion is involved in each adaptation.

17.2.3 BSQ Format

If the data format is BSQ, the sample covariance matrix \Sigma and its inverse \Sigma^{-1} have to be updated in a different way, because no single complete pixel vector is available until all of the data are received. Let \Sigma_1 denote the covariance matrix when Band 1 is received, which is actually a scalar, calculated as the average of the squared pixel values in Band 1. Then \Sigma_1 can be related to \Sigma_2 as

\Sigma_2 = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}

where \Sigma_{22} is the average of the squared pixel values in Band 2, and \Sigma_{12} = \Sigma_{21} is the average of the products of corresponding pixel
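The pixel-by-pixel recursion of Eqs. (17.10)-(17.13) can be sketched in NumPy as follows. The simulated pixel stream and the small initialization batch are illustrative assumptions (some warm-up is needed so the correlation matrix is invertible); the recursive result is checked against a direct batch computation:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 10                                   # number of spectral bands (illustrative)
pixels = rng.normal(size=(200, L))       # simulated pixel stream in BIP order

# Initialize with a small batch so R_tilde is invertible (an assumption
# of this sketch, not part of the chapter's derivation).
n0 = 2 * L
X0 = pixels[:n0]
R_inv = np.linalg.inv(X0.T @ X0)         # R_tilde^{-1}
mu = X0.sum(axis=0)                      # mu_tilde (accumulated pixel sum)
N = n0

for x in pixels[n0:]:
    # Eq. (17.13): rank-one Woodbury update of R_tilde^{-1};
    # the denominator is a scalar, so no matrix inversion is needed.
    Rx = R_inv @ x
    R_inv -= np.outer(Rx, Rx) / (1.0 + x @ Rx)
    mu += x                              # Eq. (17.11)
    N += 1

# Eq. (17.10): Sigma_tilde^{-1} from R_tilde^{-1}; the denominator
# (N - mu^T R^{-1} mu) is again a scalar.
Rmu = R_inv @ mu
Sigma_inv = R_inv + np.outer(Rmu, Rmu) / (N - mu @ Rmu)

# Check against the direct batch computation of Eq. (17.7).
X = pixels
Sigma_direct = X.T @ X - np.outer(X.sum(axis=0), X.sum(axis=0)) / len(X)
print(np.allclose(Sigma_inv, np.linalg.inv(Sigma_direct)))
```

Each per-pixel step costs O(L^2) instead of the O(L^3) of a full matrix inversion, which is what makes the online adaptation viable at data-acquisition rates.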