
3 A Universal Neural Network–Based Infrasound Event Classifier

Fredric M. Ham and Ranjan Acharyya

CONTENTS
3.1 Overview of Infrasound and Why Classify Infrasound Events?
3.2 Neural Networks for Infrasound Classification
3.3 Details of the Approach
3.3.1 Infrasound Data Collected for Training and Testing
3.3.2 Radial Basis Function Neural Networks
3.4 Data Preprocessing
3.4.1 Noise Filtering
3.4.2 Feature Extraction Process
3.4.3 Useful Definitions
3.4.4 Selection Process for the Optimal Number of Feature Vector Components
3.4.5 Optimal Output Threshold Values and 3-D ROC Curves
3.5 Simulation Results
3.6 Conclusions
Acknowledgments
References

3.1 Overview of Infrasound and Why Classify Infrasound Events?

Infrasound is a longitudinal pressure wave [1–4]. The characteristics of these waves are similar to those of audible acoustic waves, but the frequency range is far below what the human ear can detect. The typical frequency range is from 0.01 to 10 Hz (Figure 3.1). Nature is an incredible creator of infrasonic signals, which can emanate from sources such as volcanic eruptions, earthquakes, severe weather, tsunamis, meteors (bolides), gravity waves, microbaroms (infrasound radiated from ocean waves), surf, mountain ranges (mountain associated waves), avalanches, and auroral waves, to name a few. Infrasound can also result from man-made events such as mining blasts, the space shuttle, high-speed aircraft, artillery fire, rockets, vehicles, and nuclear events. Because of relatively low atmospheric absorption at low frequencies, infrasound waves can travel long distances in the Earth's atmosphere and can be detected with sensitive ground-based sensors.

An integral part of the comprehensive nuclear test ban treaty (CTBT) international monitoring system (IMS) is an infrasound network system [3]. The goal is to have 60 infrasound arrays operational worldwide over the next several years.

FIGURE 3.1 Infrasound spectrum (approximately 0.01–10 Hz), spanning gravity waves, mountain associated waves, microbaroms, volcano events, bolides, and other impulsive events, with reference explosive yields of 1 kiloton and 1 megaton.
The main objective of the infrasound monitoring system is the detection, verification, localization, and classification of nuclear explosions as well as other infrasonic signals-of-interest (SOI). Detection refers to the problem of detecting an SOI in the presence of all other unwanted sources and noises. Localization deals with finding the origin of a source, and classification deals with the discrimination of different infrasound events of interest. This chapter concentrates on the classification part only.

3.2 Neural Networks for Infrasound Classification

Humans excel at the task of classifying patterns. We all perform this task on a daily basis. Do we wear the checkered or the striped shirt today? We will probably select from a group of checkered shirts versus a group of striped shirts. The grouping process is carried out (probably at a near-subconscious level) by our ability to discriminate among all the shirts in our closet: we group the striped ones in the striped class and the checkered ones in the checkered class (that is, without physically moving them around in the closet, only in our minds). However, if the closet is dimly lit, this creates a potential problem and diminishes our ability to make the right selection (that is, we are working in a "noisy" environment). When an artificial neural network is used to classify patterns (or various "events"), the same problem exists with noise. Noise is everywhere. In general, a common problem associated with event classification (or detection and localization, for that matter) is environmental noise. In the infrasound problem, the distance between the source and the sensors is often relatively large (as opposed to regional infrasonic phenomena). Increases in the distance between sources and sensors heighten the environmental dependence of the signals. For example, the signal of an infrasonic event that takes place near an ocean may have significantly different characteristics compared to the same event occurring in a desert. A major contributor of noise for a signal near an ocean is microbaroms. As mentioned above, microbaroms are generated in the air by large ocean waves. One important characteristic of neural networks is their noise rejection capability [5]. This, among several other attributes, makes them highly desirable as classifiers.

3.3 Details of the Approach

Our approach to classifying infrasound events is based on a parallel bank neural network structure [6–10]. The basic architecture is shown in Figure 3.2. There are several reasons for using such an architecture; one very important advantage of dedicating one module to the classification of one event class is that the architecture is fault tolerant (i.e., if one module fails, the rest of the individual classifiers continue to function). Moreover, the overall performance of the classifier is enhanced when the parallel bank neural network classifier (PBNNC) architecture is used. Individual banks (or modules) within the classifier architecture are radial basis function neural networks (RBF NNs) [5]. Also, each classifier has its own dedicated preprocessor. Customized feature vectors are computed optimally for each classifier and are based on cepstral coefficients and a subset of their associated derivatives (differences) [11]. This will be explained in detail later.
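As a rough illustration of the kind of preprocessing just described, the sketch below computes frame-wise real-cepstral coefficients and simple frame-to-frame differences. The frame length, hop size, and number of retained coefficients are illustrative assumptions, not the values used in this chapter; the chapter's own feature extraction process is described in Section 3.4.

```python
# Minimal sketch of cepstral + difference feature extraction (illustrative only).
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one windowed frame: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small floor avoids log(0)
    return np.fft.irfft(log_mag)

def cepstral_features(signal, frame_len=256, hop=128, n_coeffs=20):
    """Frame the signal, keep the first n_coeffs cepstral coefficients per frame,
    and append simple first differences across frames."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    cepstra = np.array([real_cepstrum(f)[:n_coeffs] for f in frames])
    deltas = np.diff(cepstra, axis=0, prepend=cepstra[:1])  # first row of deltas is zero
    return np.hstack([cepstra, deltas])   # one feature vector per frame
```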
The different neural modules are trained to classify one and only one class; however, each module, responsible for its one class, is also trained not to recognize any of the other classes (negative reinforcement). During the training process the output target is set to "1" for the module's own class and to "0" for the signals associated with all other classes. When the training process is complete, the final output threshold of each neural module is set to an optimal value based on a three-dimensional receiver operating characteristic (3-D ROC) curve (see Figure 3.2).

FIGURE 3.2 Basic parallel bank neural network classifier (PBNNC) architecture: the infrasound signal feeds six dedicated preprocessors, one for each class-specific neural network module, and each module's output is compared against an optimum threshold set by its ROC curve.
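As a concrete, simplified picture of this decision logic, the sketch below combines the parallel bank at classification time. The module and preprocessor interfaces and the threshold values are hypothetical placeholders; in the chapter the thresholds are obtained from the 3-D ROC curves discussed in Section 3.4.5.

```python
# Illustrative sketch of the PBNNC decision stage: one trained module per class,
# each with its own dedicated preprocessor and ROC-derived output threshold.
# The callables and thresholds here are hypothetical placeholders.

class ParallelBankClassifier:
    def __init__(self, preprocessors, modules, thresholds):
        # One (preprocessor, RBF NN module, threshold) triple per infrasound class
        self.preprocessors = preprocessors
        self.modules = modules
        self.thresholds = thresholds

    def classify(self, signal):
        """Return the indices of all classes whose module output exceeds its threshold."""
        declared = []
        for k, (prep, net, thr) in enumerate(
                zip(self.preprocessors, self.modules, self.thresholds)):
            features = prep(signal)       # class-specific cepstral feature vector
            score = float(net(features))  # trained toward 1 for its own class, 0 otherwise
            if score >= thr:              # threshold chosen from this module's ROC curve
                declared.append(k)
        return declared
```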
3.3.1 Infrasound Data Collected for Training and Testing

The data used for training and testing the individual networks were obtained from multiple infrasound arrays located in different geographical regions and with different geometries. The six infrasound classes used in this study are shown in Table 3.1, and the various array geometries are shown in Figure 3.3(a) through Figure 3.3(e) [12,13]. Table 3.2 shows the various classes, along with the array numbers where the data were collected and the associated sampling frequencies.

3.3.2 Radial Basis Function Neural Networks

As previously mentioned, each of the neural network modules in Figure 3.2 is an RBF NN. A brief overview of RBF NNs is given here; this is not meant to be an exhaustive discourse on the subject, but only an introduction. More details can be found in Refs. [5,14].

Earlier work on the RBF NN was carried out for handling multivariate interpolation problems [15,16]. More recently they have been used for probability density estimation [17–19] and approximation of smooth multivariate functions [20]. In principle, the RBF NN adjusts its weights so that the error between the actual and the desired responses is minimized relative to an optimization criterion through a defined learning algorithm [5]. Once trained, the network performs interpolation in the output vector space; hence the generalization property.

Radial basis functions are one type of positive-definite kernel extensively used for multivariate interpolation and approximation. Radial basis functions can be used for problems of any dimension, and the smoothness of the interpolants can be achieved to any desirable extent. Moreover, the structures of the interpolants are very simple. However, several challenges go along with these attributes of RBF NNs. For example, an ill-conditioned linear system must often be solved, and the complexity in both time and space increases with the number of interpolation points. These types of problems can, however, be overcome.

The interpolation problem may be formulated as follows. Assume M distinct data points X = {x_1, ..., x_M}, and assume the data set is bounded in a region Ω (for a specific class). Each observed data point x ∈ R^u (u is the dimension of the input space) may correspond to some function of x. Mathematically, the interpolation problem may be stated as follows. Given a set of M points {x_i ∈ R^u | i = 1, 2, ..., M} and a corresponding set of M real numbers {d_i ∈ R | i = 1, 2, ..., M} (the desired outputs, or targets), find a function F: R^M → R that satisfies the interpolation condition

    F(x_i) = d_i,   i = 1, 2, ..., M        (3.1)

TABLE 3.1
Infrasound Classes Used for Training and Testing

Class Number   Event                   No. SOI (n = 574)   No. SOI Used for Training (n = 351)   No. SOI Used for Testing (n = 223)
1              Vehicle                 8                   4                                     4
2              Artillery fire (ARTY)   264                 132                                   132
3              Jet                     12                  8                                     4
4              Missile                 24                  16                                    8
5              Rocket                  70                  45                                    25
6              Shuttle                 196                 146                                   50

FIGURE 3.3 (a) through (e) Geometries of the infrasound arrays used to collect the data (sensor positions given as XDIS, YDIS coordinates in meters); the arrays include BP1 and BP2.
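Returning to the interpolation condition in Equation 3.1, the short sketch below solves the resulting linear system for the basis-function weights. The Gaussian kernel, its width, and the least-squares solver are assumptions made for the example; they are not prescribed at this point in the chapter.

```python
# Exact RBF interpolation per Equation 3.1: choose weights w so that F(x_i) = d_i.
import numpy as np

def gaussian_kernel(r, sigma=1.0):
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))

def fit_rbf_interpolant(X, d, sigma=1.0):
    """X: (M, u) data points, d: (M,) targets. Returns the RBF weights."""
    # Pairwise distances between the M data points
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = gaussian_kernel(dists, sigma)
    # As noted above, this system can be ill-conditioned; least squares is a pragmatic choice.
    w, *_ = np.linalg.lstsq(Phi, d, rcond=None)
    return w

def rbf_evaluate(X_train, w, X_new, sigma=1.0):
    """Evaluate F at new points as a weighted sum of basis functions centered on X_train."""
    dists = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    return gaussian_kernel(dists, sigma) @ w
```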