Sample records for principal component filtering

  1. Evaluating filterability of different types of sludge by statistical analysis: The role of key organic compounds in extracellular polymeric substances.

    PubMed

    Xiao, Keke; Chen, Yun; Jiang, Xie; Zhou, Yan

    2017-03-01

    An investigation was conducted for 20 different types of sludge in order to identify the key organic compounds in extracellular polymeric substances (EPS) that are important in assessing variations of sludge filterability. The different types of sludge varied in initial total solids (TS) content, organic composition and pre-treatment methods. For instance, some of the sludges were pre-treated by acid, ultrasonic, thermal, alkaline, or advanced oxidation techniques. The Pearson correlation results showed significant correlations between sludge filterability and zeta potential, pH, dissolved organic carbon, protein and polysaccharide in soluble EPS (SB EPS), loosely bound EPS (LB EPS) and tightly bound EPS (TB EPS). The principal component analysis (PCA) method was used to further explore correlations between variables and similarities among EPS fractions of different types of sludge. Two principal components were extracted: principal component 1 accounted for 59.24% of total EPS variations, while principal component 2 accounted for 25.46%. Dissolved organic carbon, protein and polysaccharide in LB EPS showed higher eigenvector projection values in principal component 1 than the corresponding compounds in SB EPS and TB EPS. Further characterization of the fractionated key organic compounds in LB EPS was conducted with size-exclusion chromatography-organic carbon detection-organic nitrogen detection (LC-OCD-OND). A numerical multiple linear regression model was established to describe the relationship between organic compounds in LB EPS and sludge filterability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
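
    The front end of this pipeline, before the empirical mode decomposition stage, can be sketched as follows (a minimal illustration with toy data and assumed array shapes, not the patented implementation): the real space-time field is made complex with a Hilbert transform along time, a time-based covariance matrix is formed, and SVD yields the temporal parts of the complex principal components.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
# toy space-time field: 200 time steps observed at 50 spatial points
data = rng.standard_normal((200, 50))

# Hilbert transform along time converts the data into complex (analytic) form
analytic = hilbert(data, axis=0)                        # shape (time, space)

# time-based covariance matrix, then SVD for the temporal principal components
cov = analytic @ analytic.conj().T / analytic.shape[1]  # (time, time)
U, s, Vh = np.linalg.svd(cov)

# keep the first few complex principal components (temporal parts)
n_keep = 3
cpc = U[:, :n_keep]
```

    In the patent, each selected CPC would then be passed through empirical mode decomposition and its intrinsic modes filtered before reconstruction.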

  3. Principal Component Analysis in the Spectral Analysis of the Dynamic Laser Speckle Patterns

    NASA Astrophysics Data System (ADS)

    Ribeiro, K. M.; Braga, R. A., Jr.; Horgan, G. W.; Ferreira, D. D.; Safadi, T.

    2014-02-01

    Dynamic laser speckle is an optical interference phenomenon observed when a surface undergoing change is illuminated with coherent light. The dynamic change of the speckle patterns caused by biological material is known as biospeckle. Usually, these patterns of optical interference evolving in time are analyzed by graphical or numerical methods; frequency-domain analysis has also been an option, but it involves large computational requirements, which demands new approaches to filter the images in time. Principal component analysis (PCA) works with the statistical decorrelation of data and can be used as a data filter. In this context, the present work evaluated the PCA technique for filtering biospeckle image data in time, aiming to reduce computation time and improve the robustness of the filtering. Sixty-four biospeckle images of a maize seed observed over time were used. The images were arranged in a data matrix and statistically decorrelated by the PCA technique, and the reconstructed signals were analyzed using the routine graphical and numerical biospeckle methods. Results showed the potential of the PCA tool for filtering dynamic laser speckle data, with the definition of markers of principal components related to the biological phenomena and with the advantage of fast computational processing.

  4. A measure for objects clustering in principal component analysis biplot: A case study in inter-city buses maintenance cost data

    NASA Astrophysics Data System (ADS)

    Ginanjar, Irlandia; Pasaribu, Udjianna S.; Indratno, Sapto W.

    2017-03-01

    This article presents the application of the principal component analysis (PCA) biplot for the needs of data mining. It aims to simplify and objectify methods for clustering objects in a PCA biplot. The novelty of this paper is a measure that can be used to objectify object clustering in a PCA biplot. Orthonormal eigenvectors are the coefficients of a principal component model, representing the association between principal components and initial variables. The existence of this association is a valid ground for clustering objects based on their principal-axis values; thus, if m principal axes are used in the PCA, the objects can be classified into 2^m clusters. The inter-city buses are clustered based on maintenance-cost data using a two-principal-axis PCA biplot. The buses fall into four groups. The first group is buses with high maintenance costs, especially for lube and brake canvas. The second group is buses with high maintenance costs, especially for tires and filters. The third group is buses with low maintenance costs, especially for lube and brake canvas. The fourth group is buses with low maintenance costs, especially for tires and filters.
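
    The sign-based clustering can be sketched as follows (a synthetic stand-in for the maintenance-cost matrix; `labels` encodes the sign pattern of the m = 2 principal-axis values, giving the paper's four groups):

```python
import numpy as np

rng = np.random.default_rng(2)
# toy maintenance-cost data: 30 buses x 6 cost variables
X = rng.standard_normal((30, 6))
Xc = X - X.mean(axis=0)

# principal axes via SVD; scores are the biplot coordinates of the buses
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
m = 2
scores = Xc @ Vt[:m].T                                  # (30, m)

# cluster by the sign pattern of the m principal-axis values: 2**m clusters
labels = (scores > 0).astype(int) @ (2 ** np.arange(m))
```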

  5. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  6. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

    For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove Common Mode Error (CME) without interpolating missing values. We used data from International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series; then CME was estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of properly spatio-temporally filtering GNSS time series with different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of stations' velocities estimated from filtered residuals: by 35, 28 and 69% for the North, East, and Up components, respectively. CME series were also analyzed in the context of the influence of environmental mass loading on the filtering results. Subtracting environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
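
    The paper's pPCA handles the missing epochs probabilistically; the complete-data PCA stacking that it generalizes can be sketched like this (synthetic residuals with an injected common signal; all names and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_sta = 500, 25
common = 0.1 * np.cumsum(rng.standard_normal(n_epochs))  # shared CME-like signal
resid = common[:, None] + 0.5 * rng.standard_normal((n_epochs, n_sta))

# stack residuals as epochs x stations; the first PC captures the common mode
X = resid - resid.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
cme = np.outer(U[:, 0] * s[0], Vt[0])     # rank-1 common-mode estimate
filtered = resid - cme

explained = s[0] ** 2 / np.sum(s ** 2)    # variance share of the 1st PC
```

    Subtracting the rank-1 common mode leaves station-specific noise, which is the spatio-temporal filtering effect the abstract quantifies.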

  7. Separation of the global and local components in functional near-infrared spectroscopy signals using principal component spatial filtering

    PubMed Central

    Zhang, Xian; Noah, Jack Adam; Hirsch, Joy

    2016-01-01

    Global systemic effects not specific to a task can be prominent in functional near-infrared spectroscopy (fNIRS) signals, and the separation of task-specific fNIRS signals from global nonspecific effects is challenging due to waveform correlations. We describe a principal component spatial filter algorithm for separation of the global and local effects. The effectiveness of the approach is demonstrated using fNIRS signals acquired during a right finger-thumb tapping task where the response patterns are well established. Both the temporal waveforms and the spatial pattern consistencies between oxyhemoglobin and deoxyhemoglobin signals are significantly improved, consistent with the basic physiological basis of fNIRS signals and the expected pattern of activity associated with the task. PMID:26866047

  8. Principal Component Noise Filtering for NAST-I Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L., Sr.

    2011-01-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed- Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and thereby further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: apply PC filtering to both dependent and independent datasets, apply PC filtering to dependent calibration data only, apply PC filtering to independent data only, and no PC filters. The independent blackbody radiances are predicted for each case and comparisons are made. The results show significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
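
    The eigenvector-count selection can be sketched as follows, with synthetic noisy spectra in place of the blackbody measurements and a dependent/independent split mimicking the odd/even sets (a rough illustration under those assumptions, not the NAST-I code):

```python
import numpy as np

rng = np.random.default_rng(4)
true = np.sin(np.linspace(0, 6, 200))                  # smooth "spectrum"
spectra = true + 0.05 * rng.standard_normal((100, 200))
dep, indep = spectra[0::2], spectra[1::2]              # dependent / independent sets

mean = dep.mean(axis=0)
_, _, Vt = np.linalg.svd(dep - mean, full_matrices=False)

def pc_filter(data, n_pc):
    """Project onto the leading n_pc eigenvectors and reconstruct."""
    return (data - mean) @ Vt[:n_pc].T @ Vt[:n_pc] + mean

# pick the PC count minimizing the independent-set RMS error (truth known here)
rms = [np.sqrt(np.mean((pc_filter(indep, k) - true) ** 2)) for k in range(1, 11)]
best_k = 1 + int(np.argmin(rms))
```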

  9. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  10. Use of principal-component, correlation, and stepwise multiple-regression analyses to investigate selected physical and hydraulic properties of carbonate-rock aquifers

    USGS Publications Warehouse

    Brown, C. Erwin

    1993-01-01

    Correlation analysis in conjunction with principal-component and multiple-regression analyses was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine dimensions of property variation of samples, and to filter the variables containing similar information. Principal-component and correlation analyses showed that porosity is related to other measured variables and that permeability is most related to porosity and grain size. Four principal components are found to be significant in explaining the variance of data. Stepwise multiple-regression analysis was used to see how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not totally predicted by the other variables, but the regression is significant at the 5% significance level. © 1993.

  11. Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.

    PubMed

    Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem

    2006-06-01

    Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data to classify EEG according to which mental task is being performed. Examples are presented of the filtering of various artifacts and results are shown of classification of EEG from five mental tasks using committees of decision trees.

  12. Guided filter and principal component analysis hybrid method for hyperspectral pansharpening

    NASA Astrophysics Data System (ADS)

    Qu, Jiahui; Li, Yunsong; Dong, Wenqian

    2018-01-01

    Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution through integrating an HS image with a panchromatic (PAN) image. A guided filter (GF) and principal component analysis (PCA) hybrid HS pansharpening method is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image. The first principal component (PC1) channel concentrates the spatial information of the HS image. Different from the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial information difference between the HS image and the enhanced PAN image. Then, in order to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial information difference is injected into the PC1 channel through multiplying by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms several other state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
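
    A stripped-down sketch of the PCA portion of the scheme (toy data; the guided-filter step is omitted, and a simple statistics-matched injection with an assumed tradeoff parameter `alpha` stands in for the spatial-detail difference):

```python
import numpy as np

rng = np.random.default_rng(5)
h, w, bands = 16, 16, 8
hs = rng.random((h, w, bands))            # interpolated HS cube (toy)
pan = rng.random((h, w))                  # panchromatic image (toy)

# forward PCA over the spectral dimension; PC1 carries the spatial detail
X = hs.reshape(-1, bands)
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
pcs = (X - mean) @ Vt.T

# match the PAN statistics to PC1 and inject the difference with a tradeoff
pc1 = pcs[:, 0].copy()
pan_m = (pan.ravel() - pan.mean()) / pan.std() * pc1.std() + pc1.mean()
alpha = 0.5                               # tradeoff parameter (assumed value)
pcs[:, 0] = pc1 + alpha * (pan_m - pc1)

# inverse PCA transformation back to the fused cube
fused = (pcs @ Vt + mean).reshape(h, w, bands)
```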

  13. Effect of noise in principal component analysis with an application to ozone pollution

    NASA Astrophysics Data System (ADS)

    Tsakiri, Katerina G.

    This thesis analyzes the effect of independent noise in principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can be essentially different from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results about the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field for the decomposition of components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor of ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global term component, which describes the long term trend and the seasonal variations, and a synoptic scale component, which describes the short term variations. By using Canonical Correlation Analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the global and synoptic scale components of ozone. The global term components are modeled by a linear regression model, while the synoptic scale components are modeled by a vector autoregressive model and the Kalman filter. The coefficient of determination, R², for the prediction of the synoptic scale ozone component was found to be highest when we consider the synoptic scale components of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction

  14. [Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].

    PubMed

    Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang

    2016-02-01

    The medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplication signal correction (MSC) coupled with second derivative (SD) and Norris derivative filter (ND) was employed in spectral pretreatment. The spectrum range (7,500-4,061 cm⁻¹) was chosen by spectrum standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectrum range and the number of principal components selected, two abnormal samples were eliminated by the modular group iterative singular sample diagnosis method. Then, four methods were used to filter spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MC-UVE), genetic algorithm (GA) and subwindow permutation analysis (SPA). The filtered spectral variable information was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, with 250, 240, 250 and 70 spectral variables, respectively. Based on the filtered spectral variables, partial least squares discriminant analysis (PLS-DA) was used to build the model, with a random selection of 97 samples as the training set and the other 40 samples as the validation set. The results showed that, for R²: GA > MC-UVE > CARS > SPA; for RMSEC and RMSEP: GA < MC-UVE < CARS

  15. Balancing Vibrations at Harmonic Frequencies by Injecting Harmonic Balancing Signals into the Armature of a Linear Motor/Alternator Coupled to a Stirling Machine

    NASA Technical Reports Server (NTRS)

    Holliday, Ezekiel S. (Inventor)

    2014-01-01

    Vibrations at harmonic frequencies are reduced by injecting harmonic balancing signals into the armature of a linear motor/alternator coupled to a Stirling machine. The vibrations are sensed to provide a signal representing the mechanical vibrations. A harmonic balancing signal is generated for selected harmonics of the operating frequency by processing the sensed vibration signal with adaptive filter algorithms of adaptive filters for each harmonic. Reference inputs for each harmonic are applied to the adaptive filter algorithms at the frequency of the selected harmonic. The harmonic balancing signals for all of the harmonics are summed with a principal control signal. The harmonic balancing signals modify the principal electrical drive voltage and drive the motor/alternator with a drive voltage component in opposition to the vibration at each harmonic.

  16. Spatiotemporal Filtering Using Principal Component Analysis and Karhunen-Loeve Expansion Approaches for Regional GPS Network Analysis

    NASA Technical Reports Server (NTRS)

    Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.

    2006-01-01

    Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.

  17. Convergence of sampling in protein simulations

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2002-03-01

    With molecular dynamics, protein dynamics can be simulated in atomic detail. Current computers are not fast enough to probe all available conformations, but fluctuations around one conformation can be sampled to a reasonable extent. The motions with the largest fluctuations can be filtered out of a simulation using covariance or principal component analysis. A problem with this analysis is that random diffusion can appear as correlated motion. An analysis is presented of how long a simulation should be to obtain relevant results for global motions. The analysis reveals that the cosine content of the principal components is a good indicator of bad sampling.
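
    The cosine content of the projection p_i(t) of a trajectory onto principal component i is c_i = (2/T) (∫ cos(iπt/T) p_i(t) dt)² / ∫ p_i(t)² dt; values near 1 indicate random-diffusion-like (poorly sampled) motion. A small numerical sketch of this diagnostic:

```python
import numpy as np

def cosine_content(p, i):
    """Discrete cosine content of the i-th principal component projection p."""
    T = len(p)
    t = np.arange(T)
    cos = np.cos(np.pi * i * (t + 0.5) / T)   # i half-periods over the trajectory
    return 2.0 / T * np.sum(cos * p) ** 2 / np.sum(p ** 2)

T = 1000
t = np.arange(T)
diffusive = np.cos(np.pi * (t + 0.5) / T)     # shape of a pure random-diffusion PC
noise = np.random.default_rng(6).standard_normal(T)
c_diff = cosine_content(diffusive, 1)         # close to 1: bad sampling
c_noise = cosine_content(noise, 1)            # close to 0
```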

  18. Finessing filter scarcity problem in face recognition via multi-fold filter convolution

    NASA Astrophysics Data System (ADS)

    Low, Cheng-Yaw; Teoh, Andrew Beng-Jin

    2017-06-01

    The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from the filter scarcity problem: not all available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results reveal that the 2-FFC operation alleviates the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors, in terms of rank-1 identification rate (%).

  19. Symbolic dynamic filtering and language measure for behavior identification of mobile robots.

    PubMed

    Mallapragada, Goutham; Ray, Asok; Jin, Xin

    2012-06-01

    This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.

  20. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used a 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than that achievable with digital optimal filters.
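
    A minimal sketch of the PCA idea on synthetic pulse records (the actual analysis combines several components and handles non-stationary noise; here only the first component is mapped to energy, and all names and shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(512)
template = np.exp(-t / 80.0) - np.exp(-t / 8.0)       # toy pulse shape
energies = rng.uniform(0.8, 1.2, size=300)            # relative pulse heights
pulses = energies[:, None] * template + 0.02 * rng.standard_normal((300, 512))

# PCA of the pulse records: the leading component tracks pulse height
mean = pulses.mean(axis=0)
U, s, Vt = np.linalg.svd(pulses - mean, full_matrices=False)
amp1 = (pulses - mean) @ Vt[0]            # projection on the 1st component

# combine component amplitudes into an energy estimate (linear map to PC1 here)
a, b = np.polyfit(amp1, energies, 1)
energy_est = a * amp1 + b
```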

  1. Behavior of ambient concentrations of natural radionuclides (7)Be, (210)Pb, (40)K in the Mediterranean coastal city of Málaga (Spain).

    PubMed

    Gordo, E; Dueñas, C; Fernández, M C; Liger, E; Cañete, S

    2015-05-01

    During a 4-year period (January 2009-December 2012), the (7)Be, (210)Pb, and (40)K activity concentrations in airborne particulate matter were determined weekly at Málaga (Spain), located in the southern Iberian Peninsula. In total, 209 polypropylene filters were analyzed in the mentioned period. (7)Be and (40)K activity concentrations were detected in 100% of the filters, while (210)Pb activity concentration was detected in 96% of the filters. The results from individual measurements of (7)Be, (210)Pb, and (40)K concentrations were analyzed to derive the statistical estimates characterizing the distributions. Principal components analysis (PCA) was applied to the datasets, and the results of the study reveal that aerosol behavior is represented by two principal components which explain 73.2% of total variance. Components PC1 and PC2 respectively explain 46.0 and 27.2% of total variance. PC1 was related positively to dust content, (7)Be and (40)K concentrations, and negatively to sunspot numbers. In contrast, PC2 was related positively to temperature and (210)Pb activity, and negatively to precipitation and relative humidity. The (7)Be levels showed a significant correlation with sunspot numbers due to their cosmogenic origin. (40)K activities showed a good correlation with dust deposition in the filters, mainly because it was transported into the air as resuspended particles from the soil. An inverse relationship was observed between the (210)Pb concentrations and monthly rainfall, indicating washout of atmospheric aerosols carrying these radionuclides, and a pronounced positive correlation with the average monthly air temperature.

  2. Early forest fire detection using principal component analysis of infrared video

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Radjabi, Ryan; Jacobs, John T.

    2011-09-01

    A land-based early forest fire detection scheme which exploits the infrared (IR) temporal signature of a fire plume is described. Unlike common land-based and/or satellite-based techniques which rely on measurement and discrimination of a fire plume directly from its infrared and/or visible reflectance imagery, this scheme is based on exploitation of the fire plume's temporal signature, i.e., temperature fluctuations over the observation period. The method is simple and relatively inexpensive to implement. The false alarm rate is expected to be lower than that of the existing methods. Land-based infrared (IR) cameras are installed in a step-stare-mode configuration in potential fire-prone areas. The sequence of IR video frames from each camera is digitally processed to determine if there is a fire within the camera's field of view (FOV). The process involves applying a principal component transformation (PCT) to each nonoverlapping sequence of video frames from the camera to produce a corresponding sequence of temporally-uncorrelated principal component (PC) images. Since pixels that form a fire plume exhibit statistically similar temporal variation (i.e., have a unique temporal signature), PCT conveniently renders the footprint/trace of the fire plume in low-order PC images. The PC image which best reveals the trace of the fire plume is then selected and spatially filtered via simple threshold and median filter operations to remove background clutter, such as traces of tree branches moving in the wind.
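
    The processing chain (PCT over a frame sequence, then threshold and median-filter operations on a low-order PC image) can be sketched on a synthetic flickering patch; the sizes, seed, and half-maximum threshold are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(7)
n_frames, h, w = 32, 24, 24
video = 0.1 * rng.standard_normal((n_frames, h, w))   # background clutter
flicker = rng.standard_normal(n_frames)               # plume temperature fluctuation
video[:, 8:14, 8:14] += flicker[:, None, None]        # flickering "fire" patch

# principal component transform over time: each pixel is one variable
X = video.reshape(n_frames, -1)
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc_image = np.abs(Vt[0]).reshape(h, w)    # plume trace in the 1st PC image

# threshold at half the maximum loading, then median-filter the clutter away
binary = (pc_image > 0.5 * pc_image.max()).astype(np.uint8)
mask = median_filter(binary, size=3).astype(bool)
```

    Because the plume pixels share the flicker time course, the first PC loading concentrates on the patch, and the median filter removes isolated clutter survivors.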

  3. Data analysis techniques

    NASA Technical Reports Server (NTRS)

    Park, Steve

    1990-01-01

    A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.

  4. ERS-2 SAR and IRS-1C LISS III data fusion: A PCA approach to improve remote sensing based geological interpretation

    NASA Astrophysics Data System (ADS)

    Pal, S. K.; Majumdar, T. J.; Bhattacharya, Amit K.

    Fusion of optical and synthetic aperture radar data has been attempted in the present study for mapping of various lithologic units over a part of the Singhbhum Shear Zone (SSZ) and its surroundings. ERS-2 SAR data over the study area were enhanced using a Fast Fourier Transform (FFT) based filtering approach, and also using the Frost filtering technique. Both enhanced SAR images were then separately fused with a histogram-equalized IRS-1C LISS III image using the Principal Component Analysis (PCA) technique. Later, the Feature-oriented Principal Components Selection (FPCS) technique was applied to generate False Color Composite (FCC) images, from which corresponding geological maps were prepared. Finally, GIS techniques were successfully used for change detection analysis of the lithological interpretation between the published geological map and the fusion-based geological maps. In general, there is good agreement between these maps over a large portion of the study area. Based on the change detection studies, a few areas could be identified which need attention for further detailed ground-based geological studies.

  5. Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task

    NASA Technical Reports Server (NTRS)

    Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.

    1978-01-01

    Color enhancement techniques were applied to LACIE LANDSAT segments to determine if such enhancement can assist analysts in crop identification. The procedure involved increasing the color range by removing correlation between components. First, a principal component transformation was performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower-order components to reduce color speckle in the enhanced products. The use of single-acquisition and multiple-acquisition statistics to control the enhancement was compared, and the effects of normalization were investigated. Evaluation is left to LACIE personnel.

  6. Satellite image fusion based on principal component analysis and high-pass filtering.

    PubMed

    Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E

    2010-06-01

    This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
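    The PCA/HPF integration described can be roughed out as below; a sketch under assumptions (the MS bands are already resampled to the pan grid, a simple box-kernel high-pass stands in for the HPF, and the gain-matching rule is hypothetical):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pca_hpf_pansharpen(ms, pan, kernel=5):
    """ms: (B, H, W) multispectral bands resampled to the pan grid;
    pan: (H, W) panchromatic image. Injects high-pass pan detail into
    the first principal component, then inverts the transform."""
    B, H, W = ms.shape
    X = ms.reshape(B, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)      # B x B band covariance
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]   # PC1 first
    pcs = vecs.T @ Xc
    # high-frequency detail of the pan image (box-kernel HPF stand-in)
    detail = pan.astype(float) - uniform_filter(pan.astype(float), size=kernel)
    gain = pcs[0].std() / (detail.std() + 1e-12)   # hypothetical gain matching
    pcs[0] = pcs[0] + gain * detail.ravel()
    return (vecs @ pcs + mean).reshape(B, H, W)
```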

  7. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. On the experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622

  8. Analysis of the plugging of the systems autonomy demonstration project brassboard filters

    NASA Technical Reports Server (NTRS)

    Clay, John C.

    1989-01-01

    A fine gray powder was clogging the brassboard filters. The powder appeared to be residue from a galvanic corrosive attack by ammonia on the aluminum and stainless steel components in the system. The corrosion was caused by water and chlorine that had entered the system and combined with the ammonia. This combination made the ammonia an electrolyte and a corrosive agent that attacked the metals in the system. The corroded material traveled through the system with the ammonia and clogged the filters. Key conclusions are: the debris collecting in the filters is a by-product of galvanic corrosion; the debris is principally corroded aluminum and stainless steel from the system; and galvanic corrosion occurred from water and chlorine that entered the system during normal and/or extreme operating and servicing conditions. Key recommendations are: use only one metal (titanium, aluminum, or stainless steel) in the ammonia system; make the system as air-tight as possible (replace fittings with welded joints); and replace ethylene propylene rubber (EPR) O-rings with neoprene O-rings, and do not use freon to clean system components.

  9. Level-1C Product from AIRS: Principal Component Filtering

    NASA Technical Reports Server (NTRS)

    Manning, Evan M.; Jiang, Yibo; Aumann, Hartmut H.; Elliott, Denis A.; Hannon, Scott

    2012-01-01

    The Atmospheric Infrared Sounder (AIRS), launched on the EOS Aqua spacecraft on May 4, 2002, is a grating spectrometer with 2378 channels in the range 3.7 to 15.4 microns. In a grating spectrometer each individual radiance measurement is largely independent of all others. Most measurements are extremely accurate and have very low noise levels. However, some channels exhibit high noise levels or other anomalous behavior, complicating applications needing radiances throughout a band, such as cross-calibration with other instruments and regression retrieval algorithms. The AIRS Level-1C product is similar to Level-1B but with instrument artifacts removed. This paper focuses on the "cleaning" portion of Level-1C, which identifies bad radiance values within spectra and produces substitute radiances using redundant information from other channels. The substitution is done in two passes, first with a simple combination of values from neighboring channels, then with principal components. After results of the substitution are shown, differences between principal component reconstructed values and observed radiances are used to investigate detailed noise characteristics and spatial misalignment in other channels.
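    The principal-component substitution idea can be sketched as follows; illustrative only, since in AIRS Level-1C the components come from a carefully constructed training ensemble and the channel-flagging logic is far more involved. Here the coefficients are fitted on the good channels only, so bad channels do not contaminate the reconstruction:

```python
import numpy as np

def pc_clean(train, target, bad, n_pc=4):
    """train: (N, C) ensemble of clean spectra; target: (C,) spectrum with
    bad channels flagged in the boolean mask `bad`. Returns target with the
    bad channels replaced by a rank-n_pc principal-component reconstruction
    fitted to the good channels only."""
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    V = Vt[:n_pc]                                       # (n_pc, C) leading PCs
    good = ~bad
    # fit PC coefficients using only the trustworthy channels
    coef, *_ = np.linalg.lstsq(V[:, good].T, (target - mean)[good], rcond=None)
    recon = mean + coef @ V                             # full-spectrum reconstruction
    out = target.copy()
    out[bad] = recon[bad]                               # substitute flagged channels
    return out
```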

  10. Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; King, Matt; Dai, Wujiao

    2018-05-01

    Spatially-correlated common mode error always exists in regional or larger GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are common mode error (CME). An average reduction of ~40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. The ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS-observed velocities and four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of the various GIA model predictions.
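    The PCA branch of this kind of spatiotemporal filtering can be sketched as below; a minimal illustration in which the first components are simply assumed to be CME, omitting the ICA variant and the common-spatial-response test used to decide which components qualify:

```python
import numpy as np

def pca_cme_filter(res, n_cm=1):
    """res: (T, S) detrended residual series for S stations. Treats the
    first n_cm principal components as common mode error, returns the
    filtered series and the removed CME."""
    mean = res.mean(axis=0)
    Xc = res - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    cme = (U[:, :n_cm] * s[:n_cm]) @ Vt[:n_cm]   # rank-n_cm common signal
    return res - cme, cme
```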

  11. Water reuse systems: A review of the principal components

    USGS Publications Warehouse

    Lucchetti, G.; Gray, G.A.

    1988-01-01

    Principal components of water reuse systems include ammonia removal, disease control, temperature control, aeration, and particulate filtration. Effective ammonia removal techniques include air stripping, ion exchange, and biofiltration. Selection of a particular technique largely depends on site-specific requirements (e.g., space, existing water quality, and fish densities). Disease control, although often overlooked, is a major problem in reuse systems. Pathogens can be controlled most effectively with ultraviolet radiation, ozone, or chlorine. Simple and inexpensive methods are available to increase oxygen concentration and eliminate gas supersaturation, these include commercial aerators, air injectors, and packed columns. Temperature control is a major advantage of reuse systems, but the equipment required can be expensive, particularly if water temperature must be rigidly controlled and ambient air temperature fluctuates. Filtration can be readily accomplished with a hydrocyclone or sand filter that increases overall system efficiency. Based on criteria of adaptability, efficiency, and reasonable cost, we recommend components for a small water reuse system.

  12. A comparison of linear approaches to filter out environmental effects in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Deraemaeker, A.; Worden, K.

    2018-05-01

    This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature vector space to a space of independent variables. In general, especially when the size of the feature vector is large, there are dominant eigenvalues and eigenvectors associated with the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, their contribution to the Mahalanobis squared-distance is low, while the contribution of the other components is high due to the low value of the associated eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge, for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions, as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and their ability to filter out environmental effects while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. An alternative based on extreme value statistics is demonstrated here, which results in a much better threshold that avoids false positives in the training data while allowing detection of all damaged cases.
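    The eigenvalue decomposition underlying this argument can be made concrete: the Mahalanobis squared-distance splits into independent per-component terms z_k^2 / lambda_k, so high-variance (environment-dominated) components contribute little. A minimal sketch:

```python
import numpy as np

def msd_components(X_train, x):
    """Per-eigenvector contributions to the Mahalanobis squared-distance of
    feature vector x; their sum equals the usual (x-mu)^T C^{-1} (x-mu)."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    z = vecs.T @ (x - mu)        # transform to independent variables
    return z**2 / vals           # large-eigenvalue terms are down-weighted
```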

  13. Distribution of a low dose compound within pharmaceutical tablet by using multivariate curve resolution on Raman hyperspectral images.

    PubMed

    Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel

    2015-01-25

    In this work, Raman hyperspectral images and multivariate curve resolution-alternating least squares (MCR-ALS) are used to study the distribution of actives and excipients within a pharmaceutical drug product. This article mainly focuses on the distribution of a low-dose constituent. Different approaches are compared, using initially filtered or non-filtered data, or using a column-wise augmented dataset with appended information on the low-dose component before starting the MCR-ALS iterative process. In the studied formulation, magnesium stearate is used as a lubricant to improve powder flowability. With a theoretical concentration of 0.5% (w/w) in the drug product, the spectral variance it contributes to the data is weak. When a principal component analysis (PCA) filtered dataset is used as the first step of the MCR-ALS approach, the lubricant information is lost in the unexplained variance and its distribution in the tablet cannot be highlighted. A sufficient number of components has to be used to generate the PCA noise-filtered matrix in order to keep the lubricant variability within the analyzed data set; otherwise, the raw non-filtered data must be used. Different models are built using an increasing number of components for the PCA reduction, and it is shown that the magnesium stearate information can be extracted from a PCA model using a minimum of 20 components. In the last part, a column-wise augmented matrix, including a reference spectrum of the lubricant, is used before starting the MCR-ALS process. PCA reduction is performed on the augmented matrix, so the magnesium stearate contribution is included in the MCR-ALS calculations. By using an appropriate PCA reduction with a sufficient number of components, or by using an augmented dataset including appended information on the low-dose component, the distributions of the two actives, the two main excipients, and the low-dose lubricant are correctly recovered.
Copyright © 2014 Elsevier B.V. All rights reserved.
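    The point about using enough components for the PCA noise-filtered matrix can be illustrated numerically; a toy sketch on hypothetical data, where too small a rank discards a weak constituent's variance while a sufficient rank retains it:

```python
import numpy as np

def pca_noise_filter(D, k):
    """Rank-k PCA reconstruction of a data matrix D (pixels x channels)."""
    mean = D.mean(axis=0)
    U, s, Vt = np.linalg.svd(D - mean, full_matrices=False)
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]
```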

  14. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    PubMed

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods.

  15. Application of principal component analysis for the optimisation of lead(II) biosorption.

    PubMed

    Wajda, Łukasz; Duda-Chodak, Aleksandra; Tarko, Tomasz; Kamiński, Paweł

    2017-10-03

    The current study focused on optimising lead(II) biosorption by living cells of Arthrospira platensis using Principal Component Analysis. Various experimental conditions were considered: initial metal concentration (50 and 100 mg/l), solution pH (4.0, 4.5, 5.0, 5.5) and contact time (10, 20, 30, 40, 50 and 60 min) at a constant rotary speed of 200 rpm. It was found that when the biomass was separated from the experimental solutions by filtration, almost 50% of the initial metal dose was removed by the filter paper. Moreover, pH was the most important parameter influencing the examined processes. The Principal Component Analysis indicated that the optimum conditions for lead(II) biosorption were an initial metal concentration of 100 mg/l, pH 4.5 and a contact time of 60 min. According to the analysis of the first component, it might be stated that the lead(II) uptake increases over time. Overall, the method was found to be useful for analysing data obtained in biosorption experiments and for eliminating insignificant experimental conditions. The experimental data fitted the Langmuir and Dubinin-Radushkevich models, indicating that physical and chemical adsorption take place at the same time. Further studies are necessary to verify how sorption-desorption cycles affect A. platensis cells.

  16. Application of principal component analysis for improvement of X-ray fluorescence images obtained by polycapillary-based micro-XRF technique

    NASA Astrophysics Data System (ADS)

    Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.

    2017-07-01

    Micro X-ray fluorescence (micro-XRF) analysis is widely applied as a means of producing elemental maps. In some cases, however, the XRF images of trace elements that are obtained are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. The standard deviations of the XRF intensities in the PCA-filtered images were improved, leading to clear contrast in the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.

  17. Photometer for detection of sodium day airglow.

    NASA Technical Reports Server (NTRS)

    Mcmahon, D. J.; Manring, E. R.; Patty, R. R.

    1973-01-01

    Description of a photometer for daytime ground-based measurements of sodium airglow emission. The photometer described can be characterized by the following principal features: (1) a narrow (4.5-A) interference filter for initial discrimination; (2) cooled photomultiplier detector to reduce noise from dark current fluctuations and chopping to eliminate the average dark current; (3) a sodium vapor resonance cell to provide an effective bandpass comparable to the Doppler line width; (4) separate detection of all light transmitted by the interference filter to evaluate the Rayleigh and Mie components within the Doppler width of the resonance cell; and (5) temperature quenching of the resonance cell to evaluate and account for instrumental imperfections.

  18. Extracting the regional common-mode component of GPS station position time series from dense continuous network

    NASA Astrophysics Data System (ADS)

    Tian, Yunfeng; Shen, Zheng-Kang

    2016-02-01

    We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., the common-mode component (CMC)) that deviate from zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique utilizes a weighting scheme that incorporates two factors: the distances between neighboring sites and the correlations of their long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimizes the root-mean-square (RMS) of the filtered residual position time series. Compared to the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatter for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: it requires less intervention and is applicable to a dense network of any spatial extent. Our method can also be used to detect the CMC irrespective of its origin (i.e., tectonic or nontectonic), if such signals are of particular interest for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify station clusters that share common regional transient characteristics.
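    One plausible form of the distance-and-correlation weighting is sketched below; an assumed, simplified version, since the paper's actual weight function, thresholds, and grid search are not reproduced here:

```python
import numpy as np

def weighted_cme(res, xy, corr_min=0.3, dist_scale=300.0):
    """res: (T, S) detrended residuals; xy: (S, 2) station coordinates in km.
    For each station the CME is a weighted stack of neighbor residuals,
    weighted by inter-station correlation and damped with distance
    (hypothetical weight form: corr * exp(-d / dist_scale))."""
    T, S = res.shape
    corr = np.corrcoef(res, rowvar=False)                 # (S, S)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    w = np.where(corr >= corr_min, corr * np.exp(-d / dist_scale), 0.0)
    np.fill_diagonal(w, 0.0)                              # exclude the station itself
    cme = (res @ w.T) / np.maximum(w.sum(axis=1), 1e-12)
    return res - cme, cme
```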

  19. Adaptive non-local means on local principle neighborhood for noise/artifacts reduction in low-dose CT images.

    PubMed

    Zhang, Yuanke; Lu, Hongbing; Rong, Junyan; Meng, Jing; Shang, Junliang; Ren, Pinghong; Zhang, Junying

    2017-09-01

    The low-dose CT (LDCT) technique can reduce the x-ray radiation exposure to patients, at the cost of degraded images with severe noise and artifacts. Non-local means (NLM) filtering has shown its potential for improving LDCT image quality. However, most current NLM-based approaches employ a weighted average operation directly on all neighbor pixels with a fixed filtering parameter throughout the NLM filtering process, ignoring the non-stationary nature of the noise in LDCT images. In this paper, an adaptive NLM filtering scheme on local principal neighborhoods (PC-NLM) is proposed for structure-preserving noise/artifact reduction in LDCT images. Instead of using neighboring patches directly, the PC-NLM scheme first applies principal component analysis (PCA) to the local neighboring patches of the target patch to decompose them into uncorrelated principal components (PCs); NLM filtering is then used to regularize each PC of the target patch, and finally the regularized components are transformed back to obtain the target patch in the image domain. In particular, the filtering parameter of the NLM step is estimated adaptively from the local noise level of the neighborhood as well as the signal-to-noise ratio (SNR) of the corresponding PC, which guarantees "weaker" NLM filtering on PCs with higher SNR and "stronger" filtering on PCs with lower SNR. The PC-NLM procedure is performed iteratively several times for better removal of noise and artifacts, and an adaptive iteration strategy is developed to reduce the computational load by determining whether a patch should be processed in the next round of PC-NLM filtering. The effectiveness of the presented PC-NLM algorithm is validated by experimental phantom studies and clinical studies. The results show that it can achieve promising gains over some state-of-the-art methods in terms of artifact suppression and structure preservation. With the use of PCA on local neighborhoods to extract principal structural components, and adaptive NLM filtering of the target patch's PCs with a filtering parameter estimated from the local noise level and the corresponding SNR, the proposed PC-NLM method shows its efficacy in preserving fine anatomical structures and suppressing noise/artifacts in LDCT images. © 2017 American Association of Physicists in Medicine.
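    A heavily simplified, single-neighborhood sketch of the PC-NLM idea follows; the adaptive-parameter rule here is hypothetical, standing in for the paper's noise-level/SNR estimates, and the patch search and iteration strategy are omitted:

```python
import numpy as np

def pc_nlm_patch(patches, target_idx, noise_sigma):
    """patches: (M, P) flattened patches from one local neighborhood.
    Filters the target patch by NLM weighting applied per principal
    component; PCs with higher SNR receive weaker filtering."""
    mean = patches.mean(axis=0)
    Xc = patches - mean
    cov = Xc.T @ Xc / (len(patches) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    coefs = Xc @ vecs                              # (M, P) PC coefficients
    t = coefs[target_idx]
    out = np.empty_like(t)
    for k in range(coefs.shape[1]):
        snr = max(vals[k] / noise_sigma**2 - 1.0, 1e-6)
        # hypothetical rule: larger smoothing parameter h at low SNR
        h = noise_sigma * np.sqrt(1.0 + 1.0 / snr)
        w = np.exp(-((coefs[:, k] - t[k]) ** 2) / (2.0 * h**2))
        out[k] = (w @ coefs[:, k]) / w.sum()       # NLM average of this PC
    return mean + vecs @ out                       # back to the patch domain
```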

  20. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  1. Feature extraction and selection strategies for automated target recognition

    NASA Astrophysics Data System (ADS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-04-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  2. An improved principal component analysis based region matching method for fringe direction estimation

    NASA Astrophysics Data System (ADS)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for conversion of orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.

  3. Enhanced 40 and 80 Gb/s wavelength conversion using a rectangular shaped optical filter for both red and blue spectral slicing.

    PubMed

    Raz, O; Herrera, J; Dorren, H J S

    2009-02-02

    By using a tunable filter with tunability of both bandwidth and wavelength and a very sharp filter roll-off, considerable improvement of all-optical wavelength conversion, based on cross-gain and cross-phase modulation effects in a semiconductor optical amplifier and spectral slicing, is shown. At 40 Gb/s, slicing of the blue spectral components is shown to result in a small penalty of 0.7 dB with minimal eye broadening, and at 80 Gb/s the low demonstrated 0.5 dB penalty is a dramatic improvement over previously reported wavelength converters using the same principle. Additionally, we give for the first time quantitative results for the case of red spectral slicing at 40 Gb/s, which we found to have only a 0.5 dB penalty and a narrower time response, as anticipated by previously published theoretical papers. Numerical simulations of the dependence of the eye opening on the filter characteristics highlight the importance of combining a sharp filter roll-off with a broad passband.

  4. An Independent Filter for Gene Set Testing Based on Spectral Enrichment.

    PubMed

    Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H

    2015-01-01

    Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.

  5. Driving an Active Vibration Balancer to Minimize Vibrations at the Fundamental and Harmonic Frequencies

    NASA Technical Reports Server (NTRS)

    Holliday, Ezekiel S. (Inventor)

    2014-01-01

    Vibrations of a principal machine are reduced at the fundamental and harmonic frequencies by driving the drive motor of an active balancer with balancing signals at the fundamental and selected harmonics. Vibrations are sensed to provide a signal representing the mechanical vibrations. A balancing signal generator for the fundamental and for each selected harmonic processes the sensed vibration signal with adaptive filter algorithms of adaptive filters for each frequency to generate a balancing signal for each frequency. Reference inputs for each frequency are applied to the adaptive filter algorithms of each balancing signal generator at the frequency assigned to the generator. The harmonic balancing signals for all of the frequencies are summed and applied to drive the drive motor. The harmonic balancing signals drive the drive motor with a drive voltage component in opposition to the vibration at each frequency.
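
    The adaptive scheme described above can be sketched with a standard LMS update using in-phase/quadrature reference inputs at the fundamental and each selected harmonic. The vibration signal, frequencies, step size, and harmonic set below are illustrative assumptions, not the patent's values.

```python
import numpy as np

fs, f0 = 1000.0, 25.0                  # sample rate and fundamental (Hz)
t = np.arange(4000) / fs
# sensed machine vibration: fundamental plus 2nd and 3rd harmonics
vib = (1.0 * np.sin(2 * np.pi * f0 * t)
       + 0.5 * np.sin(2 * np.pi * 2 * f0 * t + 0.4)
       + 0.3 * np.sin(2 * np.pi * 3 * f0 * t + 1.1))

harmonics = [1, 2, 3]
w = np.zeros((len(harmonics), 2))      # in-phase / quadrature weights
mu = 0.01                              # LMS step size
residual = np.empty_like(vib)
for n in range(len(t)):
    refs = np.array([[np.cos(2 * np.pi * h * f0 * t[n]),
                      np.sin(2 * np.pi * h * f0 * t[n])] for h in harmonics])
    y = np.sum(w * refs)               # summed balancing signal
    e = vib[n] - y                     # residual vibration after balancing
    w += 2 * mu * e * refs             # adapt each harmonic's weights
    residual[n] = e
tail = residual[-500:]                 # residual after adaptation
```

After the weights converge, the summed balancing signal opposes the vibration at each frequency and the residual shrinks toward the noise floor.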

  6. Enhancement of TEM Data and Noise Characterization by Principal Component Analysis

    DTIC Science & Technology

    2010-05-01

    include simply thresholding a noise level and ignoring any signal below the chosen value (Pasion and Oldenburg, 2001b), stacking, and median filters... to de-trend the data (Pasion and Oldenburg, 2001a). To date, there has not been a concentrated research effort focused on separating the various... Magnetic soil at Kaho’olawe (and in general) exhibits a t^-1 decay in TEM surveys (Pasion et al., 2002). This signal

  7. Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra

    NASA Astrophysics Data System (ADS)

    Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong

    2017-08-01

    Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility of classifying samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove the systematic errors. The performance of the model was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that compared to PCR and SMLR, PLS had a lower RMSEC, RMSECV, and RMSEP and higher r for all four analytes. PLS coupled with proper pretreatments showed good performance in both the fitting and predicting results. Furthermore, the original areas of Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.
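
    As one example of the pretreatments mentioned, a Savitzky-Golay smoothing filter can be built from a local least-squares polynomial fit. The sketch below uses a synthetic NIR-like band; the window and order are chosen for illustration, and this numpy-only version is not a validated chemometric implementation.

```python
import numpy as np

def savgol_coeffs(window, order):
    # least-squares polynomial smoothing (Savitzky-Golay) coefficients
    half = window // 2
    A = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    return np.linalg.pinv(A)[0]        # row 0 -> fitted value at the center

def savgol_smooth(y, window=11, order=2):
    c = savgol_coeffs(window, order)
    yp = np.pad(y, window // 2, mode="edge")
    return np.convolve(yp, c[::-1], mode="valid")

rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 700)             # nm, NIR range
clean = np.exp(-((wavelengths - 1700) / 120.0) ** 2)   # synthetic band
noisy = clean + rng.normal(scale=0.02, size=clean.size)
smoothed = savgol_smooth(noisy)
```

Because the filter reproduces polynomials up to the chosen order exactly, broad spectral bands pass through with little distortion while high-frequency noise is attenuated.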

  8. Maximally reliable spatial filtering of steady state visual evoked potentials.

    PubMed

    Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M

    2015-04-01

    Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses (reproducibility across trials) to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the principal components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
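
    The core computation, maximizing trial-to-trial covariance via a generalized eigendecomposition, can be sketched as follows on synthetic trials containing one reliable source. This simplified recipe is an illustration of the idea, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_chan, n_feat = 20, 16, 64
mixing = rng.normal(size=n_chan)          # forward model of one source
source = rng.normal(size=n_feat)          # reliable part, same every trial
data = np.stack([np.outer(mixing, source)
                 + 0.5 * rng.normal(size=(n_chan, n_feat))
                 for _ in range(n_trials)])

X = data - data.mean(axis=2, keepdims=True)
Rxx = np.zeros((n_chan, n_chan))          # within-trial covariance
Rxy = np.zeros((n_chan, n_chan))          # cross-trial covariance
for i in range(n_trials):
    Rxx += X[i] @ X[i].T
    for j in range(n_trials):
        if i != j:
            Rxy += X[i] @ X[j].T
Rxy = 0.5 * (Rxy + Rxy.T)

# whiten by Rxx^(-1/2), then eigendecompose the cross-trial covariance
d, V = np.linalg.eigh(Rxx)
W = V @ np.diag(d ** -0.5) @ V.T
lam, U = np.linalg.eigh(W @ Rxy @ W)
filters = W @ U[:, ::-1]                  # columns ordered by reliability
```

The leading filter recovers the source direction because only the reliable part of the data is coherent across trials, so it alone survives in the cross-trial covariance.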

  9. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
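
    The PCA stage, reducing the 9-sensor output space before neural-network mapping, might be sketched like this. The Gaussian field profiles, sensor geometry, and noise level are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
positions = np.linspace(0.0, 1.0, 500)            # actuator travel
centers = np.linspace(0.1, 0.9, 9)                # 9 sensor locations
# each sensor reads a smooth nonlinear function of position plus noise
fields = np.exp(-((positions[:, None] - centers) / 0.25) ** 2)
fields += 0.01 * rng.normal(size=fields.shape)

# PCA as a pseudo-linear filter on the 9-dimensional sensor output
Xc = fields - fields.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
scores = Xc @ Vt[:k].T                            # reduced ANN inputs
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

The reduced scores, rather than all nine raw channels, would then be fed to the ANN for field-position mapping, cutting the network's input dimension while keeping most of the variance.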

  10. Classification of Hyperspectral Data Based on Guided Filtering and Random Forest

    NASA Astrophysics Data System (ADS)

    Ma, H.; Feng, W.; Cao, X.; Wang, L.

    2017-09-01

    Hyperspectral images usually consist of more than one hundred spectral bands, which have the potential to provide rich spatial and spectral information. However, the application of hyperspectral data is still challenging due to "the curse of dimensionality". In this context, many techniques, which aim to make full use of both the spatial and spectral information, are investigated. In order to preserve the geometrical information with fewer spectral bands, we propose a novel method which combines principal components analysis (PCA), guided image filtering and the random forest classifier (RF). In detail, PCA is firstly employed to reduce the dimension of spectral bands. Secondly, the guided image filtering technique is introduced to smooth land objects while preserving their edges. Finally, the features are fed into the RF classifier. To illustrate the effectiveness of the method, we carry out experiments over the popular Indian Pines data set, which is collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. By comparing the proposed method with the method of only using PCA or the guided image filter, we find that the proposed method performs better.
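
    A guided image filter can be written with box (mean) filters alone, following the standard formulation q = a*I + b with locally fitted a and b. In the sketch below, a synthetic step-edge guide stands in for a principal-component band of a hyperspectral cube; the radius r and regularizer eps are illustrative choices.

```python
import numpy as np

def box(a, r):
    # mean filter over a (2r+1) x (2r+1) window with edge padding
    k = np.ones(2 * r + 1) / (2 * r + 1)
    ap = np.pad(a, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, ap)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, tmp)

def guided_filter(I, p, r=4, eps=1e-3):
    # locally fit p ~ a*I + b in each window, then average a and b
    mI, mp = box(I, r), box(p, r)
    varI = box(I * I, r) - mI * mI
    covIp = box(I * p, r) - mI * mp
    a = covIp / (varI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

rng = np.random.default_rng(4)
I = np.zeros((40, 40))
I[:, 20:] = 1.0                         # step-edge guide (e.g., a PC band)
p = I + 0.1 * rng.normal(size=I.shape)  # noisy band to be smoothed
q = guided_filter(I, p)
```

Flat regions of the guide force a toward zero, so the output there is simply a local mean (noise is smoothed), while high-variance edge windows push a toward one, transferring the guide's edge into the output.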

  11. Relation between aerosol sources and meteorological parameters for inhalable atmospheric particles in Sao Paulo City, Brazil

    NASA Astrophysics Data System (ADS)

    Andrade, Fatima; Orsini, Celso; Maenhaut, Willy

    Stacked filter units were used to collect atmospheric particles in separate coarse and fine fractions at the Sao Paulo University Campus during the winter of 1989. The samples were analysed by particle-induced X-ray emission (PIXE) and the data were subjected to an absolute principal component analysis (APCA). Five sources were identified for the fine particles: industrial emissions, which accounted for 13% of the fine mass; emissions from residual oil and diesel, explaining 41%; resuspended soil dust, with 28%; and emissions of Cu and of Mg, which together accounted for 18%. For the coarse particles, four sources were identified: soil dust, accounting for 59% of the coarse mass; industrial emissions, with 19%; oil burning, with 8%; and sea salt aerosol, with 14% of the coarse mass. A data set with various meteorological parameters was also subjected to APCA, and a correlation analysis was performed between the meteorological "absolute principal component scores" (APCS) and the APCS from the fine and coarse particle data sets. The soil dust sources for the fine and coarse aerosol were highly correlated with each other and were anticorrelated with the sea breeze component. The industrial components in the fine and coarse size fractions were also highly positively correlated. Furthermore, the industrial component was related with the northeasterly wind direction and, to a lesser extent, with the sea breeze component.

  12. Calibration and filtering strategies for frequency domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Smith, Bruce D.; Hammack, Richard; Sams, James I.; Veloski, Garret

    2010-01-01

    Techniques for processing frequency-domain electromagnetic (FDEM) data that address systematic instrument errors and random noise are presented, improving the ability to invert these data for meaningful earth models that can be quantitatively interpreted. A least-squares calibration method, originally developed for airborne electromagnetic datasets, is implemented for a ground-based survey in order to address systematic instrument errors, and new insights are provided into the importance of calibration for preserving spectral relationships within the data that lead to more reliable inversions. An alternative filtering strategy based on principal component analysis, which takes advantage of the strong correlation observed in FDEM data, is introduced to help address random noise in the data without imposing somewhat arbitrary spatial smoothing.
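
    The PCA-based filtering strategy exploits the strong correlation among frequency channels: project the multichannel data onto its leading principal components and discard the rest as incoherent noise. A minimal sketch on synthetic FDEM-like data follows; the rank-1 signal model and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_stations, n_freqs = 300, 6
x = np.linspace(0.0, 1.0, n_stations)
# smooth geology -> responses strongly correlated across frequencies
base = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)
gains = np.linspace(1.0, 0.4, n_freqs)
clean = base[:, None] * gains
noisy = clean + 0.2 * rng.normal(size=clean.shape)

Xc = noisy - noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                  # rank of the coherent signal
denoised = (U[:, :k] * s[:k]) @ Vt[:k] + noisy.mean(axis=0)
```

Unlike spatial smoothing, this truncation acts across channels at each station, so sharp lateral variations shared by all frequencies are preserved.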

  13. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
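
    A bootstrap particle filter on a toy scalar system illustrates the weighted-particle representation of the PDF that the abstract describes; here a random-walk state stands in for orbital dynamics, and the particle count and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_particles = 50, 2000
q, r = 0.1, 0.5                             # process / measurement noise std
truth = np.cumsum(q * rng.normal(size=T))   # toy random-walk state
obs = truth + r * rng.normal(size=T)

particles = np.zeros(n_particles)
estimates = np.empty(T)
for k in range(T):
    particles = particles + q * rng.normal(size=n_particles)  # propagate
    w = np.exp(-0.5 * ((obs[k] - particles) / r) ** 2)        # likelihoods
    w /= w.sum()
    estimates[k] = np.sum(w * particles)                      # posterior mean
    idx = rng.choice(n_particles, size=n_particles, p=w)      # resample
    particles = particles[idx]
```

The particle cloud at each step is a sample-based PDF carrying all moments, which is exactly the object the paper proposes to compress with ICA rather than PCA.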

  14. Source apportionment of speciated PM2.5 over Halifax, Nova Scotia, during BORTAS-B, using pragmatic mass closure and principal component analysis

    NASA Astrophysics Data System (ADS)

    Gibson, Mark D.; Kuchta, James; Chisholm, Lucy; Duck, Tom; Hopper, Jason; Beauchamp, Stephen; Waugh, David; King, Gavin; Pierce, Jeffrey; Li, Zhengyan; Leaitch, Richard; Ward, Tony J.; Haelssig, Jan; Palmer, Paul I.

    2013-04-01

    During BORTAS-B, 42 days of contiguous PM2.5 filter samples were collected during the summer of 2011 in Halifax, Nova Scotia. The aim of the PM2.5 filter sampling was to apportion the source contribution to the total PM2.5 mass concentration in Halifax to inform and validate other surface measurements and chemical transport models related to BORTAS-B. Sampling was conducted on the roof of a Dalhousie University building at a height of 15 m. The building is located in a residential area of Halifax. Continuous black carbon (BC) was measured using a Magee AE-42 aethalometer. Continuous organic carbon was measured using an Aerodyne Aerosol Chemical Speciation Monitor. Daily Teflon filter samples were collected for the determination of fine particulate with a median aerodynamic diameter less than or equal to 2.5 microns (PM2.5). An additional daily nylon filter was used for the determination of PM2.5 cations and anions by IC. The PM2.5 Teflon filter was analysed for 33 metals by XRF and 10 trace metals by ICP-MS. The biomass burning marker levoglucosan was analysed by GC-MS following derivatization. Excellent agreement (R2 = 0.88) was observed between continuous and filter-based measurements with a gradient of 2.76. The median (min : max) PM2.5 mass concentration during BORTAS-B = 3.9 (0.08 : 13.7) μg m-3. The median (min : max) continuous BC = 0.39 (0.12 : 1.03); SO4 = 0.47 (0.14 : 5.59); NO3 = 0.067 (0.007 : 0.64); OC = 0.77 (0.18 : 2.77); NH4 = 0.15 (0.003 : 1.45); Cl = 0.011 (0.0019 : 0.32); Fe = 0.018 (0.0011 : 0.097); Al = 0.011 (0.0091 : 0.086); Si = 0.03 (0.0044 : 0.29); V = 0.0026 (0.0016 : 0.017) and Ni = 0.0007 (0.0005 : 0.0037) μg m-3 respectively. Absolute principal component scores (APCS) and pragmatic mass closure (PMC) will be used to identify the sources driving the observed PM2.5 variability over Halifax during BORTAS-B. A comparison of APCS and PMC PM2.5 receptor model output results will be presented.
These model data will provide further insight into the source contribution to summertime surface PM2.5 mass in Halifax, Nova Scotia, Canada.

  15. North Atlantic storm track variability and its association to the North Atlantic oscillation and climate variability of northern Europe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, J.C.

    The primary mode of North Atlantic storm track variability is identified using rotated principal component analysis (RPCA) on monthly fields of root-mean-squares of daily high-pass filtered (2-8-day periods) sea level pressures (SLP) for winters (December-February) 1900-92. It is examined in terms of its association with (1) monthly mean SLP fields, (2) regional low-frequency teleconnections, and (3) the seesaw in winter temperatures between Greenland and northern Europe. 32 refs., 9 figs.

  16. Research on spectroscopic imaging. Volume 1: Technical discussion. [birefringent filters

    NASA Technical Reports Server (NTRS)

    Title, A.; Rosenberg, W.

    1979-01-01

    The principles of operation and the capabilities of birefringent filter systems are examined. Topics covered include: Lyot, Solc, and partial polarizer filters; transmission profile management; tuning birefringent filters; field of view; bandpass control; engineering considerations; and recommendations. Improvements for field-of-view effects and the development of birefringent filters for spaceflight are discussed in appendices.

  17. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    PubMed

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis (pRKHS-NE) and with high epistasis (pRKHS-E). Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers, with the optimal marker set determined by cross-validation. Principal components are computed from the reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.
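
    The supervised screening step of SPCA, ranking markers by their association with the trait and then taking principal components of the reduced marker matrix, can be sketched as follows on simulated genotypes. The fixed top-40 cutoff stands in for the cross-validated marker-set choice described above.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 120, 400
X = rng.normal(size=(n, p))               # marker matrix (n lines, p markers)
beta = np.zeros(p)
beta[:10] = 1.0                           # 10 informative markers
y = X @ beta + rng.normal(size=n)         # trait with additive signal

# supervised screening: rank markers by |correlation| with the trait
Xc, yc = X - X.mean(axis=0), y - y.mean()
score = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
keep = np.argsort(score)[::-1][:40]       # top markers (CV-chosen in practice)

# principal components of the reduced matrix = supervised PCs
U, s, Vt = np.linalg.svd(Xc[:, keep], full_matrices=False)
spc = U[:, :5] * s[:5]
```

The resulting supervised PCs would then enter the smoothing spline ANOVA (or RKHS) model as low-dimensional predictors in place of the full marker set.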

  18. Nonparametric Method for Genomics-Based Prediction of Performance of Quantitative Traits Involving Epistasis in Plant Breeding

    PubMed Central

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H.

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis (pRKHS-NE) and with high epistasis (pRKHS-E). Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers, with the optimal marker set determined by cross-validation. Principal components are computed from the reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression. PMID:23226325

  19. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spent before they were admitted as inpatients, before they were sent home, or before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of the conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.

  20. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to extract weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  1. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to extract weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
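
    The frequency-domain similarity test behind ASTF can be caricatured as follows: estimate the noise level per frequency bin from a reference record, then remove bins whose power is consistent with that level. The synthetic fault tone and the fixed 5x power threshold below are illustrative stand-ins for the paper's hypothesis test with its PSO-tuned significance level α.

```python
import numpy as np

rng = np.random.default_rng(8)
fs, N = 2000, 4096
t = np.arange(N) / fs
noise_ref = rng.normal(size=N)                      # noise-only reference
signal = 0.5 * np.sin(2 * np.pi * 125.0 * t) + rng.normal(size=N)

S = np.fft.rfft(signal)
R = np.fft.rfft(noise_ref)

# smoothed reference periodogram estimates the noise level per bin
noise_level = np.convolve(np.abs(R) ** 2, np.ones(64) / 64, mode="same")
# keep only bins whose power clearly exceeds the noise level
mask = np.abs(S) ** 2 > 5.0 * noise_level
filtered = np.fft.irfft(S * mask, n=N)
```

The weak 125 Hz tone survives the test while most noise-dominated bins are zeroed, which is the sense in which components "of high similarity" to the reference are removed.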

  2. Spatially resolved bimodal spectroscopy for classification/evaluation of mouse skin inflammatory and pre-cancerous stages

    NASA Astrophysics Data System (ADS)

    Díaz-Ayil, Gilberto; Amouroux, Marine; Clanché, Fabien; Granjon, Yves; Blondel, Walter C. P. M.

    2009-07-01

    Spatially-resolved bimodal spectroscopy (multiple AutoFluorescence AF excitation and Diffuse Reflectance DR) was used in vivo to discriminate various healthy and precancerous skin stages in a pre-clinical model (UV-irradiated mouse): Compensatory Hyperplasia CH, Atypical Hyperplasia AH and Dysplasia D. A specific data preprocessing scheme was applied to intensity spectra (filtering, spectral correction and intensity normalization), and several sets of spectral characteristics were automatically extracted and selected based on their discrimination power, statistically tested for every pair-wise comparison of histological classes. Data reduction with Principal Components Analysis (PCA) was performed and 3 classification methods were implemented (k-NN, LDA and SVM), in order to compare the diagnostic performance of each method. Diagnostic performance was studied and assessed in terms of Sensitivity (Se) and Specificity (Sp) as a function of the selected features, of the combinations of 3 different inter-fibre distances and of the number of principal components, such that: Se and Sp ~ 100% when discriminating CH vs. others; Sp ~ 100% and Se > 95% when discriminating Healthy vs. AH or D; Sp ~ 74% and Se ~ 63% for AH vs. D.

  3. A baseline drift detrending technique for fast scan cyclic voltammetry.

    PubMed

    DeWaele, Mark; Oh, Yoonbae; Park, Cheonho; Kang, Yu Min; Shin, Hojin; Blaha, Charles D; Bennet, Kevin E; Kim, In Young; Lee, Kendall H; Jang, Dong Pyo

    2017-11-06

    Fast scan cyclic voltammetry (FSCV) has been commonly used to measure extracellular neurotransmitter concentrations in the brain. Due to the unstable nature of the background currents inherent in FSCV measurements, analysis of FSCV data is limited to very short amounts of time using traditional background subtraction. In this paper, we propose the use of a zero-phase high pass filter (HPF) as the means to remove the background drift. Instead of the traditional method of low pass filtering across voltammograms to increase the signal to noise ratio, a HPF with a low cutoff frequency was applied to the temporal dataset at each voltage point to remove the background drift. As a result, the HPF utilizing cutoff frequencies between 0.001 Hz and 0.01 Hz could be effectively applied to a set of FSCV data, removing the drifting patterns while preserving the temporal kinetics of the phasic dopamine response recorded in vivo. In addition, it was found to be significantly more effective in reducing the drift (unpaired t-test p < 0.0001, t = 10.88) than a drift removal method based on principal component analysis when applied to data collected from Tris buffer over 24 hours, although the principal component analysis method also reduced the background drift. The HPF was also applied to 5 hours of FSCV in vivo data. Electrically evoked dopamine peaks, observed in the nucleus accumbens, were clearly visible even without background subtraction. This technique provides a new, simple, and yet robust, approach to analyse FSCV data with an unstable background.
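
    The detrending idea, a zero-phase high-pass filter applied along time at each voltage point, can be sketched with a simple one-pole filter run forward and then backward. The synthetic drift and dopamine transient below are illustrative, and the filter order and exact cutoff mapping differ from the authors' implementation.

```python
import numpy as np

def highpass_forward(x, alpha):
    # one-pole high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1])
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

def zero_phase_highpass(x, fc, fs):
    alpha = 1.0 / (1.0 + 2 * np.pi * fc / fs)
    y = highpass_forward(x, alpha)
    return highpass_forward(y[::-1], alpha)[::-1]  # backward pass: zero phase

fs = 10.0                        # one voltammogram every 0.1 s
t = np.arange(0.0, 600.0, 1 / fs)
drift = 0.5 * t / 600 + 0.2 * np.sin(2 * np.pi * t / 600)  # slow background
dopamine = np.exp(-0.5 * ((t - 300) / 3.0) ** 2)           # phasic transient
raw = drift + dopamine
detrended = zero_phase_highpass(raw, fc=0.005, fs=fs)
```

Running the filter forward and backward cancels its phase response, so the fast dopamine transient keeps its timing and most of its amplitude while the slow drift is removed.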

  4. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG.

    PubMed

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark

    2007-12-01

    To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. 
Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
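
    One of the simpler combinations from the study, band-power (PSD) features followed by a linear Mahalanobis distance classifier (LMD), can be sketched on synthetic two-class data whose classes differ in beta-band power; all signal parameters here are illustrative assumptions rather than the study's EEG.

```python
import numpy as np

rng = np.random.default_rng(9)
fs, n_trials, n_samp = 160, 60, 320

def make_trial(cls):
    # classes differ in 20 Hz (beta band) power, mimicking sensorimotor EEG
    amp = 1.0 if cls == 0 else 0.4
    tt = np.arange(n_samp) / fs
    return (amp * np.sin(2 * np.pi * 20 * tt + rng.uniform(0, 2 * np.pi))
            + rng.normal(size=n_samp))

X = np.array([make_trial(c) for c in (0, 1) for _ in range(n_trials)])
y = np.repeat([0, 1], n_trials)

# temporal filtering step: band-power (PSD) features from the periodogram
freqs = np.fft.rfftfreq(n_samp, 1 / fs)
psd = np.abs(np.fft.rfft(X, axis=1)) ** 2 / n_samp
bands = [(8, 12), (18, 26)]                    # mu and beta bands (Hz)
feats = np.column_stack([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                         for lo, hi in bands])

# linear Mahalanobis distance classifier: class means, pooled covariance
means = np.array([feats[y == c].mean(axis=0) for c in (0, 1)])
resid = np.vstack([feats[y == c] - means[c] for c in (0, 1)])
icov = np.linalg.inv(resid.T @ resid / (len(y) - 2))
dist = np.array([[(f - m) @ icov @ (f - m) for m in means] for f in feats])
acc = (dist.argmin(axis=1) == y).mean()
```

In the actual study this classification stage would follow a spatial filter (e.g., ICA or SLD) and use cross-validated rather than training accuracy; the sketch only shows how the PSD-plus-LMD pieces fit together.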

  5. Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System

    NASA Technical Reports Server (NTRS)

    Lin, Tsung Han (Hank)

    2011-01-01

    JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.

  6. Multi-spectral endogenous fluorescence imaging for bacterial differentiation

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Babayants, Margarita V.; Korotkov, Oleg V.; Kudrin, Konstantin G.; Rimskaya, Elena N.; Shikunova, Irina A.; Kurlov, Vladimir N.; Cherkasova, Olga P.; Komandin, Gennady A.; Reshetov, Igor V.; Zaytsev, Kirill I.

    2017-07-01

    In this paper, multi-spectral endogenous fluorescence imaging was implemented for bacterial differentiation. The fluorescence imaging was performed using a digital camera equipped with a set of visual bandpass filters. Narrowband 365 nm ultraviolet radiation, passed through a beam homogenizer, was used to excite the sample fluorescence. In order to increase the signal-to-noise ratio and suppress the non-fluorescence background in the images, the intensity of the UV excitation was modulated using a mechanical chopper. Principal components were used to differentiate the bacterial samples based on the multi-spectral endogenous fluorescence images.

  7. Emergency sacrificial sealing method in filters, equipment, or systems

    DOEpatents

    Brown, Erik P

    2014-09-30

A system seals a filter or equipment component to a base and will continue to seal it to the base in the event of hot air or fire. The system includes a first sealing material between the filter or equipment component and the base; and a second sealing material between the filter or equipment component and the base, proximate the first sealing material. The first sealing material and the second sealing material are positioned relative to each other, and relative to the filter or equipment component and the base, so as to seal the filter or equipment component to the base; in the event of hot air or fire, the second sealing material is activated and expands to continue sealing the filter or equipment component to the base.

  8. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  9. On reliable time-frequency characterization and delay estimation of stimulus frequency otoacoustic emissions

    NASA Astrophysics Data System (ADS)

    Biswal, Milan; Mishra, Srikanta

    2018-05-01

The limited information on the origin and nature of stimulus frequency otoacoustic emissions (SFOAEs) necessitates a thorough reexamination of SFOAE analysis procedures, which will lead to a better understanding of how SFOAEs are generated. The SFOAE response waveform in the time domain can be interpreted as a summation of amplitude-modulated and frequency-modulated component waveforms, and the efficiency of a technique in segregating these components is critical to describing the nature of SFOAEs. Recent advances in robust time-frequency analysis algorithms claim more accurate extraction of such components from composite signals buried in noise, but their potential has not been fully explored for SFOAE analysis. Because each technique emphasizes different information, the choice of technique may affect the scientific conclusions. This paper attempts to bridge this gap in the literature by evaluating the performance of three linear time-frequency analysis algorithms, the short-time Fourier transform (STFT), the continuous wavelet transform (CWT), and the S-transform (ST), and two nonlinear algorithms, the Hilbert-Huang transform (HHT) and the synchrosqueezed wavelet transform (SWT). We revisit the extraction of constituent components and the estimation of their magnitude and delay by carefully evaluating the impact of variation in analysis parameters. From the perspective of time-frequency filtering and delay estimation, HHT and SWT were found to be relatively less efficient for analyzing SFOAEs. The intrinsic mode functions (IMFs) of the HHT do not completely characterize the reflection components, and hence IMF-based filtering alone is not recommended for segregating the principal emission from multiple reflection components. We found the STFT, CWT, and ST to be suitable for canceling multiple internal reflection components with marginal alteration of the SFOAE.

  10. Air filtration systems and restrictive access conditions improve indoor air quality in clinical units: Penicillium as a general indicator of hospital indoor fungal levels.

    PubMed

    Araujo, Ricardo; Cabral, João Paulo; Rodrigues, Acácio Gonçalves

    2008-03-01

High-efficiency particulate air (HEPA) filters do not completely prevent nosocomial fungal infections. The first aim of this study was to evaluate the impact of different filters and access conditions on airborne fungi in hospital facilities; additionally, the study sought fungal indicators of indoor air concentrations. Eighteen rooms and wards equipped with different air filtration systems and access conditions were sampled weekly for 16 weeks, and tap water samples were collected simultaneously. The overall mean concentration of atmospheric fungi across all wards was 100 colony forming units/m(3). We found a direct proportionality between the levels of the different fungi in the studied atmospheres. Wards with HEPA filters at positive air flow yielded lower fungal levels, and the existence of an anteroom and the use of protective clothing were associated with the lowest fungal levels. Principal component analysis showed that penicillia afforded the best separation between the wards' air fungal levels. Fungal strains were rarely recovered from tap water samples. In addition to air filtration systems, some access conditions in hospital units, such as the presence of an anteroom and the use of protective clothing, may prevent a high fungal air load. Penicillia can be used as a general indicator of indoor air fungal levels at Hospital S. João.

  11. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is applied to the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map, and a majority voting mechanism is implemented to generate the final output. Extensive experiments demonstrated that the proposed method achieves competitive performance compared with several traditional approaches.

  12. Interdecadal variability in pan-Pacific and global SST, revisited

    NASA Astrophysics Data System (ADS)

    Tung, Ka-Kit; Chen, Xianyao; Zhou, Jiansong; Li, King-Fai

    2018-05-01

Interest in the "Interdecadal Pacific Oscillation (IPO)" in the global SST has surged recently on suggestions that the Pacific may be the source of prominent interdecadal variations observed in the global-mean surface temperature, possibly through low-frequency modulation of the interannual El Nino-Southern Oscillation (ENSO) phenomenon. The IPO was defined by performing empirical orthogonal function (EOF) analysis of low-pass filtered SST. The low-pass filtering creates its own set of mathematical problems, in particular mode mixing, and has led to some questions, many unanswered. To understand what these EOFs are, we first express them in terms of the recently developed pairwise rotated EOFs of the unfiltered SST, which can largely separate the high and low frequency bands without resorting to filtering. As reported elsewhere, the leading rotated dynamical modes (after the global warming trend) of the unfiltered global SST are ENSO, the Pacific Decadal Oscillation (PDO), and the Atlantic Multidecadal Oscillation (AMO); the IPO is not among them. The leading principal component (PC) of the low-pass filtered global SST is usually defined as the IPO, and it is seen to comprise ENSO, PDO, and AMO in various proportions depending on the filter threshold. With decadal filtering, the contribution of the interannual ENSO is understandably negligible. The leading dynamical mode of the filtered global SST is mostly AMO, and therefore should not have been called the Interdecadal "Pacific" Oscillation. The leading dynamical mode of the filtered pan-Pacific SST is mostly PDO. This and other low-frequency variability that have their centers of action in the Pacific, from either the pan-Pacific or the global SST, have near-zero global mean.

  13. Label-free observation of tissues by high-speed stimulated Raman spectral microscopy and independent component analysis

    NASA Astrophysics Data System (ADS)

    Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi

    2013-02-01

We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system, and a reflective grating; the spectral resolution of the filter is ~3 cm-1. The wavenumber was scanned from 2800 to 3100 cm-1 with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced an 8-kHz resonant scanner and a galvanometer scanner, and were able to acquire SRS images of 500 x 480 pixels at a frame rate of 30.8 frames/s. These images were then processed by principal component analysis followed by a modified algorithm of independent component analysis, which allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and the IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues, such as lipid droplets, cytoplasm, fibrous texture, nuclei, and water-rich regions, were successfully visualized.

  14. Detection and tracking of gas plumes in LWIR hyperspectral video sequence data

    NASA Astrophysics Data System (ADS)

    Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.

    2013-05-01

Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of considering hyperspectral images in the gas plume detection problem over conventional RGB imagery is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few Principal Components resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used. These methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume. These include K-means, spectral clustering, and the Ginzburg-Landau functional.
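The PCA step described in this abstract, projecting each pixel's spectrum onto the first few principal components so that PCA acts as a spectral filter, can be sketched as follows. This is a minimal illustration with a toy data cube; the function name and dimensions are ours, not the paper's.

```python
import numpy as np

def pca_project(cube, n_components=3):
    """Project each pixel spectrum onto the leading principal components,
    acting as a spectral filter that compresses the band dimension."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                       # center band-wise
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:n_components].T).reshape(rows, cols, n_components)

# Toy "hyperspectral" frame: 4x4 pixels, 10 bands
rng = np.random.default_rng(0)
cube = rng.normal(size=(4, 4, 10))
reduced = pca_project(cube, n_components=3)
```

The reduced components are mutually uncorrelated, which is what makes a per-component histogram equalization (the Midway step) meaningful afterwards.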

  15. Emergency sacrificial sealing method in filters, equipment, or systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Erik P.

A system seals a filter or equipment component to a base and will continue to seal the filter or equipment component to the base in the event of hot air or fire. The system includes a first sealing material between the filter or equipment component and the base; and a second sealing material between the filter or equipment component and the base and proximate the first sealing material. The first sealing material and the second sealing material are positioned relative to each other and relative to the filter or equipment component and the base to seal the filter or equipment component to the base, and in the event of fire the second sealing material will be activated and expand to continue to seal the filter or equipment component to the base.

  16. Discrimination of a chestnut-oak forest unit for geologic mapping by means of a principal component enhancement of Landsat multispectral scanner data.

    USGS Publications Warehouse

    Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.

    1981-01-01

A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of eastern Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors
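The three-step enhancement above (forward principal component transform, per-component normalization, inverse transform) can be sketched in the style of a decorrelation stretch. This is an illustration under our own assumptions: the authors normalize histograms, whereas this sketch simply equalizes component variances.

```python
import numpy as np

def pc_enhance(bands):
    """Principal component transform, per-component contrast
    normalization, then the inverse transform back to band space."""
    mean = bands.mean(axis=0)
    X = bands - mean
    _, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    pcs = X @ vecs                  # forward transform
    pcs = pcs / pcs.std(axis=0)     # equalize component contrast
    return pcs @ vecs.T + mean      # inverse transform

# Toy multispectral data with strongly correlated bands
rng = np.random.default_rng(1)
raw = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))
enhanced = pc_enhance(raw)
```

Because the normalization is applied in the decorrelated space and then inverted, the output stays in the original band space while the components remain independent, which is the property the abstract emphasizes.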

  17. 4D Cone-beam CT reconstruction using a motion model based on principal component analysis

    PubMed Central

    Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.

    2011-01-01

Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient-specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel-by-voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be our best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852

  18. An ECG signals compression method and its validation using NNs.

    PubMed

    Fira, Catalina Monica; Goras, Liviu

    2008-04-01

This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extreme extraction, adaptive hysteretic filtering, and Lempel-Ziv-Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multi-layer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. Aspects regarding the possibility of using principal component analysis (PCA) for cardiac pattern classification have been investigated as well. A new compression measure called the "quality score," which takes into account both the reconstruction errors and the compression ratio, is proposed.
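The abstract does not give the exact formula for the "quality score." One plausible form, shown purely as a hypothetical illustration, divides the compression ratio by the percent root-mean-square difference (PRD), so that larger values mean a better size/fidelity trade-off; both function names here are ours.

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def quality_score(original, reconstructed, bits_in, bits_out):
    """Hypothetical quality score: compression ratio divided by PRD,
    so larger values mean a better size/fidelity trade-off."""
    return (bits_in / bits_out) / prd(original, reconstructed)

# Toy check: a reconstruction off by 1 percent at 10:1 compression
x = np.linspace(1.0, 2.0, 100)
qs = quality_score(x, 1.01 * x, bits_in=1000, bits_out=100)
```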

  19. Using recurrence plot analysis for software execution interpretation and fault detection

    NASA Astrophysics Data System (ADS)

    Mosdorf, M.

    2015-09-01

This paper shows a method for software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are further processed with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. The method was used to analyze five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, and SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms), while others are more difficult to distinguish.

  20. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

The paper introduces the indices used in multicollinearity diagnosis, the basic principle of principal component regression, and the determination of the 'best' equation. An example describes how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, and performing it with SPSS yields a simplified, faster, and accurate statistical analysis.
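The principal component regression procedure the paper walks through in SPSS can be sketched numerically as follows; this is a minimal illustration (regressing on the leading components is what sidesteps multicollinearity among the original predictors), not a reproduction of the SPSS workflow.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Regress y on the first k principal components of X, then map the
    coefficients back to the original (possibly collinear) predictors."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc = X - xm
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                               # leading component loadings
    scores = Xc @ V                            # component scores
    gamma = np.linalg.lstsq(scores, y - ym, rcond=None)[0]
    coef = V @ gamma                           # back to original variables
    return coef, ym - xm @ coef

# Toy regression problem with a known linear relationship
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
w = np.array([1.0, 2.0, 3.0])
y = X @ w + 0.5
coef, intercept = pcr_fit(X, y, k=3)
```

In practice k is chosen smaller than the number of predictors (e.g. by cross-validation), trading a little bias for much lower variance when the predictors are collinear.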

  1. Optical path switching based differential absorption radiometry for substance detection

    NASA Technical Reports Server (NTRS)

    Sachse, Glen W. (Inventor)

    2005-01-01

    An optical path switch divides sample path radiation into a time series of alternating first polarized components and second polarized components. The first polarized components are transmitted along a first optical path and the second polarized components along a second optical path. A first gasless optical filter train filters the first polarized components to isolate at least a first wavelength band thereby generating first filtered radiation. A second gasless optical filter train filters the second polarized components to isolate at least a second wavelength band thereby generating second filtered radiation. A beam combiner combines the first and second filtered radiation to form a combined beam of radiation. A detector is disposed to monitor magnitude of at least a portion of the combined beam alternately at the first wavelength band and the second wavelength band as an indication of the concentration of the substance in the sample path.

  2. Optical path switching based differential absorption radiometry for substance detection

    NASA Technical Reports Server (NTRS)

    Sachse, Glen W. (Inventor)

    2003-01-01

    An optical path switch divides sample path radiation into a time series of alternating first polarized components and second polarized components. The first polarized components are transmitted along a first optical path and the second polarized components along a second optical path. A first gasless optical filter train filters the first polarized components to isolate at least a first wavelength band thereby generating first filtered radiation. A second gasless optical filter train filters the second polarized components to isolate at least a second wavelength band thereby generating second filtered radiation. A beam combiner combines the first and second filtered radiation to form a combined beam of radiation. A detector is disposed to monitor magnitude of at least a portion of the combined beam alternately at the first wavelength band and the second wavelength band as an indication of the concentration of the substance in the sample path.

  3. Automated alignment of a reconfigurable optical system using focal-plane sensing and Kalman filtering.

    PubMed

    Fang, Joyce; Savransky, Dmitry

    2016-08-01

Automation of alignment tasks can provide improved efficiency and greatly increase the flexibility of an optical system. Current optical systems with automated alignment capabilities are typically designed to include a dedicated wavefront sensor. Here, we demonstrate a self-aligning method for a reconfigurable system using only focal plane images. We define a two-lens optical system with 8 degrees of freedom. Images are simulated for given misalignment parameters using ZEMAX software. We perform a principal component analysis on the simulated data set to obtain Karhunen-Loève modes, which form the basis set whose weights are the system measurements. A model function, which maps the state to the measurement, is learned using nonlinear least-squares fitting and serves as the measurement function for the nonlinear estimators (extended and unscented Kalman filters) used to calculate control inputs to align the system. We present and discuss simulated and experimental results of the full system in operation.
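The estimation step above can be illustrated with a minimal extended Kalman filter update for a static state, the situation in iterative alignment where the misalignment does not change between measurements. The measurement model below is a simple stand-in of our own, not the authors' learned model function.

```python
import numpy as np

def ekf_step(x, P, z, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter with static
    state dynamics (x_k = x_{k-1}), as in iterative alignment."""
    P = P + Q                                  # predict (state unchanged)
    H = H_jac(x)
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - h(x))                     # update state
    P = (np.eye(len(x)) - K @ H) @ P           # update covariance
    return x, P

# Toy stand-in measurement model (linear here, so the EKF is exact)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
h = lambda x: A @ x
H_jac = lambda x: A
x_true = np.array([0.5, -0.3])                 # "misalignment" to recover
x_est, P = np.zeros(2), np.eye(2)
Q, R = 1e-3 * np.eye(2), 1e-2 * np.eye(3)
for _ in range(200):                           # repeated noiseless measurements
    x_est, P = ekf_step(x_est, P, A @ x_true, h, H_jac, Q, R)
```

With a genuinely nonlinear `h`, `H_jac` would be its Jacobian evaluated at the current estimate; the unscented variant mentioned in the abstract avoids computing that Jacobian at the cost of extra sigma-point evaluations.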

  4. Single and two-shot quantitative phase imaging using Hilbert-Huang Transform based fringe pattern analysis

    NASA Astrophysics Data System (ADS)

    Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos

    2016-08-01

In this contribution we propose two Hilbert-Huang transform (HHT) based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging, applicable in both on-axis and off-axis configurations. In the first scheme, a single fringe pattern containing information about the biological phase sample under study is adaptively pre-filtered using an empirical mode decomposition based approach. It is then phase demodulated by the Hilbert spiral transform, aided by principal component analysis for the local fringe orientation estimation. Orientation calculation enables efficient analysis of closed fringes; it can be avoided by using an arbitrary phase-shifted two-shot Gram-Schmidt orthonormalization scheme aided by HHT pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase shifting demodulation. The robustness of the proposed techniques is corroborated using experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase shifting scheme, which is used as a reference method.

  5. Multiple Optical Filter Design Simulation Results

    NASA Astrophysics Data System (ADS)

    Mendelsohn, J.; Englund, D. C.

    1986-10-01

In this paper we continue our investigation of the application of matched filters to robotic vision problems, specifically the tray-picking problem. Our principal interest is the examination of summation effects that arise when the matched filter memory size is reduced by averaging matched filters. While matched filtering for pattern recognition or machine vision is ideally implemented with optics and optical correlators, the results in this paper were obtained through a digital simulation of the optical process.
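A digital simulation of an optical correlator amounts to cross-correlating the scene with a (zero-mean) template, conveniently done in the Fourier domain. A minimal sketch with our own toy scene follows; the function name and data are illustrative.

```python
import numpy as np

def matched_filter_2d(scene, template):
    """Cross-correlate a scene with a zero-mean template via the FFT,
    a digital analogue of an optical correlator."""
    t = template - template.mean()
    pad = np.zeros_like(scene, dtype=float)
    pad[:t.shape[0], :t.shape[1]] = t
    return np.real(np.fft.ifft2(np.fft.fft2(scene)
                                * np.conj(np.fft.fft2(pad))))

# Toy scene with one target embedded at row 5, column 7
rng = np.random.default_rng(6)
template = rng.uniform(0.5, 1.0, size=(3, 3))
scene = np.zeros((16, 16))
scene[5:8, 7:10] = template - template.mean()
corr = matched_filter_2d(scene, template)
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

Averaging several templates before correlation, as the paper studies, reuses this same pipeline with the averaged filter in place of `template`; the summation effects show up as broadened, lower-contrast correlation peaks.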

  6. Multivariate frequency domain analysis of protein dynamics

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Fuchigami, Sotaro; Kidera, Akinori

    2009-03-01

Multivariate frequency domain analysis (MFDA) is proposed to characterize the collective vibrational dynamics of a protein obtained by a molecular dynamics (MD) simulation. MFDA performs principal component analysis (PCA) on a bandpass filtered multivariate time series using the multitaper method of spectral estimation. By applying MFDA to MD trajectories of bovine pancreatic trypsin inhibitor, we determined the collective vibrational modes in the frequency domain, identified by their vibrational frequencies and eigenvectors. At near zero temperature, the vibrational modes determined by MFDA agreed well with those calculated by normal mode analysis. At 300 K, the vibrational modes exhibited characteristic features that were considerably different from the principal modes of the static distribution given by standard PCA. The influences of aqueous environments were discussed based on two different sets of vibrational modes, one derived from an MD simulation in water and the other from a simulation in vacuum. Using the varimax rotation, an algorithm of multivariate statistical analysis, the representative orthogonal set of eigenmodes was determined at each vibrational frequency.

  7. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.

Purpose: A previously proposed method to reduce radiation dose to the patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering, which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method in maintaining the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. An arterial input function (AIF) with high temporal resolution can be generated from a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of the time-concentration curves (tissue curves), a region-of-interest is first segmented into squares of 3 × 3 pixels. Subsequently, PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares to further improve their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. Kinetic analyses using the modified Tofts' model and the singular value decomposition method were then carried out for each of the down-sampling schemes, with intervals from 2 to 15 s. The results were compared with analyses done on the measured data at high temporal resolution (i.e., the original scanning frequency) as the reference. Results: The patients' AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the quantitative histogram parameters of the volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low-frequency scanning in DCE-CT studies to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.
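The PCA filtering of tissue curves, reconstructing each curve from a few leading principal components (the study above retains five), can be sketched as follows. The toy curves below are illustrative, not patient data.

```python
import numpy as np

def pca_denoise(curves, n_keep=5):
    """Reconstruct each time-concentration curve from its n_keep leading
    principal components, discarding low-variance (noisy) components."""
    mean = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    s[n_keep:] = 0.0                # zero out the trailing singular values
    return (U * s) @ Vt + mean

# Toy curves: a smooth uptake/washout shape at different amplitudes + noise
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 60)
shape = t * np.exp(-t / 2.0)
truth = np.outer(np.linspace(1.0, 20.0, 20), shape)
noisy = truth + rng.normal(scale=0.5, size=truth.shape)
denoised = pca_denoise(noisy, n_keep=2)
```

Because the true curves share a few smooth temporal patterns while the noise is spread across all components, truncating the expansion suppresses noise with little distortion of the kinetics.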

  8. On the Fallibility of Principal Components in Research

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Li, Tenglong

    2017-01-01

    The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…

  9. Spatial variation analyses of Thematic Mapper data for the identification of linear features in agricultural landscapes

    NASA Technical Reports Server (NTRS)

    Pelletier, R. E.

    1984-01-01

A need exists for digitized information pertaining to linear features such as roads, streams, water bodies, and agricultural field boundaries as component parts of a database. For many areas where such data do not yet exist or need updating, these features may be extracted from remotely sensed digital data. This paper examines two approaches for identifying linear features, one utilizing raw data and the other classified data. Each approach uses a series of data enhancement procedures, including derivation of standard deviation values, principal component analysis, and filtering with a high-pass window matrix. Just as certain bands better classify different land covers, so too do these bands exhibit the high spectral contrast by which boundaries between land covers can be delineated. A few applications for this kind of data are briefly discussed, including its potential in a Universal Soil Loss Equation model.
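The high-pass window filtering mentioned above can be sketched with a 3x3 kernel that subtracts the local mean from each pixel, which highlights boundaries between land covers. This is a minimal illustration, not the paper's exact matrix.

```python
import numpy as np

def highpass3x3(img):
    """High-pass window filter: each interior pixel becomes its value
    minus the mean of its 3x3 neighborhood (borders left at zero)."""
    kernel = -np.ones((3, 3)) / 9.0
    kernel[1, 1] += 1.0                        # center minus local mean
    out = np.zeros(img.shape, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * kernel)
    return out

# A vertical field boundary: zeros on the left, ones on the right
img = np.zeros((5, 8))
img[:, 4:] = 1.0
edges = highpass3x3(img)
```

Uniform regions map to zero while the step edge produces a paired negative/positive response, which is what makes field boundaries and roads stand out after thresholding.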

  10. Improvements of the Vis-NIRS Model in the Prediction of Soil Organic Matter Content Using Spectral Pretreatments, Sample Selection, and Wavelength Optimization

    NASA Astrophysics Data System (ADS)

    Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.

    2017-07-01

A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied to minimize irrelevant and useless information in the spectra and to increase the correlation of the spectra with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. The successive projection algorithm (SPA) and a genetic algorithm (GA) were then applied for wavelength optimization. Finally, the principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using the leave-one-out cross-validation technique. The results show that the combination of the Savitzky-Golay (SG) filter for smoothing and multiplicative scatter correction (MSC) can eliminate the effects of noise and baseline drift; the SPXY method is preferable to KS for sample selection; and both the SPA and the GA can significantly reduce the number of wavelength variables and increase the accuracy, especially the GA, which greatly improved the prediction accuracy of soil OMC with Rcc, RMSEP, and RPD up to 0.9316, 0.2142, and 2.3195, respectively.
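Of the pretreatments named above, multiplicative scatter correction (MSC) is easy to sketch: each spectrum is regressed against the mean spectrum, and the fitted additive and multiplicative scatter effects are removed. The Savitzky-Golay smoothing step would typically precede this (e.g. via `scipy.signal.savgol_filter`); this sketch covers MSC only, with toy spectra of our own.

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the
    mean spectrum and remove the fitted additive and multiplicative
    scatter effects."""
    ref = spectra.mean(axis=0)
    corrected = np.empty(spectra.shape, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)   # s ~ slope*ref + offset
        corrected[i] = (s - offset) / slope
    return corrected

# Toy spectra differing only by simulated scatter (gain and offset)
rng = np.random.default_rng(4)
wl = np.linspace(0.0, np.pi, 50)
pure = 1.0 + np.sin(wl)
gain = rng.uniform(0.5, 1.5, size=10)
offset = rng.uniform(-0.2, 0.2, size=10)
spectra = np.outer(gain, pure) + offset[:, None]
corrected = msc(spectra)
```

When the only sample-to-sample differences are scatter effects, MSC collapses all spectra onto a common curve, so any remaining variation reflects chemistry rather than particle-size scattering.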

  11. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this approach is one of the most effective for baseline removal. The evaluation of both methods was carried out using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the ANN-based one in both performance and simplicity. © The Author(s) 2016.
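
    The non-parametric idea, estimating the baseline by projecting a measured spectrum onto a PCA basis learned from baseline-only training spectra, can be sketched as below. The basis family (linear baselines), the Gaussian peak and all parameters are hypothetical, chosen only to make the mechanism visible.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)

# Learning matrix of continuous baselines: linear trends a*x + b with
# varying slopes and offsets (a stand-in for real continuous spectra).
train = np.array([a * x + b for a, b in rng.uniform(0.5, 2.0, (50, 2))])

# PCA basis of the baseline subspace (two components span this family).
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:2]

# New spectrum: a narrow emission peak riding on an unseen baseline.
peak = np.exp(-0.5 * ((x - 0.5) / 0.01) ** 2)
baseline = 1.3 * x + 0.7
spectrum = peak + baseline

# Project onto the baseline subspace and subtract the estimate; the narrow
# peak is nearly orthogonal to the smooth basis, so it survives correction.
coeffs = basis @ (spectrum - mean)
corrected = spectrum - (mean + coeffs @ basis)
```

    The corrected spectrum retains the peak while the regions away from it are driven close to zero, mirroring the baseline-removal behaviour evaluated above.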

  12. An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases.

    PubMed

    Sengur, Abdulkadir

    2008-03-01

    In the last two decades, the use of artificial intelligence methods in medical analysis has been increasing, mainly because the effectiveness of classification and detection systems has improved considerably to help medical experts in diagnosis. In this work, we investigate the use of principal component analysis (PCA), artificial immune system (AIS) and fuzzy k-NN to determine normal and abnormal heart valves from Doppler heart sounds. The proposed heart valve disorder detection system is composed of three stages. The first stage is the pre-processing stage; filtering, normalization and white de-noising are the processes used in this stage. The second stage is feature extraction, in which wavelet packet decomposition was applied and wavelet entropy values were then taken as features. To reduce the complexity of the system, PCA was used for feature reduction. In the classification stage, AIS and fuzzy k-NN were used. To evaluate the performance of the proposed methodology, a comparative study was carried out using a data set containing 215 samples. The validation of the proposed method was measured using the sensitivity and specificity parameters; a 95.9% sensitivity and a 96% specificity rate were obtained.
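
    The pipeline's final two stages, PCA feature reduction followed by nearest-neighbour classification, can be sketched on synthetic features. Plain majority-vote k-NN stands in for the paper's fuzzy k-NN and AIS classifiers, and the two-class wavelet-entropy features are simulated, so this is a structural illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated wavelet-entropy feature vectors for two classes of heart
# sounds: 20 normal and 20 abnormal recordings, 16 features each.
X = np.vstack([rng.normal(0.0, 1.0, (20, 16)),
               rng.normal(2.0, 1.0, (20, 16))])
y = np.array([0] * 20 + [1] * 20)

# PCA feature reduction: project onto the top 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_red = Xc @ Vt[:3].T

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k nearest training samples."""
    d = np.linalg.norm(train - query, axis=1)
    return np.bincount(labels[np.argsort(d)[:k]]).argmax()

# Leave-one-out evaluation of the reduced features.
pred = np.array([knn_predict(np.delete(X_red, i, 0), np.delete(y, i), X_red[i])
                 for i in range(len(y))])
accuracy = (pred == y).mean()
```

    With well-separated synthetic classes the reduced three-dimensional scores classify almost perfectly; real wavelet-entropy features would of course overlap more.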

  13. A data-driven approach for denoising GNSS position time series

    NASA Astrophysics Data System (ADS)

    Li, Yanyan; Xu, Caijun; Yi, Lei; Fang, Rongxin

    2017-12-01

    Global navigation satellite system (GNSS) datasets suffer from common mode error (CME) and other unmodeled errors. To decrease the noise level in GNSS positioning, we propose a new data-driven adaptive multiscale denoising method in this paper. Both synthetic and real-world long-term GNSS datasets were employed to assess the performance of the proposed method, and its results were compared with those of stacking filtering, principal component analysis (PCA) and the recently developed multiscale multiway PCA. It is found that the proposed method can significantly eliminate the high-frequency white noise and remove the low-frequency CME. Furthermore, the proposed method is more precise for denoising GNSS signals than the other denoising methods. For example, in the real-world example, our method reduces the mean standard deviation of the north, east and vertical components from 1.54 to 0.26, 1.64 to 0.21 and 4.80 to 0.72 mm, respectively. Noise analysis indicates that for the original signals, a combination of power-law plus white noise model can be identified as the best noise model. For the time series filtered with our method, the generalized Gauss-Markov model is the best noise model, with spectral indices close to -3, indicating that flicker walk noise can be identified. Moreover, the common mode error in the unfiltered time series is significantly reduced by the proposed method. After filtering with our method, a combination of power-law plus white noise model is the best noise model for the CMEs in the study region.
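
    The PCA-based common mode error (CME) filtering that the proposed method is compared against can be sketched for a small synthetic network. The station count, noise levels and CME model below are hypothetical and much simpler than real GNSS noise:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_stations = 500, 8

# Synthetic daily position residuals: a random-walk common mode error
# shared by all stations, plus independent white noise per station.
cme = np.cumsum(rng.normal(0, 0.2, n_days))
series = cme[:, None] + rng.normal(0, 1.0, (n_days, n_stations))

# PCA across the network: the leading component captures the CME, and
# subtracting its rank-1 reconstruction filters it from every station.
centered = series - series.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
filtered = centered - np.outer(U[:, 0] * S[0], Vt[0])

std_before = centered.std(axis=0).mean()
std_after = filtered.std(axis=0).mean()
```

    The first principal component's time course tracks the injected CME closely, and the mean per-station scatter drops after filtering, which mirrors the standard-deviation reductions reported above.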

  14. Occurrence, distribution and ecological risk assessment of multiple classes of UV filters in surface waters from different countries.

    PubMed

    Tsui, Mirabelle M P; Leung, H W; Wai, Tak-Cheung; Yamashita, Nobuyoshi; Taniyasu, Sachi; Liu, Wenhua; Lam, Paul K S; Murphy, Margaret B

    2014-12-15

    Organic UV filters are common ingredients of personal care products (PCPs), but little is known about their distribution in and potential impacts to the marine environment. This study reports the occurrence and risk assessment of twelve widely used organic UV filters in surface water collected in eight cities in four countries (China, the United States, Japan, and Thailand) and the North American Arctic. The number of compounds detected, Hong Kong (12), Tokyo (9), Bangkok (9), New York (8), Los Angeles (8), Arctic (6), Shantou (5) and Chaozhou (5), generally increased with population density. Median concentrations of all detectable UV filters were <250 ng/L. The presence of these compounds in the Arctic is likely due to a combination of inadequate wastewater treatment and long-range oceanic transport. Principal component analysis (PCA) and two-way analysis of variance (ANOVA) were conducted to explore spatiotemporal patterns and difference in organic UV filter levels in Hong Kong. In general, spatial patterns varied with sampling month and all compounds showed higher concentrations in the wet season except benzophenone-4 (BP-4). Probabilistic risk assessment showed that 4-methylbenzylidene camphor (4-MBC) posed greater risk to algae, while benzophenone-3 (BP-3) and ethylhexyl methoxycinnamate (EHMC) were more likely to pose a risk to fishes and also posed high risk of bleaching in hard corals in aquatic recreational areas in Hong Kong. This study is the first to report the occurrence of organic UV filters in the Arctic and provides a wider assessment of their potential negative impacts in the marine environment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant principal components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
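
    The core operation, shrinking the less significant principal components of a local patch matrix, can be sketched as below. The overcomplete (sliding, overlapping window) aggregation of the actual filter is omitted; only one patch is shown, and the hard-threshold shrinkage rule, patch sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# One local patch: 64 voxels x 12 diffusion directions, where the true
# signal is low rank (rank 2) and the measurements add white noise.
signal = rng.normal(0, 1, (64, 2)) @ rng.normal(0, 1, (2, 12))
noisy = signal + rng.normal(0, 0.3, (64, 12))

# Shrink (here: zero out) singular values below a noise-level threshold;
# the 1.2 safety factor on sigma*(sqrt(m)+sqrt(n)) is an assumption.
mean = noisy.mean(axis=0)
U, S, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
tau = 1.2 * 0.3 * (np.sqrt(64) + np.sqrt(12))
S_shrunk = np.where(S > tau, S, 0.0)
denoised = mean + (U * S_shrunk) @ Vt

err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

    Because the noise spreads its energy over all components while the signal concentrates in a few, discarding the small singular values reduces the reconstruction error.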

  16. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    NASA Astrophysics Data System (ADS)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image, by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.

  17. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve the reconstruction precision and better copy the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with weighted principal component space is presented in this paper, and the principal component with weighted visual features is the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on weighted principal component space is superior in performance to that based on traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.

  18. Principal Component and Linkage Analysis of Cardiovascular Risk Traits in the Norfolk Isolate

    PubMed Central

    Cox, Hannah C.; Bellis, Claire; Lea, Rod A.; Quinlan, Sharon; Hughes, Roger; Dyer, Thomas; Charlesworth, Jac; Blangero, John; Griffiths, Lyn R.

    2009-01-01

    Objective(s) An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. Methods The investigation included 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits. Multipoint variance component methods, implemented in SOLAR, were used to assess genome-wide linkage to the derived factors. A total of 285 of the 377 related individuals were informative for linkage analysis. Results A total of 4 principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h2 = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosomes 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) segregated with principal components 1 (h2 = 0.33) and 4 (h2 = 0.42), respectively. Conclusion(s): This study investigated a number of CVD risk traits in a unique isolated population. Findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c and total triglyceride levels. PMID:19339786

  19. Discrimination of gender-, speed-, and shoe-dependent movement patterns in runners using full-body kinematics.

    PubMed

    Maurer, Christian; Federolf, Peter; von Tscharner, Vinzenz; Stirling, Lisa; Nigg, Benno M

    2012-05-01

    Changes in gait kinematics have often been analyzed using pattern recognition methods such as principal component analysis (PCA). It is usually just the first few principal components that are analyzed, because they describe the main variability within a dataset and thus represent the main movement patterns. However, while subtle changes in gait pattern (for instance, due to different footwear) may not change main movement patterns, they may affect movements represented by higher principal components. This study was designed to test two hypotheses: (1) speed and gender differences can be observed in the first principal components, and (2) small interventions such as changing footwear change the gait characteristics of higher principal components. Kinematic changes due to different running conditions (speed: 3.1 m/s and 4.9 m/s; gender; and footwear: control shoe and adidas MicroBounce shoe) were investigated by applying PCA and support vector machine (SVM) to a full-body reflective marker setup. Differences in speed changed the basic movement pattern, as was reflected by a change in the time-dependent coefficient derived from the first principal component. Gender was differentiated by using the time-dependent coefficient derived from intermediate principal components. (Intermediate principal components are characterized by limb rotations of the thigh and shank.) Different shoe conditions were identified in higher principal components. This study showed that different interventions can be analyzed using a full-body kinematic approach. Within the well-defined vector space spanned by the data of all subjects, higher principal components should also be considered because these components show the differences that result from small interventions such as footwear changes. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  20. Principal Component Relaxation Mode Analysis of an All-Atom Molecular Dynamics Simulation of Human Lysozyme

    NASA Astrophysics Data System (ADS)

    Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi

    2013-02-01

    A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.

  1. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    PubMed

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rate at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major sources of variation of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves, and that ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
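
    Discretized on a common time grid, the functional PCA of the abstract reduces to an eigen-decomposition of the sample covariance of the curves, with per-patient scores given by projection onto the eigenfunctions. The sketch below uses simulated eGFR trajectories (stable versus declining grafts); the trajectory model and group sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 36, 37)  # months post-transplant

# Simulated eGFR curves: 30 stable grafts (level shifts + noise) and
# 10 declining grafts (downward trend + noise).
stable = 55 + rng.normal(0, 2, (30, 1)) + rng.normal(0, 3, (30, 37))
declining = 55 - 0.8 * t + rng.normal(0, 3, (10, 37))
curves = np.vstack([stable, declining])

# Discretized functional PCA: SVD of the centred curve matrix; the rows
# of Vt are the eigenfunctions and the projections are the FPC scores.
mean_curve = curves.mean(axis=0)
_, _, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
scores = (curves - mean_curve) @ Vt[:2].T

# Ordering patients by the first FPC score isolates the declining group
# at one extreme: the "abnormal curve detection" use noted above.
order = np.argsort(scores[:, 0])
```

    In this toy setting the first eigenfunction captures the decline trend, so the ten declining grafts occupy one end of the ordering.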

  2. Direct characterization of quantum dynamics with noisy ancilla

    DOE PAGES

    Dumitrescu, Eugene F.; Humble, Travis S.

    2015-11-23

    We present methods for the direct characterization of quantum dynamics (DCQD) in which both the principal and ancilla systems undergo noisy processes. Using a concatenated error detection code, we discriminate between located and unlocated errors on the principal system in what amounts to filtering of ancilla noise. The example of composite noise involving amplitude damping and depolarizing channels is used to demonstrate the method, while we find the rate of noise filtering is more generally dependent on code distance. Furthermore, our results indicate the accuracy of quantum process characterization can be greatly improved while remaining within reach of current experimental capabilities.

  3. Leukocyte-reduced blood components: patient benefits and practical applications.

    PubMed

    Higgins, V L

    1996-05-01

    To review the various types of filters used for red blood cell and platelet transfusions and to explain the trend in the use of leukocyte removal filters, practical information about their use, considerations in the selection of a filtration method, and cost-effectiveness issues. Published articles, books, and the author's experience. Leukocyte removal filters are used to reduce complications associated with transfused white blood cells that are contained in units of red blood cells and platelets. These complications include nonhemolytic febrile transfusion reactions (NHFTRs), alloimmunization and refractoriness to platelet transfusion, transfusion-transmitted cytomegalovirus (CMV), and immunomodulation. Leukocyte removal filters may be used at the bedside, in a hospital blood bank, or in a blood collection center. Factors that affect the flow rate of these filters include the variations in the blood component, the equipment used, and filter priming. Studies on the cost-effectiveness of using leukocyte-reduced blood components demonstrate savings based on the reduction of NHFTRs, reduction in the number of blood components used, and the use of filtered blood components as the equivalent of CMV seronegative-screened products. The use of leukocyte-reduced blood components significantly diminishes or prevents many of the adverse transfusion reactions associated with donor white blood cells. Leukocyte removal filters are cost-effective, and filters should be selected based on their ability to consistently achieve low leukocyte residual levels as well as their ease of use. Physicians may order leukocyte-reduced blood components for specific patients, or the components may be used because of an established institutional transfusion policy. Nurses often participate in deciding on a filtration method, primarily based on ease of use. Understanding the considerations in selecting a filtration method will help nurses make appropriate decisions to ensure quality patient care.

  4. Sub-wavelength efficient polarization filter (SWEP filter)

    DOEpatents

    Simpson, Marcus L.; Simpson, John T.

    2003-12-09

    A polarization sensitive filter includes a first sub-wavelength resonant grating structure (SWS) for receiving incident light, and a second SWS. The SWS are disposed relative to one another such that incident light which is transmitted by the first SWS passes through the second SWS. The filter has a polarization sensitive resonance, the polarization sensitive resonance substantially reflecting a first polarization component of incident light while substantially transmitting a second polarization component of the incident light, the polarization components being orthogonal to one another. A method for forming polarization filters includes the steps of forming first and second SWS, the first and second SWS disposed relative to one another such that a portion of incident light applied to the first SWS passes through the second SWS. A method for separating polarizations of light, includes the steps of providing a filter formed from a first and second SWS, shining incident light having orthogonal polarization components on the first SWS, and substantially reflecting one of the orthogonal polarization components while substantially transmitting the other orthogonal polarization component. A high Q narrowband filter includes a first and second SWS, the first and second SWS are spaced apart a distance being at least one half an optical wavelength.

  5. Optical Path Switching Based Differential Absorption Radiometry for Substance Detection

    NASA Technical Reports Server (NTRS)

    Sachse, Glen W. (Inventor)

    2000-01-01

    A system and method are provided for detecting one or more substances. An optical path switch divides sample path radiation into a time series of alternating first polarized components and second polarized components. The first polarized components are transmitted along a first optical path and the second polarized components along a second optical path. A first gasless optical filter train filters the first polarized components to isolate at least a first wavelength band thereby generating first filtered radiation. A second gasless optical filter train filters the second polarized components to isolate at least a second wavelength band thereby generating second filtered radiation. The first wavelength band and second wavelength band are unique. Further, spectral absorption of a substance of interest is different at the first wavelength band as compared to the second wavelength band. A beam combiner combines the first and second filtered radiation to form a combined beam of radiation. A detector is disposed to monitor magnitude of at least a portion of the combined beam alternately at the first wavelength band and the second wavelength band as an indication of the concentration of the substance in the sample path.

  6. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as in the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results suggest that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.

  7. The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores

    ERIC Educational Resources Information Center

    Velicer, Wayne F.

    1976-01-01

    Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)

  8. The Butterflies of Principal Components: A Case of Ultrafine-Grained Polyphase Units

    NASA Astrophysics Data System (ADS)

    Rietmeijer, F. J. M.

    1996-03-01

    Dusts in the accretion regions of chondritic interplanetary dust particles [IDPs] consisted of three principal components: carbonaceous units [CUs], carbon-bearing chondritic units [GUs] and carbon-free silicate units [PUs]. Among others, differences among chondritic IDP morphologies and variable bulk C/Si ratios reflect variable mixtures of principal components. The spherical shapes of the initially amorphous principal components remain visible in many chondritic porous IDPs, but fusion was documented for CUs, GUs and PUs. The PUs occur as coarse- and ultrafine-grained units that include so-called GEMS. Spherical principal components preserved in an IDP as recognisable textural units have unique properties with important implications for their petrological evolution from pre-accretion processing to protoplanet alteration and dynamic pyrometamorphism. Throughout their lifetime the units behaved as closed systems without chemical exchange with other units. This behaviour is reflected in their mineralogies, while the bulk compositions of principal components define the environments wherein they were formed.

  9. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

    Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods, such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
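
    The hybrid strategy, correcting baseline shifts first and smoothing the residual spikes second, can be sketched on a synthetic series. A simple step-offset re-levelling stands in for the spline interpolation/tPCA stage, the artifact locations are assumed known rather than detected, and all amplitudes are invented:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(6)
n = 600
t = np.arange(n)

# Synthetic NIRS-like series: a slow hemodynamic oscillation plus noise,
# contaminated by a motion spike and a motion-induced baseline shift.
clean = np.sin(2 * np.pi * t / 200)
signal = clean + rng.normal(0, 0.05, n)
signal[150:153] += 3.0   # sharp motion spike
signal[300:] += 2.0      # baseline shift

corrected = signal.copy()

# Stage 1 (stand-in for spline/tPCA): re-level the segment after the
# (assumed known) shift by removing the jump across the discontinuity.
shift_idx = 300
offset = corrected[shift_idx] - corrected[shift_idx - 1]
corrected[shift_idx:] -= offset

# Stage 2: Savitzky-Golay smoothing suppresses the residual sharp spike
# and noise while preserving the slow hemodynamic component.
corrected = savgol_filter(corrected, window_length=31, polyorder=3)

err_before = np.abs(signal - clean).mean()
err_after = np.abs(corrected - clean).mean()
```

    Neither stage alone fixes both artifact types: smoothing cannot undo the step, and re-levelling leaves the spike, which is exactly the motivation for the hybrid given above.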

  10. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of this publications and research papers, graduate thesis supervised, and grants received.

  11. Identifying and mitigating batch effects in whole genome sequencing data.

    PubMed

    Tom, Jennifer A; Reeder, Jens; Forrest, William F; Graham, Robert R; Hunkapiller, Julie; Behrens, Timothy W; Bhangale, Tushar R

    2017-07-24

    Large sample sets of whole genome sequencing with deep coverage are being generated; however, assembling datasets from different sources inevitably introduces batch effects. These batch effects are not well understood and can be due to changes in the sequencing protocol or in the bioinformatics tools used to process the data. No systematic algorithms or heuristics exist to detect and filter batch effects or to remove associations impacted by batch effects in whole genome sequencing data. We describe key quality metrics, provide a freely available software package to compute them, and demonstrate that identification of batch effects is aided by principal components analysis of these metrics. To mitigate batch effects, we developed new site-specific filters that identified and removed variants that falsely associated with the phenotype due to batch effect. These include filtering based on a haplotype-based genotype correction, a differential genotype quality test, and removal of sites with a missing genotype rate greater than 30% after setting genotypes with quality scores less than 20 to missing. This method removed 96.1% of unconfirmed genome-wide significant SNP associations and 97.6% of unconfirmed genome-wide significant indel associations. We performed analyses to demonstrate that: 1) these filters impacted variants known to be disease associated, as 2 out of 16 confirmed associations in an AMD candidate SNP analysis were filtered, representing a reduction in power of 12.5%; 2) in the absence of batch effects, these filters removed only a small proportion of variants across the genome (type I error rate of 3%); and 3) in an independent dataset, the method removed 90.2% of unconfirmed genome-wide SNP associations and 89.8% of unconfirmed genome-wide indel associations. Researchers currently do not have effective tools to identify and mitigate batch effects in whole genome sequencing data. We developed and validated methods and filters to address this deficiency.
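
    The missing-rate filter described above (set genotypes with quality below 20 to missing, then drop sites with more than 30% missingness) can be sketched on a toy genotype matrix. The matrix, the quality distributions and the notion of a "batch-affected" site are simulated; this is not the authors' software.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_samples = 1000, 100

# Toy genotype matrix (0/1/2 alternate-allele counts) with a per-call
# genotype quality (GQ); 100 batch-affected sites get systematically low GQ.
geno = rng.integers(0, 3, (n_sites, n_samples)).astype(float)
gq = rng.uniform(30, 60, (n_sites, n_samples))
bad_sites = rng.choice(n_sites, 100, replace=False)
gq[bad_sites] = rng.uniform(0, 25, (100, n_samples))

# Filter: GQ < 20 becomes missing; sites with > 30% missingness are dropped.
geno_filtered = np.where(gq < 20, np.nan, geno)
missing_rate = np.isnan(geno_filtered).mean(axis=1)
keep = missing_rate <= 0.30
```

    In this toy example the low-quality batch drives roughly 80% of its calls below GQ 20, so the batch-affected sites are removed while clean sites are untouched; real data would show a continuum rather than this clean separation.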

  12. The influence of iliotibial band syndrome history on running biomechanics examined via principal components analysis.

    PubMed

    Foch, Eric; Milner, Clare E

    2014-01-03

    Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
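
    The analysis pattern, PCA across subjects' waveforms followed by t-tests on retained principal component scores, can be sketched with simulated frontal-plane hip angle curves. Group sizes, waveform shapes and the magnitude difference are invented purely to show the mechanics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 101)  # normalized stance phase

# Simulated hip adduction waveforms: controls with larger overall
# magnitude than the previous-ITBS group, plus per-subject variation.
shape = np.sin(np.pi * t)
controls = 10 * (1 + 0.1 * rng.normal(size=(20, 1))) * shape \
    + rng.normal(0, 0.5, (20, 101))
itbs = 8 * (1 + 0.1 * rng.normal(size=(20, 1))) * shape \
    + rng.normal(0, 0.5, (20, 101))
X = np.vstack([controls, itbs])

# PCA on the waveforms; retain the first three principal components.
Xc = X - X.mean(axis=0)
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T
explained = (S[:3] ** 2).sum() / (S ** 2).sum()

# Independent t-test on PC1 scores (overall waveform magnitude).
t_stat, p_val = stats.ttest_ind(scores[:20, 0], scores[20:, 0])
```

    PC1 here captures the overall magnitude of the waveform, so the group difference in PC1 scores mirrors the reduced hip adduction reported for the previous-ITBS runners.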

  13. Computerized detection of breast lesions in multi-centre and multi-instrument DCE-MR data using 3D principal component maps and template matching

    NASA Astrophysics Data System (ADS)

    Ertas, Gokhan; Doran, Simon; Leach, Martin O.

    2011-12-01

    In this study, we introduce a novel, robust and accurate computerized algorithm based on volumetric principal component maps and template matching that facilitates lesion detection on dynamic contrast-enhanced MR. The study dataset comprises 24 204 contrast-enhanced breast MR images corresponding to 4034 axial slices from 47 women in the UK multi-centre study of MRI screening for breast cancer and categorized as high risk. The scans analysed here were performed on six different models of scanner from three commercial vendors, sited in 13 clinics around the UK. 1952 slices from this dataset, containing 15 benign and 13 malignant lesions, were used for training. The remaining 2082 slices, with 14 benign and 12 malignant lesions, were used for test purposes. To prevent false positives being detected from other tissues and regions of the body, breast volumes are segmented from pre-contrast images using a fast semi-automated algorithm. Principal component analysis is applied to the centred intensity vectors formed from the dynamic contrast-enhanced T1-weighted images of the segmented breasts, followed by automatic thresholding to eliminate fatty tissues and slowly enhancing normal parenchyma and a convolution and filtering process to minimize artefacts from moderately enhanced normal parenchyma and blood vessels. Finally, suspicious lesions are identified through a volumetric sixfold neighbourhood connectivity search and calculation of two morphological features: volume and volumetric eccentricity, to exclude highly enhanced blood vessels, nipples and normal parenchyma and to localize lesions. This provides satisfactory lesion localization. For a detection sensitivity of 100%, the overall false-positive detection rate of the system is 1.02/lesion, 1.17/case and 0.08/slice, comparing favourably with previous studies. This approach may facilitate detection of lesions in multi-centre and multi-instrument dynamic contrast-enhanced breast MR data.

  14. Fiber comb filters based on UV-writing Bragg gratings in graded-index multimode fibers

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Lit, John; Gu, Xijia; Wei, Li

    2005-10-01

    We report a new kind of comb filter based on fiber Bragg gratings in graded-index multimode fibers. It produces two groups of spectra with a total of 36 reflection peaks that correspond to 18 principal modes and cross-coupled modes. The mode indices and wavelength spacings have been investigated theoretically and experimentally. This kind of comb filter may be used to construct multi-wavelength light sources for sensing, optical communications, and instrumentation.

  15. Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying

    2011-01-01

    Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of classified impervious surface images with the matched filtering method, were used to detect urbanization. An impervious surface image classified with the hybrid method was used to modify the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images with the matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean spectral images. PCA was not a good method for urban change detection in this study. Shadows and high spectral variation within the impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706

  16. Bimodal spectroscopic evaluation of ultra violet-irradiated mouse skin inflammatory and precancerous stages: instrumentation, spectral feature extraction/selection and classification (k-NN, LDA and SVM)

    NASA Astrophysics Data System (ADS)

    Díaz-Ayil, G.; Amouroux, M.; Blondel, W. C. P. M.; Bourg-Heckly, G.; Leroux, A.; Guillemin, F.; Granjon, Y.

    2009-07-01

    This paper deals with the development and application of in vivo spatially resolved bimodal spectroscopy (AutoFluorescence AF and Diffuse Reflectance DR) to discriminate various stages of skin precancer in a preclinical model (UV-irradiated mouse): Compensatory Hyperplasia CH, Atypical Hyperplasia AH and Dysplasia D. A programmable instrumentation was developed for acquiring AF emission spectra using 7 excitation wavelengths (360, 368, 390, 400, 410, 420 and 430 nm) and DR spectra in the 390-720 nm wavelength range. After various steps of intensity spectra preprocessing (filtering, spectral correction and intensity normalization), several sets of spectral characteristics were extracted and selected based on their discrimination power, statistically tested for every pair-wise comparison of histological classes. Data reduction with Principal Components Analysis (PCA) was performed and 3 classification methods were implemented (k-NN, LDA and SVM) in order to compare the diagnostic performance of each method. Diagnostic performance was studied and assessed in terms of sensitivity (Se) and specificity (Sp) as a function of the selected features, of the combinations of 3 different inter-fiber distances and of the number of principal components, such that Se and Sp ≈ 100% when discriminating CH vs. others; Sp ≈ 100% and Se > 95% when discriminating Healthy vs. AH or D; and Sp ≈ 74% and Se ≈ 63% for AH vs. D.
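    The classification pipeline described above (PCA for data reduction, then k-NN, LDA and SVM compared on the reduced features) can be sketched on synthetic two-class spectra; the feature construction and every parameter value below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Invented stand-in for preprocessed spectra: two histological classes whose
# mean spectra differ over a band of 20 wavelengths.
n, bands = 120, 200
X = rng.normal(0, 1, (n, bands))
y = np.repeat([0, 1], n // 2)
X[y == 1, 40:60] += 2.0

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0,
                                      stratify=y)

# PCA for data reduction, then the three classifiers compared in the paper.
pca = PCA(n_components=5).fit(Xtr)
Ztr, Zte = pca.transform(Xtr), pca.transform(Xte)

accuracies = {}
for name, clf in [("k-NN", KNeighborsClassifier(5)),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="linear"))]:
    clf.fit(Ztr, ytr)
    accuracies[name] = clf.score(Zte, yte)
```

    Fitting PCA on the training split only, and transforming the test split with it, mirrors how such pipelines avoid information leakage when Se/Sp are reported.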

  17. Identification of regional activation by factorization of high-density surface EMG signals: A comparison of Principal Component Analysis and Non-negative Matrix factorization.

    PubMed

    Gallina, Alessio; Garland, S Jayne; Wakeling, James M

    2018-05-22

    In this study, we investigated whether principal component analysis (PCA) and non-negative matrix factorization (NMF) perform similarly for the identification of regional activation within the human vastus medialis (VM). EMG signals from 64 locations over the VM were collected from twelve participants while performing a low-force isometric knee extension. The envelope of the EMG signal of each channel was calculated by low-pass filtering (8 Hz) the monopolar EMG signal after rectification. The data matrix was factorized using PCA and NMF, and up to 5 factors were considered for each algorithm. Associations between the explained variance, spatial weights and temporal scores of the two algorithms were assessed using Pearson correlation. For both PCA and NMF, a single factor explained approximately 70% of the variance of the signal, while two and three factors explained just over 85% and 90%, respectively. The variance explained by PCA and NMF was highly comparable (R > 0.99). Spatial weights and temporal scores extracted with non-negative reconstruction of PCA and NMF were highly associated (all p < 0.001, mean R > 0.97). Regional VM activation can be identified using high-density surface EMG and factorization algorithms. Regional activation explains up to 30% of the variance of the signal, as identified through both PCA and NMF. Copyright © 2018 Elsevier Ltd. All rights reserved.
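    The processing chain described above (rectify, low-pass at 8 Hz to obtain envelopes, then factorize the time-by-channel matrix with both PCA and NMF) can be sketched on a synthetic 64-channel grid; the two-region activation pattern below is an invented stand-in for real EMG, not the study's data:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(2)
fs = 1000  # sampling rate, Hz (assumed)

# Invented stand-in for 64-channel monopolar EMG: two spatial regions of the
# grid with alternating activation profiles, modulating white noise.
tt = np.arange(0, 5, 1 / fs)
act1 = np.clip(np.sin(2 * np.pi * 0.2 * tt), 0, None)    # active first half
act2 = np.clip(-np.sin(2 * np.pi * 0.2 * tt), 0, None)   # active second half
w1 = np.concatenate([np.ones(32), np.zeros(32)])
w2 = np.concatenate([np.zeros(32), np.ones(32)])
emg = (np.outer(act1, w1) + np.outer(act2, w2)) * rng.normal(0, 1, (tt.size, 64))

# Envelopes as described in the abstract: rectification, then 8 Hz low-pass.
b, a = butter(4, 8 / (fs / 2), btype="low")
env = filtfilt(b, a, np.abs(emg), axis=0)

# Factorize the time x channel envelope matrix with both algorithms.
pca_var = PCA(n_components=2).fit(env).explained_variance_ratio_.sum()

V = np.clip(env, 0, None)          # NMF requires non-negative input
nmf = NMF(n_components=2, init="nndsvda", max_iter=400)
W = nmf.fit_transform(V)
nmf_recon_err = np.linalg.norm(V - W @ nmf.components_) / np.linalg.norm(V)
```

    With two genuine regions, two factors capture most of the envelope variance under either decomposition, which is the kind of PCA/NMF agreement the paper reports.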

  18. Statistical process control of cocrystallization processes: A comparison between OPLS and PLS.

    PubMed

    Silva, Ana F T; Sarraguça, Mafalda Cruz; Ribeiro, Paulo R; Santos, Adenilson O; De Beer, Thomas; Lopes, João Almeida

    2017-03-30

    Orthogonal partial least squares regression (OPLS) is being increasingly adopted as an alternative to partial least squares (PLS) regression due to the better generalization that can be achieved. Particularly in multivariate batch statistical process control (BSPC), the use of OPLS for estimating nominal trajectories is advantageous. In OPLS, the nominal process trajectories are expected to be captured in a single predictive principal component, while uncorrelated variations are filtered out to orthogonal principal components. In theory, OPLS will yield a better estimation of the Hotelling's T² statistic and corresponding control limits, thus lowering the number of false positives and false negatives when assessing process disturbances. Although the advantages of OPLS have been demonstrated in the context of regression, its use in BSPC has seldom been reported. This study proposes an OPLS-based approach for BSPC of a cocrystallization process between hydrochlorothiazide and p-aminobenzoic acid monitored on-line with near infrared spectroscopy, and compares the fault detection performance with the same approach based on PLS. A series of cocrystallization batches with imposed disturbances were used to test the ability of the OPLS- and PLS-based BSPC methods to detect abnormal situations. Results demonstrated that OPLS was generally superior in terms of sensitivity and specificity in most situations. In some abnormal batches, the imposed disturbances were detected only with OPLS. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Virtual directions in paleomagnetism: A global and rapid approach to evaluate the NRM components.

    NASA Astrophysics Data System (ADS)

    Ramón, Maria J.; Pueyo, Emilio L.; Oliva-Urcia, Belén; Larrasoaña, Juan C.

    2017-02-01

    We introduce a method and software to process demagnetization data for a rapid and integrative estimation of characteristic remanent magnetization (ChRM) components. The virtual directions (VIDI) of a paleomagnetic site are “all” possible directions that can be calculated from a given demagnetization routine of “n” steps (with m being the number of specimens in the site). If the ChRM can be defined for a site, it will be represented in the VIDI set. Directions can be calculated for successive steps using principal component analysis, both anchored to the origin (resultant virtual directions RVD; m·(n²+n)/2) and not anchored (difference virtual directions DVD; m·(n²−n)/2). The number of directions per specimen (of order n²) is very large and will enhance all ChRM components, with noisy regions where two components were fitted together (mixing their unblocking intervals). In the same way, resultant and difference virtual circles (RVC, DVC) are calculated. Virtual directions and circles are a global and objective approach to unravel different natural remanent magnetization (NRM) components for a paleomagnetic site without any assumption. To better constrain the stable components, some filters can be applied, such as establishing an upper boundary on the MAD, removing samples with anomalous intensities, or requiring a minimum number of demagnetization steps (objective filters), or selecting a given unblocking interval (subjective, but based on expertise). On the other hand, the VPD program also allows the application of standard approaches (classic PCA fitting of directions and circles) and other ancillary methods (stacking routine, linearity spectrum analysis), giving an objective, global and robust idea of the demagnetization structure with minimal assumptions.
    Application of the VIDI method to natural cases (outcrops in the Pyrenees and u-channel data from a Roman dam infill in northern Spain) and their comparison to other approaches (classic end-point, demagnetization circle analysis, stacking routine and linearity spectrum analysis) allows validation of this technique. The VIDI is a global approach and it is especially useful for large data sets and rapid estimation of the NRM components.

  20. Contributions of depth filter components to protein adsorption in bioprocessing.

    PubMed

    Khanal, Ohnmar; Singh, Nripen; Traylor, Steven J; Xu, Xuankuo; Ghose, Sanchayita; Li, Zheng J; Lenhoff, Abraham M

    2018-04-16

    Depth filtration is widely used in downstream bioprocessing to remove particulate contaminants via depth straining and is therefore applied to harvest clarification and other processing steps. However, depth filtration also removes proteins via adsorption, which can contribute variously to impurity clearance and to reduction in product yield. The adsorption may occur on the different components of the depth filter, that is, the filter aid, binder, and cellulose filter. We measured adsorption of several model proteins and therapeutic proteins onto filter aids, cellulose, and commercial depth filters at pH 5-8 and ionic strengths <50 mM, and correlated the adsorption data to bulk measured properties such as surface area, morphology, surface charge density, and composition. We also explored the role of each depth filter component in the adsorption of proteins with different net charges, using confocal microscopy. Our findings show that a complete depth filter's maximum adsorptive capacity for proteins can be estimated by its protein monolayer coverage values, which are of order mg/m², depending on the protein size. Furthermore, the extent of adsorption of different proteins appears to depend on the nature of the resin binder and its extent of coating over the depth filter surface, particularly in masking the cation-exchanger-like capacity of the siliceous filter aids. In addition to guiding improved depth filter selection, these findings can inform a more intentional selection of components and design of depth filters for particular impurity removal targets. © 2018 Wiley Periodicals, Inc.

  1. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  2. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    USDA-ARS?s Scientific Manuscript database

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

  3. Similarities between principal components of protein dynamics and random diffusion

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2000-12-01

    Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
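    The central observation above, that the principal components of pure high-dimensional diffusion are nearly cosines, can be reproduced in a few lines; the step count and dimensionality below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# High-dimensional random diffusion: independent random walks in 100
# coordinates; there is no genuine collective motion in this "trajectory".
n_steps, n_dim = 2000, 100
walk = np.cumsum(rng.normal(size=(n_steps, n_dim)), axis=0)

# Standard PCA: eigendecomposition of the coordinate covariance matrix.
centered = walk - walk.mean(axis=0)
cov = centered.T @ centered / n_steps
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues ascending
pc1 = centered @ eigvecs[:, -1]                 # projection on largest PC

# Hess's observation: this projection is close to a half-period cosine.
t = (np.arange(n_steps) + 0.5) / n_steps
r = np.corrcoef(pc1, np.cos(np.pi * t))[0, 1]
```

    The strong correlation between the first principal component and a half-period cosine (up to sign) is exactly why cosine-like components in a protein simulation warn of unconverged sampling rather than proving collective motion.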

  4. Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images

    PubMed Central

    Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali

    2015-01-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions, showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA-dependent RNA polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077

  5. 2D/3D facial feature extraction

    NASA Astrophysics Data System (ADS)

    Çinar Akakin, Hatice; Ali Salah, Albert; Akarun, Lale; Sankur, Bülent

    2006-02-01

    We propose and compare three different automatic landmarking methods for near-frontal faces. The face information is provided as 480x640 gray-level images in addition to the corresponding 3D scene depth information. All three methods follow a coarse-to-fine strategy and use the 3D information in a supporting role. The first method employs a combination of principal component analysis (PCA) and independent component analysis (ICA) features to analyze the Gabor feature set. The second method uses a subset of DCT coefficients for template-based matching. These two methods employ SVM classifiers with polynomial kernel functions. The third method uses a mixture of factor analyzers to learn Gabor filter outputs. We contrast the localization performance separately with 2D texture and 3D depth information. Although the 3D depth information per se does not perform as well as texture images in landmark localization, it still plays a beneficial role in eliminating the background and reducing false alarms.

  6. Big Data in Reciprocal Space: Sliding Fast Fourier Transforms for Determining Periodicity

    DOE PAGES

    Vasudevan, Rama K.; Belianinov, Alex; Gianfrancesco, Anthony G.; ...

    2015-03-03

    Significant advances in atomically resolved imaging of crystals and surfaces have occurred in the last decade allowing unprecedented insight into local crystal structures and periodicity. Yet, the analysis of the long-range periodicity from the local imaging data, critical to correlation of functional properties and chemistry to the local crystallography, remains a challenge. Here, we introduce a Sliding Fast Fourier Transform (FFT) filter to analyze atomically resolved images of in-situ grown La5/8Ca3/8MnO3 films. We demonstrate the ability of the sliding FFT algorithm to differentiate two sub-lattices, resulting from a mixed-terminated surface. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) of the Sliding FFT dataset reveal the distinct changes in crystallography, step edges and boundaries between the multiple sub-lattices. The method is universal for images with any periodicity, and is especially amenable to atomically resolved probe and electron-microscopy data for rapid identification of the sub-lattices present.
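    A sliding FFT of the kind described can be sketched as follows; the window size, step, and Hanning apodization are illustrative choices, not the paper's parameters:

```python
import numpy as np

def sliding_fft(image, window=32, step=8):
    """Sliding FFT sketch: move a square window across a 2D image and store
    the magnitude spectrum of each patch. The resulting (rows, cols, w, w)
    stack can then be unfolded and fed to PCA/ICA, as in the text above.
    """
    h, w_img = image.shape
    rows = (h - window) // step + 1
    cols = (w_img - window) // step + 1
    out = np.empty((rows, cols, window, window))
    win = np.hanning(window)[:, None] * np.hanning(window)[None, :]
    for i in range(rows):
        for j in range(cols):
            patch = image[i * step:i * step + window,
                          j * step:j * step + window]
            out[i, j] = np.abs(np.fft.fftshift(np.fft.fft2(patch * win)))
    return out

# Toy lattice image with a single spatial period of 8 pixels in x and y.
y, x = np.mgrid[0:128, 0:128]
img = np.cos(2 * np.pi * x / 8) + np.cos(2 * np.pi * y / 8)
stack = sliding_fft(img)
flat = stack.reshape(stack.shape[0] * stack.shape[1], -1)  # rows = positions
```

    Unfolding the stack so that each row is one window's spectrum is what makes the subsequent PCA/ICA of local periodicity possible: sub-lattices with different periods land in different components.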

  7. Big Data in Reciprocal Space: Sliding Fast Fourier Transforms for Determining Periodicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasudevan, Rama K.; Belianinov, Alex; Gianfrancesco, Anthony G.

    Significant advances in atomically resolved imaging of crystals and surfaces have occurred in the last decade allowing unprecedented insight into local crystal structures and periodicity. Yet, the analysis of the long-range periodicity from the local imaging data, critical to correlation of functional properties and chemistry to the local crystallography, remains a challenge. Here, we introduce a Sliding Fast Fourier Transform (FFT) filter to analyze atomically resolved images of in-situ grown La5/8Ca3/8MnO3 films. We demonstrate the ability of the sliding FFT algorithm to differentiate two sub-lattices, resulting from a mixed-terminated surface. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) of the Sliding FFT dataset reveal the distinct changes in crystallography, step edges and boundaries between the multiple sub-lattices. The method is universal for images with any periodicity, and is especially amenable to atomically resolved probe and electron-microscopy data for rapid identification of the sub-lattices present.

  8. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).

  9. A two-step ultra-high-performance liquid chromatography-quadrupole/time of flight mass spectrometry with mass defect filtering method for rapid identification of analogues from known components of different chemical structure types in Fructus Gardeniae-Fructus Forsythiae herb pair extract and in rat's blood.

    PubMed

    Zhou, Wei; Shan, Jinjun; Meng, Minxin

    2018-08-17

    Fructus Gardeniae-Fructus Forsythiae herb pair is an herbal formula used extensively to treat inflammation and fever, but few systematic identification studies of the bioactive components have been reported. Herein, the unknown analogues in the first-step screening were rapidly identified from representative compounds of different structure types (geniposide as iridoid type, crocetin as crocetin type, jasminoside B as monocyclic monoterpene type, oleanolic acid as saponin type, 3-caffeoylquinic acid as organic acid type, forsythoside A as phenylethanoid type, phillyrin as lignan type and quercetin 3-rutinoside as flavonoid type) by UPLC-Q-Tof/MS combined with mass defect filtering (MDF), and further confirmed with reference standards and the published literature. Similarly, in the second step, other unknown components were rapidly discovered from the compounds identified in the first step by MDF. Using the two-step screening method, a total of 58 components were characterized in Fructus Gardeniae-Fructus Forsythiae (FG-FF) decoction. In rat blood, 36 compounds from the extract and 16 metabolites were unambiguously or tentatively identified. In addition, we found that the principal metabolites were glucuronide conjugates, with the glucuronide conjugates of caffeic acid, quercetin and kaempferol confirmed as caffeic acid 3-glucuronide, quercetin 3-glucuronide and kaempferol 3-glucuronide by reference standards, respectively. Most of them bound more strongly to human serum albumin than their respective prototypes, as predicted by molecular docking and simulation, indicating that they have lower blood clearance in vivo and possibly contribute more to pharmacological effects. This study developed a novel two-step screening method for comprehensively screening components in herbal medicine by UPLC-Q-Tof/MS with MDF. Copyright © 2018 Elsevier B.V. All rights reserved.
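    Mass defect filtering itself reduces to comparing the fractional parts of candidate masses against that of a template compound. A minimal sketch, with illustrative tolerance windows (not the paper's settings) and geniposide as the iridoid-type template:

```python
def mass_defect(m):
    """Fractional part of a monoisotopic mass, i.e. the mass defect."""
    return m - int(m)

def mdf(candidates, template_mass, defect_tol=0.030, mass_tol=350.0):
    """Minimal mass-defect filter: keep candidates whose mass defect lies
    within defect_tol of the template's defect and whose mass is within
    mass_tol of the template. Both window widths are illustrative only."""
    td = mass_defect(template_mass)
    return [m for m in candidates
            if abs(mass_defect(m) - td) <= defect_tol
            and abs(m - template_mass) <= mass_tol]

# Geniposide (C17H24O10, monoisotopic mass 388.1369) as the iridoid template.
candidates = [388.1369, 404.1319, 550.1898, 390.9000, 800.2500]
hits = mdf(candidates, 388.1369)   # keeps only the iridoid-like masses
```

    Structural analogues (e.g. a hydroxylated relative at +15.995 Da) shift the nominal mass much more than the defect, which is why a narrow defect window recovers a compound class from thousands of candidate ions.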

  10. Detection of circuit-board components with an adaptive multiclass correlation filter

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Kober, Vitaly

    2008-08-01

    A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation based techniques in terms of performance criteria for recognition and classification of objects.

  11. An Introductory Application of Principal Components to Cricket Data

    ERIC Educational Resources Information Center

    Manage, Ananda B. W.; Scariano, Stephen M.

    2013-01-01

    Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…

  12. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…

  13. Identifying apple surface defects using principal components analysis and artificial neural networks

    USDA-ARS?s Scientific Manuscript database

    Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...

  14. Relative importance of habitat filtering and limiting similarity on species assemblages of alpine and subalpine plant communities.

    PubMed

    Takahashi, Koichi; Tanaka, Saeka

    2016-11-01

    This study examined how habitat filtering and limiting similarity affect species assemblages of alpine and subalpine plant communities along a slope gradient on Mt. Norikura in central Japan. Plant traits (plant height, individual leaf area, specific leaf area (SLA), leaf linearity, leaf nitrogen and chlorophyll concentrations) and abiotic environmental factors (elevation, slope inclination, ground surface texture, soil water, soil pH, soil nutrient concentrations of NH₄-N and NO₃-N) were examined. The metrics of variance, range, kurtosis and the standard deviation of neighbor distance divided by the range of traits present (SDNDr) were calculated for each plant trait to measure trait distribution patterns. Limiting similarity was detected only for chlorophyll concentration. By contrast, habitat filtering was detected for individual leaf area, SLA, leaf linearity and chlorophyll concentration. Abiotic environmental factors were summarized by principal component analysis (PCA). The first PCA axis correlated positively with elevation and soil pH, and negatively with sand cover, soil water, and NH₄-N and NO₃-N concentrations. High values of the first PCA axis represent the wind-exposed upper slope with lower soil moisture and nutrient availability. Plant traits changed along the first PCA axis: leaf area, SLA and chlorophyll concentration decreased, and leaf linearity increased, with the first PCA axis. This study showed that the species assemblage of alpine and subalpine plants was determined mainly by habitat filtering, indicating that abiotic environmental factors are more important for species assemblage than interspecific competition. Therefore, only species adapted to the abiotic environment can occur in these habitats.
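    The four trait-distribution metrics named above can be computed directly; the SDNDr formula below follows the description in the text (standard deviation of neighbor distances divided by the trait range), and the trait values are invented examples:

```python
import numpy as np
from scipy.stats import kurtosis

def trait_metrics(trait_values):
    """The four community-assembly metrics named in the text, computed for
    the trait values of co-occurring species. A narrow range/variance points
    to habitat filtering; even spacing (low SDNDr) to limiting similarity.
    """
    x = np.sort(np.asarray(trait_values, dtype=float))
    trait_range = x[-1] - x[0]
    neighbor_dist = np.diff(x)                 # gaps between trait neighbors
    return {"variance": x.var(),
            "range": trait_range,
            "kurtosis": kurtosis(x),           # excess (Fisher) kurtosis
            "SDNDr": neighbor_dist.std() / trait_range}

# Evenly spaced traits (limiting-similarity-like) vs clustered traits.
even = trait_metrics([1, 2, 3, 4, 5])
clustered = trait_metrics([1.0, 1.1, 1.2, 4.8, 5.0])
```

    Perfectly even spacing gives SDNDr = 0, while clustered traits give a high SDNDr; in practice these observed values are compared against null-model randomizations rather than read off directly.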

  15. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    PubMed

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In filtering ERP recordings with an OF, however, the ERP's topography should not be changed by the filter, and the output should still be describable by the linear transformation model. Moreover, an OF designed for a specific ERP source or component may remove noise, as well as reduce the overlap of sources and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone did, and that the OF removed some non-targeted sources and made the underdetermined model of EEG recordings approach the determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Binaural noise reduction via cue-preserving MMSE filter and adaptive-blocking-based noise PSD estimation

    NASA Astrophysics Data System (ADS)

    Azarpour, Masoumeh; Enzner, Gerald

    2017-12-01

    Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. 
The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
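The common-gain idea in the record above — one spectral gain applied identically to both ear channels so the interaural cues survive — can be sketched in a few lines. The synthetic signals, the assumed-known noise level, and the Wiener-style gain rule below are all illustrative assumptions, not the paper's actual MMSE filter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
# Hypothetical binaural capture: common speech, independent noise per ear,
# with an interaural level difference acting as the spatial cue
speech = np.sin(2 * np.pi * 50 * np.arange(n) / n)
left = speech + 0.3 * rng.standard_normal(n)
right = 0.8 * speech + 0.3 * rng.standard_normal(n)

L, R = np.fft.rfft(left), np.fft.rfft(right)
noise_psd = 0.3 ** 2 * n                 # assumed known/estimated noise power per bin
mix_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
snr = np.maximum(mix_psd - noise_psd, 1e-12) / noise_psd
gain = snr / (1.0 + snr)                 # one common gain for both ears

left_hat = np.fft.irfft(gain * L, n)
right_hat = np.fft.irfft(gain * R, n)
```

Because the same real-valued gain multiplies both channels, interaural level and phase differences are untouched bin by bin, which is the cue-preservation property the abstract attributes to common-gain filters.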

  17. Finding Planets in K2: A New Method of Cleaning the Data

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Mullally, Fergal; Thompson, Susan E.

    2017-01-01

We present a new method of removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes each light curve into its principal components (eigenvectors), each with an associated eigenvalue whose magnitude indicates how much influence the basis vector has on the shape of the light curve. This method assumes that the most influential basis vectors will correspond to the unwanted systematic variations in the light curve produced by K2’s constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components, which generally correspond to the flux variations that result from the motion of the star in the field of view. Our primary method of choosing how many principal components to remove estimates the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which yields the combined photometric precision value (SG-CDPP value) used in classic Kepler. We calculate this value after correcting the raw light curve for each element in a list of cumulative sums of principal components, so that we have as many noise estimates as there are principal components. We then take the derivative of the list of SG-CDPP values and select the number of principal components corresponding to the point at which the derivative effectively goes to zero. This is the optimal number of principal components to exclude from the refitting of the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2’s light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
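The fit-and-subtract step described in the record can be illustrated with synthetic data. Everything below — the shared motion trend, the pixel weights, the noise levels — is invented for illustration; only the component-removal logic mirrors the abstract (the SG-CDPP-based choice of how many components to remove is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_pix = 500, 20
# Invented systematic trend shared by all pixels (standing in for K2's motion)
trend = np.sin(np.linspace(0.0, 20.0, n_time))
pixels = (np.outer(trend, rng.uniform(0.5, 1.5, n_pix))
          + 0.05 * rng.standard_normal((n_time, n_pix)))

# Principal components of the pixel time series
X = pixels - pixels.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
basis = U[:, :1]                   # strongest component tracks the motion trend

# Fit and subtract the strongest component from the raw light curve
raw = 1.2 * trend + 0.01 * rng.standard_normal(n_time)
coef, *_ = np.linalg.lstsq(basis, raw - raw.mean(), rcond=None)
corrected = raw - basis @ coef
```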

  18. Shielded multi-stage EMI noise filter

    DOEpatents

    Kisner, Roger Allen; Fugate, David Lee

    2016-11-08

    Electromagnetic interference (EMI) noise filter embodiments and methods for filtering are provided herein. EMI noise filters include multiple signal exclusion enclosures. The multiple signal exclusion enclosures contain filter circuit stages. The signal exclusion enclosures can attenuate noise generated external to the enclosures and/or isolate noise currents generated by the corresponding filter circuits within the enclosures. In certain embodiments, an output of one filter circuit stage is connected to an input of the next filter circuit stage. The multiple signal exclusion enclosures can be chambers formed using conductive partitions to divide an outer signal exclusion enclosure. EMI noise filters can also include mechanisms to maintain the components of the filter circuit stages at a consistent temperature. For example, a metal base plate can distribute heat among filter components, and an insulating material can be positioned inside signal exclusion enclosures.

  19. Directly reconstructing principal components of heterogeneous particles from cryo-EM images.

    PubMed

    Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali

    2015-08-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule... management plan. (c) Operator training and qualification. (d) Emission limitations and operating limits. (e...

  1. 40 CFR 60.2570 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... Construction On or Before November 30, 1999 Use of Model Rule § 60.2570 What are the principal components of... (k) of this section. (a) Increments of progress toward compliance. (b) Waste management plan. (c...

  2. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
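Monte Carlo Filtering itself is easy to sketch: sample the inputs, run the model, split the runs by whether the output lands in the region of interest, and compare the per-input distributions between the two groups. The toy model and the KS-style statistic below are illustrative assumptions; the PCA-augmented variable set proposed in the abstract is not shown:

```python
import numpy as np

rng = np.random.default_rng(13)
# Toy "system": 5000 Monte Carlo runs over three inputs; only x0 really matters
x = rng.uniform(-1.0, 1.0, (5000, 3))
y = x[:, 0] + 0.1 * x[:, 1] + 0.01 * rng.standard_normal(5000)
behavioral = y > 0.5                       # the output subset of interest

def ks_stat(a, b):
    # Two-sample Kolmogorov-Smirnov statistic (maximum CDF gap)
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(Fa - Fb))

# Inputs whose distributions split strongly between the groups drive the behavior
stats = [ks_stat(x[behavioral, i], x[~behavioral, i]) for i in range(3)]
```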

  3. Free energy landscape of a biomolecule in dihedral principal component space: sampling convergence and correspondence between structures and minima.

    PubMed

    Maisuradze, Gia G; Leitner, David M

    2007-05-15

    Dihedral principal component analysis (dPCA) has recently been developed and shown to display complex features of the free energy landscape of a biomolecule that may be absent in the free energy landscape plotted in principal component space due to mixing of internal and overall rotational motion that can occur in principal component analysis (PCA) [Mu et al., Proteins: Struct Funct Bioinfo 2005;58:45-52]. Another difficulty in the implementation of PCA is sampling convergence, which we address here for both dPCA and PCA using a tetrapeptide as an example. We find that for both methods the sampling convergence can be reached over a similar time. Minima in the free energy landscape in the space of the two largest dihedral principal components often correspond to unique structures, though we also find some distinct minima to correspond to the same structure. 2007 Wiley-Liss, Inc.
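The core of dPCA — running PCA on sines and cosines of the dihedral angles rather than on the angles themselves, to avoid periodicity artifacts — can be sketched as follows. The two-state synthetic trajectory is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(12)
# Invented dihedral trajectory: two metastable states for a (phi, psi) pair
n = 2000
states = rng.integers(0, 2, n)
phi = np.where(states, -1.2, 1.0) + 0.2 * rng.standard_normal(n)
psi = np.where(states, 2.0, -2.5) + 0.2 * rng.standard_normal(n)

# dPCA: PCA on (cos, sin) of each dihedral instead of the raw angles
X = np.column_stack([np.cos(phi), np.sin(phi), np.cos(psi), np.sin(psi)])
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                   # first dihedral principal component
```

Here the first dihedral PC separates the two metastable states, so minima in the (PC1, PC2) landscape map onto distinct structures, as in the abstract.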

  4. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million

    PubMed Central

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    2015-01-01

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801

  5. Principal Workload: Components, Determinants and Coping Strategies in an Era of Standardization and Accountability

    ERIC Educational Resources Information Center

    Oplatka, Izhar

    2017-01-01

    Purpose: In order to fill the gap in theoretical and empirical knowledge about the characteristics of principal workload, the purpose of this paper is to explore the components of principal workload as well as its determinants and the coping strategies commonly used by principals to face this personal state. Design/methodology/approach:…

  6. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.

  7. Analysis of indoor air pollutants checklist using environmetric technique for health risk assessment of sick building complaint in nonindustrial workplace

    PubMed Central

    Syazwan, AI; Rafee, B Mohd; Juahir, Hafizan; Azman, AZF; Nizar, AM; Izwyn, Z; Syahidatussyakirah, K; Muhaimin, AA; Yunos, MA Syafiq; Anita, AR; Hanafiah, J Muhamad; Shaharuddin, MS; Ibthisham, A Mohd; Hasmadi, I Mohd; Azhar, MN Mohamad; Azizan, HS; Zulfadhli, I; Othman, J; Rozalini, M; Kamarul, FT

    2012-01-01

Purpose To analyze and characterize a multidisciplinary, integrated indoor air quality checklist for evaluating the health risk of building occupants in a nonindustrial workplace setting. Design A cross-sectional study based on a participatory occupational health program conducted by the National Institute of Occupational Safety and Health (Malaysia) and Universiti Putra Malaysia. Method A modified version of the indoor environmental checklist published by the Department of Occupational Health and Safety, based on the literature and discussion with occupational health and safety professionals, was used in the evaluation process. Summated scores were assigned according to the cluster analysis and principal component analysis in the characterization of risk. Environmetric techniques were used to classify the risk of variables in the checklist. The possible sources of item pollutants were also evaluated with a semiquantitative approach. Result Hierarchical agglomerative cluster analysis grouped the factorial components into three clusters (high complaint, moderate-high complaint, moderate complaint), which were further analyzed by discriminant analysis. From this, 15 major variables that influence indoor air quality were determined. Principal component analysis of each cluster revealed that the main factors influencing the high complaint group were fungal-related problems, chemical indoor dispersion, detergent, renovation, thermal comfort, and location of fresh air intake. The moderate-high complaint group showed significantly high loadings on ventilation, air filters, and smoking-related activities. The moderate complaint group showed high loadings on dampness, odor, and thermal comfort. Conclusion This semiquantitative assessment, which graded risk from low to high based on the intensity of the problem, shows promising and reliable results. 
It should be used as an important tool in the preliminary assessment of indoor air quality and as a categorizing method for further IAQ investigations and complaints procedures. PMID:23055779

  8. Analysis of indoor air pollutants checklist using environmetric technique for health risk assessment of sick building complaint in nonindustrial workplace.

    PubMed

    Syazwan, Ai; Rafee, B Mohd; Juahir, Hafizan; Azman, Azf; Nizar, Am; Izwyn, Z; Syahidatussyakirah, K; Muhaimin, Aa; Yunos, Ma Syafiq; Anita, Ar; Hanafiah, J Muhamad; Shaharuddin, Ms; Ibthisham, A Mohd; Hasmadi, I Mohd; Azhar, Mn Mohamad; Azizan, Hs; Zulfadhli, I; Othman, J; Rozalini, M; Kamarul, Ft

    2012-01-01

To analyze and characterize a multidisciplinary, integrated indoor air quality checklist for evaluating the health risk of building occupants in a nonindustrial workplace setting. A cross-sectional study based on a participatory occupational health program conducted by the National Institute of Occupational Safety and Health (Malaysia) and Universiti Putra Malaysia. A modified version of the indoor environmental checklist published by the Department of Occupational Health and Safety, based on the literature and discussion with occupational health and safety professionals, was used in the evaluation process. Summated scores were assigned according to the cluster analysis and principal component analysis in the characterization of risk. Environmetric techniques were used to classify the risk of variables in the checklist. The possible sources of item pollutants were also evaluated with a semiquantitative approach. Hierarchical agglomerative cluster analysis grouped the factorial components into three clusters (high complaint, moderate-high complaint, moderate complaint), which were further analyzed by discriminant analysis. From this, 15 major variables that influence indoor air quality were determined. Principal component analysis of each cluster revealed that the main factors influencing the high complaint group were fungal-related problems, chemical indoor dispersion, detergent, renovation, thermal comfort, and location of fresh air intake. The moderate-high complaint group showed significantly high loadings on ventilation, air filters, and smoking-related activities. The moderate complaint group showed high loadings on dampness, odor, and thermal comfort. This semiquantitative assessment, which graded risk from low to high based on the intensity of the problem, shows promising and reliable results. 
It should be used as an important tool in the preliminary assessment of indoor air quality and as a categorizing method for further IAQ investigations and complaints procedures.

  9. Latent component-based gear tooth fault detection filter using advanced parametric modeling

    NASA Astrophysics Data System (ADS)

    Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.

    2009-10-01

In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most suitable latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting gearbox faults. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of simulated gear faults. In addition, the method is used for quality inspection of the production Nissan-Junior vehicle gearbox by gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is taken as the filter output and the Yule-Walker and Kalman filters are used to estimate the parameters. The results confirm the high performance of the new proposed fault detection method.

  10. Tomato seeds maturity detection system based on chlorophyll fluorescence

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Meng, Zhijun

    2016-10-01

Chlorophyll fluorescence intensity can be used as an indicator of seed maturity and quality. The chlorophyll fluorescence intensity of the seed coat reflects the chlorophyll content of the seed, and hence its maturity and quality. This research developed a tomato seed maturity detection system based on chlorophyll fluorescence spectrum technology; the system included an excitation light source unit, a fluorescence signal acquisition unit and a data processing unit. The excitation light source unit consisted of two high-power LEDs, two radiators and two constant-current power supplies, and was designed to excite chlorophyll fluorescence in tomato seeds. The fluorescence signal acquisition unit was made up of a fluorescence spectrometer, an optical fiber, an optical fiber scaffold and a narrowband filter. The data processing unit mainly comprised a computer. Tomato fruits at the green ripe, discoloration, firm ripe and full ripe stages were harvested, and their seeds were collected directly. The developed system was used to collect fluorescence spectra of tomato seeds of different maturities. Principal component analysis (PCA) was used to reduce the dimension of the spectral data and extract principal components, and PCA was combined with linear discriminant analysis (LDA) to establish a discriminant model of tomato seed maturity; the discriminant accuracy was greater than 90%. The results show that chlorophyll fluorescence spectrum technology is feasible for seed maturity detection, and that the developed system has high detection accuracy.
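The PCA-plus-discriminant pipeline can be sketched with synthetic spectra. The Gaussian "fluorescence" peaks and a nearest-centroid classifier standing in for LDA are simplifying assumptions, not the authors' data or model:

```python
import numpy as np

rng = np.random.default_rng(4)
# Invented fluorescence spectra: 4 maturity classes x 20 seeds, 60 bands each
bands = np.linspace(650.0, 800.0, 60)
peaks = [670.0, 685.0, 700.0, 715.0]      # hypothetical class-dependent peaks

def spectrum(peak):
    return np.exp(-0.5 * ((bands - peak) / 15.0) ** 2) + 0.05 * rng.standard_normal(60)

X = np.array([spectrum(p) for p in peaks for _ in range(20)])
y = np.repeat(np.arange(4), 20)

# PCA for dimension reduction
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:3].T                        # scores on the first three PCs

# Nearest class centroid in PC space (a simplified stand-in for LDA)
centroids = np.array([pcs[y == k].mean(axis=0) for k in range(4)])
pred = np.argmin(((pcs[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```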

  11. HybridICE® filter: ice separation in freeze desalination of mine waste waters.

    PubMed

    Adeniyi, A; Maree, J P; Mbaya, R K K; Popoola, A P I; Mtombeni, T; Zvinowanda, C M

    2014-01-01

    Freeze desalination is an alternative method for the treatment of mine waste waters. HybridICE(®) technology is a freeze desalination process which generates ice slurry in surface scraper heat exchangers that use R404a as the primary refrigerant. Ice separation from the slurry takes place in the HybridICE filter, a cylindrical unit with a centrally mounted filter element. Principally, the filter module achieves separation of the ice through buoyancy force in a continuous process. The HybridICE filter is a new and economical means of separating ice from the slurry and requires no washing of ice with water. The performance of the filter at a flow-rate of 25 L/min was evaluated over time and with varied evaporating temperature of the refrigerant. Behaviours of the ice fraction and residence time were also investigated. The objective was to find ways to improve the performance of the filter. Results showed that filter performance can be improved by controlling the refrigerant evaporating temperature and eliminating overflow.

  12. Kalman Filter for Spinning Spacecraft Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Sedlak, Joseph E.

    2008-01-01

    This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.

  13. Vector Graph Assisted Pedestrian Dead Reckoning Using an Unconstrained Smartphone

    PubMed Central

    Qian, Jiuchao; Pei, Ling; Ma, Jiabin; Ying, Rendong; Liu, Peilin

    2015-01-01

    The paper presents a hybrid indoor positioning solution based on a pedestrian dead reckoning (PDR) approach using built-in sensors on a smartphone. To address the challenges of flexible and complex contexts of carrying a phone while walking, a robust step detection algorithm based on motion-awareness has been proposed. Given the fact that step length is influenced by different motion states, an adaptive step length estimation algorithm based on motion recognition is developed. Heading estimation is carried out by an attitude acquisition algorithm, which contains a two-phase filter to mitigate the distortion of magnetic anomalies. In order to estimate the heading for an unconstrained smartphone, principal component analysis (PCA) of acceleration is applied to determine the offset between the orientation of smartphone and the actual heading of a pedestrian. Moreover, a particle filter with vector graph assisted particle weighting is introduced to correct the deviation in step length and heading estimation. Extensive field tests, including four contexts of carrying a phone, have been conducted in an office building to verify the performance of the proposed algorithm. Test results show that the proposed algorithm can achieve sub-meter mean error in all contexts. PMID:25738763
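The PCA step for the heading offset — taking the dominant direction of horizontal acceleration as the walking direction in the phone frame — can be sketched as follows. The simulated accelerations and the 40-degree offset are invented, and the 180-degree ambiguity of the principal axis is left unresolved, since it must be fixed by other cues:

```python
import numpy as np

rng = np.random.default_rng(5)
# Invented horizontal accelerations in the phone frame: the step-frequency
# oscillation lies along the walking direction, offset 40 deg from the x-axis
true_offset = np.deg2rad(40.0)
t = np.linspace(0.0, 10.0, 500)
along = np.sin(2 * np.pi * 2.0 * t)
acc = np.column_stack([along * np.cos(true_offset),
                       along * np.sin(true_offset)])
acc += 0.05 * rng.standard_normal(acc.shape)

# First principal component of horizontal acceleration ~ walking direction
w, v = np.linalg.eigh(np.cov(acc, rowvar=False))
direction = v[:, np.argmax(w)]
offset = np.arctan2(direction[1], direction[0]) % np.pi   # 180-deg ambiguity
```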

  14. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

Modern remote sensing systems typically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intense, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has proven more efficient when there is substantial correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
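The reason 3D filtering wins for correlated channels is energy compaction: a DCT along the channel axis concentrates correlated content into few spectral planes, so thresholding the rest discards mostly noise. A toy 1-D illustration (the channel gains and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
# Three strongly correlated channels (as in multi-polarization imagery), 1-D toy
base = rng.standard_normal(64)
channels = np.stack([base, 0.9 * base, 1.1 * base])
channels += 0.3 * rng.standard_normal(channels.shape)

# Orthonormal DCT-II along the channel axis (length 3)
n = 3
k = np.arange(n)
C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
C[0] *= np.sqrt(1.0 / n)
C[1:] *= np.sqrt(2.0 / n)

spectral = C @ channels
energy = (spectral ** 2).sum(axis=1)
share = energy[0] / energy.sum()   # energy compacted into the first plane
```

In component-wise processing every channel carries a full share of signal and noise; after the channel DCT most of the signal sits in one plane, which is the statistical change the paper studies.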

  15. Decomposition of ECG by linear filtering.

    PubMed

    Murthy, I S; Niranjan, U C

    1992-01-01

    A simple method is developed for the delineation of a given electrocardiogram (ECG) signal into its component waves. The properties of discrete cosine transform (DCT) are exploited for the purpose. The transformed signal is convolved with appropriate filters and the component waves are obtained by computing the inverse transform (IDCT) of the filtered signals. The filters are derived from the time signal itself. Analysis of continuous strips of ECG signals with various arrhythmias showed that the performance of the method is satisfactory both qualitatively and quantitatively. The small amplitude P wave usually had a high percentage rms difference (PRD) compared to the other large component waves.
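The DCT-domain filtering idea can be sketched generically: transform, zero the unwanted coefficients, inverse-transform. The fixed low-pass cutoff below is a simplification; the paper derives its filters from the ECG signal itself:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

rng = np.random.default_rng(6)
n = 256
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 3 * t)        # stand-in for a slow component wave
noisy = signal + 0.3 * rng.standard_normal(n)

C = dct_matrix(n)
coeffs = C @ noisy
coeffs[20:] = 0.0                          # fixed low-pass cut in the DCT domain
smoothed = C.T @ coeffs                    # inverse DCT (orthonormal transform)
```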

  16. Asteroid age distributions determined by space weathering and collisional evolution models

    NASA Astrophysics Data System (ADS)

    Willman, Mark; Jedicke, Robert

    2011-01-01

We provide evidence of consistency between the dynamical evolution of main belt asteroids and their color evolution due to space weathering. The dynamical age of an asteroid's surface (Bottke, W.F., Durda, D.D., Nesvorný, D., Jedicke, R., Morbidelli, A., Vokrouhlický, D., Levison, H. [2005]. Icarus 175 (1), 111-140; Nesvorný, D., Jedicke, R., Whiteley, R.J., Ivezić, Ž. [2005]. Icarus 173, 132-152) is the time since its last catastrophic disruption event, which is a function of the object's diameter. The age of an S-complex asteroid's surface may also be determined from its color using a space weathering model (e.g. Willman, M., Jedicke, R., Moskovitz, N., Nesvorný, D., Vokrouhlický, D., Mothé-Diniz, T. [2010]. Icarus 208, 758-772; Jedicke, R., Nesvorný, D., Whiteley, R.J., Ivezić, Ž., Jurić, M. [2004]. Nature 429, 275-277; Willman, M., Jedicke, R., Nesvorny, D., Moskovitz, N., Ivezić, Ž., Fevig, R. [2008]. Icarus 195, 663-673). We used a sample of 95 S-complex asteroids from SMASS and obtained their absolute magnitudes and u, g, r, i, z filter magnitudes from SDSS. The absolute magnitudes yield a size-derived age distribution. The u, g, r, i, z filter magnitudes lead to the principal component color, which yields a color-derived age distribution by inverting our color-age relationship, an enhanced version of the 'dual τ' space weathering model of Willman et al. (2010). We fit the size-age distribution to the enhanced dual τ model and found characteristic weathering and gardening times of τw = 2050 ± 80 Myr and τg = 4400 +700/−500 Myr, respectively. The fit also suggests an initial principal component color of -0.05 ± 0.01 for fresh asteroid surface, with a maximum possible change of the probable color due to weathering of ΔPC = 1.34 ± 0.04. Our predicted color of fresh asteroid surface matches the color of fresh ordinary chondritic surface of PC1 = 0.17 ± 0.39.
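As a hedged illustration of inverting a color-age relationship, the sketch below uses a single-exponential saturation curve with the parameter values quoted in the abstract; the functional form is an assumed simplification, since the actual 'dual τ' model of Willman et al. (2010) couples weathering and gardening and is more involved:

```python
import numpy as np

# Parameter values quoted in the abstract; single-exponential form is assumed
tau_w = 2050.0      # weathering timescale, Myr
pc0 = -0.05         # principal-component color of fresh surface
dpc = 1.34          # maximum color change due to weathering

def color(age_myr):
    # Color as a saturating function of surface age
    return pc0 + dpc * (1.0 - np.exp(-age_myr / tau_w))

def age(pc):
    # Inverting the relation gives a color-derived age
    return -tau_w * np.log(1.0 - (pc - pc0) / dpc)

ages = np.array([100.0, 1000.0, 4000.0])
```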

  17. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

This paper presents an efficient design of digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual and ideal responses in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The quality of the proposed method is evaluated using several important attributes of a filter. The comparative study confirms the effectiveness of the proposed method for FIR filter design.
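The polyphase idea itself — splitting a prototype FIR filter into M subfilters that run at the low rate — can be verified numerically for decimation by M = 2. The random taps and input are arbitrary; the optimisation part of the paper is not shown:

```python
import numpy as np

rng = np.random.default_rng(7)
h = rng.standard_normal(16)                # prototype FIR filter taps
x = rng.standard_normal(256)
M = 2                                      # decimation factor

# Direct form: filter at the high rate, then downsample
y_direct = np.convolve(x, h)[::M]

# Polyphase form: two short filters running at the low rate
e0, e1 = h[0::2], h[1::2]                  # polyphase components of h
u0 = x[0::2]                               # x[2n]
u1 = np.concatenate(([0.0], x[1::2]))      # x[2n - 1]
y_poly = np.concatenate((np.convolve(u0, e0), [0.0])) + np.convolve(u1, e1)
```

The two outputs are identical, but the polyphase form does all multiplications at the decimated rate, which is the efficiency the design builds on.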

  18. The Influence Function of Principal Component Analysis by Self-Organizing Rule.

    PubMed

    Higuchi; Eguchi

    1998-07-28

    This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule has been proposed and its robustness observed through the simulation study by Xu and Yuille (1995). In this article, the robustness of the algorithm against outliers is investigated by using the theory of influence function. The influence function of the principal component vector is given in an explicit form. Through this expression, the method is shown to be robust against any directions orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.
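The self-organizing rule analyzed here is Oja's rule, in which a single linear neuron's weight vector converges to the leading principal component. A minimal sketch on synthetic 2-D data (the learning rate and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic 2-D data with one dominant direction (variances 9 and 0.25)
X = rng.standard_normal((5000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

# Oja's self-organizing rule: w <- w + eta * y * (x - y * w), with y = w.x
w = np.array([1.0, 1.0]) / np.sqrt(2.0)
eta = 0.001
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# Reference: leading eigenvector of the sample covariance
_, v = np.linalg.eigh(np.cov(X, rowvar=False))
pc1 = v[:, -1]
```

The influence-function analysis in the article asks how single outlying samples perturb this converged w.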

  19. Method and apparatus for measuring flow velocity using matched filters

    DOEpatents

    Raptis, Apostolos C.

    1983-01-01

    An apparatus and method for measuring the flow velocities of individual phase flow components of a multiphase flow utilizes matched filters. Signals arising from flow noise disturbance are extracted from the flow, at upstream and downstream locations. The signals are processed through pairs of matched filters which are matched to the flow disturbance frequency characteristics of the phase flow component to be measured. The processed signals are then cross-correlated to determine the transit delay time of the phase flow component between sensing positions.
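The transit-time estimation at the heart of the patent can be sketched without the matched-filter stage: cross-correlate the upstream and downstream sensor signals and read the delay off the correlation peak. The broadband noise source and the delay below are invented:

```python
import numpy as np

rng = np.random.default_rng(9)
n, true_delay = 4000, 37                   # invented transit delay in samples
s = rng.standard_normal(n + true_delay)    # broadband flow-noise disturbance
upstream = s[true_delay:]                  # disturbance passes upstream first
downstream = s[:n] + 0.2 * rng.standard_normal(n)   # delayed, noisier copy

# Cross-correlate to recover the transit delay between sensing positions
c = np.correlate(downstream, upstream, mode="full")
lag = int(np.argmax(c)) - (n - 1)
```

In the patent, matched filters tuned to one phase's disturbance characteristics precede this step, so the recovered delay belongs to that phase flow component alone.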

  20. Genetic algorithm applied to the selection of factors in principal component-artificial neural networks: application to QSAR study of calcium channel antagonist activity of 1,4-dihydropyridines (nifedipine analogous).

    PubMed

    Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba

    2003-01-01

    A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. The principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used for the selection of the best set of extracted principal components. A feed forward artificial neural network with a back-propagation of error algorithm was used to process the nonlinear relationship between the selected principal components and biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.

  1. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo

    2016-07-12

    Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
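The smoothing-then-PCA structure of FPCA can be sketched with a small Fourier basis; an unpenalized least-squares projection stands in for the paper's roughness-penalized smoothing, and the data are synthetic (42 cities x 7 days, one common weekend-peaked shape with city-specific amplitude), not the 2013 measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the wastewater data: common shape, varying amplitude.
days = np.arange(7)
shape = np.exp(-0.5 * ((days - 5.5) / 1.0) ** 2)      # weekend peak
amps = rng.uniform(0.5, 2.0, size=42)
X = amps[:, None] * shape[None, :] + 0.1 * rng.normal(size=(42, 7))

# A small Fourier basis on the week: constant plus one sine/cosine pair.
t = days / 7.0
B = np.column_stack([np.ones(7), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

# Smoothing step of FPCA: project each city's series onto the basis.
C, *_ = np.linalg.lstsq(B, X.T, rcond=None)
X_smooth = (B @ C).T

# Ordinary PCA on the smoothed curves gives the functional PCs.
Xc = X_smooth - X_smooth.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print("FPC1 explains %.1f%% of between-city variation" % (100 * explained[0]))
```

Because the between-city variation here is a single shape scaled per city, the first functional PC dominates, mirroring the high explained-variance figures reported in the abstract.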

  2. 40 CFR 62.14505 - What are the principal components of this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 8 2010-07-01 2010-07-01 false What are the principal components of this subpart? 62.14505 Section 62.14505 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... components of this subpart? This subpart contains the eleven major components listed in paragraphs (a...

  3. Elucidating Molecular Motion through Structural and Dynamic Filters of Energy-Minimized Conformer Ensembles

    PubMed Central

    2015-01-01

    Complex RNA structures are constructed from helical segments connected by flexible loops that move spontaneously and in response to binding of small molecule ligands and proteins. Understanding the conformational variability of RNA requires the characterization of the coupled time evolution of interconnected flexible domains. To elucidate the collective molecular motions and explore the conformational landscape of the HIV-1 TAR RNA, we describe a new methodology that utilizes energy-minimized structures generated by the program “Fragment Assembly of RNA with Full-Atom Refinement (FARFAR)”. We apply structural filters in the form of experimental residual dipolar couplings (RDCs) to select a subset of discrete energy-minimized conformers and carry out principal component analyses (PCA) to corroborate the choice of the filtered subset. We use this subset of structures to calculate solution T1 and T1ρ relaxation times for 13C spins in multiple residues in different domains of the molecule using two simulation protocols that we previously published. We match the experimental T1 times to within 2% and the T1ρ times to within less than 10% for helical residues. These results introduce a protocol to construct viable dynamic trajectories for RNA molecules that accord well with experimental NMR data and support the notion that the motions of the helical portions of this small RNA can be described by a relatively small number of discrete conformations exchanging over time scales longer than 1 μs. PMID:24479561

  4. Modern Display Technologies for Airborne Applications.

    DTIC Science & Technology

    1983-04-01

    the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering, the electrical drive/address...effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the... filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique

  5. Vibrational Spectroscopy as a Promising Toolbox for Analyzing Functionalized Ceramic Membranes.

    PubMed

    Kiefer, Johannes; Bartels, Julia; Kroll, Stephen; Rezwan, Kurosch

    2018-01-01

    Ceramic materials find use in many fields including the life sciences and environmental engineering. For example, ceramic membranes have shown to be promising filters for water treatment and virus retention. The analysis of such materials, however, remains challenging. In the present study, the potential of three vibrational spectroscopic methods for characterizing functionalized ceramic membranes for water treatment is evaluated. For this purpose, Raman scattering, infrared (IR) absorption, and solvent infrared spectroscopy (SIRS) were employed. The data were analyzed with respect to spectral changes as well as using principal component analysis (PCA). The Raman spectra allow an unambiguous discrimination of the sample types. The IR spectra do not change systematically with functionalization state of the material. Solvent infrared spectroscopy allows a systematic distinction and enables studying the molecular interactions between the membrane surface and the solvent.

  6. Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Kostov, Konstantin S.; Komatsuzaki, Tamiki

    2004-04-01

    We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.

  7. Switching non-local vector median filter

    NASA Astrophysics Data System (ADS)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2016-04-01

    This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of an original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals to prevent a color shift after filtering. By taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing attention on the isolation tendencies of pixels of interest not in an input image but in difference images between RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter, which we proposed previously for grayscale image processing; this extension, named the non-local vector median filter, is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal by proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
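A minimal switching vector median filter illustrates the two ingredients above. A simple distance-to-median test stands in for the paper's difference-image noise detector, and a plain local vector median stands in for the non-local one; the image, noise level, and threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth synthetic color image plus random-valued impulse noise on ~5% of pixels.
h, w = 32, 32
gx, gy = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
img = np.stack([gx, gy, np.full((h, w), 0.5)], axis=-1)
noisy = img.copy()
mask = rng.random((h, w)) < 0.05
noisy[mask] = rng.random((int(mask.sum()), 3))

def vector_median(vecs):
    # The vector median is the sample minimizing the summed L1 distance to
    # all samples in the window; unlike per-channel medians it causes no
    # color shift, since the output is always one of the input vectors.
    d = np.abs(vecs[:, None, :] - vecs[None, :, :]).sum(axis=(1, 2))
    return vecs[np.argmin(d)]

out = noisy.copy()
for i in range(1, h - 1):
    for j in range(1, w - 1):
        win = noisy[i - 1:i + 2, j - 1:j + 2].reshape(-1, 3)
        vm = vector_median(win)
        # Switching step: replace a pixel only if it deviates strongly from
        # the local vector median, so uncorrupted detail is left untouched
        # (the 0.3 threshold is illustrative, not the paper's detector).
        if np.abs(noisy[i, j] - vm).sum() > 0.3:
            out[i, j] = vm

err_noisy = np.abs(noisy - img).mean()
err_filt = np.abs(out - img).mean()
print(err_filt, "<", err_noisy)
```

Filtering every pixel with the vector median would also remove noise but blur fine detail; the switching test is what preserves it.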

  8. SU-E-J-256: Predicting Metastasis-Free Survival of Rectal Cancer Patients Treated with Neoadjuvant Chemo-Radiotherapy by Data-Mining of CT Texture Features of Primary Lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Wang, J; Shen, L

    Purpose: The purpose of this study is to investigate the relationship between computed tomographic (CT) texture features of primary lesions and metastasis-free survival for rectal cancer patients; and to develop a data-mining prediction model using texture features. Methods: A total of 220 rectal cancer patients treated with neoadjuvant chemo-radiotherapy (CRT) were enrolled in this study. All patients underwent CT scans before CRT. The primary lesions on the CT images were delineated by two experienced oncologists. The CT images were filtered by Laplacian of Gaussian (LoG) filters with different filter values (1.0–2.5: from fine to coarse). Both filtered and unfiltered images were analyzed using Gray-level Co-occurrence Matrix (GLCM) texture analysis with different directions (transversal, sagittal, and coronal). In total, 270 texture features with different species, directions and filter values were extracted. Texture features were examined with Student’s t-test for selecting predictive features. Principal Component Analysis (PCA) was performed upon the selected features to reduce the feature collinearity. Artificial neural network (ANN) and logistic regression were applied to establish metastasis prediction models. Results: Forty-six of 220 patients developed metastasis with a follow-up time of more than 2 years. Sixty-seven texture features were significantly different in t-test (p<0.05) between patients with and without metastasis, and 12 of them were extremely significant (p<0.001). The Area-under-the-curve (AUC) of ANN was 0.72, and the concordance index (CI) of logistic regression was 0.71. The predictability of ANN was slightly better than logistic regression. Conclusion: CT texture features of primary lesions are related to metastasis-free survival of rectal cancer patients. Both ANN and logistic regression based models can be developed for prediction.
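The LoG filtering step, whose scale parameter sweeps texture from fine to coarse, can be sketched as follows; the kernel construction is standard, the blob image is illustrative, and the GLCM stage is omitted.

```python
import numpy as np

# Laplacian-of-Gaussian kernel at a given sigma (the "filter value" that
# sweeps texture scale from fine to coarse).
def log_kernel(sigma, radius=None):
    r = radius or int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    k = (x ** 2 + y ** 2 - 2 * sigma ** 2) / sigma ** 4 * g
    return k - k.mean()        # zero-sum: flat regions give zero response

def conv2(img, k):
    # 'valid' 2-D correlation by explicit shifts (equals convolution here,
    # since the LoG kernel is symmetric).
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + H - kh + 1, j:j + W - kw + 1]
    return out

# A bright Gaussian blob: the LoG response magnitude peaks at its centre.
y, x = np.mgrid[0:41, 0:41]
img = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / (2 * 3.0 ** 2))
resp = conv2(img, log_kernel(2.0))
peak = np.unravel_index(np.argmax(np.abs(resp)), resp.shape)
print(peak)   # centre of the blob in 'valid' coordinates
```

GLCM features are then computed on such filtered maps at each sigma, which is how one feature family multiplies into the 270 features mentioned above.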

  9. Principals' Perceptions Regarding Their Supervision and Evaluation

    ERIC Educational Resources Information Center

    Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann

    2015-01-01

    This study examined the perceptions of principals concerning principal evaluation and supervisory feedback. Principals were asked two open-ended questions. Respondents included 82 principals in the Rocky Mountain region. The emerging themes were "Superintendent Performance," "Principal Evaluation Components," "Specific…

  10. Trickling Filters. Student Manual. Biological Treatment Process Control.

    ERIC Educational Resources Information Center

    Richwine, Reynold D.

    The textual material for a unit on trickling filters is presented in this student manual. Topic areas discussed include: (1) trickling filter process components (preliminary treatment, media, underdrain system, distribution system, ventilation, and secondary clarifier); (2) operational modes (standard rate filters, high rate filters, roughing…

  11. The BepiColombo Laser Altimeter (BeLA) power converter module (PCM): Concept and characterisation.

    PubMed

    Rodrigo, J; Gasquet, E; Castro, J-M; Herranz, M; Lara, L-M; Muñoz, M; Simon, A; Behnke, T; Thomas, N

    2017-03-01

    This paper presents the principal considerations when designing DC-DC converters for space instruments, in particular for the power converter module as part of the first European space laser altimeter: "BepiColombo Laser Altimeter" on board the European Space Agency-Japan Aerospace Exploration Agency (JAXA) mission BepiColombo. The main factors which determine the design of the DC-DC modules in space applications are printed circuit board occupation, mass, DC-DC converter efficiency, and environmental-survivability constraints. Topics included in the appropriated DC-DC converter design flow are hereby described. The topology and technology for the primary and secondary stages, input filters, transformer design, and peripheral components are discussed. Component selection and design trade-offs are described. Grounding, load and line regulation, and secondary protection circuitry (under-voltage, over-voltage, and over-current) are then introduced. Lastly, test results and characterization of the final flight design are also presented. Testing of the inrush current, the regulated output start-up, and the switching function of the power supply indicate that these performances are fully compliant with the requirements.

  12. Big data in reciprocal space: Sliding fast Fourier transforms for determining periodicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasudevan, Rama K., E-mail: rvv@ornl.gov; Belianinov, Alex; Baddorf, Arthur P.

    Significant advances in atomically resolved imaging of crystals and surfaces have occurred in the last decade allowing unprecedented insight into local crystal structures and periodicity. Yet, the analysis of the long-range periodicity from the local imaging data, critical to correlation of functional properties and chemistry to the local crystallography, remains a challenge. Here, we introduce a Sliding Fast Fourier Transform (FFT) filter to analyze atomically resolved images of in-situ grown La5/8Ca3/8MnO3 (LCMO) films. We demonstrate the ability of the sliding FFT algorithm to differentiate two sub-lattices, resulting from a mixed-terminated surface. Principal Component Analysis and Independent Component Analysis of the Sliding FFT dataset reveal the distinct changes in crystallography, step edges, and boundaries between the multiple sub-lattices. The implications for the LCMO system are discussed. The method is universal for images with any periodicity, and is especially amenable to atomically resolved probe and electron-microscopy data for rapid identification of the sub-lattices present.
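A minimal version of the sliding-FFT idea: window the image, FFT each window, and map the dominant local spatial frequency. The synthetic two-period image below stands in for the mixed-terminated surface; window and step sizes are illustrative.

```python
import numpy as np

# Synthetic "atomically resolved" image: left half periodic with 8 px
# spacing along x, right half with 4 px spacing.
x = np.arange(64)
img = np.zeros((64, 64))
img[:, :32] = np.sin(2 * np.pi * x[:32] / 8.0)[None, :]
img[:, 32:] = np.sin(2 * np.pi * x[32:] / 4.0)[None, :]

def sliding_fft_peak(img, win=16, step=8):
    # For each window position, FFT along x and record the dominant
    # spatial frequency (in cycles per window).
    H, W = img.shape
    peaks = []
    for j in range(0, W - win + 1, step):
        patch = img[:, j:j + win]
        spec = np.abs(np.fft.rfft(patch, axis=1)).mean(axis=0)
        spec[0] = 0.0                   # ignore the DC component
        peaks.append(int(np.argmax(spec)))
    return np.array(peaks)

peaks = sliding_fft_peak(img)
print(peaks)   # low index over the left half, doubled index over the right
```

In the paper, the full window spectra (not just the peak) form the dataset on which PCA and ICA then separate the sub-lattices; the peak map is the simplest readout of the same information.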

  13. Multiple sound source localization using gammatone auditory filtering and direct sound component detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    In order to research multiple sound source localization with room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering that controls frequency component selection and detects the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest at the multichannel bandpass filter stage. Detection of the direct sound component of the source is also proposed to suppress room reverberation interference; its merits are fast computation and avoidance of more complex de-reverberation algorithms. Besides, the pseudo-spectrum of each frequency channel is weighted by its maximum amplitude for every speech frame. The proposed method performs well in both simulation and real reverberant-room experiments. Dynamic multiple sound source localization results indicate that the average absolute azimuth error of the proposed algorithm is smaller and the histogram result has higher angular resolution.

  14. Method and apparatus for measuring flow velocity using matched filters

    DOEpatents

    Raptis, A.C.

    1983-09-06

    An apparatus and method for measuring the flow velocities of individual phase flow components of a multiphase flow utilizes matched filters. Signals arising from flow noise disturbance are extracted from the flow, at upstream and downstream locations. The signals are processed through pairs of matched filters which are matched to the flow disturbance frequency characteristics of the phase flow component to be measured. The processed signals are then cross-correlated to determine the transit delay time of the phase flow component between sensing positions. 8 figs.
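The cross-correlation stage of the patent can be sketched as below. The matched-filter band-pass step is noted in a comment but not implemented, and the delay, signal model, and noise levels are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated flow-noise disturbance: the downstream sensor sees the upstream
# signal delayed by the transit time, plus independent sensor noise.
n, true_delay = 4000, 37
s = np.convolve(rng.normal(size=n + true_delay), np.ones(5) / 5, mode="same")
upstream = s[true_delay:true_delay + n] + 0.2 * rng.normal(size=n)
downstream = s[:n] + 0.2 * rng.normal(size=n)

# (In the patent, both signals would first pass through matched filters tuned
# to the disturbance frequency band of the phase flow component of interest.)

# Cross-correlate and read the transit delay off the peak lag.
a = downstream - downstream.mean()
v = upstream - upstream.mean()
xc = np.correlate(a, v, mode="full")
lags = np.arange(-n + 1, n)
est = lags[np.argmax(xc)]
print("estimated transit delay:", est, "samples")
```

Dividing the sensor spacing by this delay (times the sampling interval) gives the velocity of that phase component; the matched filters are what let different phases of the same multiphase flow yield different delays.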

  15. Experimental study of cake formation on heat treated and membrane coated needle felts in a pilot scale pulse jet bag filter using optical in-situ cake height measurement

    PubMed Central

    Saleem, Mahmood; Khan, Rafi Ullah; Tahir, M. Suleman; Krammer, Gernot

    2011-01-01

    Pulse-jet bag filters are frequently employed for particle removal from off gases. Separated solids form a layer on the permeable filter media called filter cake. The cake is responsible for increasing pressure drop. Therefore, the cake has to be detached at a predefined upper pressure drop limit or at predefined time intervals. Thus the process is intrinsically semi-continuous. The cake formation and cake detachment are interdependent and may influence the performance of the filter. Therefore, understanding formation and detachment of filter cake is important. In this regard, the filter media is the key component in the system. Needle felts are the most commonly used media in bag filters. Cake formation studies with heat treated and membrane coated needle felts in pilot scale pulse jet bag filter were carried out. The data is processed according to the procedures that were published already [Powder Technology, Volume 173, Issue 2, 19 April 2007, Pages 93–106]. Pressure drop evolution, cake height distribution evolution, cake patches area distribution and their characterization using fractal analysis on different needle felts are presented here. It is observed that concavity of pressure drop curve for membrane coated needle felt is principally caused by presence of inhomogeneous cake area load whereas it is inherent for heat treated media. Presence of residual cake enhances the concavity of pressure drop at the start of filtration cycle. Patchy cleaning is observed only when jet pulse pressure is too low and unable to provide the necessary force to detach the cake. The border line is very sharp. Based on experiments with limestone dust and three types of needle felts, for the jet pulse pressure above 4 bar and filtration velocity below 50 mm/s, cake is detached completely except a thin residual layer (100–200 μm). Uniformity and smoothness of residual cake depends on the surface characteristics of the filter media. 
Cake height distribution of residual cake and newly formed cake during filtration prevails. The patch size analysis and fractal analysis reveal that residual cake patches grow in size (laterally) following regeneration, initially at the base with edges smearing out; however, the cake heights do not level off. The fractal dimension of cake patch boundaries falls in the range of 1–1.4 and depends on vertical position as well as time of filtration. Cake height measurements with Polyimide (PI) needle felts were hampered on account of its photosensitive nature. PMID:24415801

  16. Conformational states and folding pathways of peptides revealed by principal-independent component analyses.

    PubMed

    Nguyen, Phuong H

    2007-05-15

    Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from e.g. molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly due to the fact that the principal components are not independent of one another, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.
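The PCA-then-ICA transform at the heart of the scheme can be sketched with whitening followed by a symmetric FastICA iteration; this is a generic stand-in for whatever ICA variant the paper uses, and the two uniform sources and mixing matrix are synthetic, not simulation data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two independent non-Gaussian (uniform) sources, linearly mixed. PCA
# whitening fixes the variances but leaves an unresolved rotation,
# which is exactly why PCs are not independent of one another.
S = rng.uniform(-1, 1, size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# PCA (ZCA) whitening.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
Z = (E / np.sqrt(d)) @ E.T @ Xc

# FastICA with tanh nonlinearity and symmetric decorrelation.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    u, sv, vt = np.linalg.svd(W_new)   # W <- (W W^T)^{-1/2} W
    W = u @ vt
Y = W @ Z          # estimated independent components
```

States along each row of Y can now be identified one dimension at a time, and every combination of one-dimensional states is a candidate conformational state, which is the combinatorial simplification the abstract describes.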

  17. Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.

    PubMed

    Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M

    2015-03-01

    Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133 keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons and γ-rays. A computer code named "QMNB" was developed in the "MATLAB" programming language to perform the required calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.
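The principle behind filtered beams (a filter material whose total cross-section has a deep interference minimum transmits a narrow energy window) can be sketched with a Beer-Lambert toy model. The cross-section shape, units, and thicknesses below are illustrative, not the QMNB calculations.

```python
import numpy as np

# Toy model: flat incident spectrum and a material whose cross-section has
# a deep interference minimum at E0, as in Si or Sc filter components.
E = np.linspace(1.0, 200.0, 4000)                     # energy grid (keV)
E0 = 24.0                                             # window centre (keV)
sigma = 5.0 - 4.9 * np.exp(-((E - E0) / 3.0) ** 2)    # cross-section, toy shape

def beam(thickness):
    """Transmitted spectrum summary for a given areal density (arb. units)."""
    T = np.exp(-thickness * sigma)        # Beer-Lambert transmission
    inside = np.abs(E - E0) < 6.0         # the quasi-monoenergetic window
    purity = T[inside].sum() / T.sum()
    return T.sum(), purity

flux_thin, purity_thin = beam(0.5)
flux_thick, purity_thick = beam(3.0)
print(purity_thin, purity_thick)   # thicker filter: purer but weaker beam
```

This is the trade-off behind "optimal thickness" in the abstract: thickness suppresses the off-minimum background exponentially faster than the window, at the cost of total intensity.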

  18. [Assessment of the strength of tobacco control on creating smoke-free hospitals using principal components analysis].

    PubMed

    Liu, Hui-lin; Wan, Xia; Yang, Gong-huan

    2013-02-01

    To explore the relationship between the strength of tobacco control and the effectiveness of creating smoke-free hospital, and summarize the main factors that affect the program of creating smoke-free hospitals. A total of 210 hospitals from 7 provinces/municipalities directly under the central government were enrolled in this study using stratified random sampling method. Principle component analysis and regression analysis were conducted to analyze the strength of tobacco control and the effectiveness of creating smoke-free hospitals. Two principal components were extracted in the strength of tobacco control index, which respectively reflected the tobacco control policies and efforts, and the willingness and leadership of hospital managers regarding tobacco control. The regression analysis indicated that only the first principal component was significantly correlated with the progression in creating smoke-free hospital (P<0.001), i.e. hospitals with higher scores on the first principal component had better achievements in smoke-free environment creation. Tobacco control policies and efforts are critical in creating smoke-free hospitals. The principal component analysis provides a comprehensive and objective tool for evaluating the creation of smoke-free hospitals.

  19. Chemical Protection Testing of Sorbent-Based Air Purification Components (APCs)

    DTIC Science & Technology

    2016-06-24

    APC’s ability to filter air in a chemically contaminated environment. Subject terms: air purification component; APC; filtration fabric; FF; filter media; collective protection; individual protection. ...incoming air. The intent of this process is to produce traceable, quantifiable, and defensible data that can be used to analyze an APC’s ability to filter

  20. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    ERIC Educational Resources Information Center

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  1. Single and tandem Fabry-Perot etalons as solar background filters for lidar.

    PubMed

    McKay, J A

    1999-09-20

    Atmospheric lidar is difficult in daylight because of sunlight scattered into the receiver field of view. In this research, methods for the design and performance analysis of Fabry-Perot etalons as solar background filters are presented. The factor by which the signal to background ratio is enhanced is defined as a measure of the performance of the etalon as a filter. Equations for evaluating this parameter are presented for single-, double-, and triple-etalon filter systems. The role of reflective coupling between etalons is examined and shown to substantially reduce the contributions of the second and third etalons to the filter performance. Attenuators placed between the etalons can improve the filter performance, at modest cost to the signal transmittance. The principal parameter governing the performance of the etalon filters is the etalon defect finesse. Practical limitations on etalon plate smoothness and parallelism cause the defect finesse to be relatively low, especially in the ultraviolet, and this sets upper limits to the capability of tandem etalon filters to suppress the solar background at tolerable cost to the signal.
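The signal-to-background enhancement of a single etalon can be made concrete with the standard Airy transmittance; a minimal sketch assuming a lossless etalon, a narrowband signal on the transmission peak, and a broadband background filling one free spectral range (all numbers illustrative).

```python
import numpy as np

def airy(delta, R):
    """Airy transmittance vs round-trip phase delta for mirror reflectance R."""
    F = 4 * R / (1 - R) ** 2              # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

delta = np.linspace(-np.pi, np.pi, 100001)   # one free spectral range
R = 0.9
T = airy(delta, R)

# Signal sits on the peak; broadband background averages over the FSR.
enhancement = T.max() / T.mean()
finesse = np.pi * np.sqrt(R) / (1 - R)
print(enhancement, finesse)
```

For a lossless etalon this enhancement works out to (1+R)/(1-R), comparable to the reflective finesse; the defect finesse the abstract emphasizes lowers the achievable peak transmittance and hence the real enhancement.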

  2. Molecular dynamics in principal component space.

    PubMed

    Michielssens, Servaas; van Erp, Titus S; Kutzner, Carsten; Ceulemans, Arnout; de Groot, Bert L

    2012-07-26

    A molecular dynamics algorithm in principal component space is presented. It is demonstrated that sampling can be improved without changing the ensemble by assigning masses to the principal components proportional to the inverse square root of the eigenvalues. The setup of the simulation requires no prior knowledge of the system; a short initial MD simulation to extract the eigenvectors and eigenvalues suffices. Independent measures indicated a 6-7 times faster sampling compared to a regular molecular dynamics simulation.
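A back-of-envelope way to see why the mass choice helps, under a harmonic approximation of the landscape (this reasoning is a sketch, not taken from the paper): the force constant along PC i scales as 1/lambda_i by equipartition, so eigenvalue-scaled masses compress the spread of mode frequencies, letting one time step serve all modes, while masses do not change the configurational ensemble.

```python
import numpy as np

# Illustrative PCA eigenvalues spanning three orders of magnitude.
lam = np.array([10.0, 1.0, 0.1, 0.01])

k = 1.0 / lam                                        # harmonic force constants (kBT = 1)
omega_equal = np.sqrt(k / 1.0)                       # equal unit masses: omega ~ lam**-0.5
omega_scaled = np.sqrt(k / (1.0 / np.sqrt(lam)))     # m_i = lam**-0.5:   omega ~ lam**-0.25

spread_equal = omega_equal.max() / omega_equal.min()
spread_scaled = omega_scaled.max() / omega_scaled.min()
print(spread_equal, spread_scaled)
```

The frequency spread drops from (lambda_max/lambda_min)**0.5 to (lambda_max/lambda_min)**0.25, consistent with the several-fold sampling speedup reported.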

  3. Optimized principal component analysis on coronagraphic images of the Fomalhaut system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.

    We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
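The PCA subtraction idea can be sketched as: build a principal-component basis from reference PSF frames, project the science frame onto the first k components, and subtract. All data below are synthetic (a Gaussian PSF with brightness jitter and an injected point source); the paper's ADI and apodized-phase-plate specifics are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

# Reference library of stellar PSF frames (no companion): common Gaussian
# PSF with per-frame brightness jitter plus detector noise.
npix = 32
yy, xx = np.mgrid[0:npix, 0:npix]
psf = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 40.0)
refs = np.array([(1 + 0.1 * rng.normal()) * psf
                 + 0.01 * rng.normal(size=(npix, npix)) for _ in range(50)])

# Science frame: same PSF plus a faint point-source "planet" (~10% of peak).
planet = (24, 24)
sci = 1.05 * psf + 0.01 * rng.normal(size=(npix, npix))
sci[planet] += 0.1

# PCA of the reference stack; the leading components model the stellar PSF.
R = refs.reshape(50, -1)
mean = R.mean(axis=0)
U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
k = 3                                    # number of principal components kept
coeff = (sci.ravel() - mean) @ Vt[:k].T
model = (mean + coeff @ Vt[:k]).reshape(npix, npix)

resid = sci - model                      # the planet survives the subtraction
print(resid[planet])
```

The trade-off the abstract describes is visible here: a point source barely projects onto the PSF basis, so it survives, but pushing k higher starts fitting (and removing) it along with the speckles.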

  4. [A study of Boletus bicolor from different areas using Fourier transform infrared spectrometry].

    PubMed

    Zhou, Zai-Jin; Liu, Gang; Ren, Xian-Pei

    2010-04-01

    It is hard to differentiate the same species of wild growing mushrooms from different areas by macromorphological features. In this paper, Fourier transform infrared (FTIR) spectroscopy combined with principal component analysis was used to identify 58 samples of Boletus bicolor from five different areas. Based on the fingerprint infrared spectrum of the Boletus bicolor samples, principal component analysis was conducted on the 58 spectra in the range of 1 350-750 cm(-1) using the statistical software SPSS 13.0. According to the result, the accumulated contributing ratio of the first three principal components accounts for 88.87%. They included almost all the information of the samples. The two-dimensional projection plot using the first and second principal components shows a satisfactory clustering effect for the classification and discrimination of Boletus bicolor. All Boletus bicolor samples were divided into five groups with a classification accuracy of 98.3%. The study demonstrated that wild growing Boletus bicolor at species level from different areas can be identified by FTIR spectra combined with principal components analysis.

  5. Sand Type Filters for Swimming Pools. Standard No. 10, Revised October, 1966.

    ERIC Educational Resources Information Center

    National Sanitation Foundation, Ann Arbor, MI.

    Sand type filters are covered in this standard. The filters described are intended to be designed and used specifically for swimming pool water filtration, both public and residential. Included are the basic components which are a necessary part of the sand type filter such as filter housing, upper and lower distribution systems, filter media,…

  6. 42 CFR 84.33 - Approval labels and markings; approval of contents; use.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... position on the harness, container, canister, cartridge, filter, or other component, together with... Respirator container and filter container. Abbreviated Filters. Chemical-cartridge respirator Entire Respirator container, cartridge container, and filter containers (where applicable). Abbreviated Cartridges...

  7. 42 CFR 84.33 - Approval labels and markings; approval of contents; use.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... position on the harness, container, canister, cartridge, filter, or other component, together with... Respirator container and filter container. Abbreviated Filters. Chemical-cartridge respirator Entire Respirator container, cartridge container, and filter containers (where applicable). Abbreviated Cartridges...

  8. 42 CFR 84.33 - Approval labels and markings; approval of contents; use.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... position on the harness, container, canister, cartridge, filter, or other component, together with... Respirator container and filter container. Abbreviated Filters. Chemical-cartridge respirator Entire Respirator container, cartridge container, and filter containers (where applicable). Abbreviated Cartridges...

  9. 42 CFR 84.33 - Approval labels and markings; approval of contents; use.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... position on the harness, container, canister, cartridge, filter, or other component, together with... Respirator container and filter container. Abbreviated Filters. Chemical-cartridge respirator Entire Respirator container, cartridge container, and filter containers (where applicable). Abbreviated Cartridges...

  10. 42 CFR 84.33 - Approval labels and markings; approval of contents; use.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... position on the harness, container, canister, cartridge, filter, or other component, together with... Respirator container and filter container. Abbreviated Filters. Chemical-cartridge respirator Entire Respirator container, cartridge container, and filter containers (where applicable). Abbreviated Cartridges...

  11. A class of optimum digital phase locked loops for the DSN advanced receiver

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.; Kumar, R.

    1985-01-01

    A class of optimum digital filters for the digital phase locked loop of the Deep Space Network advanced receiver is discussed. The filter minimizes a weighted combination of the variance of the random component of the phase error and the sum square of the deterministic dynamic component of the phase error at the output of the numerically controlled oscillator (NCO). By varying the weighting coefficient over a suitable range of values, a wide set of filters is obtained such that, for any specified value of the equivalent loop-noise bandwidth, there corresponds a unique filter in this class. This filter thus has the best transient response over all possible filters of the same bandwidth and type. The optimum filters are also evaluated in terms of their gain margin for stability and their steady-state error performance.
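    The basic loop structure described above (a loop filter driving an NCO to null the phase error) can be sketched as a generic second-order digital PLL with a proportional-integral loop filter. This is an illustrative toy, not the paper's optimal filter design; the gains `kp` and `ki` are arbitrary assumptions.

```python
import numpy as np

def dpll_track(phase_in, kp=0.2, ki=0.02):
    """Track an input phase sequence with a second-order digital PLL.

    The loop filter is proportional-integral; its output steers the NCO.
    Returns the per-sample phase error (radians).
    """
    nco_phase = 0.0
    integrator = 0.0
    errors = []
    for theta in phase_in:
        # Phase detector: wrapped difference between input and NCO phase
        err = np.angle(np.exp(1j * (theta - nco_phase)))
        integrator += ki * err               # integral branch tracks frequency
        nco_phase += kp * err + integrator   # NCO phase update
        errors.append(err)
    return np.array(errors)

# Input with a constant frequency offset (a phase ramp); a type-2 loop
# drives the steady-state phase error to zero.
t = np.arange(2000)
err = dpll_track(0.01 * t)
```

    Because the integrator branch learns the frequency offset, the phase error decays to zero in steady state, which is the deterministic dynamic error component the paper's cost function penalizes.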

  12. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
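    The first stage of a PCANet-style architecture (learning a filter bank as the leading principal components of image patches) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the patch size `k=7` and `n_filters=8` are assumptions.

```python
import numpy as np

def learn_pca_filters(images, k=7, n_filters=8):
    """Learn one stage of PCANet-style convolution filters.

    Collect all k x k patches, remove the per-patch mean, and take the
    leading eigenvectors of the patch covariance as filters.
    """
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())   # patch-mean removal, as in PCANet
    X = np.array(patches)                      # (num_patches, k*k)
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_filters]      # leading principal directions
    return top.T.reshape(n_filters, k, k)

rng = np.random.default_rng(0)
imgs = rng.standard_normal((5, 20, 20))
filters = learn_pca_filters(imgs, k=7, n_filters=8)
```

    The learned filters form an orthonormal set; in the full network each stage convolves its input with these filters before binary hashing and block histogramming.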

  13. Variable Selection through Correlation Sifting

    NASA Astrophysics Data System (ADS)

    Huang, Jim C.; Jojic, Nebojsa

    Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
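    The core idea of the filtering step — projecting both predictors and response onto principal components to weaken correlations before ℓ1-regularized selection — can be illustrated in a simplified form. Removing only the single leading component is an assumption of this sketch; the paper's procedure selects components more carefully.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 6
common = rng.standard_normal(n)                 # shared factor -> correlated "decoys"
X = common[:, None] + 0.3 * rng.standard_normal((n, p))
y = X[:, 0] + 0.1 * rng.standard_normal(n)

# PCA of the centered predictors
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Simplified filtering step: project the leading principal component
# out of BOTH the predictors and the response.
pc1 = U[:, 0]
X_f = Xc - np.outer(pc1, pc1 @ Xc)
y_f = (y - y.mean()) - pc1 * (pc1 @ (y - y.mean()))

def max_offdiag_corr(M):
    C = np.corrcoef(M, rowvar=False)
    return np.abs(C - np.eye(len(C))).max()

before = max_offdiag_corr(Xc)
after = max_offdiag_corr(X_f)
```

    After the projection the decoy variables are far less correlated, which is the condition under which ℓ1-regularized regression recovers the correct support.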

  14. How multi segmental patterns deviate in spastic diplegia from typical developed.

    PubMed

    Zago, Matteo; Sforza, Chiarella; Bona, Alessia; Cimolin, Veronica; Costici, Pier Francesco; Condoluci, Claudia; Galli, Manuela

    2017-10-01

    The relationship between gait features and coordination in children with Cerebral Palsy has not been sufficiently analyzed yet. Principal Component Analysis can help in understanding motion patterns by decomposing movement into its fundamental components (Principal Movements). This study aims at quantitatively characterizing the functional connections between multi-joint gait patterns in Cerebral Palsy. 65 children with spastic diplegia aged 10.6 (SD 3.7) years participated in standardized gait analysis trials; 31 typically developing adolescents aged 13.6 (4.4) years were also tested. To determine whether posture affects gait patterns, patients were split into a Crouch and a knee Hyperextension group according to the knee flexion angle at standing. 3D coordinates of the hips, knees, ankles, metatarsal joints, pelvis and shoulders were submitted to Principal Component Analysis. Four Principal Movements accounted for 99% of the global variance; components 1-3 explained the major sagittal patterns, components 4-5 referred to movements in the frontal plane and component 6 to additional movement refinements. Dimensionality was higher in patients than in controls (p<0.01); the Crouch group significantly differed from controls in the application of components 1 and 4-6 (p<0.05), while the knee Hyperextension group differed in components 1-2 and 5 (p<0.05). Compensatory strategies of children with Cerebral Palsy (interactions between main and secondary movement patterns) were objectively determined. Principal Movements can reduce the effort in interpreting gait reports, providing an immediate and quantitative picture of the connections between movement components. Copyright © 2017 Elsevier Ltd. All rights reserved.
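    Statements such as "four Principal Movements accounted for 99% of the global variance" come from the cumulative explained-variance ratio of a PCA on stacked joint coordinates. A generic sketch on synthetic pose data (the marker counts and latent movements below are invented for illustration):

```python
import numpy as np

def n_components_for(X, threshold=0.99):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `threshold`."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(ratio, threshold) + 1)

# Synthetic "posture" data: frames x (3 coordinates per marker),
# driven by two underlying movement patterns plus small noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 500)
scores = np.c_[np.sin(t), np.cos(2 * t)]        # 2 latent movements
loadings = rng.standard_normal((2, 30))         # 10 markers x 3 coordinates
X = scores @ loadings + 0.01 * rng.standard_normal((500, 30))
```

    On data generated by two latent movements, the function recovers a dimensionality of two; on real gait data the analogous count is the number of Principal Movements needed to reach the variance threshold.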

  15. A reduction in ag/residential signature conflict using principal components analysis of LANDSAT temporal data

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Borden, F. Y.

    1977-01-01

    Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.

  16. Constrained Principal Component Analysis: Various Applications.

    ERIC Educational Resources Information Center

    Hunter, Michael; Takane, Yoshio

    2002-01-01

    Provides example applications of constrained principal component analysis (CPCA) that illustrate the method on a variety of contexts common to psychological research. Two new analyses, decompositions into finer components and fitting higher order structures, are presented, followed by an illustration of CPCA on contingency tables and the CPCA of…

  17. The linkage between geopotential height and monthly precipitation in Iran

    NASA Astrophysics Data System (ADS)

    Shirvani, Amin; Fadaei, Amir Sabetan; Landman, Willem A.

    2018-04-01

    This paper investigates the linkage between large-scale atmospheric circulation and monthly precipitation during November to April over Iran. Canonical correlation analysis (CCA) is used to set up the statistical linkage between the 850 hPa geopotential height large-scale circulation and monthly precipitation over Iran for the period 1968-2010. The monthly precipitation dataset for 50 synoptic stations distributed across the different climate regions of Iran is taken as the response variable in the CCA. The monthly geopotential height reanalysis dataset over the area between 10° N and 60° N and from 20° E to 80° E is used as the explanatory variable. Principal component analysis (PCA) is used as a pre-filter for data reduction of both the explanatory and response variables before applying CCA. The optimal number of principal components and canonical variables to retain in the CCA equations is determined using the highest average cross-validated Kendall's tau value. The 850 hPa geopotential height pattern over the Red Sea, Saudi Arabia, and the Persian Gulf is found to be the major pattern related to Iranian monthly precipitation. The Pearson correlations between the area average of the observed and predicted precipitation over the study area for January, February, March, April, November, and December are statistically significant at the 5% significance level: 0.78, 0.80, 0.82, 0.74, 0.79, and 0.61, respectively. The relative operating characteristic (ROC) indicates that the highest scores for the above- and below-normal precipitation categories occur, respectively, in February and April, while the lowest scores are found for December.
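    The PCA-then-CCA pipeline described above can be sketched with synthetic data sharing one common signal. The CCA here is a plain SVD-based implementation and the data dimensions are invented; this illustrates the method, not the study's configuration.

```python
import numpy as np

def pca_reduce(X, n):
    """Scores of X on its n leading principal components (the pre-filter)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n].T

def cca_corrs(X, Y):
    """Canonical correlations between two variable sets, via QR + SVD."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

rng = np.random.default_rng(3)
n = 300
z = rng.standard_normal(n)                      # shared large-scale signal
# "Geopotential height" and "precipitation" fields mixing the shared signal
height = np.c_[3 * z, rng.standard_normal((n, 9))] @ rng.standard_normal((10, 10))
precip = np.c_[3 * z, rng.standard_normal((n, 7))] @ rng.standard_normal((8, 8))

rho = cca_corrs(pca_reduce(height, 4), pca_reduce(precip, 4))
```

    The leading canonical correlation is close to one because both reduced fields contain the shared signal; retaining too few principal components in the pre-filter would discard it and lower the correlation.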

  18. Assessment of the capacity of vehicle cabin air inlet filters to reduce diesel exhaust-induced symptoms in human volunteers

    PubMed Central

    2014-01-01

    Background Exposure to particulate matter (PM) air pollution, especially that derived from traffic, is associated with increases in cardiorespiratory morbidity and mortality. In this study, we evaluated the ability of novel vehicle cabin air inlet filters to reduce diesel exhaust (DE)-induced symptoms and markers of inflammation in human subjects. Methods Thirty healthy subjects participated in a randomized double-blind controlled crossover study where they were exposed to filtered air, unfiltered DE and DE filtered through two selected particle filters, one with and one without active charcoal. Exposures lasted for one hour. Symptoms were assessed before and during exposures and lung function was measured before and after each exposure, with inflammation assessed in peripheral blood five hours after exposures. In parallel, PM were collected from unfiltered and filtered DE and assessed for their capacity to drive damaging oxidation reactions in a cell-free model, or promote inflammation in A549 cells. Results The standard particle filter employed in this study reduced PM10 mass concentrations within the exposure chamber by 46%; inclusion of an active charcoal component increased this reduction to 74%. In addition, use of the active charcoal filter was associated with 75% and 50% reductions in NO2 and hydrocarbon concentrations, respectively. As expected, subjects reported more subjective symptoms after exposure to unfiltered DE compared to filtered air; these symptoms were significantly reduced by the filter with an active charcoal component. There were no significant changes in lung function after exposures. Similarly, diesel exhaust did not elicit significant increases in any of the inflammatory markers examined in the peripheral blood samples five hours post-exposure. Whilst the filters reduced chamber particle concentrations, the oxidative activity of the particles themselves did not change following filtration with either filter. 
In contrast, diesel exhaust PM passed through the active charcoal combination filter appeared less inflammatory to A549 cells. Conclusions A cabin air inlet particle filter including an active charcoal component was highly effective in reducing both DE particulate and gaseous components, with reduced exhaust-induced symptoms in healthy volunteers. These data demonstrate the effectiveness of cabin filters to protect subjects travelling in vehicles from diesel exhaust emissions. PMID:24621126

  19. UV Filters, Ingredients with a Recognized Anti-Inflammatory Effect

    PubMed Central

    Couteau, Céline; Chauvet, Catherine; Paparis, Eva; Coiffard, Laurence

    2012-01-01

    Background To explain observed differences during SPF determination using either an in vivo or in vitro method, we hypothesized the presence of ingredients having anti-inflammatory properties. Methodology/Principal Findings To test our hypothesis, we studied the 21 UV filters that are both available on the market and authorized by European regulations, and subjected these filters to the phorbol-myristate-acetate test in mice. We then catalogued the 13 filters demonstrating a significant anti-inflammatory effect, with edema inhibition percentages of more than 70%. The filters are: diethylhexyl butamido triazone (92%), benzophenone-5 and titanium dioxide (90%), benzophenone-3 (83%), octocrylene and isoamyl p-methoxycinnamate (82%), PEG-25 PABA and homosalate (80%), octyl triazone and phenylbenzimidazole sulfonic acid (78%), octyl dimethyl PABA (75%), bis-ethylhexyloxyphenol methoxyphenyl triazine and diethylamino hydroxybenzoyl hexylbenzoate (70%). These filters were tested at various concentrations, including their maximum authorized dose, and we detected a dose-response relationship. Conclusions/Significance The anti-inflammatory effect of a sunscreen ingredient may affect the in vivo SPF value. PMID:23284607

  20. Survey to Identify Substandard and Falsified Tablets in Several Asian Countries with Pharmacopeial Quality Control Tests and Principal Component Analysis of Handheld Raman Spectroscopy.

    PubMed

    Kakio, Tomoko; Nagase, Hitomi; Takaoka, Takashi; Yoshida, Naoko; Hirakawa, Junichi; Macha, Susan; Hiroshima, Takashi; Ikeda, Yukihiro; Tsuboi, Hirohito; Kimura, Kazuko

    2018-06-01

    The World Health Organization has warned that substandard and falsified medical products (SFs) can harm patients and fail to treat the diseases for which they were intended; they affect every region of the world, leading to loss of confidence in medicines, health-care providers, and health systems. The development of analytical procedures to detect SFs is therefore extremely important. In this study, we investigated the quality of pharmaceutical tablets containing the antihypertensive candesartan cilexetil, collected in China, Indonesia, Japan, and Myanmar, using the Japanese pharmacopeial analytical procedures for quality control, together with principal component analysis (PCA) of Raman spectra obtained with a handheld Raman spectrometer. Some samples showed delayed dissolution and failed to meet the pharmacopeial specification, whereas others failed the assay test. These products appeared to be substandard. Principal component analysis showed that all Raman spectra could be explained in terms of two components: the amount of the active pharmaceutical ingredient and the kind of excipients. The PCA score plot indicated that one substandard sample and the falsified tablets have similar principal components in their Raman spectra, in contrast to authentic products. The locations of samples within the PCA score plot varied according to the source country, suggesting that manufacturers in different countries use different excipients. Our results indicate that the handheld Raman device will be useful for detection of SFs in the field, and that principal component analysis of the Raman data clarifies the differences in chemical properties between good-quality products and the SFs that circulate in the Asian market.
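    How a PCA score plot separates products can be sketched with synthetic spectra. The peak height stands in for API content and a baseline offset for a different excipient mix; all numbers below are invented for illustration, not measured Raman data.

```python
import numpy as np

def pca_scores(X, n=2):
    """Scores of the rows of X on the first n principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n].T

# Synthetic "Raman spectra": one peak whose height plays the role of the
# API amount, plus a baseline that differs between two excipient groups.
wavenumber = np.linspace(0, 1, 200)
peak = np.exp(-((wavenumber - 0.5) / 0.02) ** 2)
rng = np.random.default_rng(4)

spectra, labels = [], []
for group, baseline in [(0, 0.0), (1, 0.3)]:    # two "manufacturers"
    for _ in range(20):
        api = rng.uniform(0.5, 1.5)             # varying API content
        spectra.append(api * peak + baseline + 0.01 * rng.standard_normal(200))
        labels.append(group)
X, labels = np.array(spectra), np.array(labels)

scores = pca_scores(X)                          # first axis separates groups
```

    The first score axis is dominated by the baseline (excipient) difference, so the two groups form distinct clusters in the score plot, mirroring the country-by-country clustering reported above.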

  1. Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.

    PubMed

    Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko

    2017-12-01

    Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the [Formula: see text]th principal component in Euclidean space: the locus of the weighted Fréchet mean of [Formula: see text] vertex trees when the weights vary over the [Formula: see text]-simplex. We establish some basic properties of these objects, in particular showing that they have dimension [Formula: see text], and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.

  2. 21 CFR 177.2260 - Filters, resin-bonded.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Filters, resin-bonded. 177.2260 Section 177.2260... Components of Articles Intended for Repeated Use § 177.2260 Filters, resin-bonded. Resin-bonded filters may... of this section. (a) Resin-bonded filters are prepared from natural or synthetic fibers to which have...

  3. 21 CFR 177.2260 - Filters, resin-bonded.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Filters, resin-bonded. 177.2260 Section 177.2260... Components of Articles Intended for Repeated Use § 177.2260 Filters, resin-bonded. Resin-bonded filters may... of this section. (a) Resin-bonded filters are prepared from natural or synthetic fibers to which have...

  4. 21 CFR 177.2260 - Filters, resin-bonded.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Filters, resin-bonded. 177.2260 Section 177.2260... Components of Articles Intended for Repeated Use § 177.2260 Filters, resin-bonded. Resin-bonded filters may... of this section. (a) Resin-bonded filters are prepared from natural or synthetic fibers to which have...

  5. 21 CFR 177.2260 - Filters, resin-bonded.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Filters, resin-bonded. 177.2260 Section 177.2260... Components of Articles Intended for Repeated Use § 177.2260 Filters, resin-bonded. Resin-bonded filters may... of this section. (a) Resin-bonded filters are prepared from natural or synthetic fibers to which have...

  6. Test Operations Procedure (TOP) 08-2-197 Chemical Protection Testing of Sorbent-Based Air Purification Components (APCs)

    DTIC Science & Technology

    2016-06-24

    …APC’s ability to filter air in a chemically contaminated environment. Subject terms: air purification component; APC; filtration fabric; …FF; filter media; collective protection; individual protection. …incoming air. The intent of this process is to produce traceable, quantifiable, and defensible data that can be used to analyze an APC’s ability to filter

  7. A novel approach to spinal 3-D kinematic assessment using inertial sensors: Towards effective quantitative evaluation of low back pain in clinical settings.

    PubMed

    Ashouri, Sajad; Abedi, Mohsen; Abdollahi, Masoud; Dehghan Manshadi, Farideh; Parnianpour, Mohamad; Khalaf, Kinda

    2017-10-01

    This paper presents a novel approach for evaluating LBP in various settings. The proposed system uses cost-effective inertial sensors, in conjunction with pattern recognition techniques, to identify classifiers sensitive enough to discriminate low back pain patients. For validation, 24 healthy individuals and 28 low back pain patients performed trunk motion tasks in five different directions. Four combinations of these motions were selected based on the literature, and the corresponding kinematic data were collected. After filtering (4th-order low-pass Butterworth filter) and normalizing the data, Principal Component Analysis was used for feature extraction, while a Support Vector Machine classifier was applied for data classification. The results reveal that non-linear kernel classification can be adequately employed for low back pain identification. Our preliminary results demonstrate that using a single inertial sensor placed on the thorax, in conjunction with a relatively simple test protocol, can identify low back pain with an accuracy of 96%, a sensitivity of 100%, and a specificity of 92%. While our approach shows promising results, further validation in a larger population is required before the methodology can be used as a practical quantitative assessment tool for the detection of low back pain in clinical/rehabilitation settings. Copyright © 2017 Elsevier Ltd. All rights reserved.
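    The pre-processing step (a 4th-order zero-phase low-pass Butterworth filter before normalization) can be sketched with SciPy. The sampling rate, cutoff frequency, and test signal below are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(signal, fs, cutoff=6.0):
    """4th-order zero-phase low-pass Butterworth filter, then
    amplitude normalization (a simplified version of the paper's
    filtering-and-normalizing step)."""
    b, a = butter(4, cutoff, btype="low", fs=fs)
    filtered = filtfilt(b, a, signal)           # zero-phase filtering
    return filtered / np.max(np.abs(filtered))

fs = 100.0                                      # assumed IMU sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
trunk = np.sin(2 * np.pi * 0.5 * t)             # slow trunk motion (0.5 Hz)
noisy = trunk + 0.5 * np.sin(2 * np.pi * 30 * t)  # 30 Hz sensor noise
clean = preprocess(noisy, fs)
```

    The 30 Hz component is attenuated by roughly (30/6)^4 per pass, and `filtfilt` applies the filter twice, so the recovered waveform tracks the slow trunk motion almost exactly before PCA feature extraction.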

  8. How well are the climate indices related to the GRACE-observed total water storage changes in China?

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Vishwakarma, B.; Sneeuw, N. J.

    2017-12-01

    The fresh water availability over land masses is changing rapidly under the influence of climate change and human intervention. In order to manage our water resources and plan for a better future, we need to demarcate the role of climate change. The total water storage change in a region can be obtained from the GRACE satellite mission. On the other hand, many climate change indicators, for example ENSO, are derived from sea surface temperature. In this contribution we investigate the relationship between the total water storage change over China and climate indices using statistical time-series decomposition techniques, such as Seasonal and Trend decomposition using Loess (STL), Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA). Anomalies in climate variables, such as sea surface temperature, are responsible for anomalous precipitation and thus anomalous total water storage change over land. Therefore, it is imperative that we use a GRACE product that can capture anomalous water storage changes with high accuracy. Since filtering decreases the sensitivity of GRACE products substantially, we use the data-driven method of deviation to recover the signal lost due to filtering. In this way, we are able to obtain the spatial fingerprint of each climate index on the total water storage change observed over China.

  9. Microbial Community Profiles in Wastewaters from Onsite Wastewater Treatment Systems Technology

    PubMed Central

    Jałowiecki, Łukasz; Chojniak, Joanna Małgorzata; Dorgeloh, Elmar; Hegedusova, Berta; Ejhed, Helene; Magnér, Jörgen; Płaza, Grażyna Anna

    2016-01-01

    The aim of the study was to determine the potential of community-level physiological profiles (CLPPs) methodology as an assay for characterization of the metabolic diversity of wastewater samples and to link the metabolic diversity patterns to the efficiency of selected onsite biological wastewater facilities. Metabolic fingerprints obtained from the selected samples were used to understand the functional diversity implied by the carbon substrate shifts. Three different biological facilities of onsite wastewater treatment were evaluated: a fixed bed reactor (technology A), a trickling filter/biofilter system (technology B), and an aerated filter system (the fluidized bed reactor, technology C). High similarities of the microbial community functional structures were found among the samples from the three onsite wastewater treatment plants (WWTPs), as shown by the diversity indices. Principal components analysis (PCA) showed that the diversity and CLPPs of microbial communities depended on the working efficiency of the wastewater treatment technologies. This study provided an overall picture of the microbial community functional structures of the investigated samples in WWTPs and discerned the linkages between microbial communities and the onsite WWTP technologies used. The results obtained confirmed that metabolic profiles could be used to monitor treatment processes as valuable biological indicators of the efficiency of onsite wastewater treatment technologies. This is the first step toward understanding the relations of technology types with microbial community patterns in raw and treated wastewaters. PMID:26807728

  10. Attitude Representations for Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The four-component quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation, it represents the attitude matrix as a homogeneous quadratic function, and its dynamic propagation equation is bilinear in the quaternion and the angular velocity. The quaternion is required to obey a unit norm constraint, though, so Kalman filters often employ a quaternion for the global attitude estimate and a three-component representation for small errors about the estimate. We consider these mixed attitude representations for both a first-order Extended Kalman filter and a second-order filter, as well as for quaternion-norm-preserving attitude propagation.
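    The mixed representation described above — a global unit quaternion plus a three-component small-error vector — can be sketched as a multiplicative update followed by renormalization. This is a generic illustration of the idea, not the paper's filter equations; a scalar-last convention and small-angle error quaternion are assumed.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    vec = qw * pv + pw * qv + np.cross(qv, pv)
    w = qw * pw - qv @ pv
    return np.concatenate([vec, [w]])

def apply_error(q_est, delta):
    """Fold a small three-component error `delta` (rotation vector,
    radians) into the global quaternion estimate, then renormalize
    to maintain the unit-norm constraint."""
    dq = np.concatenate([0.5 * delta, [1.0]])   # small-angle error quaternion
    q_new = quat_mult(dq, q_est)
    return q_new / np.linalg.norm(q_new)

q = np.array([0.0, 0.0, 0.0, 1.0])              # identity attitude
q = apply_error(q, np.array([1e-3, -2e-3, 5e-4]))
```

    The filter's state covariance lives in the three-dimensional error space, so it never becomes singular from the unit-norm constraint, while the global quaternion stays exactly normalized.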

  11. Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system

    NASA Astrophysics Data System (ADS)

    Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.

    2016-12-01

    In order to achieve optimum operation and a precise control system at particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility, and it has been experimentally tested at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To improve the precision of this beam position monitoring system to the sub-μm level, we have studied different de-noising methods such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. An evaluation of the noise reduction was carried out to verify the ability of these methods. The results show that noise reduction based on the Daubechies wavelet transform performs better than the other algorithms, and the method is suitable for signal noise reduction in beam position monitoring systems.
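    Wavelet-shrinkage de-noising of the kind compared above follows one pattern: transform, threshold the detail coefficients, inverse-transform. The study favoured Daubechies wavelets; the sketch below uses the Haar wavelet instead to stay dependency-free, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet shrinkage: transform, soft-threshold the
    detail coefficients, inverse transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    even = (a + d) / np.sqrt(2)                 # inverse transform
    odd = (a - d) / np.sqrt(2)
    out = np.empty_like(x)
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 3 * t)              # slowly varying "beam position"
noisy = signal + 0.1 * rng.standard_normal(1024)
denoised = haar_denoise(noisy, thresh=0.2)
```

    The slowly varying signal lives almost entirely in the approximation coefficients, so thresholding the detail coefficients removes noise with little distortion; a Daubechies basis refines this by representing smooth signals even more compactly.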

  12. The El Nino/Southern Oscillation and Future Soybean Prices

    NASA Technical Reports Server (NTRS)

    Keppenne, C.

    1993-01-01

    Recently, it was shown that the application of a method combining singular spectrum analysis (SSA) and the maximum entropy method to univariate indicators of the coupled ocean-atmosphere El Nino/Southern Oscillation (ENSO) phenomenon can be helpful in determining whether an El Nino (EN) or La Nina (LN) event will occur. SSA - a variant of principal component analysis applied in the time domain - filters out variability unrelated to ENSO and separates the quasi-biennial (QB), two-to-three year variability, from a lower-frequency (LF) four-to-six year EN-LN cycle; the total variance associated with ENSO combines the QB and LF modes. ENSO has been known to affect weather conditions over much of the globe. For example, EN events have been connected with unusually rainy weather over the Central and Western US, while the opposite phases of the oscillation (LN) have been plausibly associated with extreme dry conditions over much of the same geographical area...
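    The SSA step described above (principal component analysis applied in the time domain) can be sketched in its standard form: build a trajectory matrix of lagged copies of the series, take its SVD, and reconstruct the leading components by diagonal averaging. The window length and synthetic "ENSO" series are assumptions for illustration.

```python
import numpy as np

def ssa_reconstruct(x, window, n_comp):
    """Reconstruct `x` from its leading `n_comp` singular-spectrum
    components (trajectory-matrix SVD plus diagonal averaging)."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix of lagged copies of the series
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp]
    # Diagonal averaging back to a one-dimensional series
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(6)
t = np.arange(600)
enso_like = np.sin(2 * np.pi * t / 50)          # oscillatory "ENSO" mode
noisy = enso_like + 0.5 * rng.standard_normal(600)
recovered = ssa_reconstruct(noisy, window=100, n_comp=2)
```

    An oscillatory mode occupies a pair of singular components, which is why quasi-biennial and lower-frequency ENSO cycles each appear as component pairs in the analyses described above.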

  13. The 2008 Passage of Jupiter's Great Red Spot and Oval BA as Observed from Hubble/WFPC2

    NASA Technical Reports Server (NTRS)

    Simon-Miller, Amy A.; Chanover, N. J.; Orton, G. S.; Tsavaris, I.

    2008-01-01

    Hubble Space Telescope data of the passage of Jupiter's Great Red Spot (GRS) and Oval BA were acquired on May 15, June 28 (near closest approach), and July 8. Wind fields were measured from Wide Field Planetary Camera 2 (WFPC2) data with 10-hour separations before and after closest approach, and within the GRS with 40-minute separations on all three dates. Color information was also derived using 8 narrowband WFPC2 filters from 343 to 673 nm on all three dates. We will present the results of principal components and wind analyses and discuss unique features seen in this data set. In addition, we will highlight any changes observed in the GRS, Oval BA and their surroundings as a result of the passage, including the movement of a smaller red anticyclone from west of the GRS, around its southern periphery, and to the east of the GRS.

  14. Optimization of a Multi-Stage ATR System for Small Target Identification

    NASA Technical Reports Server (NTRS)

    Lin, Tsung-Han; Lu, Thomas; Braun, Henry; Edens, Western; Zhang, Yuhan; Chao, Tien- Hsin; Assad, Christopher; Huntsberger, Terrance

    2010-01-01

    An Automated Target Recognition (ATR) system was developed to locate and target small objects in images and videos. The data are preprocessed and sent to a grayscale optical correlator (GOC) filter to identify possible regions of interest (ROIs). Next, features are extracted from the ROIs based on Principal Component Analysis (PCA) and sent to a neural network (NN) to be classified. The features are analyzed by the NN classifier, which indicates whether or not each ROI contains the desired target. The ATR system was found useful for identifying small boats in the open sea. However, due to "noisy" backgrounds, such as weather conditions, background buildings, or water wakes, some false targets are misclassified. Feedforward backpropagation and radial basis neural networks are optimized to generalize representative features and reduce the false-alarm rate. The neural networks are compared on classification accuracy, classification time, and training time.

  15. Advanced image fusion algorithms for Gamma Knife treatment planning. Evaluation and proposal for clinical use.

    PubMed

    Apostolou, N; Papazoglou, Th; Koutsouris, D

    2006-01-01

    Image fusion is the process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms for the Matlab platform and head images. We implement nine grayscale image fusion methods on the Matlab platform: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian, filter-subtract-decimate (FSD), contrast, gradient and morphological pyramids, and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), Mutual Information (MI), Standard Deviation (STD), Entropy (H), Difference Entropy (DH) and Cross Entropy (CEN). The qualitative criteria are natural appearance, brilliance contrast, presence of complementary features, and enhancement of common features. Finally, we make clinically useful suggestions.
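    Two of the simpler fusion methods named above — average and PCA-weighted fusion — can be sketched directly. The synthetic "modalities" below are invented stand-ins for registered medical images; this is an illustration of the methods, not the paper's Matlab implementation.

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-wise average fusion."""
    return (a + b) / 2.0

def fuse_pca(a, b):
    """PCA-weighted fusion: weight each source image by the leading
    eigenvector of their joint pixel covariance."""
    data = np.vstack([a.ravel(), b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = np.abs(eigvecs[:, -1])                  # leading eigenvector
    w = w / w.sum()                             # normalize to fusion weights
    return w[0] * a + w[1] * b

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Two noisy "modalities" of the same underlying scene
rng = np.random.default_rng(7)
scene = rng.random((64, 64))
mod1 = scene + 0.05 * rng.standard_normal((64, 64))
mod2 = scene + 0.05 * rng.standard_normal((64, 64))
fused = fuse_pca(mod1, mod2)
```

    With independent noise in each source, both fusion rules lower the RMSE against the underlying scene; the pyramid and wavelet methods evaluated in the paper pursue the same goal with multi-resolution decompositions.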

  16. Recognition of a porphyry system using ASTER data in Bideghan - Qom province (central of Iran)

    NASA Astrophysics Data System (ADS)

    Feizi, F.; Mansouri, E.

    2014-07-01

    The Bideghan area is located south of Qom province (central Iran). The most impressive geological features in the studied area are the Eocene sequences, which are intruded by volcanic rocks of basic composition. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image processing has been used for hydrothermal alteration mapping and lineament identification in the investigated area. In this research, false color composite, band ratio, Principal Component Analysis (PCA), Least Square Fit (LS-Fit) and Spectral Angle Mapper (SAM) techniques were applied to ASTER data, and argillic, phyllic, iron oxide and propylitic alteration zones were separated. Lineaments were identified with the aid of false color composite, high-pass filter and hill-shade DEM techniques. The results of this study demonstrate the usefulness of remote sensing methods and ASTER multi-spectral data for alteration and lineament mapping. Finally, the results were confirmed by field investigation.

  17. Hunting for Active Galactic Nuclei in JWST/MIRI Imaging

    NASA Astrophysics Data System (ADS)

    Lin, Kenneth W.; Pope, Alexandra; Kirkpatrick, Allison

    2018-01-01

    The mid-infrared is uniquely sensitive to both star formation and active galactic nuclei (AGN) activity in galaxies. While spectra in this range can unambiguously identify these two processes, imaging data from the Spitzer Space Telescope found that the mid-infrared colors are also able to separate AGN from star forming galaxies. With the launch of the James Webb Space Telescope, our access to mid-infrared will be renewed; specifically, MIRI will provide imaging in 9 bands from 5.6-25.5 microns. While predictions show that color diagnostics will be useful with JWST/MIRI, this does not exploit the full dataset of MIRI imaging. In this poster, we discuss a Principal Component Analysis to identify the JWST filters that are most sensitive to the AGN contribution and demonstrate how to use it to identify large samples of AGN from planned MIRI imaging surveys.

  18. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
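
    The parameter counts quoted above can be checked numerically. A small sketch (k = 8 echoes the eight-trait cattle application; the m = 3 choice is only an illustration):

```python
def full_params(k):
    """Parameters in an unstructured covariance matrix for k effects: k(k+1)/2."""
    return k * (k + 1) // 2

def reduced_params(k, m):
    """Parameters when only m principal components are fitted: m(2k - m + 1)/2."""
    return m * (2 * k - m + 1) // 2

k = 8  # e.g. the eight-trait beef cattle application
print(full_params(k))        # 36
print(reduced_params(k, 8))  # 36: m = k recovers the full parameterisation
print(reduced_params(k, 3))  # 21: a rank-3 fit needs far fewer parameters
```

    Note that m = k reproduces the full count exactly, which is a quick sanity check on the formula.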

  19. Recognition of units in coarse, unconsolidated braided-stream deposits from geophysical log data with principal components analysis

    USGS Publications Warehouse

    Morin, R.H.

    1997-01-01

    Returns from drilling in unconsolidated cobble and sand aquifers commonly do not identify lithologic changes that may be meaningful for hydrogeologic investigations. Vertical resolution of saturated, Quaternary, coarse braided-stream deposits is significantly improved by interpreting natural gamma (G), epithermal neutron (N), and electromagnetically induced resistivity (IR) logs obtained from wells at the Capital Station site in Boise, Idaho. Interpretation of these geophysical logs is simplified because these sediments are derived largely from high-gamma-producing source rocks (granitics of the Boise River drainage), contain few clays, and have undergone little diagenesis. Analysis of G, N, and IR data from these deposits with principal components analysis provides an objective means to determine if units can be recognized within the braided-stream deposits. In particular, performing principal components analysis on G, N, and IR data from eight wells at Capital Station (1) allows the variable system dimensionality to be reduced from three to two by selecting the two eigenvectors with the greatest variance as axes for principal component scatterplots, (2) generates principal components with interpretable physical meanings, (3) distinguishes sand from cobble-dominated units, and (4) provides a means to distinguish between cobble-dominated units.

  20. Analysis and Evaluation of the Characteristic Taste Components in Portobello Mushroom.

    PubMed

    Wang, Jinbin; Li, Wen; Li, Zhengpeng; Wu, Wenhui; Tang, Xueming

    2018-05-10

    To identify the characteristic taste components of the common cultivated mushroom (brown; Portobello), Agaricus bisporus, taste components in the stipe and pileus of Portobello mushroom harvested at different growth stages were extracted and identified, and principal component analysis (PCA) and taste active value (TAV) were used to reveal the characteristic taste components during each growth stage of Portobello mushroom. In the stipe and pileus, 20 and 14 different principal taste components were identified, respectively, and they were considered the principal taste components of Portobello mushroom fruit bodies, which included most amino acids and 5'-nucleotides. Some taste components that were found at high levels, such as lactic acid and citric acid, were not identified as principal taste components of Portobello mushroom through PCA. However, due to their high content, Portobello mushroom could be used as a source of organic acids. The PCA and TAV results revealed that 5'-GMP, glutamic acid, malic acid, alanine, proline, leucine, and aspartic acid were the characteristic taste components of Portobello mushroom fruit bodies. Portobello mushroom was also found to be rich in protein and amino acids, so it might also be useful in the formulation of nutraceuticals and functional food. The results in this article could provide a theoretical basis for understanding and regulating the characteristic flavor component synthesis process of Portobello mushroom. © 2018 Institute of Food Technologists®.

  1. Applications of principal component analysis to breath air absorption spectra profiles classification

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Y.

    2015-12-01

    The results of numerical simulation of applying principal component analysis to absorption spectra of breath air of patients with pulmonary diseases are presented. Various methods of experimental data preprocessing are analyzed.

  2. 14 CFR 23.997 - Fuel strainer or filter.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Fuel strainer or filter. 23.997 Section 23... Components § 23.997 Fuel strainer or filter. There must be a fuel strainer or filter between the fuel tank..., whichever is nearer the fuel tank outlet. This fuel strainer or filter must— (a) Be accessible for draining...

  3. 14 CFR 23.997 - Fuel strainer or filter.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Fuel strainer or filter. 23.997 Section 23... Components § 23.997 Fuel strainer or filter. There must be a fuel strainer or filter between the fuel tank..., whichever is nearer the fuel tank outlet. This fuel strainer or filter must— (a) Be accessible for draining...

  4. 14 CFR 23.997 - Fuel strainer or filter.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Fuel strainer or filter. 23.997 Section 23... Components § 23.997 Fuel strainer or filter. There must be a fuel strainer or filter between the fuel tank..., whichever is nearer the fuel tank outlet. This fuel strainer or filter must— (a) Be accessible for draining...

  5. 14 CFR 23.997 - Fuel strainer or filter.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Fuel strainer or filter. 23.997 Section 23... Components § 23.997 Fuel strainer or filter. There must be a fuel strainer or filter between the fuel tank..., whichever is nearer the fuel tank outlet. This fuel strainer or filter must— (a) Be accessible for draining...

  6. 14 CFR 23.997 - Fuel strainer or filter.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Fuel strainer or filter. 23.997 Section 23... Components § 23.997 Fuel strainer or filter. There must be a fuel strainer or filter between the fuel tank..., whichever is nearer the fuel tank outlet. This fuel strainer or filter must— (a) Be accessible for draining...

  7. Diatomite Type Filters for Swimming Pools. Standard No. 9, Revised October, 1966.

    ERIC Educational Resources Information Center

    National Sanitation Foundation, Ann Arbor, MI.

    Pressure and vacuum diatomite type filters are covered in this standard. The filters herein described are intended to be designed and used specifically for swimming pool water filtration, both public and residential. Included are the basic components which are a necessary part of the diatomite type filter such as filter housing, element supports,…

  8. Quantum image median filtering in the spatial domain

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande; Xiao, Hong

    2018-03-01

    Spatial filtering is one principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representative of spatial filtering because its performance in noise reduction is excellent. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design method of the quantum median filter and applications in image de-noising. To this end, we first presented the quantum circuits for three basic modules (i.e., Cycle Shift, Comparator, and Swap), and then designed two composite modules (i.e., Sort and Median Calculation). We next constructed a complete quantum circuit that implements the median filtering task and presented the results of several simulation experiments on grayscale images with different noise patterns. Although experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme can reduce the computational complexity of the classical median filter from an exponential function of image size n to a second-order polynomial function of image size n, offering a speedup over the classical method.
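
    The classical counterpart mentioned above is easy to sketch. A minimal classical (not quantum) median filter with edge replication, run on a hypothetical 3x3 image containing one salt-noise pixel:

```python
def median_filter(img, k=3):
    """Classical k x k median filter with edge replication (sketch)."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # gather the k x k neighbourhood, clamping indices at the borders
            window = sorted(
                img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                for di in range(-r, r + 1) for dj in range(-r, r + 1)
            )
            out[i][j] = window[len(window) // 2]  # the median of the window
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # single salt-noise pixel
         [10, 10, 10]]
print(median_filter(noisy))  # the outlier is replaced by the neighbourhood median
```

    The two nested loops plus the sort are what give the classical filter its cost; the quantum scheme's claimed advantage is precisely in replacing this per-pixel sorting workload.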

  9. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on eigenvalue and eigenvector analysis, principal component analysis has the purpose of identifying the subspace of the main components from a set of parameters, which is enough to characterize the whole set of parameters. Interpreting the data for analysis as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of gravity and have a maximal density of points around them (obtained by defining an appropriate criterion function and minimizing it). This method can be successfully used to simplify the statistical analysis of questionnaires, because it helps us to select from a set of items only the most relevant ones, which cover the variation of the whole set of data. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, we identified 7 principal components, or main items, a fact that simplifies significantly the further statistical analysis of the data.
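
    The eigenvector construction described above can be written in closed form for two dimensions. A sketch that finds the direction of maximal dispersion of a 2-D point cloud (the sample points are hypothetical):

```python
import math

def pca_2d(points):
    """First principal axis of 2-D data via the closed-form eigen-decomposition
    of the 2x2 covariance matrix (illustrative sketch)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # corresponding (unnormalised) eigenvector; trivial if the data are axis-aligned
    vx, vy = (sxy, lam - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)

# Points scattered along the line y = x: the first axis is the diagonal
lam, axis = pca_2d([(0, 0), (1, 1), (2, 2), (3, 3)])
print(axis)   # ~ (0.707, 0.707)
```

    The returned eigenvalue is the variance of the cloud along that axis, which is the "maximal dispersion" criterion the abstract refers to.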

  10. On Using the Average Intercorrelation Among Predictor Variables and Eigenvector Orientation to Choose a Regression Solution.

    ERIC Educational Resources Information Center

    Mugrage, Beverly; And Others

    Three ridge regression solutions are compared with ordinary least squares regression and with principal components regression using all components. Ridge regression, particularly the Lawless-Wang solution, out-performed ordinary least squares regression and the principal components solution on the criteria of stability of coefficient and closeness…

  11. A Note on McDonald's Generalization of Principal Components Analysis

    ERIC Educational Resources Information Center

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  12. CLUSFAVOR 5.0: hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles

    PubMed Central

    Peterson, Leif E

    2002-01-01

    CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816

  13. Arsenic removal via ZVI in a hybrid spouted vessel/fixed bed filter system

    PubMed Central

    Calo, Joseph M.; Madhavan, Lakshmi; Kirchner, Johannes; Bain, Euan J.

    2012-01-01

    The description and operation of a novel, hybrid spouted vessel/fixed bed filter system for the removal of arsenic from water are presented. The system utilizes zero-valent iron (ZVI) particles circulating in a spouted vessel that continuously generates active colloidal iron corrosion products via the “self-polishing” action between ZVI source particles rolling in the moving bed that forms on the conical bottom of the spouted vessel. This action also serves as a “surface renewal” mechanism for the particles that provides for maximum utilization of the ZVI material. (Results of batch experiments conducted to examine this mechanism are also presented.) The colloidal material produced in this fashion is continuously captured and concentrated in a fixed bed filter located within the spouted vessel reservoir wherein arsenic complexation occurs. It is demonstrated that this system is very effective for arsenic removal in the microgram per liter arsenic concentration (i.e., drinking water treatment) range, reducing 100 μg/L of arsenic to below detectable levels (≪10 μg/L) in less than an hour. A mechanistic analysis of arsenic behavior in the system is presented, identifying the principal components of the population of active colloidal material for arsenic removal that explains the experimental observations and working principles of the system. It is concluded that the apparent kinetic behavior of arsenic in systems where colloidal (i.e., micro/nano) iron corrosion products are dominant can be complex and may not be explained by simple first or zeroth order kinetics. PMID:22539917

  14. Filter-based chemical sensors for hazardous materials

    NASA Astrophysics Data System (ADS)

    Major, Kevin J.; Ewing, Kenneth J.; Poutous, Menelaos K.; Sanghera, Jasbinder S.; Aggarwal, Ishwar D.

    2014-05-01

    The development of new techniques for the detection of homemade explosive devices is an area of intense research for the defense community. Such sensors must exhibit high selectivity to detect explosives and/or explosives related materials in a complex environment. Spectroscopic techniques such as FTIR are capable of discriminating between the volatile components of explosives; however, there is a need for less expensive systems for wide-range use in the field. To tackle this challenge we are investigating the use of multiple, overlapping, broad-band infrared (IR) filters to enable discrimination of volatile chemicals associated with an explosive device from potential background interferants with similar chemical signatures. We present an optical approach for the detection of fuel oil (the volatile component in ammonium nitrate-fuel oil explosives) that relies on IR absorption spectroscopy in a laboratory environment. Our proposed system utilizes a three filter set to separate the IR signals from fuel oil and various background interferants in the sample headspace. Filter responses for the chemical spectra are calculated using a Gaussian filter set. We demonstrate that using a specifically chosen filter set enables discrimination of pure fuel oil, hexanes, and acetone, as well as various mixtures of these components. We examine the effects of varying carrier gasses and humidity on the collected spectra and corresponding filter response. We study the filter response on these mixtures over time as well as present a variety of methods for observing the filter response functions to determine the response of this approach to detecting fuel oil in various environments.

  15. Developing a Model Component

    NASA Technical Reports Server (NTRS)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) is a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The purpose of the UCTS is to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The Simulation uses GSE models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at KSC, my assignment was to develop a model component for the UCTS. I was given a fluid component (drier) to model in Matlab. The drier was a Catch-All replaceable-core type filter-drier. The filter-drier provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-drier also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. I completed training for UNIX and Simulink to help aid in my assignment. The filter-drier was modeled by determining the effects it has on the pressure, velocity and temperature of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the drier. I created my model filter-drier in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements.
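
    The Bernoulli step described above reduces to a one-line relation between velocity change and pressure drop. A sketch with hypothetical fluid properties and velocities (not the actual UCTS values):

```python
def bernoulli_dp(rho, v1, v2):
    """Pressure drop implied by a velocity change for an incompressible,
    loss-free stream (Bernoulli): p1 - p2 = (rho / 2) * (v2^2 - v1^2)."""
    return 0.5 * rho * (v2 ** 2 - v1 ** 2)

# Hypothetical coolant numbers: a water-like fluid accelerating from 1 to 2 m/s
print(bernoulli_dp(1000.0, 1.0, 2.0))  # 1500.0 Pa
```

    A real filter-drier model would add an empirical loss term on top of this ideal relation; the sketch only shows the Bernoulli contribution.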

  16. Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.

    PubMed

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-11-23

    The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.

  17. The Complexity of Human Walking: A Knee Osteoarthritis Study

    PubMed Central

    Kotti, Margarita; Duffell, Lynsey D.; Faisal, Aldo A.; McGregor, Alison H.

    2014-01-01

    This study proposes a framework for deconstructing complex walking patterns to create a simple principal component space, before checking whether the projection to this space is suitable for identifying deviations from normality. We focus on knee osteoarthritis, the most common knee joint disease and the second leading cause of disability. Knee osteoarthritis affects over 250 million people worldwide. The motivation for projecting the highly dimensional movements to a lower dimensional and simpler space is our belief that motor behaviour can be understood by identifying a simplicity via projection to a low principal component space, which may reflect upon the underlying mechanism. To study this, we recruited 180 subjects, 47 of whom reported that they had knee osteoarthritis. They were asked to walk several times along a walkway equipped with two force plates that capture their ground reaction forces along 3 axes, namely vertical, anterior-posterior, and medio-lateral, at 1000 Hz. Data where the subject did not clearly strike the force plate were excluded, leaving 1–3 gait cycles per subject. To examine the complexity of human walking, we applied dimensionality reduction via Probabilistic Principal Component Analysis. The first principal component explains 34% of the variance in the data, whereas over 80% of the variance is explained by 8 principal components or more. This demonstrates the complexity of the underlying structure of the ground reaction forces. To examine if our musculoskeletal system generates movements that are distinguishable between normal and pathological subjects in a low dimensional principal component space, we applied a Bayes classifier. For the tested cross-validated, subject-independent experimental protocol, the classification accuracy equals 82.62%. Also, a novel complexity measure is proposed, which can be used as an objective index to facilitate clinical decision making. This measure shows that knee osteoarthritis subjects exhibit more variability in the two-dimensional principal component space. PMID:25232949
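
    The variance-explained bookkeeping used above can be sketched from an eigenvalue spectrum. The eigenvalues below are hypothetical numbers chosen so that the first component carries 34% and eight components reach 80%, mirroring the figures quoted:

```python
def explained_variance(eigenvalues):
    """Fraction of total variance carried by each principal component."""
    total = sum(eigenvalues)
    return [v / total for v in sorted(eigenvalues, reverse=True)]

def n_components_for(eigenvalues, target_percent):
    """Smallest number of leading components reaching the target share.
    Integer arithmetic avoids floating-point threshold surprises."""
    vals = sorted(eigenvalues, reverse=True)
    total, cum = sum(vals), 0
    for k, v in enumerate(vals, start=1):
        cum += v
        if 100 * cum >= target_percent * total:
            return k
    return len(vals)

# Hypothetical eigenvalue spectrum (sums to 100 for easy reading)
eig = [34, 11, 9, 7, 6, 5, 4, 4, 4, 4, 4, 4, 4]
print(explained_variance(eig)[0])   # 0.34: first PC explains 34%
print(n_components_for(eig, 80))    # 8 components reach 80%
```

    This is the standard scree-plot calculation; the choice of how many components to keep is exactly the `target_percent` threshold.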

  18. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate their combined toxicity when they occur in a mixture. The values of effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; concentration combinations of the three UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed by the model deviation ratio (MDR) using observed and predicted toxicity values obtained from mixture-exposure tests and the CA model. The results indicated that observed ECx,mix values (e.g., EC10,mix, EC25,mix, or EC50,mix) obtained from mixture-exposure tests were higher than the ECx,mix values predicted by the CA model. MDR values were also less than 1.0 for mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings provide important information for hazard or risk assessment of organic UV-filters when they occur together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
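
    The CA prediction and MDR test described above reduce to two short formulas. A sketch with hypothetical EC25 values (not the paper's measured data):

```python
def ca_ecx_mix(fractions, ecx):
    """Concentration addition (CA) prediction for a mixture:
    ECx_mix = 1 / sum_i(p_i / ECx_i), with p_i the fraction of component i."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx))

def mdr(predicted, observed):
    """Model deviation ratio; MDR < 1 points to antagonism."""
    return predicted / observed

# Hypothetical EC25 values (mg/L) for three UV-filters, mixed in equal fractions
ec25 = [0.5, 1.0, 2.0]
fractions = [1 / 3] * 3
predicted = ca_ecx_mix(fractions, ec25)
print(round(predicted, 4))        # 0.8571
print(mdr(predicted, 1.2) < 1.0)  # True: observed EC25 above the CA prediction
```

    An observed mixture ECx higher than the CA prediction drives the MDR below 1, which is the antagonism pattern the study reports.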

  19. Principal Components Analysis of a JWST NIRSpec Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting

    2013-01-01

    We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ~ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and the temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enable the planned instrument to meet its performance requirements.

  20. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    PubMed

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The result from PCA was statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.

  1. Evaluation of an Enhanced Bank of Kalman Filters for In-Flight Aircraft Engine Sensor Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2004-01-01

    In this paper, an approach for in-flight fault detection and isolation (FDI) of aircraft engine sensors based on a bank of Kalman filters is developed. This approach utilizes multiple Kalman filters, each of which is designed based on a specific fault hypothesis. When the propulsion system experiences a fault, only one Kalman filter with the correct hypothesis is able to maintain the nominal estimation performance. Based on this knowledge, the isolation of faults is achieved. Since the propulsion system may experience component and actuator faults as well, a sensor FDI system must be robust in terms of avoiding misclassifications of any anomalies. The proposed approach utilizes a bank of (m+1) Kalman filters where m is the number of sensors being monitored. One Kalman filter is used for the detection of component and actuator faults while each of the other m filters detects a fault in a specific sensor. With this setup, the overall robustness of the sensor FDI system to anomalies is enhanced. Moreover, numerous component fault events can be accounted for by the FDI system. The sensor FDI system is applied to a commercial aircraft engine simulation, and its performance is evaluated at multiple power settings at a cruise operating point using various fault scenarios.
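
    A heavily simplified, hypothetical sketch of the residual-testing idea behind a filter bank (scalar random-walk Kalman filters and a constant bias fault; this is not the paper's engine model or its (m+1)-filter hypothesis structure, where isolation instead relies on the one filter that stays nominal):

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a scalar Kalman filter (random-walk model).
    Returns updated state, covariance, and the innovation (residual)."""
    p = p + q                  # predict: covariance grows by process noise
    innov = z - x              # measurement residual
    k = p / (p + r)            # Kalman gain
    return x + k * innov, (1 - k) * p, innov

def isolate_fault(measurements, truth=0.0, q=1e-4, r=1e-2):
    """Run one filter per sensor stream; the stream producing the largest
    mean |innovation| is flagged as the faulty sensor (toy isolation rule)."""
    scores = []
    for zs in measurements:
        x, p, total = truth, 1.0, 0.0
        for z in zs:
            x, p, innov = kalman_step(x, p, z, q, r)
            total += abs(innov)
        scores.append(total / len(zs))
    return max(range(len(scores)), key=scores.__getitem__)

# Three sensors reading a constant signal; sensor 1 carries a bias fault
sensors = [[0.0] * 20, [0.5] * 20, [0.0] * 20]
print(isolate_fault(sensors))  # 1: the biased sensor is flagged
```

    The point of the sketch is only the mechanics of innovation-based hypothesis testing; the paper's bank additionally dedicates one filter to component/actuator faults to keep sensor FDI robust to misclassification.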

  2. Laser system using regenerative amplifier

    DOEpatents

    Emmett, John L. [Pleasanton, CA

    1980-03-04

    High energy laser system using a regenerative amplifier, which relaxes all constraints on laser components other than the intrinsic damage level of matter, so as to enable use of available laser system components. This can be accomplished by use of segmented components, spatial filters, at least one amplifier using solid state or gaseous media, and separated reflector members providing a long round trip time through the regenerative cavity, thereby allowing slower switching and adequate time to clear the spatial filters, etc. The laser system simplifies component requirements and reduces component cost while providing high energy output.

  3. 21 CFR 177.2250 - Filters, microporous polymeric.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Filters, microporous polymeric. 177.2250 Section... as Components of Articles Intended for Repeated Use § 177.2250 Filters, microporous polymeric. Microporous polymeric filters identified in paragraph (a) of this section may be safely used, subject to the...

  4. 21 CFR 177.2250 - Filters, microporous polymeric.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Filters, microporous polymeric. 177.2250 Section... as Components of Articles Intended for Repeated Use § 177.2250 Filters, microporous polymeric. Microporous polymeric filters identified in paragraph (a) of this section may be safely used, subject to the...

  5. 21 CFR 177.2250 - Filters, microporous polymeric.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Filters, microporous polymeric. 177.2250 Section... as Components of Articles Intended for Repeated Use § 177.2250 Filters, microporous polymeric. Microporous polymeric filters identified in paragraph (a) of this section may be safely used, subject to the...

  6. 21 CFR 177.2250 - Filters, microporous polymeric.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Filters, microporous polymeric. 177.2250 Section... as Components of Articles Intended for Repeated Use § 177.2250 Filters, microporous polymeric. Microporous polymeric filters identified in paragraph (a) of this section may be safely used, subject to the...

  7. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. Confidence ellipse was then applied to the principal components of each sample and used as the classification criteria. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
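    The PCA-plus-confidence-ellipse classification described above can be sketched as follows. The "spectra" here are synthetic sine/cosine stand-ins for two colored samples, and the band count, noise level, and 95% chi-squared threshold are assumptions, not the probe's data.

```python
import numpy as np

# Illustrative sketch of PCA followed by confidence-ellipse classification
# in the two-dimensional principal component feature space.
rng = np.random.default_rng(0)
n, bands = 60, 20
base_a = np.sin(np.linspace(0, 3, bands))      # pretend class-A spectrum
base_b = np.cos(np.linspace(0, 3, bands))      # pretend class-B spectrum
X = np.vstack([base_a + 0.05 * rng.standard_normal((n, bands)),
               base_b + 0.05 * rng.standard_normal((n, bands))])
labels = np.array([0] * n + [1] * n)

# PCA via SVD on mean-centred data; keep the first two components.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
scores = (X - mu) @ Vt[:2].T                   # (2n, 2) PC scores

CHI2_95_2DOF = 5.991                           # 95% quantile of chi2, 2 dof

def ellipse(points):
    """Centre and inverse covariance defining a class's confidence ellipse."""
    return points.mean(axis=0), np.linalg.inv(np.cov(points.T))

def inside(point, centre, icov):
    d = point - centre
    return float(d @ icov @ d) <= CHI2_95_2DOF  # squared Mahalanobis distance

ellipses = [ellipse(scores[labels == c]) for c in (0, 1)]
new = (base_a - mu) @ Vt[:2].T                 # a fresh class-A-like sample
hits = [inside(new, *e) for e in ellipses]
print(hits)
```

    The new sample falls inside class A's 95% ellipse and outside class B's, which is the classification criterion the abstract describes.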

  8. Pepper seed variety identification based on visible/near-infrared spectral technology

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Meng, Zhijun; Fan, Pengfei; Cai, Jichen

    2016-11-01

    Pepper is an important fruit vegetable, and with the expansion of hybrid pepper planting areas, detection of pepper seed purity has become especially important. This research used visible/near-infrared (VIS/NIR) spectral technology to identify the variety of single pepper seeds, taking the hybrid pepper seeds "Zhuo Jiao NO.3", "Zhuo Jiao NO.4" and "Zhuo Jiao NO.5" as research samples. VIS/NIR spectral data of 80 seeds of each of the three varieties were collected, and the original spectra were pretreated with standard normal variate (SNV) transform, first derivative (FD), and Savitzky-Golay (SG) convolution smoothing methods. The principal component analysis (PCA) method was adopted to reduce the dimension of the spectral data and extract principal components. According to the pairwise distributions of the first principal component (PC1), the second principal component (PC2), and the third principal component (PC3) in the two-dimensional planes (PC1-PC2, PC1-PC3 and PC2-PC3), distribution areas of the three varieties of pepper seeds were delineated in each plane, and the discriminant accuracy of PCA was tested by observing where the principal component scores of the validation-set samples fell. This study then combined PCA with linear discriminant analysis (LDA) to identify single pepper seed varieties; results showed that with the FD preprocessing method, the discriminant accuracy of pepper seed varieties was 98% for the validation set, indicating that VIS/NIR spectral technology is feasible for the identification of single pepper seed varieties.
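    The PCA-then-LDA pipeline can be sketched as below. Two hypothetical varieties with made-up mean spectra are used here (the paper has three), and the noise level and component count are assumptions for illustration.

```python
import numpy as np

# Sketch of the PCA + LDA pipeline on synthetic seed "spectra".
rng = np.random.default_rng(1)
n, bands = 40, 50
v1 = np.linspace(0.2, 0.8, bands)                      # variety-1 mean
v2 = v1 + 0.08 * np.sin(np.linspace(0, 6, bands))      # variety-2 mean
X = np.vstack([v1 + 0.02 * rng.standard_normal((n, bands)),
               v2 + 0.02 * rng.standard_normal((n, bands))])
y = np.array([0] * n + [1] * n)

# PCA: project the mean-centred spectra onto the first three components.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:3].T

# Two-class Fisher LDA on the PC scores: w = Sw^-1 (m1 - m0).
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)         # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)
threshold = w @ (m0 + m1) / 2                          # midpoint decision

pred = (Z @ w > threshold).astype(int)
accuracy = float((pred == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

    With well-separated synthetic varieties the discriminant separates the classes cleanly; real seed spectra overlap far more, which is why preprocessing choices such as FD matter in the study.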

  9. Analysis of environmental variation in a Great Plains reservoir using principal components analysis and geographic information systems

    USGS Publications Warehouse

    Long, J.M.; Fisher, W.L.

    2006-01-01

    We present a method for spatial interpretation of environmental variation in a reservoir that integrates principal components analysis (PCA) of environmental data with geographic information systems (GIS). To illustrate our method, we used data from a Great Plains reservoir (Skiatook Lake, Oklahoma) with longitudinal variation in physicochemical conditions. We measured 18 physicochemical features, mapped them using GIS, and then calculated and interpreted four principal components. Principal component 1 (PC1) was readily interpreted as longitudinal variation in water chemistry, but the other principal components (PC2-4) were difficult to interpret. Site scores for PC1-4 were calculated in GIS by summing weighted overlays of the 18 measured environmental variables, with the factor loadings from the PCA as the weights. PC1-4 were then ordered into a landscape hierarchy, an emergent property of this technique, which enabled their interpretation. PC1 was interpreted as a reservoir scale change in water chemistry, PC2 was a microhabitat variable of rip-rap substrate, PC3 identified coves/embayments and PC4 consisted of shoreline microhabitats related to slope. The use of GIS improved our ability to interpret the more obscure principal components (PC2-4), which made the spatial variability of the reservoir environment more apparent. This method is applicable to a variety of aquatic systems, can be accomplished using commercially available software programs, and allows for improved interpretation of the geographic environmental variability of a system compared to using typical PCA plots. © Copyright by the North American Lake Management Society 2006.
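    The weighted-overlay step can be sketched numerically. The tiny synthetic raster below is an assumption (three stand-in layers instead of the study's 18); the point is that summing loading-weighted standardized layers reproduces the PC site scores cell by cell.

```python
import numpy as np

# Sketch of the GIS step described above: a PC score map as a weighted
# overlay of standardized environmental raster layers, with PCA factor
# loadings as the weights.
rng = np.random.default_rng(2)
nvar, rows, cols = 3, 4, 5
layers = rng.random((nvar, rows, cols))        # e.g. depth, pH, turbidity

# Standardize per variable and run PCA on the per-cell observations.
flat = layers.reshape(nvar, -1).T              # cells x variables
z = (flat - flat.mean(axis=0)) / flat.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z.T))
pc1_loadings = eigvecs[:, -1]                  # loadings of the largest PC

# Weighted overlay: sum loading_i * standardized layer_i, cell by cell.
z_layers = z.T.reshape(nvar, rows, cols)
pc1_map = sum(pc1_loadings[i] * z_layers[i] for i in range(nvar))

# The overlay reproduces the direct projection of the cell data onto PC1.
direct = (z @ pc1_loadings).reshape(rows, cols)
print(bool(np.allclose(pc1_map, direct)))
```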

  10. Architectural measures of the cancellous bone of the mandibular condyle identified by principal components analysis.

    PubMed

    Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J

    2003-09-01

    As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.

  11. Factors associated with successful transition among children with disabilities in eight European countries

    PubMed Central

    2017-01-01

    Introduction This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements that have undergone a transition between school environments from 8 European Union member states. Methods Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire and consisted of 41 questions. Information was collected on: parental involvement in their child’s transition, child involvement in transition, child autonomy, school ethos, professionals’ involvement in transition and integrated working, such as joint assessment, cooperation and coordination between agencies. Survey questions that were designed on a Likert scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Results Four principal components were identified accounting for 48.86% of the variability in the data. Principal component 1 (PC1), ‘child inclusive ethos,’ contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed to 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed to 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as other factors that may have influenced transition. 
All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43–7.18, p<0.0001). Discussion To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their families, which will provide a holistic approach and remove barriers for learning. PMID:28636649

  12. Factors associated with successful transition among children with disabilities in eight European countries.

    PubMed

    Ravenscroft, John; Wazny, Kerri; Davis, John M

    2017-01-01

    This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements that have undergone a transition between school environments from 8 European Union member states. Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire and consisted of 41 questions. Information was collected on: parental involvement in their child's transition, child involvement in transition, child autonomy, school ethos, professionals' involvement in transition and integrated working, such as joint assessment, cooperation and coordination between agencies. Survey questions that were designed on a Likert scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Four principal components were identified accounting for 48.86% of the variability in the data. Principal component 1 (PC1), 'child inclusive ethos,' contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed to 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed to 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as other factors that may have influenced transition. All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43-7.18, p<0.0001). 
To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their families which will provide a holistic approach and remove barriers for learning.

  13. Dynamics of short-pulse generation via spectral filtering from intensely excited gain-switched 1.55-μm distributed-feedback laser diodes.

    PubMed

    Chen, Shaoqiang; Yoshita, Masahiro; Sato, Aya; Ito, Takashi; Akiyama, Hidefumi; Yokoyama, Hiroyuki

    2013-05-06

    Picosecond-pulse-generation dynamics and pulse-width limiting factors via spectral filtering from intensely pulse-excited gain-switched 1.55-μm distributed-feedback laser diodes were studied. The spectral and temporal characteristics of the spectrally filtered pulses indicated that the short-wavelength component stems from the initial part of the gain-switched main pulse and has a nearly linear down-chirp of 5.2 ps/nm, whereas long-wavelength components include chirped pulse-lasing components and steady-state-lasing components. Rate-equation calculations with a model of linear change in refractive index with carrier density explained the major features of the experimental results. The analysis of the expected pulse widths with optimum spectral widths was also consistent with the experimental data.

  14. Patient phenotypes associated with outcomes after aneurysmal subarachnoid hemorrhage: a principal component analysis.

    PubMed

    Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch

    2014-03-01

    Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.
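    The component-retention rule used above (eigenvalues greater than 1, the Kaiser criterion) can be illustrated on synthetic data. The two-factor structure and eight stand-in variables below are assumptions for the demo, not the study's 46 clinical variables or its mixed correlation matrices.

```python
import numpy as np

# Sketch of PCA on a correlation matrix with eigenvalue > 1 retention.
rng = np.random.default_rng(3)
n, p = 120, 8
latent = rng.standard_normal((n, 2))                  # two hidden factors
loadings = np.array([[1.0] * 4 + [0.0] * 4,           # factor 1 -> vars 0-3
                     [0.0] * 4 + [1.0] * 4])          # factor 2 -> vars 4-7
X = latent @ loadings + 0.3 * rng.standard_normal((n, p))

R = np.corrcoef(X.T)                      # correlation matrix of variables
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
keep = eigvals > 1.0                      # Kaiser criterion
explained = 100.0 * eigvals[keep].sum() / p
print("retained:", int(keep.sum()), "variance explained: %.1f%%" % explained)
```

    Only the two components reflecting the planted factors clear the eigenvalue-1 bar; the remaining components capture noise and are dropped.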

  15. Principal components of wrist circumduction from electromagnetic surgical tracking.

    PubMed

    Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E

    2017-02-01

    An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.

  16. MEMBRANE FILTER PROCEDURE FOR ENUMERATING THE COMPONENT GENERA OF THE COLIFORM GROUP IN SEAWATER

    EPA Science Inventory

    A facile, quantitative, membrane filter procedure (mC) for defining the distribution of coliform populations in seawater according to the component genera was developed. The procedure, which utilizes a series of in situ substrate tests to obviate the picking of colonies for ident...

  17. The Performance of A Sampled Data Delay Lock Loop Implemented with a Kalman Loop Filter.

    DTIC Science & Technology

    1980-01-01

    ...technique for analysis is computer simulation. Other techniques include state variable techniques and z-transform methods. Since the Kalman filter is linear... [Figure 2: Block diagram of the sampled data delay lock loop (SDDLL). Figure 3: Sampled error voltage (Es) as a function of...] ...formed from a sum of two components. The first component is the previous filtered estimate advanced one step forward by the state transition matrix.

  18. On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alex; Ragaller, Paul; Herman, Andrew

    The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate the failure of the particulate filter resulting in the escape of emissions exceeding permissible limits and extend the component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.

  19. Adaptive Fading Memory H∞ Filter Design for Compensation of Delayed Components in Self Powered Flux Detectors

    NASA Astrophysics Data System (ADS)

    Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol

    2015-08-01

    The paper deals with dynamic compensation of delayed Self Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method for improving the response of SPFDs with significant delayed components, such as Platinum and Vanadium SPFDs. We also present a comparative study between the Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering methods with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on the adaptive fading memory technique is proposed, which provides improved performance over existing methods. The existing delay compensation algorithms do not account for the rate of change in the signal when determining the filter gain and therefore add significant noise during the delay compensation process. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time at a minimum. The recursive algorithm is easy to implement in real time as compared to the LMI (or ARE) based solutions.
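    The fading-memory idea itself is simple to demonstrate. The sketch below is not the paper's H∞ design: it is a generic recursive estimator with a forgetting factor beta (an assumed value) that discounts old data, so it tracks a step change much faster than a plain running average.

```python
# Minimal sketch of a fading-memory recursive estimator vs. a running
# average. beta and the step signal are assumptions for the demo.
beta = 0.9
signal = [0.0] * 50 + [1.0] * 50          # the signal steps at sample 50

x_fade = x_avg = w = 0.0
for k, z in enumerate(signal, start=1):
    w = beta * w + 1.0                    # discounted effective sample count
    x_fade += (z - x_fade) / w            # fading-memory recursive update
    x_avg += (z - x_avg) / k              # ordinary running average
print(f"after the step: fading={x_fade:.3f}, plain average={x_avg:.3f}")
```

    The fading-memory estimate has essentially converged to the new level while the plain average still sits halfway, which is the response-time advantage the abstract trades against noise amplification.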

  20. Novel and Advanced Techniques for Complex IVC Filter Retrieval.

    PubMed

    Daye, Dania; Walker, T Gregory

    2017-04-01

    Inferior vena cava (IVC) filter placement is indicated for the treatment of venous thromboembolism (VTE) in patients with a contraindication to or a failure of anticoagulation. With the advent of retrievable IVC filters and their ease of placement, an increasing number of such filters are being inserted for prophylaxis in patients at high risk for VTE. Available data show that only a small number of these filters are retrieved within the recommended period, if at all, prompting the FDA to issue a statement on the need for their timely removal. With prolonged dwell times, advanced techniques may be needed for filter retrieval in up to 60% of the cases. In this article, we review standard and advanced IVC filter retrieval techniques including single-access, dual-access, and dissection techniques. Complicated filter retrievals carry a non-negligible risk for complications such as filter fragmentation and resultant embolization of filter components, venous pseudoaneurysms or stenoses, and breach of the integrity of the caval wall. Careful pre-retrieval assessment of IVC filter position, any significant degree of filter tilting, hook and/or strut epithelialization, and caval wall penetration by filter components should be performed using dedicated cross-sectional imaging for procedural planning. In complex cases, the risk for retrieval complications should be carefully weighed against the risks of leaving the filter permanently indwelling. The decision to remove an embedded IVC filter using advanced techniques should be individualized to each patient and made with caution, based on the patient's age and existing comorbidities.

  1. Systematic Biological Filter Design with a Desired I/O Filtering Response Based on Promoter-RBS Libraries.

    PubMed

    Hsu, Chih-Yuan; Pan, Zhen-Ming; Hu, Rei-Hsing; Chang, Chih-Chun; Cheng, Hsiao-Chun; Lin, Che; Chen, Bor-Sen

    2015-01-01

    In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on the well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, the biological filter system serves as a powerful detector or sensor to sense different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method of robust biological filters is summarized into three steps. Firstly, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via nonlinear parameter estimation method. Then, the topology of synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of a biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal reference match for the specified I/O filtering response.
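    The threshold (high-pass) behaviour such a filter targets can be sketched with steady-state Hill kinetics. The Hill coefficients and thresholds below are hypothetical values for illustration, not characterizations from the paper's promoter-RBS libraries.

```python
# Sketch of the threshold behaviour of a two-stage biological filter.
def hill(u, k, n):
    """Steady-state activation fraction of a Hill-type regulatory stage."""
    return u ** n / (k ** n + u ** n)

def biofilter(inducer, k1=1.0, n1=2, k2=0.5, n2=4):
    stage1 = hill(inducer, k1, n1)        # module 1: senses the inducer
    return hill(stage1, k2, n2)           # module 2: sharpens the threshold

low, high = biofilter(0.2), biofilter(5.0)
print(f"below threshold: {low:.3f}, above threshold: {high:.3f}")
```

    Cascading the two stages steepens the input/output curve, so the circuit produces a strong output only when the input signal exceeds the threshold, as the abstract describes.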

  2. Introduction to uses and interpretation of principal component analyses in forest biology.

    Treesearch

    J. G. Isebrands; Thomas R. Crow

    1975-01-01

    The application of principal component analysis for interpretation of multivariate data sets is reviewed with emphasis on (1) reduction of the number of variables, (2) ordination of variables, and (3) applications in conjunction with multiple regression.

  3. Principal component analysis of phenolic acid spectra

    USDA-ARS?s Scientific Manuscript database

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  4. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    The algorithm for building an optimal pattern for the purpose of automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern forming is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. In the next step the training samples are introduced, and the optimal estimates for the principal component decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we present experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.

  5. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied on the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, and the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 map of the tumor-bearing mice were in good agreement with the actual tumor location. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from its nearby fluorescence noise of liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.

  6. Geochemical differentiation processes for arc magma of the Sengan volcanic cluster, Northeastern Japan, constrained from principal component analysis

    NASA Astrophysics Data System (ADS)

    Ueki, Kenta; Iwamori, Hikaru

    2017-10-01

    In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks, sampled from 17 different volcanoes in a volcanic cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These represented 59%, 20%, and 6%, respectively, of the variance of the entire compositional range, indicating that magma mixing accounted for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.

  7. Filter-based multiscale entropy analysis of complex physiological time series.

    PubMed

    Xu, Yuesheng; Zhao, Liang

    2013-08-01

    Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
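    The standard MSE pipeline the paper reinterprets, coarse-graining (a piecewise-constant filter) followed by sample entropy at each scale, can be sketched as below. The parameters m=2 and r=0.2 are common conventions used here as assumptions, and white noise stands in for a physiological series.

```python
import math
import random

# Sketch of coarse-graining + sample entropy, the classic MSE recipe.
def coarse_grain(x, scale):
    """Average non-overlapping windows: the piecewise-constant filter."""
    return [sum(x[i * scale:(i + 1) * scale]) / scale
            for i in range(len(x) // scale)]

def sample_entropy(x, m=2, r=0.2):
    """-ln(A/B), where B and A count template matches of length m and m+1."""
    n, a, b = len(x), 0, 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                b += 1
                if abs(x[i + m] - x[j + m]) <= r:
                    a += 1
    return math.log(b / a) if a else float("inf")

random.seed(4)
noise = [random.gauss(0.0, 1.0) for _ in range(500)]
mse = [sample_entropy(coarse_grain(noise, s)) for s in (1, 2, 3)]
print(["%.2f" % v for v in mse])
```

    For uncorrelated noise the entropy falls as the scale grows, the signature MSE profile; FME generalizes the averaging step to other filters better matched to the signal.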

  8. Wearable Sensor Data Classification for Human Activity Recognition Based on an Iterative Learning Framework.

    PubMed

    Davila, Juan Carlos; Cretu, Ana-Maria; Zaremba, Marek

    2017-06-07

    The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematic features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel iterative learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier, which produces the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.
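    The band-pass FIR stage of such a de-noising chain can be illustrated with a generic windowed-sinc design; the sampling rate, cut-off frequencies, and test signal below are assumptions for illustration, and the wavelet stage is omitted:

```python
import numpy as np

def bandpass_fir(low_hz, high_hz, fs, numtaps=101):
    """Windowed-sinc band-pass FIR taps: difference of two low-pass sincs,
    shaped by a Hamming window. Odd numtaps gives a symmetric, linear-phase
    filter."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    lowpass = lambda fc: 2 * fc / fs * np.sinc(2 * fc / fs * n)
    return (lowpass(high_hz) - lowpass(low_hz)) * np.hamming(numtaps)

fs = 100.0                       # assumed accelerometer sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
# 3 Hz "movement" component plus 30 Hz interference outside the pass band.
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
y = np.convolve(x, bandpass_fir(0.5, 10, fs), mode="same")

def power_at(sig, f):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

atten = power_at(y, 30) / power_at(x, 30)   # out-of-band power: near zero
keep = power_at(y, 3) / power_at(x, 3)      # in-band power: near one
print(atten, keep)
```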

  9. Evaluating motion processing algorithms for use with functional near-infrared spectroscopy data from young children.

    PubMed

    Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P

    2018-04-01

    Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task. Wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, this approach sometimes produced unstable hemodynamic response functions (HRFs). The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics. When compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.

  10. Damping control of micromachined lowpass mechanical vibration isolation filters using electrostatic actuation with electronic signal processing

    NASA Astrophysics Data System (ADS)

    Dean, Robert; Flowers, George; Sanders, Nicole; MacAllister, Ken; Horvath, Roland; Hodel, A. S.; Johnson, Wayne; Kranz, Michael; Whitley, Michael

    2005-05-01

    Some harsh environments, such as those encountered by aerospace vehicles and various types of industrial machinery, contain high frequency/amplitude mechanical vibrations. Unfortunately, some very useful components are sensitive to these high frequency mechanical vibrations. Examples include MEMS gyroscopes and resonators, oscillators and some micro optics. Exposure of these components to high frequency mechanical vibrations present in the operating environment can result in problems ranging from an increased noise floor to component failure. Passive micromachined silicon lowpass filter structures (spring-mass-damper) have been demonstrated in recent years. However, the performance of these filter structures is typically limited by low damping (especially if operated in near-vacuum environments) and a lack of tunability after fabrication. Active filter topologies, such as piezoelectric, electrostrictive-polymer-film and SMA have also been investigated in recent years. Electrostatic actuators, however, are utilized in many micromachined silicon devices to generate mechanical motion. They offer a number of advantages, including low power, fast response time, compatibility with silicon micromachining, capacitive position measurement and relative simplicity of fabrication. This paper presents an approach for realizing active micromachined mechanical lowpass vibration isolation filters by integrating an electrostatic actuator with the micromachined passive filter structure to realize an active mechanical lowpass filter. Although the electrostatic actuator can be used to adjust the filter resonant frequency, the primary application is for increasing the damping to an acceptable level. The physical size of these active filters is suitable for use in or as packaging for sensitive electronic and MEMS devices, such as MEMS vibratory gyroscope chips.

  11. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  12. An experimental adaptive radar MTI filter

    NASA Astrophysics Data System (ADS)

    Gong, Y. H.; Cooling, J. E.

    The theoretical and practical features of a self-adaptive filter designed to remove clutter noise from a radar signal are described. The hardware employs an 8-bit microprocessor/fast hardware multiplier combination along with analog-digital and digital-analog interfaces. The software is implemented in assembly language. It is assumed that there is little overlap between the signal and the noise spectra and that the noise power is much greater than that of the signal. One of the most important factors to be considered when designing digital filters is quantization noise, which degrades the steady-state performance relative to that of the ideal (infinite word length) filter. The principal limitation of the filter described here is its low sampling rate (1.72 kHz), due mainly to the time spent on the multiplication routines. The methods discussed here, however, are general and can be applied to both traditional and more complex radar MTI systems, provided that the filter sampling frequency is increased. Dedicated VLSI signal processors are seen as holding considerable promise.
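    The adaptive-cancellation idea behind such a filter can be sketched with a generic least-mean-squares (LMS) canceller in floating point; this is not the paper's fixed-point 8-bit implementation, and the clutter reference channel and all signal parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
steps = np.arange(n)
# Strong narrowband "clutter" plus a weak target echo; as the abstract assumes,
# noise power greatly exceeds signal power and the spectra barely overlap.
clutter = np.sin(2 * np.pi * 0.05 * steps)
target = 0.05 * np.sin(2 * np.pi * 0.25 * steps)
primary = 3.0 * clutter + target
reference = clutter + 0.01 * rng.normal(size=n)  # assumed clutter-only channel

# LMS: adapt FIR weights so the filtered reference tracks the clutter in the
# primary channel; the prediction error is then the clutter-free output.
taps, mu = 8, 0.02
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    u = reference[i - taps:i][::-1]
    e = primary[i] - w @ u
    w += 2 * mu * e * u
    out[i] = e

ratio = np.var(out[n // 2:]) / np.var(primary[n // 2:])
print(ratio)   # residual power after convergence, relative to the input
```

    After convergence the residual carries only a small fraction of the input power, essentially the weak target plus adaptation noise.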

  13. Nonlinear diffusion filtering of the GOCE-based satellite-only MDT

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Mikula, Karol

    2015-04-01

    A combination of the GRACE/GOCE-based geoid models and mean sea surface models provided by satellite altimetry allows modelling of the satellite-only mean dynamic topography (MDT). Such MDT models are significantly affected by striping noise due to omission errors of the spherical harmonics approach. Appropriate filtering of this kind of noise is crucial for obtaining reliable results. In our study we use nonlinear diffusion filtering based on a numerical solution to the nonlinear diffusion equation on closed surfaces (e.g. on a sphere, an ellipsoid, or the discretized Earth's surface), namely the regularized surface Perona-Malik model. A key idea is that the diffusivity coefficient depends on an edge detector, which allows the noise to be reduced effectively while preserving important gradients in the filtered data. Numerical experiments present nonlinear filtering of the satellite-only MDT obtained as a combination of the DTU13 mean sea surface model and the GO_CONS_GCF_2_DIR_R5 geopotential model. They emphasize an adaptive smoothing effect as a principal advantage of the nonlinear diffusion filtering. Consequently, the derived velocities of the ocean geostrophic surface currents contain a stronger signal.
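    The edge-stopping behaviour of Perona-Malik diffusion can be sketched in one dimension; this explicit scheme on a synthetic noisy step is only an illustration of the principle, not the authors' regularized surface formulation:

```python
import numpy as np

def perona_malik_1d(u, iters=50, dt=0.2, K=0.5):
    """Explicit 1D Perona-Malik diffusion: du/dt = d/dx( g(|u_x|) u_x ),
    with diffusivity g(s) = 1 / (1 + (s/K)^2) that shuts down across edges."""
    u = u.astype(float).copy()
    for _ in range(iters):
        grad = np.diff(u)                     # forward differences u_x
        g = 1.0 / (1.0 + (grad / K) ** 2)     # edge detector: small g at edges
        u[1:-1] += dt * np.diff(g * grad)     # divergence of edge-stopped flux
    return u

rng = np.random.default_rng(3)
clean = np.where(np.arange(200) < 100, 0.0, 2.0)   # step standing in for a front
noisy = clean + 0.1 * rng.normal(size=200)
smoothed = perona_malik_1d(noisy)

step = smoothed[105] - smoothed[95]
print(np.std(noisy[:90]), np.std(smoothed[:90]), step)
```

    The noise on the flat sections is strongly damped while the step itself, where the diffusivity collapses, survives almost intact - the "adaptive smoothing effect" the abstract highlights.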

  14. Assessment of Supportive, Conflicted, and Controlling Dimensions of Family Functioning: A Principal Components Analysis of Family Environment Scale Subscales in a College Sample.

    ERIC Educational Resources Information Center

    Kronenberger, William G.; Thompson, Robert J., Jr.; Morrow, Catherine

    1997-01-01

    A principal components analysis of the Family Environment Scale (FES) (R. Moos and B. Moos, 1994) was performed using 113 undergraduates. Research supported 3 broad components encompassing the 10 FES subscales. These results supported previous research and the generalization of the FES to college samples. (SLD)

  15. Envelope analysis of rotating machine vibrations in variable speed conditions: A comprehensive treatment

    NASA Astrophysics Data System (ADS)

    Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.

    2017-02-01

    Nowadays, the vibration analysis of rotating machine signals is a well-established methodology, rooted in powerful tools offered, in particular, by the theory of cyclostationary (CS) processes. Among them, the squared envelope spectrum (SES) is probably the most popular for detecting random CS components, which are typical symptoms, for instance, of rolling element bearing faults. Recent research has shifted towards the extension of existing CS tools - originally devised for constant speed conditions - to the case of variable speed conditions. Many of these works combine the SES with computed order tracking after some preprocessing steps. The principal object of this paper is to organize these dispersed researches into a structured comprehensive framework. Three original contributions are provided. First, a model of rotating machine signals is introduced which sheds light on the various components to be expected in the SES. Second, a critical comparison is made of three sophisticated methods, namely the improved synchronous average, the cepstrum prewhitening, and the generalized synchronous average, used for suppressing the deterministic part. Third, a general envelope enhancement methodology which combines the latter two techniques with a time-domain filtering operation is revisited. All theoretical findings are experimentally validated on simulated and real-world vibration signals.
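    The SES itself is easy to sketch for a constant-speed toy signal: an amplitude-modulated resonance with invented fault and carrier frequencies, with no order tracking applied. The modulation frequency reappears as the dominant peak of the envelope spectrum:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (one-sided spectrum doubling, even length)."""
    n = len(x)
    h = np.zeros(n)
    h[0], h[n // 2] = 1, 1
    h[1:n // 2] = 2
    return np.fft.ifft(np.fft.fft(x) * h)

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
f_fault, f_carrier = 7.0, 180.0   # invented fault and resonance frequencies
# Amplitude-modulated resonance: the classic bearing-fault signature,
# here made deterministic for clarity.
x = (1 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.cos(2 * np.pi * f_carrier * t)

env2 = np.abs(analytic_signal(x)) ** 2        # squared envelope
env2 -= env2.mean()                           # drop the DC term
ses = np.abs(np.fft.rfft(env2))               # squared envelope spectrum
freqs = np.fft.rfftfreq(len(env2), 1 / fs)
peak = freqs[np.argmax(ses)]
print(peak)
```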

  16. Spatiotemporal deformation patterns of the Lake Urmia Causeway as characterized by multisensor InSAR analysis.

    PubMed

    Karimzadeh, Sadra; Matsuoka, Masashi; Ogushi, Fumitaka

    2018-04-03

    We present deformation patterns in the Lake Urmia Causeway (LUC) in NW Iran based on data collected from four SAR sensors in the form of interferometric synthetic aperture radar (InSAR) time series. Sixty-eight images from Envisat (2004-2008), ALOS-1 (2006-2010), TerraSAR-X (2012-2013) and Sentinel-1 (2015-2017) were acquired, and 227 filtered interferograms were generated using the small baseline subset (SBAS) technique. The rate of line-of-sight (LOS) subsidence of the LUC peaked at 90 mm/year between 2012 and 2013, mainly due to the loss of most of the water in Lake Urmia. Principal component analysis (PCA) was conducted on 200 randomly selected time series of the LUC, and the results are presented in the form of the three major components. The InSAR scores obtained from the PCA were used in a hydro-thermal model to investigate the dynamics of consolidation settlement along the LUC based on detrended water level and temperature data. The results can be used to establish a geodetic network around the LUC to identify more detailed deformation patterns and to help plan future efforts to reduce the possible costs of damage.

  17. Time series analysis of collective motions in proteins

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.

    2004-01-01

    The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of the damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm-1 range, which are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions for two successive sampling times, showing the mode's tendency to stay close to the minimum. All these four parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but takes place between energy barriers.

  18. Perforation of the IVC: rule rather than exception after longer indwelling times for the Günther Tulip and Celect retrievable filters.

    PubMed

    Durack, Jeremy C; Westphalen, Antonio C; Kekulawela, Stephanie; Bhanu, Shiv B; Avrin, David E; Gordon, Roy L; Kerlan, Robert K

    2012-04-01

    This study was designed to assess the incidence, magnitude, and impact upon retrievability of vena caval perforation by Günther Tulip and Celect conical inferior vena cava (IVC) filters on computed tomographic (CT) imaging. Günther Tulip and Celect IVC filters placed between July 2007 and May 2009 were identified from medical records. Of 272 IVC filters placed, 50 (23 Günther Tulip, 46%; 27 Celect, 54%) were retrospectively assessed on follow-up abdominal CT scans performed for reasons unrelated to the filter. Computed tomography scans were examined for evidence of filter perforation through the vena caval wall, tilt, or pericaval tissue injury. Procedure records were reviewed to determine whether IVC filter retrieval was attempted and successful. Perforation of at least one filter component through the IVC was observed in 43 of 50 (86%) filters on CT scans obtained between 1 and 880 days after filter placement. All filters imaged after 71 days showed some degree of vena caval perforation, often as a progressive process. Filter tilt was seen in 20 of 50 (40%) filters, and all tilted filters also demonstrated vena caval perforation. Transjugular removal was attempted in 12 of 50 (24%) filters and was successful in 11 of 12 (92%). Longer indwelling times usually result in vena caval perforation by retrievable Günther Tulip and Celect IVC filters. Although infrequently reported in the literature, clinical sequelae from IVC filter components breaching the vena cava can be significant. We advocate filter retrieval as early as clinically indicated and increased attention to the appearance of IVC filters on all follow-up imaging studies.

  19. Perforation of the IVC: Rule Rather Than Exception After Longer Indwelling Times for the Guenther Tulip and Celect Retrievable Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durack, Jeremy C., E-mail: jeremy.durack@ucsf.edu; Westphalen, Antonio C.; Kekulawela, Stephanie

    Purpose: This study was designed to assess the incidence, magnitude, and impact upon retrievability of vena caval perforation by Guenther Tulip and Celect conical inferior vena cava (IVC) filters on computed tomographic (CT) imaging. Methods: Guenther Tulip and Celect IVC filters placed between July 2007 and May 2009 were identified from medical records. Of 272 IVC filters placed, 50 (23 Guenther Tulip, 46%; 27 Celect, 54%) were retrospectively assessed on follow-up abdominal CT scans performed for reasons unrelated to the filter. Computed tomography scans were examined for evidence of filter perforation through the vena caval wall, tilt, or pericaval tissue injury. Procedure records were reviewed to determine whether IVC filter retrieval was attempted and successful. Results: Perforation of at least one filter component through the IVC was observed in 43 of 50 (86%) filters on CT scans obtained between 1 and 880 days after filter placement. All filters imaged after 71 days showed some degree of vena caval perforation, often as a progressive process. Filter tilt was seen in 20 of 50 (40%) filters, and all tilted filters also demonstrated vena caval perforation. Transjugular removal was attempted in 12 of 50 (24%) filters and was successful in 11 of 12 (92%). Conclusions: Longer indwelling times usually result in vena caval perforation by retrievable Guenther Tulip and Celect IVC filters. Although infrequently reported in the literature, clinical sequelae from IVC filter components breaching the vena cava can be significant. We advocate filter retrieval as early as clinically indicated and increased attention to the appearance of IVC filters on all follow-up imaging studies.

  20. Laser system using regenerative amplifier

    DOEpatents

    Emmett, J.L.

    1980-03-04

    High energy laser system is disclosed using a regenerative amplifier, which relaxes all constraints on laser components other than the intrinsic damage level of matter, so as to enable use of available laser system components. This can be accomplished by use of segmented components, spatial filters, at least one amplifier using solid state or gaseous media, and separated reflector members providing a long round trip time through the regenerative cavity, thereby allowing slower switching and adequate time to clear the spatial filters, etc. The laser system simplifies component requirements and reduces component cost while providing high energy output. 10 figs.

  1. High Efficiency Particulate Air (HEPA) Filter Generation, Characterization, and Disposal Experiences at the Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffey, D. E.

    2002-02-28

    High Efficiency Particulate Air filtration is an essential component of the containment and ventilation systems supporting the research and development activities at the Oak Ridge National Laboratory. High Efficiency Particulate Air filters range in size from 7.6 cm (3 inch) by 10.2 cm (4 inch) cylindrical filters to filter array assemblies up to 2.1 m (7 feet) high by 1.5 m (5 feet) wide. Spent filters are grouped by the contaminants trapped in the filter media and become one of the components in the respective waste stream. Waste minimization and pollution prevention efforts are applied for both radiological and non-radiological applications. Radiological applications include laboratory hoods, glove boxes, and hot cells. High Efficiency Particulate Air filters also are generated from intake or pre-filtering applications, decontamination activities, and asbestos abatement applications. The disposal avenues include sanitary/industrial waste; Resource Conservation and Recovery Act and Toxic Substance Control Act regulated waste; solid low-level waste; contact handled transuranic; and remote handled transuranic waste. This paper discusses characterization and operational experiences associated with the disposal of the spent filters across multiple applications.

  2. Burst and Principal Components Analyses of MEA Data Separates Chemicals by Class

    EPA Science Inventory

    Microelectrode arrays (MEAs) detect drug and chemical induced changes in action potential "spikes" in neuronal networks and can be used to screen chemicals for neurotoxicity. Analytical "fingerprinting," using Principal Components Analysis (PCA) on spike trains recorded from prim...

  3. EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES

    EPA Science Inventory

    An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...

  4. The Use of Percolating Filters in Teaching Ecology.

    ERIC Educational Resources Information Center

    Gray, N. F.

    1982-01-01

    Using percolating filters (components of sewage treatment process) reduces problems of organization, avoids damage to habitats, and provides a local study site for field work or rapid collection of biological material throughout the year. Component organisms are easily identified and the habitat can be studied as a simple or complex system.…

  5. [Classification of Children with Attention-Deficit/Hyperactivity Disorder and Typically Developing Children Based on Electroencephalogram Principal Component Analysis and k-Nearest Neighbor].

    PubMed

    Yang, Jiaojiao; Guo, Qian; Li, Wenjie; Wang, Suhong; Zou, Ling

    2016-04-01

    This paper aims to assist the individual clinical diagnosis of children with attention-deficit/hyperactivity disorder using an electroencephalogram signal detection method. Firstly, in our experiments, we obtained and studied the electroencephalogram signals from fourteen attention-deficit/hyperactivity disorder children and sixteen typically developing children during the classic interference control task of Simon-spatial Stroop, and we completed electroencephalogram data preprocessing including filtering, segmentation, removal of artifacts and so on. Secondly, we selected the subset of electroencephalogram electrodes using the principal component analysis (PCA) method, and we collected the common channels among the optimal electrodes whose occurrence rates were more than 90% for each kind of stimulation. We then extracted the latency (200~450 ms) mean amplitude features of the common electrodes. Finally, we used the k-nearest neighbor (KNN) classifier based on Euclidean distance and the support vector machine (SVM) classifier based on a radial basis kernel function for classification. In the experiment, for the same kind of interference control task, the attention-deficit/hyperactivity disorder children showed lower correct response rates and longer reaction times. The N2 emerged in the prefrontal cortex while the P2 presented in the inferior parietal area when all kinds of stimuli were presented. Meanwhile, the children with attention-deficit/hyperactivity disorder exhibited markedly reduced N2 and P2 amplitudes compared to typically developing children. KNN resulted in better classification accuracy than the SVM classifier, and the best classification rate was 89.29% in the StI task. The results showed that the electroencephalogram signals differed in the prefrontal cortex and inferior parietal cortex between attention-deficit/hyperactivity disorder and typically developing children during the interference control task, which provides a scientific basis for the clinical diagnosis of attention-deficit/hyperactivity disorder in individuals.
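    The PCA-plus-KNN pipeline described above can be sketched on synthetic features; the two groups, feature counts, and class separation below are invented stand-ins for the ERP amplitude features, not the study's EEG data:

```python
import numpy as np

rng = np.random.default_rng(4)
# Invented stand-in for ERP mean-amplitude features: two groups of subjects
# whose means differ on a few "electrodes".
n_per, n_feat = 30, 20
group_a = rng.normal(size=(n_per, n_feat))
group_b = rng.normal(size=(n_per, n_feat))
group_b[:, :3] += 2.0                      # groups separate on three features
X = np.vstack([group_a, group_b])
y = np.array([0] * n_per + [1] * n_per)

# PCA: project the centered data onto its three leading components.
Xc = X - X.mean(axis=0)
Vt = np.linalg.svd(Xc, full_matrices=False)[2]
Z = Xc @ Vt[:3].T

def knn_predict(train_x, train_y, test_x, k=5):
    """k-nearest-neighbour majority vote with Euclidean distance."""
    d = np.linalg.norm(train_x[:, None] - test_x[None, :], axis=2)
    nearest = np.argsort(d, axis=0)[:k]    # k closest training rows per test row
    return (train_y[nearest].mean(axis=0) > 0.5).astype(int)

# Leave-one-out accuracy over the 60 synthetic subjects.
correct = 0
for i in range(len(y)):
    mask = np.ones(len(y), bool)
    mask[i] = False
    correct += int(knn_predict(Z[mask], y[mask], Z[i:i + 1])[0] == y[i])
acc = correct / len(y)
print(acc)
```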

  6. Principal components analysis in clinical studies.

    PubMed

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity in regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.

  7. Complexity of free energy landscapes of peptides revealed by nonlinear principal component analysis.

    PubMed

    Nguyen, Phuong H

    2006-12-01

    Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with the standard linear principal component analysis (PCA). In particular, we compared the two methods according to (1) the ability to reduce dimensionality and (2) the efficiency of representing peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA does. For example, to obtain a similar error in representing the original beta-hairpin data in a low-dimensional space, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as a function of the first two principal components obtained from PCA, we obtained relatively well-structured free energy landscapes. In contrast, the free energy landscapes of NLPCA are much more complicated, exhibiting many states which are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study also showed that many states in the PCA maps are mixtures of several peptide conformations, while those of the NLPCA maps are purer. This finding suggests that NLPCA should be used to capture the essential features of the systems. (c) 2006 Wiley-Liss, Inc.

  8. Spectroscopic and Chemometric Analysis of Binary and Ternary Edible Oil Mixtures: Qualitative and Quantitative Study.

    PubMed

    Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica

    2016-04-19

    The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm(-1)). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R(2) > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.

  9. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    NASA Astrophysics Data System (ADS)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technique; however, in some special cases PLSR produces considerable errors. To solve this problem, the traditional principal component regression (PCR) method was improved in this paper by using the principle of PLSR. The experimental results show that for some special experimental datasets the improved PCR method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data's information, are extracted using the principle of PLSR. Secondly, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral dataset is processed by both PLSR and the improved PCR, and the two results are analyzed and compared: the improved PCR and PLSR are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in ultraviolet spectral analysis of water, but for data near the detection limit the improved PCR gives better results than PLSR.
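    A minimal PCR sketch on synthetic spectra follows; the dataset, absorption band shape, and component count are assumptions for illustration, and the paper's PLSR-based component selection is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic "UV spectra": 60 samples x 120 wavelengths, with absorbance driven
# by one analyte concentration plus a correlated baseline and white noise.
n, p = 60, 120
conc = rng.uniform(0, 1, n)
band = np.exp(-0.5 * ((np.arange(p) - 50) / 8.0) ** 2)  # analyte absorption band
baseline = rng.normal(size=(n, 1)) * np.linspace(1.0, 0.5, p)
X = np.outer(conc, band) + 0.3 * baseline + 0.01 * rng.normal(size=(n, p))

# PCR step 1: PCA of the centered spectra; keep the first three score vectors.
Xc = X - X.mean(axis=0)
Vt = np.linalg.svd(Xc, full_matrices=False)[2]
T = Xc @ Vt[:3].T

# PCR step 2: ordinary least squares of concentration on the scores.
A = np.column_stack([np.ones(n), T])
coef = np.linalg.lstsq(A, conc, rcond=None)[0]
pred = A @ coef

r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
print(round(r2, 3))
```

    Because the retained components span both the baseline and the analyte band, the regression recovers the concentrations almost perfectly on this synthetic set.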

  10. Short communication: Discrimination between retail bovine milks with different fat contents using chemometrics and fatty acid profiling.

    PubMed

    Vargas-Bello-Pérez, Einar; Toro-Mujica, Paula; Enriquez-Hidalgo, Daniel; Fellenberg, María Angélica; Gómez-Cortés, Pilar

    2017-06-01

    We used a multivariate chemometric approach to differentiate or associate retail bovine milks with different fat contents and non-dairy beverages, using fatty acid profiles and statistical analysis. We collected samples of bovine milk (whole, semi-skim, and skim; n = 62) and non-dairy beverages (n = 27), and we analyzed them using gas-liquid chromatography. Principal component analysis of the fatty acid data yielded 3 significant principal components, which accounted for 72% of the total variance in the data set. Principal component 1 was related to saturated fatty acids (C4:0, C6:0, C8:0, C12:0, C14:0, C17:0, and C18:0) and monounsaturated fatty acids (C14:1 cis-9, C16:1 cis-9, C17:1 cis-9, and C18:1 trans-11); whole milk samples were clearly differentiated from the rest using this principal component. Principal component 2 differentiated semi-skim milk samples by n-3 fatty acid content (C20:3n-3, C20:5n-3, and C22:6n-3). Principal component 3 was related to C18:2 trans-9,trans-12 and C20:4n-6, and its lower scores were observed in skim milk and non-dairy beverages. A cluster analysis yielded 3 groups: group 1 consisted of only whole milk samples, group 2 was represented mainly by semi-skim milks, and group 3 included skim milk and non-dairy beverages. Overall, the present study showed that a multivariate chemometric approach is a useful tool for differentiating or associating retail bovine milks and non-dairy beverages using their fatty acid profile. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Use of multivariate statistics to identify unreliable data obtained using CASA.

    PubMed

    Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón

    2013-06-01

    In order to identify unreliable data in a dataset of motility parameters obtained from a pilot study, acquired by a veterinarian with experience in boar semen handling but without experience in the operation of a computer-assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted and then incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml and analyzed in a CASA system. After standardization of the data, Chernoff faces were drawn for each measurement, and principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of the principal components for each individual measurement of semen samples were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects of treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained the evaluations of the first two samples in each treatment, each from a different boar. With the exception of one individual measurement, all other measurements in cluster 1 were the same as those observed in abnormal Chernoff faces. The unreliable data in cluster 1 are probably related to the operator's inexperience with a CASA system. These findings could be used to objectively evaluate the skill level of an operator of a CASA system, which may be particularly useful in the quality control of semen analysis using CASA systems.

  12. [Spatial distribution characteristics of the physical and chemical properties of water in the Kunes River after the supply of snowmelt during spring].

    PubMed

    Liu, Xiang; Guo, Ling-Peng; Zhang, Fei-Yun; Ma, Jie; Mu, Shu-Yong; Zhao, Xin; Li, Lan-Hai

    2015-02-01

    Eight physical and chemical indicators related to water quality were monitored at nineteen sampling sites along the Kunes River at the end of the snowmelt season in spring. To investigate the spatial distribution characteristics of the water's physical and chemical properties, cluster analysis (CA), discriminant analysis (DA) and principal component analysis (PCA) were employed. Cluster analysis showed that the Kunes River could be divided into three reaches according to the similarities of water physical and chemical properties among sampling sites, representing the upstream, midstream and downstream of the river, respectively. Discriminant analysis demonstrated that the reliability of this classification was high, and that DO, Cl- and BOD5 were the significant indexes leading to it. Three principal components were extracted by principal component analysis, with a cumulative variance contribution of 86.90%. The principal component analysis also indicated that the water's physical and chemical properties were mostly affected by EC, ORP, NO3(-) -N, NH4(+) -N, Cl- and BOD5. The sorted principal component scores at each sampling site showed that water quality was mainly influenced by DO upstream, by pH midstream, and by the remaining indicators downstream. The order of the comprehensive principal component scores revealed that water quality degraded from upstream to downstream, i.e., the upstream had the best water quality, followed by the midstream, while the downstream water quality was the worst. This result corresponded exactly to the three reaches classified using cluster analysis. Anthropogenic activity and the accumulation of pollutants along the river were probably the main reasons for this spatial difference.

  13. Evidence for age-associated disinhibition of the wake drive provided by scoring principal components of the resting EEG spectrum in sleep-provoking conditions.

    PubMed

    Putilov, Arcady A; Donskaya, Olga G

    2016-01-01

    Age-associated changes in different bandwidths of the human electroencephalographic (EEG) spectrum are well documented, but their functional significance is poorly understood. This spectrum seems to represent the summation of the simultaneous influences of several sleep-wake regulatory processes. Scoring its orthogonal (uncorrelated) principal components can help to separate the brain signatures of these processes. In particular, opposite age-associated changes have been documented for scores on the two largest (1st and 2nd) principal components of the sleep EEG spectrum. A decrease of the first score and an increase of the second score can reflect, respectively, the weakening of the sleep drive and the disinhibition of the opposing wake drive with age. In order to support the suggestion of age-associated disinhibition of the wake drive from the antagonistic influence of the sleep drive, we analyzed principal component scores of the resting EEG spectra obtained in sleep deprivation experiments with 81 healthy young adults aged between 19 and 26 years and 40 healthy older adults aged between 45 and 66 years. On the second day of the sleep deprivation experiments, frontal scores on the 1st principal component of the EEG spectrum demonstrated an age-associated reduction of the response to eyes-closed relaxation. Scores on the 2nd principal component were either initially increased during wakefulness or less responsive to such sleep-provoking conditions (frontal and occipital scores, respectively). These results are in line with the suggestion of disinhibition of the wake drive with age, and they provide an explanation of why older adults are less vulnerable to sleep deprivation than young adults.

  14. Hyperspectral Microwave Atmospheric Sounder (HyMas) - New Capability in the CoSMIR-CoSSIR Scanhead

    NASA Technical Reports Server (NTRS)

    Hilliard, L. M.; Racette, P. E.; Blackwell, W.; Galbraith, C.; Thompson, E.

    2015-01-01

    Lincoln Laboratory and NASA's Goddard Space Flight Center have teamed to re-use an existing instrument platform, the CoSMIR-CoSSIR system for atmospheric sounding, to develop a new capability in hyperspectral filtering, data collection, and display. The volume of the scanhead accommodated an intermediate frequency processor (IFP), which provides the filtering and digitization of the raw data, and the interoperable remote component (IRC), adapted to CoSMIR, CoSSIR, and HyMAS, which stores and archives the data with time-tagged calibration and navigation data. The first element of the work is the demonstration of a hyperspectral microwave receiver subsystem that was recently shown, using a comprehensive simulation study, to yield performance that substantially exceeds the current state of the art. Hyperspectral microwave sounders with 100 channels offer temperature and humidity sounding improvements similar to those obtained when infrared sensors became hyperspectral, but with the relative insensitivity to clouds that characterizes microwave sensors. Hyperspectral microwave operation is achieved using independent RF antenna-receiver arrays that sample the same area/volume of the Earth's surface/atmosphere at slightly different frequencies and therefore synthesize a set of dense, finely spaced vertical weighting functions. The second, enabling element of the proposal is the development of a compact 52-channel intermediate frequency processor module. A principal challenge in the development of a hyperspectral microwave system is the size of the IF filter bank required for channelization. Large bandwidths are processed simultaneously, complicating the use of digital back-ends with their associated high complexities, costs, and power requirements. Our approach involves passive filters implemented using low-temperature co-fired ceramic (LTCC) technology to achieve an ultra-compact module that can be easily integrated with existing RF front-end technology.
This IF processor is universally applicable to other microwave sensing missions requiring compact IF spectrometry. The data include 52 operational channels with low IF module volume (100 cm3) and mass (300 g) and linearity better than 0.3 over a 330 K dynamic range.

  15. The impact of seasonal signals on spatio-temporal filtering

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2016-04-01

    The existence of Common Mode Errors (CMEs) in permanent GNSS networks contributes to spatial and temporal correlation in residual time series. Time series from permanent GNSS stations less than 2 000 km apart are similarly influenced by CME sources such as mismodelling (of Earth Orientation Parameters - EOP, satellite orbits or antenna phase center variations) during the reference frame realization, large-scale atmospheric and hydrospheric effects, and small-scale crust deformations. Residuals obtained by detrending and deseasonalising topocentric GNSS time series, arranged epoch-by-epoch, form an observation matrix independently for each component (North, East, Up). The CME is treated as the internal structure of the data. Assuming a uniform temporal function across the network, it is possible to filter the CME out using the PCA (Principal Component Analysis) approach. Some of the above CME sources may appear over a wide range of frequencies in GPS residual time series. To determine the impact of seasonal signal modelling on the spatial correlation in the network, and consequently on the results of CME filtering, we chose two modelling approaches. The first, commonly presented by previous authors, models only annual and semi-annual oscillations with Least-Squares Estimation (LSE). In the second, the residuals result from modelling a deterministic part that includes fortnightly periods plus harmonics up to the 9th of the Chandlerian, tropical and draconitic oscillations. Correlation coefficients for the residuals were determined, along with the KMO (Kaiser-Meyer-Olkin) statistic and Bartlett's test of sphericity. For this research we used time series expressed in ITRF2008 provided by JPL (Jet Propulsion Laboratory). GPS processing was performed with the GIPSY-OASIS software in PPP (Precise Point Positioning) mode.
To form a GPS station network that meets the demand of a uniform spatial response to the CME, we chose 18 stations located in Central Europe. The created network extends up to 1500 kilometers. The KMO statistic indicates whether a component analysis may be useful for a chosen data set. We obtained KMO values of 0.87 and 0.62 for the residuals of the Up component after the first and second approaches, respectively, which means that both residual sets share common errors. Bartlett's test of sphericity confirmed that in both cases the residuals are correlated. Other important results are the eigenvalues, expressed as the percentage of total variance explained by the first few components in PCA. For the North, East and Up components we obtained 68%, 75%, 65% and 47%, 54%, 52%, respectively, after the first and second approaches were applied. The results of CME filtering using the PCA approach on both residual sets directly influence the velocity uncertainty of the permanent stations. In our case spatial filtering reduced the velocity uncertainty from 0.5 to 0.8 mm for the horizontal components and from 0.6 to 0.9 mm on average for the Up component when annual and semi-annual signals were assumed. Nevertheless, when the second approach to modelling the deterministic part was used, a deterioration of the velocity uncertainty was noticed only for the Up component, probably due to the much higher autocorrelation in the time series compared to the horizontal components.

  16. Removal of the blue component of light significantly decreases retinal damage after high intensity exposure.

    PubMed

    Vicente-Tejedor, Javier; Marchena, Miguel; Ramírez, Laura; García-Ayuso, Diego; Gómez-Vicente, Violeta; Sánchez-Ramos, Celia; de la Villa, Pedro; Germain, Francisco

    2018-01-01

    Light causes damage to the retina (phototoxicity) and decreases photoreceptor responses to light. The most harmful component of visible light is the blue waveband (400-500 nm). Different filters have been tested, but so far all of them transmit a large proportion (70%) of this waveband. The aim of this work was to prove that a filter removing 94% of the blue component can significantly protect the function and morphology of the retina. Three experimental groups were designed: the first group was unexposed to light, the second was exposed, and the third was exposed and protected by a blue-blocking filter. Light damage was induced in young albino mice (p30) by exposing them to white light of high intensity (5,000 lux) continuously for 7 days. Short-wavelength light filters were used for protection; our filter removed 94% of the blue component from the light source. Electroretinographical recordings were performed before and after light damage. Changes in retinal structure were studied using immunohistochemistry and TUNEL labeling, and cells in the outer nuclear layer were counted and compared among the three groups. Functional visual responses were significantly better conserved in protected animals (with the blue-blocking filter) than in unprotected animals. Retinal structure was also better kept, and photoreceptor survival was greater, in protected animals; these differences were significant in central areas of the retina. Still, functional and morphological responses were significantly lower in the protected than in the unexposed group. In conclusion, this blue-blocking filter significantly decreases photoreceptor damage after exposure to high-intensity light. In everyday life our eyes are exposed for very long periods to high levels of blue light (screens, artificial LED light, neons…). The potential damage caused by blue light can thus be mitigated.

  17. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    NASA Astrophysics Data System (ADS)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity, understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods that treat the environment holistically. Principal component analysis may be considered one such method, as it makes it possible to distinguish the main factors determining landscape diversity on the one hand, and to discover the regularities shaping the relationships between the various elements of the environment under study on the other. The procedure adopted to assess ecodiversity with principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal component analysis and the production of factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity using principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25000 and maps of forest habitats. Nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), length of watercourses (km), area of water reservoirs (m2), total forest area (ha), coniferous forest habitat area (ha), deciduous forest habitat area (ha), and alder habitat area (ha). The values of the individual factors were analysed for 358 grid cells of 1 km2.
Based on the principal component analysis, four major factors affecting commune ecodiversity were distinguished: a hypsometric component (PC1), a deciduous forest habitats component (PC2), a river valleys and alder habitats component (PC3), and a lakes component (PC4). The distinguished factors characterise the natural qualities of the postglacial area and reflect well the role of the four most important groups of environment components in shaping the ecodiversity of the area under study. The map of ecodiversity of Debnica Kaszubska commune was created on the basis of the first four principal component scores, and five classes of diversity were isolated: very low, low, average, high and very high. As a result of the assessment, five commune regions of very high ecodiversity were identified. These regions are also very attractive for tourists and valuable in terms of their rich nature, which includes protected areas such as Slupia Valley Landscape Park. The suggested method of ecodiversity assessment using principal component analysis may constitute an alternative methodological proposition to the research methods used so far. Literature: Jedicke E., 2001. Biodiversität, Geodiversität, Ökodiversität. Kriterien zur Analyse der Landschaftsstruktur - ein konzeptioneller Diskussionsbeitrag. Naturschutz und Landschaftsplanung, 33(2/3), 59-68.

  18. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for the classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...

  19. Rosacea assessment by erythema index and principal component analysis segmentation maps

    NASA Astrophysics Data System (ADS)

    Kuzmina, Ilona; Rubins, Uldis; Saknite, Inga; Spigulis, Janis

    2017-12-01

    RGB images of rosacea were analyzed using segmentation maps of principal component analysis (PCA) and the erythema index (EI). Areas of segmented clusters were compared to Clinician's Erythema Assessment (CEA) values given by two dermatologists. The results show that visible blood vessels are segmented more precisely on maps of the erythema index and the third principal component (PC3). In many cases, the distributions of clusters on EI and PC3 maps are very similar. Mean cluster areas on these maps show a decrease in the area of blood vessels and erythema and an increase in lighter skin area after therapy for patients with a diagnosis of CEA = 2 on the first visit and CEA = 1 on the second visit. This study shows that EI and PC3 maps are more useful than maps of the first (PC1) and second (PC2) principal components for indicating vascular structures and erythema on the skin of rosacea patients and for therapy monitoring.
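    As an illustration of the erythema-index side of such methods, the sketch below computes a per-pixel EI map from synthetic RGB data and segments it into clusters by simple thresholding. The formula EI = log10(R/G) is one common RGB-based proxy from the skin-imaging literature, not necessarily the paper's exact definition, and the tercile segmentation stands in for the study's PCA-based segmentation maps.

```python
import numpy as np

# Illustrative erythema index (EI) map and threshold segmentation.
# All data and the EI proxy are assumptions, not from the study.
rng = np.random.default_rng(4)
h, w = 64, 64
R = rng.uniform(0.4, 0.9, size=(h, w))        # synthetic red channel
G = rng.uniform(0.3, 0.8, size=(h, w))        # synthetic green channel

# Per-pixel erythema index; redder skin (higher R relative to G) -> higher EI.
ei = np.log10(R / (G + 1e-9))

# Segment into three clusters by EI terciles (stand-in for the PCA / EI
# segmentation maps compared in the study).
lo, hi = np.quantile(ei, [1 / 3, 2 / 3])
labels = np.digitize(ei, [lo, hi])            # cluster labels 0, 1, 2
areas = np.bincount(labels.ravel(), minlength=3)   # cluster areas in pixels
```

    Cluster areas computed this way are the quantity the study compares across visits and against CEA grades.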

  20. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. Flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines; levelling is therefore applied to the flight-line difference data rather than directly to the original AEM data. Pseudo tie lines are distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. The levelling errors of the original AEM data are then obtained through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. Its effectiveness is demonstrated by the levelling results on survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
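    The pipeline described above — difference adjacent flight lines so the smooth geology cancels, extract the correlated levelling error by low-order principal component reconstruction, then invert the differencing — can be sketched on synthetic data as follows. The line geometry, the error model (a per-line DC shift) and the noise levels are illustrative assumptions, and the spatial interpolation step is omitted.

```python
import numpy as np

# Sketch of flight-line-difference levelling. A shared smooth "geology"
# signal cancels in adjacent-line differences, leaving the per-line
# levelling error plus noise; a rank-1 (low-order) PCA reconstruction
# isolates that error, and a cumulative sum inverts the differencing.
rng = np.random.default_rng(2)
n_lines, n_samples = 30, 200

x = np.linspace(0, 1, n_samples)
geology = np.sin(2 * np.pi * 3 * x)                   # shared along-line signal
level_err = rng.normal(scale=2.0, size=(n_lines, 1))  # per-line DC shift (assumed model)
data = geology + level_err + 0.1 * rng.normal(size=(n_lines, n_samples))

# Differencing adjacent flight lines cancels the geology.
diff = np.diff(data, axis=0)

# Low-order principal component reconstruction of the difference data:
# a rank-1 SVD approximation captures the line-to-line error structure.
U, s, Vt = np.linalg.svd(diff, full_matrices=False)
diff_err = np.outer(U[:, 0] * s[0], Vt[0])

# Inverse differencing (cumulative sum) recovers per-line errors up to a
# constant offset on the first line; subtract to level the data.
est_err = np.concatenate([[0.0], np.cumsum(diff_err.mean(axis=1))])
levelled = data - est_err[:, None]
```

    After levelling, the line-to-line scatter of the per-line means collapses, while the along-line geology signal is untouched.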

  1. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Simulations show that the proposed method is able to discover the dominating modes of variation and to reconstruct the underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  2. [Content of mineral elements of Gastrodia elata by principal components analysis].

    PubMed

    Li, Jin-ling; Zhao, Zhi; Liu, Hong-chang; Luo, Chun-li; Huang, Ming-jin; Luo, Fu-lai; Wang, Hua-lei

    2015-03-01

    To study the content of mineral elements and the principal components in Gastrodia elata, mineral elements were determined by ICP and the data were analyzed with SPSS. K had the highest content, with an average of 15.31 g x kg(-1); N was second, with an average content of 8.99 g x kg(-1). The coefficients of variation of K and N were small, while that of Mn was the largest, at 51.39%. Highly significant positive correlations were found among N, P and K. Three principal components were selected by principal component analysis to evaluate the quality of G. elata. P, B, N, K, Cu, Mn, Fe and Mg were the characteristic elements of G. elata. The contents of K and N were higher and relatively stable, while the variation in Mn content was the largest. From the perspective of mineral elements, the quality of G. elata from Guizhou and Yunnan was better.

  3. Visualizing Hyolaryngeal Mechanics in Swallowing Using Dynamic MRI

    PubMed Central

    Pearson, William G.; Zumwalt, Ann C.

    2013-01-01

    Introduction Coordinates of anatomical landmarks are captured using dynamic MRI to explore whether a proposed two-sling mechanism underlies hyolaryngeal elevation in pharyngeal swallowing. A principal components analysis (PCA) is applied to the coordinates to determine the covariant function of the proposed mechanism. Methods Dynamic MRI (dMRI) data were acquired from eleven healthy subjects during a repeated swallows task. Coordinates mapping the proposed mechanism were collected from each dynamic (frame) of a dynamic MRI swallowing series of a randomly selected subject in order to demonstrate shape changes in a single subject. Coordinates representing minimum and maximum hyolaryngeal elevation of all 11 subjects were also mapped to demonstrate shape changes of the system among all subjects. MorphoJ software was used to perform PCA and determine vectors of shape change (eigenvectors) for elements of the two-sling mechanism of hyolaryngeal elevation. Results For both single-subject and group PCAs, hyolaryngeal elevation accounted for the first principal component of variation. For the single-subject PCA, the first principal component accounted for 81.5% of the variance. For the between-subjects PCA, the first principal component accounted for 58.5% of the variance. Eigenvectors and shape changes associated with this first principal component are reported. Discussion The eigenvectors indicate that two muscle slings and associated skeletal elements function as components of a covariant mechanism to elevate the hyolaryngeal complex. Morphological analysis is useful to model shape changes in the two-sling mechanism of hyolaryngeal elevation. PMID:25090608

  4. Accelerometer Data Analysis and Presentation Techniques

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Hrovat, Kenneth; McPherson, Kevin; Moskowitz, Milton E.; Reckart, Timothy

    1997-01-01

    The NASA Lewis Research Center's Principal Investigator Microgravity Services project analyzes Orbital Acceleration Research Experiment and Space Acceleration Measurement System data for principal investigators of microgravity experiments. Principal investigators need a thorough understanding of data analysis techniques so that they can request appropriate analyses to best interpret accelerometer data. Accelerometer data sampling and filtering is introduced along with the related topics of resolution and aliasing. Specific information about the Orbital Acceleration Research Experiment and Space Acceleration Measurement System data sampling and filtering is given. Time domain data analysis techniques are discussed and example environment interpretations are made using plots of acceleration versus time, interval average acceleration versus time, interval root-mean-square acceleration versus time, trimmean acceleration versus time, quasi-steady three dimensional histograms, and prediction of quasi-steady levels at different locations. An introduction to Fourier transform theory and windowing is provided along with specific analysis techniques and data interpretations. The frequency domain analyses discussed are power spectral density versus frequency, cumulative root-mean-square acceleration versus frequency, root-mean-square acceleration versus frequency, one-third octave band root-mean-square acceleration versus frequency, and power spectral density versus frequency versus time (spectrogram). Instructions for accessing NASA Lewis Research Center accelerometer data and related information using the internet are provided.
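    Two of the time-domain summaries named above, interval average and interval root-mean-square (RMS) acceleration, can be sketched as follows. The sampling rate, interval length and test signal are assumptions chosen for illustration, not OARE or SAMS parameters.

```python
import numpy as np

# Interval average and interval RMS acceleration versus time: split the
# acceleration record into fixed-length intervals and summarize each one.
fs = 125.0                        # samples per second (assumed)
interval = 8.0                    # seconds per interval (assumed)

# Synthetic acceleration record: a slow oscillation plus sensor noise.
t = np.arange(0, 64, 1 / fs)
accel = (1e-4 * np.sin(2 * np.pi * 0.5 * t)
         + 1e-5 * np.random.default_rng(3).normal(size=t.size))

samples_per_interval = int(fs * interval)
n_intervals = accel.size // samples_per_interval
chunks = accel[: n_intervals * samples_per_interval].reshape(n_intervals, -1)

interval_average = chunks.mean(axis=1)              # signed mean per interval
interval_rms = np.sqrt((chunks ** 2).mean(axis=1))  # RMS magnitude per interval
```

    The interval average preserves the sign of quasi-steady accelerations, while the interval RMS captures the magnitude of the oscillatory environment; RMS is always at least the absolute value of the mean over the same interval.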

  5. Obesity, metabolic syndrome, impaired fasting glucose, and microvascular dysfunction: a principal component analysis approach.

    PubMed

    Panazzolo, Diogo G; Sicuro, Fernando L; Clapauch, Ruth; Maranhão, Priscila A; Bouskela, Eliete; Kraemer-Aguiar, Luiz G

    2012-11-13

    We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Data from 189 female subjects (34.0 ± 15.5 years, 30.5 ± 7.1 kg/m2), who were non-smokers, non-regular drug users, without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool because it highlights common variation between variables allowing inferences about possible biological meaning of associations between them, without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and peak after 1 min of arterial occlusion (RBCV(max)), and the time taken to reach RBCV(max) (TRBCV(max)). A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements had a similar variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCV(max) varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCV(max), but in the opposite way. 
Principal component 3 was associated only with microvascular variables in the same way (functional capillary density, RBCV and RBCV(max)). Fasting plasma glucose appeared to be related to principal component 4 and did not show any association with microvascular reactivity. In non-diabetic female subjects, a multivariate scenario of associations between classic clinical variables strictly related to obesity and metabolic syndrome suggests a significant relationship between these diseases and microvascular reactivity.

  6. The factorial reliability of the Middlesex Hospital Questionnaire in normal subjects.

    PubMed

    Bagley, C

    1980-03-01

    The internal reliability of the Middlesex Hospital Questionnaire and its component subscales has been checked by means of principal components analyses of data on 256 normal subjects. The subscales (with the possible exception of Hysteria) were found to contribute to the general underlying factor of psychoneurosis. In general, the principal components analysis points to the reliability of the subscales, despite some item overlap.

  7. The Derivation of Job Compensation Index Values from the Position Analysis Questionnaire (PAQ). Report No. 6.

    ERIC Educational Resources Information Center

    McCormick, Ernest J.; And Others

    The study deals with the job component method of establishing compensation rates. The basic job analysis questionnaire used in the study was the Position Analysis Questionnaire (PAQ) (Form B). On the basis of a principal components analysis of PAQ data for a large sample (2,688) of jobs, a number of principal components (job dimensions) were…

  8. Perceptions of the Principal Evaluation Process and Performance Criteria: A Qualitative Study of the Challenge of Principal Evaluation

    ERIC Educational Resources Information Center

    Faginski-Stark, Erica; Casavant, Christopher; Collins, William; McCandless, Jason; Tencza, Marilyn

    2012-01-01

    Recent federal and state mandates have tasked school systems to move beyond principal evaluation as a bureaucratic function and to re-imagine it as a critical component to improve principal performance and compel school renewal. This qualitative study investigated the district leaders' and principals' perceptions of the performance evaluation…

  9. APPROXIMATION AND INVERSION OF A COMPLEX METEOROLOGICAL SYSTEM VIA LOCAL LINEAR FILTERS. (R825381)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  10. 2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.

    PubMed

    Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen

    2017-09-19

    A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides about their functions or potentials to become useful drugs. One level is for dealing with the physicochemical properties of drug molecules, while the other level is for dealing with their structural fragments. The predictor has the self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for timely providing various useful clues during the process of drug development.

  11. An introduction of component fusion extend Kalman filtering method

    NASA Astrophysics Data System (ADS)

    Geng, Yue; Lei, Xusheng

    2018-05-01

    In this paper, the Component Fusion Extended Kalman Filtering (CFEKF) algorithm is proposed, assuming that each component of the error propagation is independent and Gaussian distributed. The CFEKF is obtained through maximum likelihood estimation of the propagation error, which adaptively adjusts the state transition matrix and the measurement matrix. By minimizing the linearization error, CFEKF can effectively improve the estimation accuracy of the nonlinear system state. The computational cost of CFEKF is similar to that of the EKF, which makes it easy to apply.
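
    The CFEKF itself is not specified in enough detail here to reproduce; as a baseline, the standard EKF predict/update cycle that it adapts can be sketched as follows, using a toy one-dimensional nonlinear model chosen purely for illustration:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of a standard extended Kalman filter.
    (The CFEKF in the abstract adapts F and H per error component;
    this is only the baseline EKF it builds on.)"""
    # Predict: propagate the state through the nonlinear model,
    # and propagate the covariance through its linearization.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearize the measurement model around the prediction.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D system (assumed): x_{k+1} = 0.9 x_k + 0.1 x_k^2, measured directly.
f = lambda x: np.array([0.9 * x[0] + 0.1 * x[0] ** 2])
F_jac = lambda x: np.array([[0.9 + 0.2 * x[0]]])
h = lambda x: x
H_jac = lambda x: np.eye(1)

x, P = np.array([1.0]), np.eye(1)
x, P = ekf_step(x, P, np.array([1.05]), f, F_jac, h, H_jac,
                Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
```

    The update pulls the predicted state toward the measurement and shrinks the covariance; an adaptive variant such as CFEKF would additionally tune F and H from the data.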

  12. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K.
Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
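
    A minimal sketch of the Jones-matrix transmission calculation for a single birefringent element between parallel polarizers (the program above chains up to 12 such elements and handles coatings and ray geometry); the retardance and orientation values are illustrative:

```python
import numpy as np

def waveplate(delta, theta):
    """Jones matrix of a birefringent plate with retardance delta (rad)
    and fast axis at angle theta to the x axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Retarder in its own axes: phases -delta/2 and +delta/2 on the eigenaxes.
    W = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return R @ W @ R.T

# x-polarized input through the plate, then an x-oriented analyzer.
E_in = np.array([1.0, 0.0])
for delta in (0.0, np.pi):                 # zero vs. half-wave retardance
    J = waveplate(delta, np.pi / 4)        # fast axis at 45 degrees
    E_out = J @ E_in
    T = abs(E_out[0]) ** 2                 # analyzer passes x only
    print(round(T, 6))                     # → 1.0, then 0.0
```

    A half-wave retardance at 45 degrees rotates the polarization by 90 degrees, giving extinction at the parallel analyzer; wavelength-dependent retardance is what makes a stack of such plates a tunable filter.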

  13. Accuracy of iodine removal using dual-energy CT with or without a tin filter: an experimental phantom study.

    PubMed

    Kawai, Tatsuya; Takeuchi, Mitsuru; Hara, Masaki; Ohashi, Kazuya; Suzuki, Hirochika; Yamada, Kiyotaka; Sugimura, Yuya; Shibamoto, Yuta

    2013-10-01

    The effects of a tin filter on virtual non-enhanced (VNE) images created by dual-energy CT have not been well evaluated. The aim of this study was to compare the accuracy of VNE images with and without a tin filter. Two different types of columnar phantoms made of agarose gel were evaluated. Phantom A contained various concentrations of iodine (4.5-1590 HU at 120 kVp). Phantom B consisted of a central component (0, 10, 25, and 40 mgI/cm(3)) and a surrounding component (0, 50, 100, and 200 mgI/cm(3)) with variable iodine concentration. They were scanned by dual-source CT in conventional single-energy mode and dual-energy mode with and without a tin filter. CT values on each gel at the corresponding points were measured and the accuracy of iodine removal was evaluated. On VNE images, the CT number of the gel of Phantom A fell within the range between -15 and +15 HU under 626 and 881 HU at single-energy 120 kVp with and without a tin filter, respectively. With attenuation over these thresholds, iodine concentration of gels was underestimated with the tin filter but overestimated without it. For Phantom B, the mean CT numbers on VNE images in the central gel component surrounded by the gel with iodine concentrations of 0, 50, 100, and 200 mgI/cm(3) were in the range of -19 to +6 HU and 21 to 100 HU with and without the tin filter, respectively. Both with and without a tin filter, iodine removal was accurate under a threshold of iodine concentration. Although a surrounding structure with higher attenuation decreased the accuracy, a tin filter improved the margin of error.

  14. Aerosol composition and source apportionment in Santiago de Chile

    NASA Astrophysics Data System (ADS)

    Artaxo, Paulo; Oyola, Pedro; Martinez, Roberto

    1999-04-01

    Santiago de Chile, São Paulo and Mexico City are Latin American urban areas that suffer from heavy air pollution. In order to study air pollution in the Santiago area, an aerosol source apportionment study was designed to measure ambient aerosol composition and size distribution at two downtown sampling sites in Santiago. The aerosol monitoring stations were operated in Gotuzo and Las Condes during July and August 1996. The study employed stacked filter units (SFU) for aerosol sampling, collecting fine mode aerosol (dp<2 μm) and coarse mode aerosol (2

  15. Developing a Model Component

    NASA Technical Reports Server (NTRS)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station Requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
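
    The Bernoulli step mentioned above can be sketched numerically; the density, flow areas, and inlet velocity below are hypothetical placeholders, not values from the UCTS model:

```python
# Bernoulli sketch for a flow restriction such as a filter-dryer:
# incompressible, steady flow, with the dryer treated as an area change.
# All numbers are illustrative assumptions.
rho = 1200.0            # kg/m^3, hypothetical coolant density
A1, A2 = 4e-4, 1e-4     # m^2, inlet and core flow areas (assumed)
v1 = 0.5                # m/s, inlet velocity (assumed)

v2 = v1 * A1 / A2                      # continuity: A1*v1 = A2*v2
dp = 0.5 * rho * (v2**2 - v1**2)       # Bernoulli: p1 - p2 = rho/2*(v2^2 - v1^2)
print(v2, round(dp, 1))                # → 2.0 2250.0
```

    Continuity fixes the velocity ratio from the area ratio, and Bernoulli's equation then gives the ideal (loss-free) pressure drop across the restriction; a real dryer model would add a loss coefficient.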

  16. Taking Ecological Function Seriously: Soil Microbial Communities Can Obviate Allelopathic Effects of Released Metabolites

    PubMed Central

    Kaur, Surinder; Baldwin, Ian T.; Inderjit

    2009-01-01

    Background Allelopathy (negative, plant-plant chemical interactions) has been largely studied as an autecological process, often assuming simplistic associations between pairs of isolated species. The growth inhibition of a species in a filter paper bioassay enriched with a single chemical is commonly interpreted as evidence of an allelopathic interaction, but for some of these putative examples of allelopathy, the results have not been verifiable in more natural settings with plants growing in soil. Methodology/Principal findings On the basis of a filter paper bioassay, a recent study established allelopathic effects of m-tyrosine, a component of root exudates of Festuca rubra ssp. commutata. We re-examined the allelopathic effects of m-tyrosine to understand its dynamics in the soil environment. The allelopathic potential of m-tyrosine in filter paper and soil (non-sterile or sterile) bioassays was studied using Lactuca sativa, Phalaris minor and Bambusa arundinacea as assay species. Experimental application of m-tyrosine to non-sterile and sterile soil revealed the impact of soil microbial communities in determining the soil concentration of m-tyrosine and growth responses. Conclusions/Significance Here, we show that the allelopathic effects of m-tyrosine, which could be seen in sterilized soil with particular plant species, were significantly diminished when non-sterile soil was used, which points to an important role for rhizosphere-specific and bulk soil microbial activity in determining the outcome of this allelopathic interaction. Our data show that the amounts of m-tyrosine required for root growth inhibition were higher than what would normally be found in the F. rubra ssp. commutata rhizosphere. We hope that our study will motivate researchers to integrate the role of soil microbial communities in bioassays in allelopathic research so that its importance in plant-plant competitive interactions can be thoroughly evaluated. PMID:19277112

  17. Development of a bifunctional filter for prion protein and leukoreduction of red blood cell components.

    PubMed

    Yokomizo, Tomo; Kai, Takako; Miura, Morikazu; Ohto, Hitoshi

    2015-02-01

    Leukofiltration of blood components is currently implemented worldwide as a precautionary measure against white blood cell-associated adverse effects and the potential transmission of variant Creutzfeldt-Jakob disease (vCJD). A newly developed bifunctional filter (Sepacell Prima, Asahi Kasei Medical) was assessed for prion removal, leukoreduction (LR), and whether the filter significantly affected red blood cells (RBCs). Sepacell Prima's postfiltration effects on RBCs, including hemolysis, complement activation, and RBC chemistry, were compared with those of a conventional LR filter (Sepacell Pure RC). Prion removal was measured by Western blot after spiking RBCs with microsomal fractions derived from scrapie-infected hamster brain homogenate. Serially diluted exogenous prion solutions (0.05 mL), with or without filtration, were injected intracerebrally into Golden Syrian hamsters. LR efficiency of 4.44 log with the Sepacell Prima was comparable to 4.11 log with the conventional LR filter. There were no significant differences between the two filters in hemoglobin loss, hemolysis, complement activation, and RBC biomarkers. In vitro reduction of exogenously spiked prions by the filter exceeded 3 log. The titer, 6.63 (log ID50 /mL), of prefiltration infectivity of healthy hamsters was reduced to 2.52 (log ID50 /mL) after filtration. The reduction factor was calculated as 4.20 (log ID50 ). With confirmed removal efficacy for exogenous prion protein, this new bifunctional prion and LR filter should reduce the residual risk of vCJD transmission through blood transfusion without adding complexity to component processing. © 2014 AABB.

  18. Evaluation of Delcath Systems' Generation 2 (GEN 2) melphalan hemofiltration system in a porcine model of percutaneous hepatic perfusion.

    PubMed

    Moeslein, Fred M; McAndrew, Elizabeth G; Appling, William M; Hryniewich, Nicole E; Jarvis, Kevin D; Markos, Steven M; Sheets, Timothy P; Uzgare, Rajneesh P; Johnston, Daniel S

    2014-06-01

    A new melphalan hemoperfusion filter (GEN 2) was evaluated in a simulated-use porcine model of percutaneous hepatic perfusion (PHP). The current study evaluated melphalan filtration efficiency, the transfilter pressure gradient, and the removal of specific blood products. A porcine PHP procedure using the GEN 2 filter was performed under Good Laboratory Practice conditions to model the 60-min clinical PHP procedure. The mean filter efficiency for removing melphalan in six filters was 99.0 ± 0.4 %. The transfilter pressure gradient across the filter averaged 20.9 mmHg for the 60-min procedure. Several blood components decreased during the procedure, with albumin falling on average from 3.55 to 2.02 g/dL and platelets from 342 to 177 × 10^3/μL. The increased melphalan extraction efficiency of the new filter is expected to decrease systemic melphalan exposure. In addition, the low transfilter pressure gradient resulted in low resistance to blood flow in the GEN 2 filter, and the changes to blood components are expected to be clinically manageable.

  19. Improving signal-to-noise ratios of liquid chromatography-tandem mass spectrometry peaks using noise frequency spectrum modification between two consecutive matched-filtering procedures.

    PubMed

    Wang, Shau-Chun; Huang, Chih-Min; Chiang, Shu-Min

    2007-08-17

    This paper reports a simple chemometric technique that alters the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS-MS) chromatogram between two consecutive matched filter procedures to improve peak signal-to-noise (S/N) ratio enhancement. The technique multiplies one match-filtered LC-MS-MS chromatogram by another artificial chromatogram with added thermal noise prior to the second matched filter. Because a matched filter cannot eliminate the low-frequency components inherent in the flicker noise of spike-like sharp peaks randomly riding on LC-MS-MS chromatograms, efficient peak S/N ratio improvement cannot be accomplished using one-step or consecutive matched filter procedures alone. In contrast, when the match-filtered LC-MS-MS chromatogram is conditioned with this multiplication prior to the second matched filter, much more efficient ratio improvement is achieved. The noise frequency spectrum of the match-filtered chromatogram, which originally contains only low-frequency components, is altered to span a broader range by the multiplication operation. When the frequency range of this modified noise spectrum shifts toward the higher frequency regime, the second matched filter, working as a low-pass filter, is able to provide better filtering efficiency and therefore higher peak S/N ratios. Real LC-MS-MS chromatograms containing random spike-like peaks, for which the peak S/N ratio improvement with two consecutive matched filters is typically less than four-fold, achieve approximately 16-fold enhancement when the noise frequency spectrum is modified between the two matched filters.
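
    The basic building block here, matched filtering with a known peak shape, can be sketched as follows; the peak width, noise level, and S/N metric are illustrative assumptions, and the spectrum-modification trick itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(512, dtype=float)
# Hypothetical chromatogram: one Gaussian peak (sigma = 8 points) in white noise.
signal = np.exp(-0.5 * ((t - 200) / 8.0) ** 2)
noisy = signal + rng.normal(0, 0.3, t.size)

# Matched filter = correlation with the known peak shape (unit-energy kernel);
# for white noise this is the optimal linear filter for peak S/N.
k = np.exp(-0.5 * (np.arange(-30, 31) / 8.0) ** 2)
k /= np.sqrt((k ** 2).sum())
filtered = np.correlate(noisy, k, mode="same")

def snr(x):
    """Peak height over baseline noise, using the peak-free ends as baseline."""
    baseline = np.concatenate([x[:100], x[-100:]])
    return (x.max() - baseline.mean()) / baseline.std()

print(snr(filtered) > snr(noisy))
```

    A unit-energy kernel leaves the white-noise variance unchanged while coherently summing the peak, which is where the S/N gain comes from; flicker (1/f) noise violates the whiteness assumption, motivating the paper's intermediate spectrum modification.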

  20. Experimental Researches on the Durability Indicators and the Physiological Comfort of Fabrics using the Principal Component Analysis (PCA) Method

    NASA Astrophysics Data System (ADS)

    Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.

    2017-06-01

    The work examined the grouping of combed wool fabrics destined for the manufacture of outerwear in terms of the values of their durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method for the multivariate analysis of multi-dimensional data, and aims to reduce, in a controlled way, the number of variables (columns) of the data matrix as far as possible, ideally to two or three. Therefore, based on the information about each group/assortment of fabrics, the goal is to replace the nine inter-correlated variables with only two or three new variables, called components. The PCA target is to extract the smallest number of components that recover most of the total information contained in the initial data.
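
    The component-count selection described above can be sketched on synthetic stand-in data (nine inter-correlated indicators driven mainly by two latent factors; all numbers are illustrative, not the fabric measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stand-in data: 9 inter-correlated indicators for 30
# assortments, generated from two latent factors plus small noise.
latent = rng.normal(size=(30, 2))
loadings = rng.normal(size=(2, 9))
X = latent @ loadings + 0.2 * rng.normal(size=(30, 9))

# Eigenvalues of the correlation matrix give each component's share of the
# total variance; keep the smallest k explaining at least 85% of it.
eigval = np.linalg.eigvalsh(np.corrcoef(X.T))[::-1]    # descending order
ratio = eigval / eigval.sum()
cum = np.cumsum(ratio)
k = int(np.searchsorted(cum, 0.85) + 1)
print(k)
```

    With two dominant latent factors, the cumulative explained-variance curve flattens after two or three components, which is exactly the reduction the abstract aims for; the 85% threshold is an assumed convention, not the paper's criterion.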

  1. 40 CFR 141.718 - Treatment performance toolbox components.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) Combined filter performance. Systems using conventional filtration treatment or direct filtration... the criteria in this paragraph. Combined filter effluent (CFE) turbidity must be less than or equal to... § 141.74(a) and (c). (b) Individual filter performance. Systems using conventional filtration treatment...

  2. 40 CFR 141.718 - Treatment performance toolbox components.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (a) Combined filter performance. Systems using conventional filtration treatment or direct filtration... the criteria in this paragraph. Combined filter effluent (CFE) turbidity must be less than or equal to... § 141.74(a) and (c). (b) Individual filter performance. Systems using conventional filtration treatment...

  3. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates a multivariate pixel value with each pixel location, scaled and quantized into a gray-level vector, and a bivariate distribution indicates the extent to which two component images are correlated. The PCT decorrelates a multiimage, reducing its dimensionality and revealing intercomponent dependencies where off-diagonal covariance elements are not small; for display purposes, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
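
    The decorrelation property of the PCT can be demonstrated on a synthetic three-band multiimage (the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 3-band multiimage whose bands are strongly inter-correlated.
base = rng.normal(size=(64, 64))
bands = np.stack([base + 0.1 * rng.normal(size=(64, 64)) for _ in range(3)])
X = bands.reshape(3, -1)                        # one pixel vector per column

# PCT: rotate each pixel vector onto the eigenvectors of the band covariance.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
w, V = np.linalg.eigh(cov)                      # ascending eigenvalues
pcs = V.T @ Xc                                  # principal component images

# After the PCT, the component images are mutually uncorrelated: their
# covariance matrix is diagonal (the eigenvalues w).
pc_cov = pcs @ pcs.T / pcs.shape[1]
off_diag = pc_cov - np.diag(np.diag(pc_cov))
print(np.abs(off_diag).max() < 1e-9)
```

    The eigenvalue spectrum also answers the dimensionality question raised in the abstract: here one eigenvalue dominates, so the three correlated bands carry essentially one intrinsic component.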

  4. Psychometric evaluation of the Persian version of the Templer's Death Anxiety Scale in cancer patients.

    PubMed

    Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid

    2016-10-01

    In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.

  5. Zernike ultrasonic tomography for fluid velocity imaging based on pipeline intrusive time-of-flight measurements.

    PubMed

    Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy

    2014-11-01

    In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Having intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) at the input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series, relying on the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity, by employing SVD-based filtering in selecting the Zernike polynomials which are to be included in the series. The first approach, Tikhonov regularization without filtering, is retained because it is the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show the relative residual norm and the error in flow estimation to be, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by approximate matching of the derived tomograms with a physical flow model.
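
    Of the two inversion approaches, Tikhonov regularization is the simpler to sketch; the polynomial design matrix below is a generic stand-in for the Zernike basis, and all sizes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical ill-posed linear inverse problem A c = m: 12 noisy path
# measurements, 20 series coefficients. A generic polynomial basis stands
# in here for the paper's Zernike polynomial basis.
n_meas, n_coef = 12, 20
A = np.vander(np.linspace(-1, 1, n_meas), n_coef, increasing=True)
c_true = np.zeros(n_coef)
c_true[:3] = [0.5, -1.0, 0.25]                 # low-order "velocity field"
m = A @ c_true + 0.01 * rng.normal(size=n_meas)

def tikhonov(A, m, lam):
    """Minimize ||A c - m||^2 + lam*||c||^2 via the regularized normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ m)

c_reg = tikhonov(A, m, lam=1e-3)
# Stronger regularization shrinks the coefficient vector, trading data fit
# for stability -- the essential handling of the ill-posedness.
print(np.linalg.norm(tikhonov(A, m, 1.0)) < np.linalg.norm(tikhonov(A, m, 1e-6)))
```

    The SVD-based filtering mentioned in the abstract would instead truncate the basis before inversion; both tactics damp the directions in which the measurements constrain the coefficients poorly.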

  6. Forest filter effect versus cold trapping effect on the altitudinal distribution of PCBs: a case study of Mt. Gongga, eastern Tibetan Plateau.

    PubMed

    Liu, Xin; Li, Jun; Zheng, Qian; Bing, Haijian; Zhang, Ruijie; Wang, Yan; Luo, Chunling; Liu, Xiang; Wu, Yanhong; Pan, Suhong; Zhang, Gan

    2014-12-16

    Mountains are observed to preferentially accumulate persistent organic pollutants (POPs) at higher altitude due to the cold condensation effect. Forest soils characterized by high organic carbon are important for terrestrial storage of POPs. To investigate the dominant factor controlling the altitudinal distribution of POPs in mountainous areas, we measured concentrations of polychlorinated biphenyls (PCBs) in different environmental matrices (soil, moss, and air) from nine elevations on the eastern slope of Mt. Gongga, the highest mountain in Sichuan Province on the Tibetan Plateau. The concentrations of 24 measured PCBs ranged from 41 to 510 pg/g dry weight (dw) (mean: 260 pg/g dw) in the O-horizon soil, 280 to 1200 pg/g dw (mean: 740 pg/g dw) in moss, and 33 to 60 pg/m(3) (mean: 47 pg/m(3)) in air. Soil organic carbon was a key determinant explaining 75% of the variation in concentration along the altitudinal gradient. Across all of the sampling sites, the average contribution of the forest filter effect (FFE) was greater than that of the mountain cold trapping effect based on principal components analysis and multiple linear regression. Our results deviate from the thermodynamic theory involving cold condensation at high altitudes of mountain areas and highlight the importance of the FFE.

  7. Detection of contamination on selected apple cultivars using reflectance hyperspectral and multispectral analysis

    NASA Astrophysics Data System (ADS)

    Mehl, Patrick M.; Chao, Kevin; Kim, Moon S.; Chen, Yud-Ren

    2001-03-01

    The presence of natural or exogenous contamination on apple cultivars is a food safety and quality concern for the general public and strongly affects this commodity market. Accumulations of human pathogens are usually observed on surface lesions of commodities, so detection of either the lesions or the pathogens themselves is essential for assuring commodity quality and safety. We present the application of hyperspectral image analysis toward the development of multispectral techniques for the detection of defects on selected apple cultivars: Golden Delicious, Red Delicious, and Gala. Different apple cultivars possess different spectral characteristics, leading to different approaches for analysis. General preprocessing with morphological treatments is followed by different image treatments and condition analyses for highlighting lesions and contamination on the apple cultivars. Good isolation of scabs, fungal and soil contamination, and bruises is observed with hyperspectral imaging processing, either using principal component analysis or utilizing the chlorophyll absorption peak. Application of the hyperspectral results to multispectral detection is limited by the spectral capabilities of our RGB camera, using either specific band-pass filters or direct neutral filters. Good separation of defects is obtained for Golden Delicious apples; it is, however, limited for the other cultivars. An extra near-infrared channel would increase the detection level by utilizing the chlorophyll absorption band, as demonstrated by the present hyperspectral imaging analysis.

  8. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect in the field of machine learning. It entails the search for an optimal subset from a very large data set with a high dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strength of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as part of the filter method. As for the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNNs model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
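
    The PCA filter stage of such a hybrid can be sketched on synthetic stand-in features; the harmony-search wrapper and WNN classifier are omitted, with a nearest-centroid rule standing in for the final classifier, and all data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical stand-in features: 40-dimensional "wavelet coefficient"
# vectors for two classes (normal vs. epileptic EEG), separable on a
# handful of dimensions.
n = 100
X0 = rng.normal(0.0, 1.0, (n, 40))
X1 = rng.normal(0.0, 1.0, (n, 40))
X1[:, :5] += 3.0                                  # class-1 shift on 5 features
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# PCA filter step: project onto the top 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# Nearest-centroid classification in the reduced space (illustrative only).
c0, c1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1)
        < np.linalg.norm(scores - c0, axis=1)).astype(int)
acc = float((pred == y).mean())
print(acc > 0.9)
```

    The filter stage cuts 40 dimensions to 3 before any classifier is trained, which is what makes the subsequent wrapper search computationally tractable.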

  9. The application of computational mechanics to the analysis of natural data: An example in geomagnetism.

    NASA Astrophysics Data System (ADS)

    Watkins, Nicholas; Clarke, Richard; Freeman, Mervyn

    2002-11-01

    We discuss how the ideal formalism of Computational Mechanics can be adapted to apply to a non-infinite series of corrupted and correlated data, as is typical of most observed natural time series. Specifically, a simple filter that removes the corruption that creates rare unphysical causal states is demonstrated, and the new concept of effective soficity is introduced. The benefits of these new concepts are demonstrated on simulated time series by (a) the effective elimination of white noise corruption from a periodic signal using the expletive filter and (b) the appearance of an effectively sofic region in the statistical complexity of a biased Poisson switch time series that is insensitive to changes in the word length (memory) used in the analysis. The new algorithm is then applied to the analysis of a real geomagnetic time series measured at Halley, Antarctica. Two principal components in the structure are detected, interpreted as the diurnal variation due to the rotation of the earth-based station under an electrical current pattern that is fixed with respect to the sun-earth axis, and the random occurrence of a signature likely to be that of the magnetic substorm. In conclusion, a hypothesis is advanced about model construction in general (see also Clarke et al., arXiv:cond-mat/0110228).

  10. Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator

    NASA Astrophysics Data System (ADS)

    Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong

    2011-04-01

    In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely, voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient property of the electrical signals and further evaluate the bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals such as the fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are, respectively, separated into three phases, namely, envelope rising, stable, and damping phases, to extract the tiny waveform changes. The different waveform features are extracted from each phase of these subband envelopes. The principal components analysis (PCA) method is used for the feature selection in order to remove redundant information and reduce the dimension of the original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify the complex bond fault pattern. By analyzing experimental data with the proposed feature extraction method and neural network, the results demonstrate the advantages of the proposed feature extraction method and the constructed artificial neural network in detecting and identifying bond quality.
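
    The first two steps, subband filtering followed by Hilbert-transform envelope extraction, can be sketched with FFT-based stand-ins for the digital filter banks; the test waveform and frequencies are illustrative, not the generator's signals:

```python
import numpy as np

fs = 10_000.0
t = np.arange(0, 0.1, 1 / fs)                       # 1000 samples, periodic grid
# Hypothetical drive signal: a 1 kHz "fundamental" amplitude-modulated at 50 Hz.
amp = 1 + 0.5 * np.sin(2 * np.pi * 50 * t)
x = amp * np.sin(2 * np.pi * 1000 * t)

def fft_bandpass(sig, lo, hi, fs):
    """Brick-wall bandpass via FFT masking -- a stand-in for the digital
    bandpass filter banks used to isolate each subband."""
    X = np.fft.rfft(sig)
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, n=sig.size)

def envelope(sig):
    """Analytic-signal envelope (FFT form of the Hilbert transform)."""
    X = np.fft.fft(sig)
    h = np.zeros(sig.size)
    h[0] = 1
    h[1:(sig.size + 1) // 2] = 2
    if sig.size % 2 == 0:
        h[sig.size // 2] = 1
    return np.abs(np.fft.ifft(X * h))

fund = fft_bandpass(x, 800.0, 1200.0, fs)           # fundamental subband
env_est = envelope(fund)
err = np.abs(env_est - amp).max()
print(err < 1e-6)
```

    On this periodic test grid the recovered envelope matches the imposed 50 Hz modulation essentially exactly; step three of the method would then segment such envelopes into rising, stable, and damping phases for feature extraction.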

  11. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    ERIC Educational Resources Information Center

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education. Many universities have made effective achievements in evaluating teaching quality. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  12. Analysis of the principal component algorithm in phase-shifting interferometry.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-06-15

    We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In the former work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.

  13. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks capable of computing a principal component extraction has been observed. Despite this interest, the…

  14. Burst and Principal Components Analyses of MEA Data for 16 Chemicals Describe at Least Three Effects Classes.

    EPA Science Inventory

    Microelectrode arrays (MEAs) detect drug and chemical induced changes in neuronal network function and have been used for neurotoxicity screening. As a proof-of-concept, the current study assessed the utility of analytical "fingerprinting" using Principal Components Analysis (P...

  15. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
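PCP decomposes a matrix of stacked frames into a low-rank background plus a sparse foreground (M = L + S). As a simplified, batch stand-in for the incremental convex solver described in the patent, a truncated SVD can illustrate the split; the frames, sizes, and the `svd_background` helper below are invented for illustration:

```python
import numpy as np

def svd_background(frames, rank=1):
    """Batch low-rank background model: stack frames as columns, keep the
    top `rank` singular components as the background, and treat the residual
    as foreground (a simplified stand-in for the PCP split M = L + S)."""
    n = frames.shape[0]
    M = frames.reshape(n, -1).T                     # pixels x frames
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank background
    S = M - L                                       # foreground residual
    return L.T.reshape(frames.shape), S.T.reshape(frames.shape)

# synthetic sequence: a fixed gradient background plus one transient bright pixel
bg = np.tile(np.arange(8.0), (8, 1))                # 8x8 background image
frames = np.stack([bg] * 8)
for t in range(8):
    frames[t, t, 3] += 50.0                         # moving foreground spike
low, sparse = svd_background(frames, rank=1)        # sparse highlights the spike
```

The incremental algorithm achieves the same decomposition one frame at a time, which is what makes real-time processing and a low memory footprint possible.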

  16. TOLUENE DEGRADATION IN THE RECYCLE LIQUID OF BIOTRICKLING FILTERS FOR AIR POLLUTION CONTROL. (R825392)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  17. Sub-5-ps optical pulse generation from a 1.55-µm distributed-feedback laser diode with nanosecond electric pulse excitation and spectral filtering.

    PubMed

    Chen, Shaoqiang; Sato, Aya; Ito, Takashi; Yoshita, Masahiro; Akiyama, Hidefumi; Yokoyama, Hiroyuki

    2012-10-22

    This paper reports generation of sub-5-ps Fourier-transform-limited optical pulses from a 1.55-µm gain-switched single-mode distributed-feedback laser diode via nanosecond electric pulse excitation and a simple spectral-filtering technique. Typical damped oscillations of the whole lasing spectrum were observed in the time-resolved waveform. Through spectral filtering, the initial relaxation oscillation pulse and the following components of the output pulse can be well separated, and the initial short pulse can be selectively extracted by filtering out the short-wavelength components of the spectrum. Short pulses generated by this simple method are expected to have wide potential applications comparable to those of mode-locked lasers.

  18. Preliminary design of the spatial filters used in the multipass amplification system of TIL

    NASA Astrophysics Data System (ADS)

    Zhu, Qihua; Zhang, Xiao Min; Jing, Feng

    1998-12-01

    The spatial filters are used in the Technique Integration Line, which has a multi-pass amplifier, not only to suppress parasitic high spatial frequency modes but also to provide places for inserting a light isolator and injecting the seed beam, and to relay the image while the beam passes through the amplifiers several times. To fulfill these functions, the parameters of the spatial filters are optimized by calculations and analyses, with consideration given to avoiding the plasma blow-off effect and the demands placed on components by ghost-beam foci. The 'ghost beams' are calculated by ray tracing. Software was developed to evaluate the tolerance of the spatial filters and their components, and to align the whole system in computer simulation.

  19. DCT based interpolation filter for motion compensation in HEVC

    NASA Astrophysics Data System (ADS)

    Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin

    2012-10-01

    The High Efficiency Video Coding (HEVC) draft standard has the challenging goal of doubling coding efficiency compared to H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during new standard development. Motion-compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design in the draft HEVC standard. The coding efficiency improvements over the H.264/AVC interpolation filter are studied and experimental results are presented, which show a 4.0% average bitrate reduction for the luma component and an 11.3% average bitrate reduction for the chroma component. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.

  20. Simulation study of accelerator based quasi-mono-energetic epithermal neutron beams for BNCT.

    PubMed

    Adib, M; Habib, N; Bashter, I I; El-Mesiry, M S; Mansy, M S

    2016-01-01

    Filtered neutron techniques were applied to produce quasi-mono-energetic neutron beams in the energy range of 1.5-7.5 keV at the accelerator port, using the neutron spectrum generated by the Li(p,n)Be reaction. A simulation study was performed to characterize the filter components and transmitted beam lines. The features of the filtered beams are detailed in terms of the optimal thickness of the primary and additive components. A computer code named "QMNB-AS" was developed to carry out the required calculations. The filtered neutron beams had high purity and intensity with low contamination from the accompanying thermal neutrons, fast neutrons and γ-rays. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Designing a Wien Filter Model with General Particle Tracer

    NASA Astrophysics Data System (ADS)

    Mitchell, John; Hofler, Alicia

    2017-09-01

    The Continuous Electron Beam Accelerator Facility injector employs a beamline component called a Wien filter which is typically used to select charged particles of a certain velocity. The Wien filter is also used to rotate the polarization of a beam for parity violation experiments. The Wien filter consists of perpendicular electric and magnetic fields. The electric field changes the spin orientation, but also imposes a transverse kick which is compensated for by the magnetic field. The focus of this project was to create a simulation of the Wien filter using General Particle Tracer. The results from these simulations were vetted against machine data to analyze the accuracy of the Wien model. Due to the close agreement between simulation and experiment, the data suggest that the Wien filter model is accurate. The model allows a user to input either the desired electric or magnetic field of the Wien filter along with the beam energy as parameters, and is able to calculate the perpendicular field strength required to keep the beam on axis. The updated model will aid in future diagnostic tests of any beamline component downstream of the Wien filter, and allow users to easily calculate the electric and magnetic fields needed for the filter to function properly. Funding support provided by DOE Office of Science's Student Undergraduate Laboratory Internship program.
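The balance condition the model computes can be illustrated with a small worked example. For a velocity-selecting Wien filter, the transverse electric kick qE is cancelled when qE = qvB, i.e. B = E/v, with v obtained from the beam's kinetic energy. A sketch under assumed, illustrative parameters (not CEBAF values):

```python
import math

C = 299_792_458.0        # speed of light, m/s
MC2 = 0.511e6            # electron rest energy, eV (assuming an electron beam)

def wien_balance_B(E_field, kinetic_energy_eV):
    """Magnetic field that cancels the electric kick in a Wien filter:
    qE = q*v*B  =>  B = E / v, with v from the relativistic beam energy."""
    gamma = 1.0 + kinetic_energy_eV / MC2          # total / rest energy
    beta = math.sqrt(1.0 - 1.0 / gamma**2)         # v / c
    return E_field / (beta * C)

# hypothetical example: 1 MV/m electric field, 130 keV electron beam
B = wien_balance_B(1.0e6, 130e3)                   # tesla
```

Given either field and the beam energy, the complementary field strength follows directly, which is the calculation the updated model performs for users.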

  2. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  3. 10Be in late deglacial climate simulated by ECHAM5-HAM - Part 2: Isolating the solar signal from 10Be deposition

    NASA Astrophysics Data System (ADS)

    Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.

    2013-10-01

    This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally, and at two specific locations, the GRIP site at Summit, Central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30 yr time slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 BP and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather- or climate-driven noise in the 10Be deposition flux during different stages of climate. The production signal varies on lower frequencies, dominated by the 11 yr solar cycle within the 30 yr time scale of these experiments. The climatic noise is of higher frequencies. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. Then, we apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, which is an effective method for adaptively decomposing a time series into different frequency components. The low frequency components and the long term trend represent production and have reduced noise compared to the entire frequency spectrum of the deposition. The high frequency components represent climate-driven noise related to the seasonal cycle of, e.g., precipitation and are closely connected to high frequencies of precipitation. 
These results firstly show that the 10Be atmospheric production signal is preserved in the deposition flux to the surface even during climates very different from today's, both in global data and at two specific locations. Secondly, noise can be effectively reduced from 10Be deposition data by simply applying the EOF analysis when a reasonably large number of data sets is available, or by decomposing the individual data sets to filter out high-frequency fluctuations.
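The EOF step described above amounts to PCA on the space-time deposition field: the leading component captures the shared production (solar) signal, leaving climate noise in the higher components. A minimal synthetic sketch (field sizes, noise level, and the solar-cycle proxy are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
nt, ns = 360, 50                                    # months x sites (assumed sizes)
solar = np.sin(2 * np.pi * np.arange(nt) / 132.0)   # ~11 yr cycle in monthly steps
pattern = rng.uniform(0.5, 1.5, ns)                 # mean deposition pattern per site
field = np.outer(solar, pattern) + 0.3 * rng.normal(size=(nt, ns))  # plus "weather"

anom = field - field.mean(axis=0)                   # anomalies at each site
U, s, Vt = np.linalg.svd(anom, full_matrices=False) # EOF / PCA decomposition
pc1 = U[:, 0] * s[0]                                # leading principal component
evr1 = s[0] ** 2 / (s ** 2).sum()                   # variance fraction it explains
r = np.corrcoef(pc1, solar)[0, 1]                   # how well PC1 recovers the signal
```

Because the production signal is coherent across all sites while the noise is not, PC1 tracks the solar input far more cleanly than any single site's record.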

  4. Efficiency of automotive cabin air filters to reduce acute health effects of diesel exhaust in human subjects

    PubMed Central

    Rudell, B.; Wass, U.; Horstedt, P.; Levin, J. O.; Lindahl, R.; Rannug, U.; Sunesson, A. L.; Ostberg, Y.; Sandstrom, T.

    1999-01-01

    OBJECTIVES: To evaluate the efficiency of different automotive cabin air filters to prevent penetration of components of diesel exhaust and thereby reduce biomedical effects in human subjects. Filtered air and unfiltered diluted diesel exhaust (DDE) were used as negative and positive controls, respectively, and were compared with exposure to DDE filtered with four different filter systems. METHODS: 32 healthy non-smoking subjects (aged 21-53) participated in the study. Each subject was exposed six times for 1 hour in a specially designed exposure chamber: once to air, once to unfiltered DDE, and once to DDE filtered with each of the four different cabin air filters. Particle concentrations during exposure to unfiltered DDE were kept at 300 micrograms/m3. Two of the filters were particle filters. The other two were particle filters combined with active charcoal filters that might reduce certain gaseous components. Subjective symptoms were recorded and nasal airway lavage (NAL), acoustic rhinometry, and lung function measurements were performed. RESULTS: The two particle filters decreased the concentrations of diesel exhaust particles by about half, but did not reduce the intensity of symptoms induced by exhaust. The combination of active charcoal filters and a particle filter significantly reduced the symptoms and discomfort caused by the diesel exhaust. The most noticeable differences in efficacy between the filters were found in the reduction of the detection of an unpleasant smell from the diesel exhaust. In this respect even the two charcoal filter combinations differed significantly. The efficacy in reducing symptoms may depend on the abilities of the filters investigated to reduce certain hydrocarbons. No acute effects on NAL, rhinometry, and lung function variables were found. CONCLUSIONS: This study has shown that the use of active charcoal filters, combined with a particle filter, clearly reduced the intensity of symptoms induced by diesel exhaust. 
Complementary studies on vehicle cabin air filters may result in further diminishing the biomedical effects of diesel exhaust in subjects exposed in traffic and workplaces.   PMID:10450238

  5. Tailoring noise frequency spectrum between two consecutive second derivative filtering procedures to improve liquid chromatography-mass spectrometry determinations.

    PubMed

    Wang, Shau-Chun; Lin, Chiao-Juan; Chiang, Shu-Min; Yu, Sung-Nien

    2008-03-15

    This paper reports a simple chemometric technique that alters the noise spectrum of a liquid chromatography-mass spectrometry (LC-MS) chromatogram between two consecutive second-derivative filtering procedures to improve the peak signal-to-noise (S/N) ratio enhancement. The technique is to multiply one second-derivative-filtered LC-MS chromatogram with an artificial chromatogram containing added thermal noise prior to the second second-derivative filter. Because the second-derivative filter cannot eliminate frequency components within its own filter bandwidth, more efficient peak S/N ratio improvement cannot be accomplished simply by applying consecutive second-derivative filtering procedures to LC-MS chromatograms. In contrast, when the second-derivative-filtered LC-MS chromatogram is conditioned with this multiplication prior to the second filter, much better improvement is achieved. The noise frequency spectrum of the filtered chromatogram, which originally contains frequency components within the filter bandwidth, is altered by the multiplication to span a broader range. When the frequency range of this modified noise spectrum shifts toward other regimes, the second filter, working as a band-pass filter, provides better filtering efficiency and higher peak S/N ratios. For real LC-MS chromatograms, two consecutive second-derivative filters otherwise achieve only the same approximately 5-fold peak S/N improvement as a single second-derivative filter; when the noise frequency spectrum is modified between the two matched filters, the improvement reaches approximately 25-fold or higher. The linear standard curve using the filtered LC-MS signals is validated, and the filtered signals are also more reproducible. More accurate determinations of very low-concentration samples (S/N ratio about 5-7) are obtained via standard addition procedures using the filtered signals rather than the original signals.
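The band-pass behaviour of a smoothed second-derivative filter, which is why noise inside its own band survives a second identical pass, can be seen directly from its frequency response. A numpy sketch using an assumed binomial-smoothed second-difference kernel (not the paper's exact filter):

```python
import numpy as np

# A second difference alone is high-pass; combined with a smoother it becomes
# a band-pass filter, so one pass leaves noise inside its own passband.
smooth = np.array([1, 4, 6, 4, 1]) / 16.0   # binomial (Gaussian-like) smoother
d2 = np.array([1.0, -2.0, 1.0])             # discrete second derivative
kernel = np.convolve(d2, smooth)            # smoothed second-derivative filter

H = np.abs(np.fft.rfft(kernel, 256))        # magnitude of the frequency response
# H is ~0 at DC and at Nyquist and peaks in between: a band-pass response.
# Noise components inside that passband survive a second identical pass,
# unless the noise spectrum is first shifted out of the band, which is the
# role of the multiplication step described above.
```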

  6. A principal components model of soundscape perception.

    PubMed

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies the underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose of developing such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: pleasantness, eventfulness, and familiarity, explaining 50%, 18% and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.

  7. LDEF active optical system components experiment

    NASA Technical Reports Server (NTRS)

    Blue, M. D.

    1992-01-01

    A preliminary report on the Active Optical System Components Experiment is presented. This experiment contained 136 components in a six inch deep tray including lasers, infrared detectors and arrays, ultraviolet light detectors, light-emitting diodes, a light modulator, flash lamps, optical filters, glasses, and samples of surface finishes. Thermal, mechanical, and structural considerations leading to the design of the tray hardware are discussed. In general, changes in the retested component characteristics appear as much related to the passage of time as to the effects of the space environment, but organic materials, multilayer optical interference filters, and extreme-infrared reflectivity of black paints show unexpected changes.

  8. [Testing method research for key performance indicator of imaging acousto-optic tunable filter (AOTF)].

    PubMed

    Hu, Shan-Zhou; Chen, Fen-Fei; Zeng, Li-Bo; Wu, Qiong-Shui

    2013-01-01

    Imaging AOTF is an important optical filter component for new spectral imaging instruments developed in recent years. The principle of imaging AOTF component was demonstrated, and a set of testing methods for some key performances were studied, such as diffraction efficiency, wavelength shift with temperature, homogeneity in space for diffraction efficiency, imaging shift, etc.

  9. Application of acoustic surface wave filter-beam lead component technology to deep space multimission hardware design

    NASA Technical Reports Server (NTRS)

    Kermode, A. W.; Boreham, J. F.

    1974-01-01

    This paper discusses the utilization of acoustic surface wave filters, beam lead components, and thin-film metallized ceramic substrate technology as applied to the design of a deep-space, long-life, multimission transponder. The specific design presented is for a second-mixer local-oscillator module operating at frequencies as high as 249 MHz.

  10. Application of principal component analysis in protein unfolding: an all-atom molecular dynamics simulation study.

    PubMed

    Das, Atanu; Mukhopadhyay, Chaitali

    2007-10-28

    We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide-ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, principal component analysis method was applied in order to give a new insight to protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.

  11. Application of principal component analysis in protein unfolding: An all-atom molecular dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Das, Atanu; Mukhopadhyay, Chaitali

    2007-10-01

    We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide—ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, principal component analysis method was applied in order to give a new insight to protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.

  12. SAS program for quantitative stratigraphic correlation by principal components

    USGS Publications Warehouse

    Hohn, M.E.

    1985-01-01

    A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections and the observational units are the range limits of taxa. The program standardizes the data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from the scores of events on the first principal component. An option for several types of diagnostic plots is provided; these help one to determine conservative range limits or unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allows one to evaluate the goodness of fit between the composite and the measured data. The program is easily extended to the creation of a rank-order composite. © 1985.
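The core of the algorithm (standardize each section, extract eigenvectors, order events by their scores on the first principal component) can be sketched outside SAS. A Python illustration with synthetic sections, omitting the missing-value estimation step:

```python
import numpy as np

# rows are observational units (range limits of taxa), columns are sections;
# synthetic data with assumed scales and no missing range limits
rng = np.random.default_rng(3)
positions = np.linspace(0.0, 100.0, 15)            # true composite event order
depths = positions[:, None] * rng.uniform(0.8, 1.2, 4) \
         + rng.normal(0.0, 2.0, (15, 4))           # the same events in 4 sections

Z = (depths - depths.mean(axis=0)) / depths.std(axis=0)  # standardize each section
U, s, Vt = np.linalg.svd(Z, full_matrices=False)   # eigenvector extraction via SVD
composite = U[:, 0] * s[0]                         # event scores on the first PC
order = np.argsort(composite)                      # composite ordering of events
```

Because the sections share one dominant depth trend, the first principal component recovers the composite event sequence (up to an arbitrary sign flip of the component).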

  13. Implementation of an integrating sphere for the enhancement of noninvasive glucose detection using quantum cascade laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.

    2018-05-01

    An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy levels. Using a smaller TE-cooled detector reduces the form factor, creating a mobile sensor. Principal component analysis of spectra taken from human subjects yields principal components that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.
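The regression scheme described, principal component scores feeding a linear model, is standard principal component regression. A sketch on synthetic spectra (the sizes, spectral signatures, and concentrations below are invented for illustration, not measured data):

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: project spectra onto the top-k PCs,
    then fit ordinary least squares in the reduced score space."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:k].T                                    # loadings (features x k)
    T = (X - mu) @ V                                # PC scores used as regressors
    A = np.c_[np.ones(len(T)), T]                   # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return mu, V, coef

def pcr_predict(X, mu, V, coef):
    T = (X - mu) @ V
    return np.c_[np.ones(len(T)), T] @ coef

# synthetic spectra: three latent absorbers plus measurement noise (assumed setup)
rng = np.random.default_rng(2)
signatures = rng.normal(size=(3, 30))               # spectral signature per absorber
factors = rng.normal(size=(80, 3)) * [5.0, 3.0, 2.0]
spectra = factors @ signatures + 0.1 * rng.normal(size=(80, 30))
conc = factors @ np.array([1.0, -0.5, 2.0])         # target concentration

mu, V, coef = pcr_fit(spectra, conc, k=3)
pred = pcr_predict(spectra, mu, V, coef)
```

Restricting the regression to a few components stabilizes the fit when the spectra have many correlated wavelength channels, which is the situation the abstract describes.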

  14. A novel principal component analysis for spatially misaligned multivariate air pollution data.

    PubMed

    Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A

    2017-01-01

    We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.

  15. Principals' Perceptions of Collegial Support as a Component of Administrative Inservice.

    ERIC Educational Resources Information Center

    Daresh, John C.

    To address the problem of increasing professional isolation of building administrators, the Principals' Inservice Project helps establish principals' collegial support groups across the nation. The groups are typically composed of 6 to 10 principals who meet at least once each month over a 2-year period. One collegial support group of seven…

  16. Training the Trainers: Learning to Be a Principal Supervisor

    ERIC Educational Resources Information Center

    Saltzman, Amy

    2017-01-01

    While most principal supervisors are former principals themselves, few come to the role with specific training in how to do the job effectively. For this reason, both the Washington, D.C., and Tulsa, Oklahoma, principal supervisor programs include a strong professional development component. In this article, the author takes a look inside these…

  17. Narrowband 1.5-µm Bragg filter based on a polymer waveguide with a laser-written refractive-index grating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolov, Viktor I; Panchenko, Vladislav Ya; Seminogov, V N

    We report the fabrication of narrowband frequency-selective filters for the 1.5-µm telecom window, which include a single-mode polymer waveguide with a submicron Bragg grating inscribed by a helium-cadmium laser. The filters have a reflectance R > 98% and a nearly rectangular reflection band with a bandwidth Δλ ≈ 0.4 nm. They can be used as components of optical multiplexers/demultiplexers for combining and separating signals in high-speed dense wavelength-division multiplexed optical fibre communication systems. (laser components)

  18. Development and test of video systems for airborne surveillance of oil spills

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.; Lewis, P. L.

    1975-01-01

    Five video systems, potentially useful for airborne surveillance of oil spills, were developed, flight tested, and evaluated. The systems are: (1) conventional black and white TV, (2) conventional TV with false color, (3) differential TV, (4) prototype Lunar Surface TV, and (5) field sequential TV. Wavelength and polarization filtering were utilized in all systems. Greatly enhanced detection of oil spills, relative to that possible with the unaided eye, was achieved. The most practical video system is a conventional TV camera with silicon-diode-array image tube, filtered with a Corning 7-54 filter and a polarizer oriented with its principal axis in the horizontal direction. Best contrast between oil and water was achieved when winds and sea states were low. The minimum detectable oil film thickness was about 0.1 micrometer.

  19. 40 CFR Appendix Vi to Part 86 - Vehicle and Engine Components

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... exhaust valves. (2) Drive belts. (3) Manifold and cylinder head bolts. (4) Engine oil and filter. (5...) Carburetor-idle RPM, mixture ratio. (3) Choke mechanism. (4) Fuel system filter and fuel system lines and... filter breather cap. (4) Manifold inlet (carburetor spacer, etc.). V. External Exhaust Emission Control...

  20. 40 CFR Appendix Vi to Part 86 - Vehicle and Engine Components

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... exhaust valves. (2) Drive belts. (3) Manifold and cylinder head bolts. (4) Engine oil and filter. (5...) Carburetor-idle RPM, mixture ratio. (3) Choke mechanism. (4) Fuel system filter and fuel system lines and... filter breather cap. (4) Manifold inlet (carburetor spacer, etc.). V. External Exhaust Emission Control...

  1. 40 CFR Appendix Vi to Part 86 - Vehicle and Engine Components

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (2) Drive belts. (3) Manifold and cylinder head bolts. (4) Engine oil and filter. (5) Engine coolant...) Carburetor-idle RPM, mixture ratio. (3) Choke mechanism. (4) Fuel system filter and fuel system lines and... filter breather cap. (4) Manifold inlet (carburetor spacer, etc.). V. External Exhaust Emission Control...

  2. 40 CFR Appendix Vi to Part 86 - Vehicle and Engine Components

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... exhaust valves. (2) Drive belts. (3) Manifold and cylinder head bolts. (4) Engine oil and filter. (5...) Carburetor-idle RPM, mixture ratio. (3) Choke mechanism. (4) Fuel system filter and fuel system lines and... filter breather cap. (4) Manifold inlet (carburetor spacer, etc.). V. External Exhaust Emission Control...

  3. 40 CFR Appendix Vi to Part 86 - Vehicle and Engine Components

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... exhaust valves. (2) Drive belts. (3) Manifold and cylinder head bolts. (4) Engine oil and filter. (5...) Carburetor-idle RPM, mixture ratio. (3) Choke mechanism. (4) Fuel system filter and fuel system lines and... filter breather cap. (4) Manifold inlet (carburetor spacer, etc.). V. External Exhaust Emission Control...

  4. Use of Geochemistry Data Collected by the Mars Exploration Rover Spirit in Gusev Crater to Teach Geomorphic Zonation through Principal Components Analysis

    ERIC Educational Resources Information Center

    Rodrigue, Christine M.

    2011-01-01

    This paper presents a laboratory exercise used to teach principal components analysis (PCA) as a means of surface zonation. The lab was built around abundance data for 16 oxides and elements collected by the Mars Exploration Rover Spirit in Gusev Crater between Sol 14 and Sol 470. Students used PCA to reduce 15 of these into 3 components, which,…

  5. A COST BENEFIT APPROACH TO REACTOR SIZING AND NUTRIENT SUPPLY FOR BIOTRICKLING FILTERS FOR AIR POLLUTION CONTROL. (R825392)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  6. A Principal Components Analysis and Validation of the Coping with the College Environment Scale (CWCES)

    ERIC Educational Resources Information Center

    Ackermann, Margot Elise; Morrow, Jennifer Ann

    2008-01-01

    The present study describes the development and initial validation of the Coping with the College Environment Scale (CWCES). Participants included 433 college students who took an online survey. Principal Components Analysis (PCA) revealed six coping strategies: planning and self-management, seeking support from institutional resources, escaping…

  7. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    The comparison results of different mother wavelets used for de-noising of model and experimental data which were presented by profiles of absorption spectra of exhaled air are presented. The impact of wavelets de-noising on classification quality made by principal component analysis are also discussed.

  8. Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.

    2013-06-01

    Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450- 950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.

  9. Stability of Nonlinear Principal Components Analysis: An Empirical Study Using the Balanced Bootstrap

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.

    2007-01-01

    Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…

  10. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  11. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  12. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  13. 40 CFR 60.1580 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the model rule? 60.1580 Section 60.1580 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines..., 1999 Use of Model Rule § 60.1580 What are the principal components of the model rule? The model rule...

  14. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  15. Students' Perceptions of Teaching and Learning Practices: A Principal Component Approach

    ERIC Educational Resources Information Center

    Mukorera, Sophia; Nyatanga, Phocenah

    2017-01-01

    Students' attendance and engagement with teaching and learning practices is perceived as a critical element for academic performance. Even with stipulated attendance policies, students still choose not to engage. The study employed a principal component analysis to analyze first- and second-year students' perceptions of the importance of the 12…

  16. Principal Perspectives about Policy Components and Practices for Reducing Cyberbullying in Urban Schools

    ERIC Educational Resources Information Center

    Hunley-Jenkins, Keisha Janine

    2012-01-01

    This qualitative study explores large, urban, mid-western principal perspectives about cyberbullying and the policy components and practices that they have found effective and ineffective at reducing its occurrence and/or negative effect on their schools' learning environments. More specifically, the researcher was interested in learning more…

  17. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    ERIC Educational Resources Information Center

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…

  18. Learning Principal Component Analysis by Using Data from Air Quality Networks

    ERIC Educational Resources Information Center

    Perez-Arribas, Luis Vicente; Leon-González, María Eugenia; Rosales-Conrado, Noelia

    2017-01-01

    With the final objective of using computational and chemometrics tools in the chemistry studies, this paper shows the methodology and interpretation of the Principal Component Analysis (PCA) using pollution data from different cities. This paper describes how students can obtain data on air quality and process such data for additional information…

  19. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  20. Relationships between Association of Research Libraries (ARL) Statistics and Bibliometric Indicators: A Principal Components Analysis

    ERIC Educational Resources Information Center

    Hendrix, Dean

    2010-01-01

    This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…

  1. Principal component analysis for protein folding dynamics.

    PubMed

    Maisuradze, Gia G; Liwo, Adam; Scheraga, Harold A

    2009-01-09

    Protein folding is considered here by studying the dynamics of the folding of the triple beta-strand WW domain from the Formin-binding protein 28. Starting from the unfolded state and ending either in the native or nonnative conformational states, trajectories are generated with the coarse-grained united residue (UNRES) force field. The effectiveness of principal components analysis (PCA), an already established mathematical technique for finding global, correlated motions in atomic simulations of proteins, is evaluated here for coarse-grained trajectories. The problems related to PCA and their solutions are discussed. The folding and nonfolding of proteins are examined with free-energy landscapes. Detailed analyses of many folding and nonfolding trajectories at different temperatures show that PCA is very efficient for characterizing the general folding and nonfolding features of proteins. It is shown that the first principal component captures and describes in detail the dynamics of a system. Anomalous diffusion in the folding/nonfolding dynamics is examined by the mean-square displacement (MSD) and the fractional diffusion and fractional kinetic equations. The collisionless (or ballistic) behavior of a polypeptide undergoing Brownian motion along the first few principal components is accounted for.
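
    As a hedged illustration of the core computation this abstract relies on (not the UNRES pipeline itself), the sketch below runs PCA on a toy trajectory in which one collective coordinate dominates the motion, so the first principal component captures most of the dynamics. All data and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trajectory": 500 frames of a 6-coordinate system in which one
# collective direction dominates, mimicking a slow folding coordinate.
frames = rng.normal(size=(500, 6)) * 0.1
frames[:, 0] += np.linspace(0.0, 3.0, 500)   # dominant slow motion

# PCA: diagonalize the covariance of the mean-centered trajectory.
centered = frames - frames.mean(axis=0)
cov = centered.T @ centered / (len(frames) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# Project onto the first principal component and report its share of
# the total variance, the quantity used to judge how well PC1
# "captures and describes the dynamics of a system".
pc1 = centered @ eigvecs[:, 0]
explained = eigvals[0] / eigvals.sum()
print(f"PC1 explains {explained:.1%} of the variance")
```

    In this toy setup PC1 carries well over 90% of the variance; for real folding trajectories the abstract's point is that the leading components still summarize the global, correlated motions.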

  2. Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters.

    PubMed

    Tao, Dapeng; Lin, Xu; Jin, Lianwen; Li, Xuelong

    2016-03-01

    Chinese character font recognition (CCFR) has received increasing attention as intelligent applications based on optical character recognition become popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing the basic strokes of Chinese characters in detail, we propose that font recognition on a single Chinese character is a sequence classification problem, which can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with 2-D long short-term memory (2DLSTM) and develop the principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component convolution layer helps remove noise and obtain rational and complete font information, and 2) 2DLSTM handles long-range contextual processing along the scan directions, which helps capture the contrast between character trajectory and background. Experiments using a frequently used CCFR dataset suggest the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.

  3. Dynamic of consumer groups and response of commodity markets by principal component analysis

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo

    2017-09-01

    This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in a 6-month timeframe display similar values during 2010-2011 but decline after 2012. A sharp drop in the largest eigenvalue implies a significant change in the market state. Three commodity sectors, energy, metals and agriculture, are projected into a two-dimensional space spanned by the first two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled and scattered during severe crises, such as the European sovereign debt crisis, and the positions of the groups in this space shifted notably during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, so the change of state of one group can serve as a warning for the other groups.

  4. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were detected in N. roborowskii; only V could not be detected. Na, K and Ca showed high concentrations. Ti showed the largest variance in content, while K showed the smallest. Four principal components were extracted from the original data, with a cumulative variance contribution rate of 81.542%; the first principal component contributed 44.997% of the variance, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origin, which was clearly revealed by PCA. These results provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
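
    The variance-contribution bookkeeping used in studies like the one above (per-component contribution rate and the cumulative rate used to decide how many PCs to retain) can be sketched as follows. The element table here is simulated stand-in data, not the paper's ICP-OES measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical element-concentration table: 15 samples x 8 elements,
# generated from two latent factors plus noise.
latent = rng.normal(size=(15, 2))
loadings = rng.normal(size=(2, 8))
data = latent @ loadings + 0.2 * rng.normal(size=(15, 8))

# PCA on the correlation matrix: standardize first, as is usual for
# variables on very different scales (e.g. major Na vs. trace Ti).
z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
corr = z.T @ z / (len(z) - 1)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # descending

contrib = eigvals / eigvals.sum()             # variance contribution rate
cumulative = np.cumsum(contrib)
n_pc = int(np.searchsorted(cumulative, 0.80) + 1)
print(f"PC1 contributes {contrib[0]:.1%}; {n_pc} PCs reach 80% cumulative")
```

    The 80% threshold is an illustrative choice; the paper retains four components at a cumulative rate of 81.5%.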

  5. [Applications of three-dimensional fluorescence spectrum of dissolved organic matter to identification of red tide algae].

    PubMed

    Lü, Gui-Cai; Zhao, Wei-Hong; Wang, Jiang-Tao

    2011-01-01

    The identification techniques for 10 species of red tide algae often found in the coastal areas of China were developed by combining the three-dimensional fluorescence spectra of fluorescent dissolved organic matter (FDOM) from the cultured red tide algae with principal component analysis. Based on the results of the principal component analysis, the first principal component loading spectrum of the three-dimensional fluorescence spectrum was chosen as the identification characteristic spectrum for red tide algae, and the phytoplankton fluorescence characteristic spectrum band was established. The 10 algae species were then tested using Bayesian discriminant analysis, with a correct identification rate of more than 92% for Pyrrophyta at the species level and more than 75% for Bacillariophyta at the genus level, within which the correct identification rates exceeded 90% for Phaeodactylum and Chaetoceros. The results showed that the identification techniques for the 10 species of red tide algae, based on the three-dimensional fluorescence spectra of FDOM from the cultured red tide algae and principal component analysis, could work well.

  6. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually flattened into a one-dimensional array, causing issues such as the curse of dimensionality and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients acts as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. Two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment classifying eight hand motions using 4-channel electromyographic (EMG) signals recorded from healthy subjects and amputees, illustrating the efficiency and effectiveness of the proposed method for biomedical signal analysis.
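
    A minimal sketch of the two-directional two-dimensional PCA step (without the stationary-wavelet front end): each sample is kept as a matrix, scatter matrices are formed in the row and column directions, and every sample is projected from both sides onto a small matrix of features. Shapes and names here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the multi-scale coefficient matrices: 40 samples,
# each an 8x16 matrix (e.g., scales x channels/time).
mats = rng.normal(size=(40, 8, 16))
mean_mat = mats.mean(axis=0)
dev = mats - mean_mat

# Row-direction and column-direction scatter matrices.
g_row = np.einsum('kij,klj->il', dev, dev) / len(mats)   # 8x8
g_col = np.einsum('kij,kil->jl', dev, dev) / len(mats)   # 16x16

# Keep the leading eigenvectors in each direction.
p, q = 3, 4
u = np.linalg.eigh(g_row)[1][:, ::-1][:, :p]             # 8x3
v = np.linalg.eigh(g_col)[1][:, ::-1][:, :q]             # 16x4

# Two-directional projection: each sample becomes a small p x q matrix
# instead of a long vector, avoiding the dimensionality blow-up of
# flattening before PCA.
features = np.einsum('ip,kij,jq->kpq', u, mats, v)
print(features.shape)   # (40, 3, 4)
```

    The point of projecting from both sides is that the eigenproblems are 8x8 and 16x16 here, whereas vectorizing first would require PCA on a 128x128 covariance matrix from only 40 samples.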

  7. Hyperspectral optical imaging of human iris in vivo: characteristics of reflectance spectra

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Pereira, Luís M.; Correia, Hélder T.; Nascimento, Sérgio M. C.

    2011-07-01

    We report a hyperspectral imaging system for measuring the reflectance spectra of real human irises with high spatial resolution. A set of ocular prostheses was used as the control condition. Reflectance data were decorrelated by principal-component analysis. The main conclusion is that the spectral complexity of the human iris is considerable: between 9 and 11 principal components are necessary to account for 99% of the cumulative variance in human irises. Correcting image misalignments associated with spontaneous ocular movements did not influence this result. The data also suggest a correlation between the first principal component and the level of melanin present in the iris. Although the spectral characteristics of the first five principal components were not affected by the radial and angular position of the selected iridal areas, the higher-order components were, suggesting a possible influence of iris texture. The results show that hyperspectral imaging of the iris, together with adequate spectroscopic analysis, provides more information than conventional colorimetric methods, making it suitable for the characterization of melanin and the noninvasive diagnosis of ocular diseases and iris color.

  8. Seeing wholes: The concept of systems thinking and its implementation in school leadership

    NASA Astrophysics Data System (ADS)

    Shaked, Haim; Schechter, Chen

    2013-12-01

    Systems thinking (ST) is an approach advocating thinking about any given issue as a whole, emphasising the interrelationships between its components rather than the components themselves. This article aims to link ST and school leadership, claiming that ST may enable school principals to develop highly performing schools that can cope successfully with current challenges, which are more complex than ever before in today's era of accountability and high expectations. The article presents the concept of ST - its definition, components, history and applications. Thereafter, its connection to education and its contribution to school management are described. The article concludes by discussing practical processes including screening for ST-skilled principal candidates and developing ST skills among prospective and currently performing school principals, pinpointing three opportunities for skills acquisition: during preparatory programmes; during their first years on the job, supported by veteran school principals as mentors; and throughout their entire career. Such opportunities may not only provide school principals with ST skills but also improve their functioning throughout the aforementioned stages of professional development.

  9. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
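
    A hedged sketch of the transform-then-reduce idea described above, using an ordinary (non-robust) PCA in place of the robust principal component transformation the authors apply: compositional concentrations are first mapped to isometric log-ratio (ilr) coordinates, then reduced. The compositional data are simulated, and this particular sequential-binary-partition ilr basis is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical compositional data: 100 samples, 5 element
# "concentrations" closed to sum to 1, as compositional data must be.
raw = rng.lognormal(size=(100, 5))
comp = raw / raw.sum(axis=1, keepdims=True)

def ilr(x):
    """Isometric log-ratio transform of closed compositions (rows)."""
    d = x.shape[1]
    out = np.empty((x.shape[0], d - 1))
    logx = np.log(x)
    for i in range(1, d):
        gmean_log = logx[:, :i].mean(axis=1)   # log geometric mean
        out[:, i - 1] = np.sqrt(i / (i + 1)) * (gmean_log - logx[:, i])
    return out

z = ilr(comp)

# Ordinary PCA (via SVD) on the ilr coordinates; the procedure above
# uses a robust variant before mixture-model clustering.
centered = z - z.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt.T
print(scores.shape)   # (100, 4)
```

    A useful sanity check on the ilr step is scale invariance: multiplying all raw concentrations by a constant leaves the ilr coordinates unchanged, which is exactly why log-ratio transforms suit closed (constant-sum) data.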

  10. Temporal evolution of financial-market correlations.

    PubMed

    Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S

    2011-08-01

    We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
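
    The random-matrix comparison described above can be sketched as follows: eigenvalues of an empirical correlation matrix of returns are checked against the Marchenko-Pastur upper edge expected for purely uncorrelated returns, so eigenvalues above the edge indicate genuine structure. The data and dimensions are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

n_assets, n_days = 20, 500

# Simulated returns: a common "market mode" plus idiosyncratic noise.
market = rng.normal(size=(n_days, 1))
returns = 0.5 * market + rng.normal(size=(n_days, n_assets))

# Empirical correlation matrix of standardized returns.
z = (returns - returns.mean(axis=0)) / returns.std(axis=0, ddof=1)
corr = z.T @ z / (n_days - 1)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # descending

# Marchenko-Pastur upper edge for a random correlation matrix with
# aspect ratio q = N/T: eigenvalues above it carry real correlations.
q = n_assets / n_days
lam_max = (1 + np.sqrt(q)) ** 2

n_signal = int((eigvals > lam_max).sum())
print(f"{n_signal} eigenvalue(s) exceed the random-matrix bound {lam_max:.2f}")
```

    Here the single market mode produces one large eigenvalue well above the bound, mirroring the paper's finding that a small number of components accounts for a large share of market variability.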

  11. Temporal evolution of financial-market correlations

    NASA Astrophysics Data System (ADS)

    Fenn, Daniel J.; Porter, Mason A.; Williams, Stacy; McDonald, Mark; Johnson, Neil F.; Jones, Nick S.

    2011-08-01

    We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.

  12. Buffers and vegetative filter strips

    Treesearch

    Matthew J. Helmers; Thomas M. Isenhart; Michael G. Dosskey; Seth M. Dabney

    2008-01-01

    This chapter describes the use of buffers and vegetative filter strips relative to water quality. In particular, we primarily discuss the herbaceous components of the following NRCS Conservation Practice Standards.

  13. Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.

    PubMed

    Kiran Kumar, G R; Reddy, M Ramasubba

    2018-06-08

    Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require estimation of the background electroencephalogram (EEG) noise components. Although this improves performance in low signal-to-noise ratio (SNR) conditions, the additional computational cost makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA). In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach that extracts the SSVEP component effectively without extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing temporal information, and it does not rely on rigid templates of the SSVEP. Data from ten test subjects were used to evaluate the proposed method, and the results demonstrate that periodic component analysis acts as a reliable spatial filter for SSVEP extraction. Statistical tests were performed to validate the results. The experimental results show that πCA provides a significant improvement in accuracy over standard CCA and MEC in low SNR conditions, and overall detection accuracy better than CCA and on par with MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.

  14. Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP

    NASA Astrophysics Data System (ADS)

    Russo, A.; Trigo, R. M.

    2003-04-01

    A non-linear generalisation of principal component analysis (PCA), denoted non-linear principal component analysis (NLPCA), is introduced and applied to the analysis of three data sets. NLPCA allows the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991); the method and the details of its implementation are described. NLPCA is first applied to a data set sampled from the Lorenz attractor (1963). The NLPCA approximations are found to be more representative of the data than the corresponding PCA approximations. The same methodology was applied to the less well-known Lorenz attractor (1984); however, the results were not as good as those attained with the famous 'butterfly' attractor, and further work with this model is underway to assess whether NLPCA can represent the data characteristics better than the corresponding PCA approximations. The application of NLPCA to relatively simple dynamical systems, such as those proposed by Lorenz, is well understood; applying NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the associated explained variance. Finally, directions for future work are presented.

  15. QSAR modeling of flotation collectors using principal components extracted from topological indices.

    PubMed

    Natarajan, R; Nirdosh, Inderjit; Basak, Subhash C; Mills, Denise R

    2002-01-01

    Several topological indices were calculated for substituted cupferrons that were tested as collectors for the froth flotation of uranium. Principal component analysis (PCA) was used for data reduction. Seven principal components (PCs) were found to account for 98.6% of the variance among the computed indices. The principal components thus extracted were used in stepwise regression analyses to construct regression models for the prediction of the separation efficiencies (Es) of the collectors. A two-parameter model with a correlation coefficient of 0.889 and a three-parameter model with a correlation coefficient of 0.913 were obtained. PCs were found to be better than the partition coefficient for forming regression equations, and inclusion of an electronic parameter such as the Hammett sigma or quantum-mechanically derived electronic charges on the chelating atoms did not improve the correlation coefficient significantly. The method was extended to model the separation efficiencies of mercaptobenzothiazoles (MBT) and aminothiophenols (ATP) used in the flotation of lead and zinc ores, respectively. Five principal components were found to explain 99% of the data variability in each series. A three-parameter equation with a correlation coefficient of 0.985 and a two-parameter equation with a correlation coefficient of 0.926 were obtained for MBT and ATP, respectively. The amenability of the separation efficiencies of chelating collectors to QSAR modeling using PCs based on topological indices might aid the selection of collectors for synthesis and testing from a virtual database.
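
    A minimal sketch of the principal-component-regression workflow this abstract describes (PCA of the descriptors for data reduction, then regression of separation efficiency on the retained PCs). The data are simulated stand-ins, not real topological indices or measured efficiencies.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: 20 collectors described by 10 correlated
# "topological indices", with a response (separation efficiency, Es)
# driven by two latent directions.
latent = rng.normal(size=(20, 2))
indices = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(20, 10))
es = latent @ np.array([1.5, -0.8]) + 0.1 * rng.normal(size=20)

# Step 1: PCA of the standardized indices (via SVD); keep two PCs.
z = (indices - indices.mean(axis=0)) / indices.std(axis=0, ddof=1)
_, s, vt = np.linalg.svd(z, full_matrices=False)
pcs = z @ vt.T[:, :2]

# Step 2: least-squares regression of Es on the retained PCs, and the
# correlation coefficient used to judge the model, as in the paper.
design = np.column_stack([np.ones(len(pcs)), pcs])
coef, *_ = np.linalg.lstsq(design, es, rcond=None)
pred = design @ coef
r = np.corrcoef(pred, es)[0, 1]
print(f"correlation coefficient r = {r:.3f}")
```

    Regressing on a handful of PCs rather than the raw, collinear indices is what keeps the two- and three-parameter models in the paper stable despite dozens of computed descriptors.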

  16. Azimuthal filter to attenuate ground roll noise in the F-kx-ky domain for land 3D-3C seismic data with uneven acquisition geometry

    NASA Astrophysics Data System (ADS)

    Arevalo-Lopez, H. S.; Levin, S. A.

    2016-12-01

    The vertical component of seismic wave reflections is contaminated by surface noise such as ground roll and secondary scattering from near surface inhomogeneities. A common method for attenuating these, unfortunately often aliased, arrivals is via velocity filtering and/or multichannel stacking. 3D-3C acquisition technology provides two additional sources of information about the surface wave noise that we exploit here: (1) areal receiver coverage, and (2) a pair of horizontal components recorded at the same location as the vertical component. Areal coverage allows us to segregate arrivals at each individual receiver or group of receivers by direction. The horizontal components, having much less compressional reflection body wave energy than the vertical component, provide a template of where to focus our energies on attenuating the surface wave arrivals. (In the simplest setting, the vertical component is a scaled 90 degree phase rotated version of the radial horizontal arrival, a potential third possible lever we have not yet tried to integrate.) The key to our approach is to use the magnitude of the horizontal components to outline a data-adaptive "velocity" filter region in the ω-Kx-Ky domain. The big advantage for us is that even in the presence of uneven receiver geometries, the filter automatically tracks through aliasing without manual sculpting and a priori velocity and dispersion estimation. The method was applied to an aliased synthetic dataset based on a five layer earth model which also included shallow scatterers to simulate near-surface inhomogeneities and successfully removed both the ground roll and scatterers from the vertical component (Figure 1).

  17. Comparison of the Frequency Response and Voltage Tuning Characteristics of a FFP and a MEMS Fiber Optic Tunable Filter

    DTIC Science & Technology

    2004-05-12

    Structural Engineering, La Jolla, CA 92093 14. ABSTRACT Tunable optical filters based on a Fabry-Perot element are a critical component in many...wavelength based fiber optic sensor systems. This report compares the performance of two fiber-pigtailed tunable optical filters, the fiber Fabry-Perot (FFP...both filters suggests that they can operate at frequencies up to 20 kHz and possibly as high as 100 kHz. 15. SUBJECT TERMS Tunable Fabry-Perot filters

  18. Silicon Micromachining for Terahertz Component Development

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Goutam; Reck, Theodore J.; Jung-Kubiak, Cecile; Siles, Jose V.; Lee, Choonsup; Lin, Robert; Mehdi, Imran

    2013-01-01

    Waveguide component technology at terahertz frequencies has come of age in recent years. Essential components such as ortho-mode transducers (OMT), quadrature hybrids, filters, and others for high performance system development were either impossible to build or too difficult to fabricate with traditional machining techniques. With micromachining of silicon wafers coated with sputtered gold it is now possible to fabricate and test these waveguide components. Using a highly optimized Deep Reactive Ion Etching (DRIE) process, we are now able to fabricate silicon micromachined waveguide structures working beyond 1 THz. In this paper, we describe in detail our approach of design, fabrication, and measurement of silicon micromachined waveguide components and report the results of a 1 THz canonical E-plane filter.

  19. Reliable screening of various foodstuffs with respect to their irradiation status: A comparative study of different analytical techniques

    NASA Astrophysics Data System (ADS)

    Ahn, Jae-Jun; Akram, Kashif; Kwak, Ji-Young; Jeong, Mi-Seon; Kwon, Joong-Ho

    2013-10-01

    Cost-effective and time-efficient analytical techniques are required to screen large food lots in accordance with their irradiation status. Gamma-irradiated (0-10 kGy) cinnamon, red pepper, black pepper, and fresh paprika were investigated using photostimulated luminescence (PSL), the direct epifluorescent filter technique/aerobic plate count (DEFT/APC), and electronic-nose (e-nose) analyses. The screening results were also confirmed with thermoluminescence analysis. PSL analysis discriminated between irradiated (positive, >5000 PCs) and non-irradiated (negative, <700 PCs) cinnamon and red peppers. Black pepper had intermediate results (700-5000 PCs), while paprika had low sensitivity (negative results) upon irradiation. The DEFT/APC technique also showed clear screening results through the changes in microbial profiles, where the best results were found in paprika, followed by red pepper and cinnamon. E-nose analysis showed a dose-dependent discrimination in volatile profiles upon irradiation through principal component analysis. These methods can be considered for potential application in the screening analysis of irradiated foods.

  20. Ants impact the composition of the aquatic macroinvertebrate communities of a myrmecophytic tank bromeliad.

    PubMed

    Dejean, Alain; Compin, Arthur; Leponce, Maurice; Azémar, Frédéric; Bonhomme, Camille; Talaga, Stanislas; Pelozuelo, Laurent; Hénaut, Yann; Corbara, Bruno

    2018-03-01

    In an inundated Mexican forest, 89 out of 92 myrmecophytic tank bromeliads (Aechmea bracteata) housed an associated ant colony: 13 sheltered Azteca serica, 43 Dolichoderus bispinosus, and 33 Neoponera villosa. Ant presence has a positive impact on the diversity of the aquatic macroinvertebrate communities (n=30 bromeliads studied). A Principal Component Analysis (PCA) showed that the presence and the species of ant were not correlated with bromeliad size, quantity of water, number of wells, filtered organic matter or incident radiation. The PCA and a generalized linear model showed that the presence of Azteca serica differed from the presence of the other two ant species, or of no ants, in its effects on the aquatic invertebrate community (more predators). Therefore, both ant presence and ant species affect the composition of the aquatic macroinvertebrate communities in the tanks of A. bracteata, likely due to the ants' deposition of feces and other waste in these tanks. Copyright © 2018. Published by Elsevier Masson SAS.

  1. A Method for Predicting Protein Complexes from Dynamic Weighted Protein-Protein Interaction Networks.

    PubMed

    Liu, Lizhen; Sun, Xiaowu; Song, Wei; Du, Chao

    2018-06-01

    Predicting protein complexes from a protein-protein interaction (PPI) network is of great significance for recognizing the structure and function of cells. A protein may interact with different proteins at different times or under different conditions, yet existing approaches utilize only static PPI network data, which may lose much temporal biological information. First, this article proposes a novel method that combines gene expression data at different time points with the traditional static PPI network to construct different dynamic subnetworks. Second, to further filter out data noise, the semantic similarity based on gene ontology is used as the network weight, and principal component analysis is introduced to combine the weights computed by three traditional methods. Third, after building the dynamic PPI network, a protein complex prediction algorithm based on the "core-attachment" structural feature is applied to detect complexes from each dynamic subnetwork. Finally, the experimental results show that the proposed method performs well on detecting protein complexes from dynamic weighted PPI networks.

  2. Source contribution of PM₂.₅ at different locations on the Malaysian Peninsula.

    PubMed

    Ee-Ling, Ooi; Mustaffa, Nur Ili Hamizah; Amil, Norhaniza; Khan, Md Firoz; Latif, Mohd Talib

    2015-04-01

    This study determined the source contribution of PM2.5 (particulate matter <2.5 μm) in air at three locations on the Malaysian Peninsula. PM2.5 samples were collected using a high volume sampler equipped with quartz filters. Ion chromatography was used to determine the ionic composition of the samples, and inductively coupled plasma mass spectrometry was used to determine the concentrations of heavy metals. Principal component analysis combined with multiple linear regression was used to identify the possible sources of PM2.5. The range of PM2.5 was between 10 ± 3 and 30 ± 7 µg m(-3). Sulfate (SO4(2-)) was the major ionic compound detected, and zinc was found to dominate the heavy metals. Source apportionment analysis revealed that motor vehicles and soil dust dominated the composition of PM2.5 in the urban area. Domestic waste combustion dominated in the suburban area, while biomass burning dominated in the rural area.
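
    The PCA-plus-regression apportionment described above can be sketched as follows: extract a few principal-component factors from the standardized species concentrations, then regress total PM2.5 mass on the factor scores so the fitted coefficients apportion mass among candidate sources. The species matrix, mass vector, and the choice of three components are synthetic illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy species matrix: 100 filter samples x 6 measured species
# (e.g. sulfate, zinc, ...); values are synthetic.
X = rng.lognormal(size=(100, 6))
pm25 = X @ np.array([0.5, 0.3, 0.1, 0.05, 0.03, 0.02])  # toy total mass

# Step 1: PCA of the standardized species data (via SVD); the retained
# components are interpreted as candidate source factors.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :3] * s[:3]            # factor scores for 3 retained components

# Step 2: multiple linear regression of PM2.5 mass on the factor scores;
# the coefficients apportion mass among the factors.
A = np.column_stack([np.ones(len(pm25)), scores])
coef, *_ = np.linalg.lstsq(A, pm25, rcond=None)
predicted = A @ coef
```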

  3. Automatic optical detection and classification of marine animals around MHK converters using machine vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Steven

    Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes – algae | invertebrate | vertebrate, one species | multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.

  4. Flavor release measurement from gum model system.

    PubMed

    Ovejero-López, Isabel; Haahr, Anne-Mette; van den Berg, Frans; Bredie, Wender L P

    2004-12-29

    Flavor release from a mint-flavored chewing gum model system was measured by atmospheric pressure chemical ionization mass spectrometry (APCI-MS) and sensory time-intensity (TI). A data analysis method for handling the individual curves from both methods is presented. The APCI-MS data are ratio-scaled using the signal from acetone in the breath of subjects. Next, the APCI-MS and sensory TI curves are smoothed by low-pass filtering. Principal component analysis of the individual curves is used to display graphically the product differentiation by APCI-MS or TI signals. It is shown that differences in gum composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The effect of the sweeteners (sorbitol or xylitol) is less apparent. Sensory adaptation and sensitivity differences of human perception versus APCI-MS detection might explain the divergence between the two dynamic measurement methods.
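
    The smoothing-then-PCA step can be sketched as below: low-pass filter each release curve with a zero-phase Butterworth filter, then project the smoothed curves onto their first two principal components to display product differentiation. The synthetic curves, filter order, and cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 500)
# Toy release curves for 12 runs: a shifting peak plus measurement noise.
curves = np.array([np.exp(-(t - 3 - 0.2 * i) ** 2) + 0.05 * rng.normal(size=t.size)
                   for i in range(12)])

# Low-pass filter each curve (zero-phase, 4th-order Butterworth;
# cutoff is a fraction of the Nyquist frequency, an assumption here).
b, a = butter(4, 0.05)
smooth = filtfilt(b, a, curves, axis=1)

# PCA (via SVD) of the smoothed curves; the score plot displays
# product differentiation across the runs.
centered = smooth - smooth.mean(0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :2] * s[:2]
```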

  5. Graph Frequency Analysis of Brain Signals

    PubMed Central

    Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro

    2016-01-01

    This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains, and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
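
    The decomposition the abstract describes can be sketched directly: the eigenvectors of the graph Laplacian define graph frequencies (small eigenvalues correspond to smooth variation across regions), and an ideal graph filter splits a brain signal into smooth and rapidly varying pieces. The random network, signal, and frequency cutoff below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
# Toy symmetric functional-connectivity network (one node per region).
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

# Graph Laplacian eigendecomposition defines the graph frequencies:
# small eigenvalues <-> smooth (low-frequency) variation across regions.
L = np.diag(W.sum(1)) - W
lam, U = np.linalg.eigh(L)           # eigenvalues ascending, eigenvectors in U

x = rng.normal(size=n)               # a brain signal: one value per region
x_hat = U.T @ x                      # graph Fourier transform

# Ideal low-pass / high-pass graph filters (cutoff k is an assumption).
k = 5
x_low = U[:, :k] @ x_hat[:k]         # smooth piece
x_high = U[:, k:] @ x_hat[k:]        # rapidly varying piece
```

The two pieces sum back to the original signal because the Laplacian eigenvectors form an orthonormal basis.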

  6. Lithologic and structural mapping of the Abiete-Toko gold district in southern Cameroon, using Landsat 7 ETM+/SRTM

    NASA Astrophysics Data System (ADS)

    Binam Mandeng, Eugène Pascal; Bondjè Bidjeck, Louise Marie; Takodjou Wambo, Jonas Didero; Taku, Agbor; Bineli Betsi, Thierry; Solange Ipan, Antoinette; Tchami Nfada, Lionel; Bitom Dieudonné, Lucien

    2018-03-01

    The geology of the Abiete-Toko gold district in South Cameroon is investigated using a combination of Landsat 7 ETM+/SRTM image processing techniques, conventional geologic field mapping and geostatistical analysis. The satellite images were treated using Principal Component Analysis and Sobel filters to separate the background noise from lithotectonic structures which were matched with field data. The results show that this area has been affected by a polyphase deformation represented by S1 foliation, Sc1 schistosity, L1 lineation, S2 foliation, F2 folds, and F3 shear zones and faults. A detailed analysis of all the structures led to the identification of two major networks of dextral and sinistral shear zones oriented WNW-ESE and NE-SW, respectively. These results may serve in mining prospection, especially in the search for tectonically controlled primary mineralization and so may significantly guide the exploration of primary gold mineralization in the Abiete-Toko area subjected to years of artisanal gold mining.

  7. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic

    USGS Publications Warehouse

    Chavez, P.S.; Sides, S.C.; Anderson, J.A.

    1991-01-01

    The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect. -Authors
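
    The HPF procedure, which the comparison found least spectrally distorting, can be sketched as follows: extract high-frequency spatial detail from the panchromatic band with a high-pass filter and add it to each upsampled multispectral band. The box-filter kernel size, bilinear upsampling, and toy image sizes are assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

rng = np.random.default_rng(4)
pan = rng.random((64, 64))            # high-resolution panchromatic band (toy)
ms = rng.random((3, 16, 16))          # low-resolution multispectral bands (toy)

# High-Pass Filter (HPF) merge: high-pass = original - low-pass.
detail = pan - uniform_filter(pan, size=5)

# Upsample each multispectral band to the panchromatic grid (bilinear),
# then add the spatial detail; spectral content is largely preserved
# because only high frequencies are injected.
ms_up = np.stack([zoom(band, 4, order=1) for band in ms])
merged = ms_up + detail               # detail broadcasts across bands
```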

  8. Face-iris multimodal biometric scheme based on feature level fusion

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei

    2015-11-01

    Unlike score level fusion, feature level fusion demands all the features extracted from unimodal traits with high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional-Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.

  9. Polycyclic Aromatic Hydrocarbons Bound to PM 2.5 in Urban Coimbatore, India with Emphasis on Source Apportionment

    PubMed Central

    Mohanraj, R.; Dhanakumar, S.; Solaraj, G.

    2012-01-01

    Coimbatore is one of the fast-growing industrial cities of Southern India, with an urban population of 1.9 million. This study attempts to evaluate the trends of airborne fine particulates (PM 2.5) and the polycyclic aromatic hydrocarbons (PAHs) bound to them. The PM 2.5 mass was collected on polytetrafluoroethylene filters using a fine particulate sampler at monthly intervals from March 2009 to February 2010. PAHs were extracted from PM 2.5 and estimated by high-performance liquid chromatography. It is alarming to note that PM 2.5 values ranged between 27.85 and 165.75 μg/m3 and exceeded the air quality standards in many sampling events. The sum of 9 PAHs bound to PM 2.5 in a single sampling event ranged from 4.1 to 1632.3 ng/m3. PAH diagnostic ratios and principal component analysis results revealed vehicular emissions and diesel-powered generators as the predominant sources of PAH in Coimbatore. PMID:22649329

  10. Pattern Analysis of Dynamic Susceptibility Contrast-enhanced MR Imaging Demonstrates Peritumoral Tissue Heterogeneity

    PubMed Central

    Akbari, Hamed; Macyszyn, Luke; Da, Xiao; Wolf, Ronald L.; Bilello, Michel; Verma, Ragini; O’Rourke, Donald M.

    2014-01-01

    Purpose To augment the analysis of dynamic susceptibility contrast material–enhanced magnetic resonance (MR) images to uncover unique tissue characteristics that could potentially facilitate treatment planning through a better understanding of the peritumoral region in patients with glioblastoma. Materials and Methods Institutional review board approval was obtained for this study, with waiver of informed consent for retrospective review of medical records. Dynamic susceptibility contrast-enhanced MR imaging data were obtained for 79 patients, and principal component analysis was applied to the perfusion signal intensity. The first six principal components were sufficient to characterize more than 99% of variance in the temporal dynamics of blood perfusion in all regions of interest. The principal components were subsequently used in conjunction with a support vector machine classifier to create a map of heterogeneity within the peritumoral region, and the variance of this map served as the heterogeneity score. Results The calculated principal components allowed near-perfect separability of tissue that was likely highly infiltrated with tumor and tissue that was unlikely infiltrated with tumor. The heterogeneity map created by using the principal components showed a clear relationship between voxels judged by the support vector machine to be highly infiltrated and subsequent recurrence. The results demonstrated a significant correlation (r = 0.46, P < .0001) between the heterogeneity score and patient survival. The hazard ratio was 2.23 (95% confidence interval: 1.4, 3.6; P < .01) between patients with high and low heterogeneity scores on the basis of the median heterogeneity score. Conclusion Analysis of dynamic susceptibility contrast-enhanced MR imaging data by using principal component analysis can help identify imaging variables that can be subsequently used to evaluate the peritumoral region in glioblastoma. 
These variables are potentially indicative of tumor infiltration and may become useful tools in guiding therapy, as well as individualized prognostication. © RSNA, 2014 PMID:24955928

  11. Signal-to-noise contribution of principal component loads in reconstructed near-infrared Raman tissue spectra.

    PubMed

    Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R

    2010-01-01

    The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR(msr)) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads. 
The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
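
    The core computation, mean SNR over the spectral range of spectra reconstructed from an increasing number of principal components, can be sketched with a PCA via SVD. The synthetic spectra and the residual-based noise estimate are illustrative assumptions, not the bladder-tissue data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy dataset: 50 spectra over 400 bins; one broad band with varying
# amplitude per spectrum, plus additive noise.
w = np.linspace(0, 1, 400)
amps = 1 + 0.2 * rng.normal(size=(50, 1))
spectra = amps * np.exp(-((w - 0.5) / 0.1) ** 2) + 0.1 * rng.normal(size=(50, 400))

mean = spectra.mean(0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)  # PCA via SVD

def mean_snr(n_pcs):
    """Mean SNR over the whole spectral range for spectra rebuilt
    from the first n_pcs principal-component loads."""
    recon = mean + (U[:, :n_pcs] * s[:n_pcs]) @ Vt[:n_pcs]
    noise = spectra - recon          # residual treated as the noise estimate
    return np.abs(recon).mean() / noise.std()

# The reconstruction SNR rises with the number of retained PCs, until
# additional PCs mostly re-introduce noise.
snrs = [mean_snr(k) for k in (1, 5, 20)]
```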

  12. Principal component reconstruction (PCR) for cine CBCT with motion learning from 2D fluoroscopy.

    PubMed

    Gao, Hao; Zhang, Yawei; Ren, Lei; Yin, Fang-Fang

    2018-01-01

    This work aims to generate cine CT images (i.e., 4D images with high temporal resolution) based on a novel principal component reconstruction (PCR) technique with motion learning from 2D fluoroscopic training images. In the proposed PCR method, matrix factorization is utilized as an explicit low-rank regularization of 4D images, which are represented as a product of spatial principal components and temporal motion coefficients. The key hypothesis of PCR is that temporal coefficients from 4D images can be reasonably approximated by temporal coefficients learned from 2D fluoroscopic training projections. For this purpose, we can acquire fluoroscopic training projections for a few breathing periods at fixed gantry angles that are free from geometric distortion due to gantry rotation, that is, fluoroscopy-based motion learning. Such training projections can provide an effective characterization of the breathing motion. The temporal coefficients can be extracted from these training projections and used as priors for PCR, even though principal components from training projections are certainly not the same for the 4D images to be reconstructed. For this purpose, training data are synchronized with reconstruction data using identical real-time breathing position intervals for projection binning. In terms of image reconstruction, with a priori temporal coefficients the data fidelity for PCR changes from nonlinear to linear; consequently, the PCR method is robust and can be solved efficiently. PCR is formulated as a convex optimization problem with the sum of linear data fidelity with respect to spatial principal components and spatiotemporal total variation regularization imposed on 4D image phases. The solution algorithm of PCR is developed based on the alternating direction method of multipliers. The implementation is fully parallelized on GPU with the NVIDIA CUDA toolbox, and each reconstruction takes a few minutes. 
    The proposed PCR method is validated and compared with a state-of-the-art method, that is, PICCS, using both simulation and experimental data with the on-board cone-beam CT setting. The results demonstrated the feasibility of PCR for cine CBCT and significantly improved reconstruction quality of PCR over PICCS. With a priori temporal motion coefficients estimated from fluoroscopic training projections, the PCR method can accurately reconstruct spatial principal components and then generate cine CT images as a product of temporal motion coefficients and spatial principal components. © 2017 American Association of Physicists in Medicine.

  13. Frequency-selective quantitation of short-echo time 1H magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Poullet, Jean-Baptiste; Sima, Diana M.; Van Huffel, Sabine; Van Hecke, Paul

    2007-06-01

    Accurate and efficient filtering techniques are required to suppress large nuisance components present in short-echo time magnetic resonance (MR) spectra. This paper discusses two powerful filtering techniques used in long-echo time MR spectral quantitation, the maximum-phase FIR filter (MP-FIR) and the Hankel-Lanczos Singular Value Decomposition with Partial ReOrthogonalization (HLSVD-PRO), and shows that they can be applied to their more complex short-echo time spectral counterparts. Both filters are validated and compared through extensive simulations. Their properties are discussed. In particular, the capability of MP-FIR for dealing with macromolecular components is emphasized. Although this property does not make a large difference for long-echo time MR spectra, it can be important when quantifying short-echo time spectra.

  14. Frequency comb swept lasers.

    PubMed

    Tsai, Tsung-Han; Zhou, Chao; Adler, Desmond C; Fujimoto, James G

    2009-11-09

    We demonstrate a frequency comb (FC) swept laser and a frequency comb Fourier domain mode locked (FC-FDML) laser for applications in optical coherence tomography (OCT). The fiber-based FC swept lasers operate at sweep rates of 1kHz and 120kHz, respectively, over a 135nm tuning range centered at 1310nm with average output powers of 50mW. A 25GHz free spectral range frequency comb filter in the swept lasers causes the lasers to generate a series of well defined frequency steps. The narrow bandwidth (0.015nm) of the frequency comb filter enables an approximately -1.2dB sensitivity roll off over an approximately 3mm range, compared to conventional swept source and FDML lasers which have -10dB and -5dB roll offs, respectively. Measurements at very long ranges are possible with minimal sensitivity loss; however, reflections from outside the principal measurement range of 0-3mm appear aliased back into the principal range. In addition, the frequency comb output from the lasers is equally spaced in frequency (linear in k-space). The filtered laser output can be used to self-clock the OCT interference signal sampling, enabling direct fast Fourier transformation of the fringe signals without the need for fringe recalibration procedures. The design and operation principles of FC swept lasers are discussed, and designs for short cavity lasers for OCT and interferometric measurement applications are proposed.

  15. Frequency comb swept lasers

    PubMed Central

    Tsai, Tsung-Han; Zhou, Chao; Adler, Desmond C.; Fujimoto, James G.

    2010-01-01

    We demonstrate a frequency comb (FC) swept laser and a frequency comb Fourier domain mode locked (FC-FDML) laser for applications in optical coherence tomography (OCT). The fiber-based FC swept lasers operate at sweep rates of 1kHz and 120kHz, respectively, over a 135nm tuning range centered at 1310nm with average output powers of 50mW. A 25GHz free spectral range frequency comb filter in the swept lasers causes the lasers to generate a series of well defined frequency steps. The narrow bandwidth (0.015nm) of the frequency comb filter enables a ~−1.2dB sensitivity roll off over a ~3mm range, compared to conventional swept source and FDML lasers which have −10dB and −5dB roll offs, respectively. Measurements at very long ranges are possible with minimal sensitivity loss; however, reflections from outside the principal measurement range of 0–3mm appear aliased back into the principal range. In addition, the frequency comb output from the lasers is equally spaced in frequency (linear in k-space). The filtered laser output can be used to self-clock the OCT interference signal sampling, enabling direct fast Fourier transformation of the fringe signals without the need for fringe recalibration procedures. The design and operation principles of FC swept lasers are discussed, and designs for short cavity lasers for OCT and interferometric measurement applications are proposed. PMID:19997365

  16. Deeply etched MMI-based components on 4 μm thick SOI for SOA-based optical RAM cell circuits

    NASA Astrophysics Data System (ADS)

    Cherchi, Matteo; Ylinen, Sami; Harjanne, Mikko; Kapulainen, Markku; Aalto, Timo; Kanellos, George T.; Fitsios, Dimitrios; Pleros, Nikos

    2013-02-01

    We present novel deeply etched functional components, fabricated by multi-step patterning within our 4 μm thick Silicon on Insulator (SOI) platform based on single-mode rib waveguides and on the previously developed rib-to-strip converter. These novel components include Multi-Mode Interference (MMI) splitters with any desired splitting ratio, wavelength-sensitive 50/50 splitters with pre-filtering capability, multi-stage Mach-Zehnder Interferometer (MZI) filters for suppression of Amplified Spontaneous Emission (ASE), and MMI resonator filters. These building blocks enable functionalities otherwise not achievable on our SOI platform, and make it possible to integrate optical RAM cell layouts by resorting to our technology for hybrid integration of Semiconductor Optical Amplifiers (SOAs). Typical SOA-based RAM cell layouts require generic splitting ratios, which are not readily achievable with a single MMI splitter. We present here a novel solution to this problem, which is very compact and versatile and perfectly suits our technology. Another useful functional element when using SOAs is the pass-band filter to suppress ASE. We pursued two complementary approaches: a suitably interleaved cascaded MZI filter, based on a novel MMI coupler designed with pre-filtering capabilities, and a completely novel MMI resonator concept, to achieve larger free spectral ranges and a narrower pass-band response. Simulation and design principles are presented and compared to preliminary experimental functional results, together with scaling rules and predictions of achievable RAM cell densities. When combined with our newly developed ultra-small light-turning concept, these new components are expected to pave the way for high integration density of RAM cells.

  17. Improvement of LOD in Fluorescence Detection with Spectrally Nonuniform Background by Optimization of Emission Filtering.

    PubMed

    Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N

    2017-10-17

    The limit-of-detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing noise of optical background. Efficiently reducing optical background noise in systems with spectrally nonuniform background requires complex optimization of an emission filter-the main element of spectral filtration. Here, we introduce a filter-optimization method, which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) emission spectrum of the analyte, (iii) emission spectrum of the optical background, and (iv) transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable-the transmittance spectrum of the filter-which is optimized numerically by maximizing SNR. Maximizing SNR provides an accurate way of filter optimization, while a previously used approach based on maximizing a signal-to-background ratio (SBR) is the approximation that can lead to much poorer LOD specifically in detection of fluorescently labeled biomolecules. The proposed filter-optimization method will be an indispensable tool for developing new and improving existing fluorescence-detection systems aiming at ultimately low LOD.
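
    The distinction between maximizing SNR and maximizing SBR can be illustrated with a toy version of the optimization: for an ideal band-pass emission filter, compute SNR from the signal, background, and dark-noise contributions, and search over passband edges. The spectra, the shot-noise model, and the restriction to a rectangular passband are simplifying assumptions; the paper optimizes a full transmittance spectrum.

```python
import numpy as np

w = np.linspace(500, 700, 200)                # wavelength grid, nm (toy)
signal = np.exp(-((w - 560) / 10) ** 2)       # analyte emission spectrum (toy)
background = 0.5 + 0.4 * np.exp(-((w - 620) / 30) ** 2)  # nonuniform background

def snr(lo, hi, dark=1.0):
    """SNR for an ideal band-pass filter transmitting [lo, hi]:
    shot noise from signal + background, plus a dark-noise term."""
    t = (w >= lo) & (w <= hi)
    S = signal[t].sum()
    B = background[t].sum()
    return S / np.sqrt(S + B + dark)

# Brute-force search over passband edges: maximizing SNR, rather than
# signal-to-background, selects the band that best rejects noisy background.
edges = [(lo, hi) for lo in w[::5] for hi in w[::5] if hi > lo]
best = max(edges, key=lambda e: snr(*e))
```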

  18. Removing tidal-period variations from time-series data using low-pass digital filters

    USGS Publications Warehouse

    Walters, Roy A.; Heston, Cynthia

    1982-01-01

    Several low-pass digital filters are examined for their ability to remove tidal-period variations from a time series of water surface elevation for San Francisco Bay. The most efficient filter is one applied to the Fourier coefficients of the transformed data, with the filtered data recovered through an inverse transform. The ability of the filters to remove the tidal components increased in the following order: 1) cosine-Lanczos filter; 2) cosine-Lanczos squared filter; 3) Godin filter; and 4) transform filter. The Godin filter is not sufficiently sharp to prevent severe attenuation of 2-3 day variations in surface elevation resulting from weather events.
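
    The transform filter ranked most effective above can be sketched in a few lines: zero the Fourier coefficients above a cutoff frequency and invert the transform. The synthetic hourly series, the constituent amplitudes, and the 30-hour cutoff period are illustrative assumptions, not the San Francisco Bay data.

```python
import numpy as np

rng = np.random.default_rng(6)
hours = np.arange(24 * 60)                    # 60 days of hourly samples
slow = 0.3 * np.sin(2 * np.pi * hours / (24 * 14))      # fortnightly weather band
eta = (slow
       + 1.0 * np.cos(2 * np.pi * hours / 12.42)        # M2 tidal line
       + 0.5 * np.cos(2 * np.pi * hours / 12.0)         # S2 tidal line
       + 0.05 * rng.normal(size=hours.size))            # measurement noise

# Transform filter: zero the Fourier coefficients at frequencies above
# the cutoff (periods shorter than 30 h), then inverse transform.
F = np.fft.rfft(eta)
freq = np.fft.rfftfreq(hours.size, d=1.0)     # cycles per hour
F[freq > 1 / 30] = 0
eta_lowpass = np.fft.irfft(F, n=hours.size)
```

The sharp cutoff removes the semidiurnal tidal lines while leaving multi-day weather-driven variations essentially untouched, which is exactly where the Godin filter falls short.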

  19. An RC active filter design handbook

    NASA Technical Reports Server (NTRS)

    Deboo, G. J.

    1977-01-01

    The design of filters is described. Emphasis is placed on simplified procedures that can be used by the reader who has minimum knowledge about circuit design and little acquaintance with filter theory. The handbook has three main parts. The first part is a review of some information that is essential for work with filters. The second part includes design information for specific types of filter circuitry and describes simple procedures for obtaining the component values for a filter that will have a desired set of characteristics. Pertinent information relating to actual performance is given. The third part (appendix) is a review of certain topics in filter theory and is intended to provide some basic understanding of how filters are designed.

  20. Cultivating an Environment that Contributes to Teaching and Learning in Schools: High School Principals' Actions

    ERIC Educational Resources Information Center

    Lin, Mind-Dih

    2012-01-01

    Improving principal leadership is a vital component to the success of educational reform initiatives that seek to improve whole-school performance, as principal leadership often exercises positive but indirect effects on student learning. Because of the importance of principals within the field of school improvement, this article focuses on…

  1. Measuring Principals' Effectiveness: Results from New Jersey's First Year of Statewide Principal Evaluation. REL 2016-156

    ERIC Educational Resources Information Center

    Herrmann, Mariesa; Ross, Christine

    2016-01-01

    States and districts across the country are implementing new principal evaluation systems that include measures of the quality of principals' school leadership practices and measures of student achievement growth. Because these evaluation systems will be used for high-stakes decisions, it is important that the component measures of the evaluation…

  2. The Views of Novice and Late Career Principals Concerning Instructional and Organizational Leadership within Their Evaluation

    ERIC Educational Resources Information Center

    Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann; Mette, Ian M.

    2015-01-01

    This study examined the perspectives of novice and late career principals concerning instructional and organizational leadership within their performance evaluations. An online survey was sent to 251 principals with a return rate of 49%. Instructional leadership components of the evaluation that were most important to all principals were:…

  3. Self-aligned spatial filtering using laser optical tweezers.

    PubMed

    Birkbeck, Aaron L; Zlatanovic, Sanja; Esener, Sadik C

    2006-09-01

    We present an optical spatial filtering device that has been integrated into a microfluidic system and whose motion and alignment are controlled using a laser optical tweezer. The lithographically patterned micro-optical spatial filter device filters out higher-frequency additive noise components by automatically aligning itself in three dimensions to the focus of the laser beam. This self-alignment capability is achieved through the attachment of a refractive optical element directly over the circular aperture, or pinhole, of the spatial filter. A discussion of two different spatial filter designs is presented along with experimental results that demonstrate the effectiveness of the self-aligned micro-optic spatial filter.

  4. Time course based artifact identification for independent components of resting-state FMRI.

    PubMed

    Rummel, Christian; Verma, Rajeev Kumar; Schöpf, Veronika; Abela, Eugenio; Hauf, Martinus; Berruecos, José Fernando Zapata; Wiest, Roland

    2013-01-01

    In functional magnetic resonance imaging (fMRI) coherent oscillations of the blood oxygen level-dependent (BOLD) signal can be detected. These arise when brain regions respond to external stimuli or are activated by tasks. The same networks have been characterized during wakeful rest when functional connectivity of the human brain is organized in generic resting-state networks (RSN). Alterations of RSN emerge as neurobiological markers of pathological conditions such as altered mental state. In single-subject fMRI data the coherent components can be identified by blind source separation of the pre-processed BOLD data using spatial independent component analysis (ICA) and related approaches. The resulting maps may represent physiological RSNs or may be due to various artifacts. In this methodological study, we propose a conceptually simple and fully automatic time course based filtering procedure to detect obvious artifacts in the ICA output for resting-state fMRI. The filter is trained on six and tested on 29 healthy subjects, yielding mean filter accuracy, sensitivity and specificity of 0.80, 0.82, and 0.75 in out-of-sample tests. To estimate the impact of clearly artifactual single-subject components on group resting-state studies we analyze unfiltered and filtered output with a second level ICA procedure. Although the automated filter does not reach performance values of visual analysis by human raters, we propose that resting-state compatible analysis of ICA time courses could be very useful to complement the existing map or task/event oriented artifact classification algorithms.

  5. Filtering analysis of a direct numerical simulation of the turbulent Rayleigh-Benard problem

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.

    1990-01-01

    A filtering analysis of a turbulent flow was developed which provides details of the path of the kinetic energy of the flow from its creation via thermal production to its dissipation. A low-pass spatial filter is used to split the velocity and the temperature field into a filtered component (composed mainly of scales larger than a specific size, nominally the filter width) and a fluctuation component (scales smaller than a specific size). Variables derived from these fields can fall into one of the above two ranges or be composed of a mixture of scales dominated by scales near the specific size. The filter is used to split the kinetic energy equation into three equations corresponding to the three scale ranges described above. The data from a direct simulation of the Rayleigh-Benard problem for conditions where the flow is turbulent are used to calculate the individual terms in the three kinetic energy equations. This is done for a range of filter widths. These results are used to study the spatial location and the scale range of the thermal energy production, the cascading of kinetic energy, the diffusion of kinetic energy, and the energy dissipation. These results are used to evaluate two subgrid models typically used in large-eddy simulations of turbulence. Subgrid models attempt to model the energy below the filter width that is removed by a low-pass filter.

  6. Cryogenic filter wheel design for an infrared instrument

    NASA Astrophysics Data System (ADS)

    Azcue, Joaquín.; Villanueva, Carlos; Sánchez, Antonio; Polo, Cristina; Reina, Manuel; Carretero, Angel; Torres, Josefina; Ramos, Gonzalo; Gonzalez, Luis M.; Sabau, Maria D.; Najarro, Francisco; Pintado, Jesús M.

    2014-09-01

    In the last two decades, Spain has built up a strong IR community which has successfully contributed to space instruments, reaching Co-PI level in the SPICA mission (Space Infrared Telescope for Cosmology and Astrophysics). Under the SPICA mission, INTA has designed a cryogenic low-dissipation filter wheel with six positions, focused on the SAFARI instrument requirements but highly adaptable to other missions, taking as its starting point the team's past experience with the OSIRIS instrument (ROSETTA mission) filter wheels and adapting the design to work at cryogenic temperatures. One of the main goals of the mechanism is to use commercial components as much as possible and to test them at cryogenic temperature. This paper focuses on the design of the filter wheel, including the material selection for each of the main components of the mechanism, the design of an elastic mount for the filter assembly, and a positioner device that provides positional accuracy and repeatability to the filter while allowing the position to be locked without dissipation. In order to know the position of the wheel at every moment, a position sensor based on a Hall sensor was developed. A series of cryogenic tests was performed to validate the material configuration selected, the ball bearing lubrication and the selection of the motor. A stepper motor characterization campaign was performed, including heat dissipation measurements. The result is a six-position filter wheel, highly adaptable to different configurations and motors, that uses commercial components. The mechanism was successfully tested at breadboard level at 20 K at INTA facilities.

  7. Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong

    2005-04-01

    Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent, with an analytical relationship between the sample mean and sample variance. Spatially invariant low-pass linear filters, such as the Butterworth and Hanning filters, cannot adequately handle the data noise, and statistics-based nonlinear filters may be an alternative choice, in addition to approaches that minimize cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filter chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for the purpose of edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlation information for an optimally regularized solution. Our previously developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least squares) minimization considers the signal correlation via the KL strategy and seeks the minimum of the PWLS cost function for an optimally regularized solution for each KL component, i.e., adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for the low-dose CT application. Furthermore, we investigated the nonlinear filters for post-KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS-treated sinogram data, prior to the backprojection operation for image reconstruction. In both computer simulations and experimental low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain of post-KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.
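    A minimal sketch of the PWLS idea (not the full KL-domain method) on a synthetic 1-D sinogram profile: the weights come from an assumed exponential mean-variance relationship, and all constants are invented:

```python
import numpy as np

# Hypothetical 1-D sinogram profile with signal-dependent noise: the sample
# variance is assumed proportional to exp(mean / mu), mimicking the
# mean-variance relationship described for low-dose CT data.
rng = np.random.default_rng(0)
n = 200
truth = 5.0 + 2.0 * np.exp(-0.5 * ((np.arange(n) - 100) / 20.0) ** 2)
mu = 10.0
noisy = truth + rng.normal(0.0, np.sqrt(0.05 * np.exp(truth / mu)))

# PWLS-style smoother: minimize (y - x)^T W (y - x) + lam * ||D x||^2,
# with W = diag(1/var_est) so noisier samples are trusted less.
var_est = 0.05 * np.exp(noisy / mu)        # plug-in variance estimate
W = np.diag(1.0 / var_est)
D = np.diff(np.eye(n), axis=0)             # first-difference roughness penalty
lam = 5.0
smoothed = np.linalg.solve(W + lam * D.T @ D, W @ noisy)
```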

  8. Vacuum Filtration. Sludge Treatment and Disposal Course #166. Instructor's Guide [and] Student Workbook.

    ERIC Educational Resources Information Center

    Filer, Herb; Windram, Kendall

    Three types of vacuum filters and their operation are described in this lesson. Typical filter cycle, filter components and their functions, process control parameters, expected performance, and safety/historical aspects are considered. Conditioning methods are also described, although it is suggested that lessons on sludge characteristics, sludge…

  9. Checking Dimensionality in Item Response Models with Principal Component Analysis on Standardized Residuals

    ERIC Educational Resources Information Center

    Chou, Yeh-Tai; Wang, Wen-Chung

    2010-01-01

    Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that a first eigenvalue greater than 1.5 signifies a violation of unidimensionality when there…
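    The eigenvalue check can be sketched as follows. The data are simulated from a unidimensional Rasch model, and the "fit" is idealized by using the generating probabilities rather than an actual Rasch estimation, so the first eigenvalue of the residual correlation matrix should fall below the suggested 1.5 cutoff:

```python
import numpy as np

# Simulate dichotomous responses from a one-dimensional Rasch model.
rng = np.random.default_rng(1)
n_persons, n_items = 1000, 20
theta = rng.normal(size=(n_persons, 1))      # single latent trait
b = np.linspace(-1.5, 1.5, n_items)          # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta - b)))       # Rasch success probabilities
data = (rng.random(p.shape) < p).astype(float)

# Standardized residuals, then PCA of their inter-item correlation matrix.
resid = (data - p) / np.sqrt(p * (1.0 - p))
eigvals = np.linalg.eigvalsh(np.corrcoef(resid.T))[::-1]
first_eig = float(eigvals[0])                # < 1.5 suggests unidimensionality
```

    With multidimensional data the residuals would retain shared structure and the first eigenvalue would rise above the cutoff.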

  10. Variable Neighborhood Search Heuristics for Selecting a Subset of Variables in Principal Component Analysis

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Singh, Renu; Steinley, Douglas

    2009-01-01

    The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…

  11. Relaxation mode analysis of a peptide system: comparison with principal component analysis.

    PubMed

    Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi

    2011-10-28

    This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, principal component analysis is a well-known method for analyzing the static properties of structural fluctuations obtained by a simulation and for classifying the structures into groups. On the other hand, relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in the gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of relaxation mode analysis.

  12. Matrix partitioning and EOF/principal component analysis of Antarctic Sea ice brightness temperatures

    NASA Technical Reports Server (NTRS)

    Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.

    1984-01-01

    A field of measured anomalies of some physical variable, relative to their time averages, is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.

  13. Fast principal component analysis for stacking seismic data

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and greatly enhance the principal components. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
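    A rank-1 SVD stack along these lines can be sketched as follows. The traces are synthetic (a shared Ricker-like wavelet plus independent noise), and the scaling of the first singular triplet back to per-trace amplitude is one simple choice among several, not the paper's exact formulation:

```python
import numpy as np

# Synthetic gather: 40 traces sharing one wavelet, plus independent noise.
rng = np.random.default_rng(2)
n_traces, n_samples = 40, 300
t = np.linspace(-1.0, 1.0, n_samples)
wavelet = (1.0 - 50.0 * t**2) * np.exp(-25.0 * t**2)   # Ricker-like wavelet
traces = wavelet + 0.3 * rng.normal(size=(n_traces, n_samples))

# Classic average stack for comparison.
mean_stack = traces.mean(axis=0)

# PCA stack: the first singular triplet captures the coherent component;
# averaging the rank-1 approximation over traces restores amplitude.
u, s, vt = np.linalg.svd(traces, full_matrices=False)
pca_stack = u[:, 0].mean() * s[0] * vt[0]
```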

  14. Multivariate analyses of salt stress and metabolite sensing in auto- and heterotroph Chenopodium cell suspensions.

    PubMed

    Wongchai, C; Chaidee, A; Pfeiffer, W

    2012-01-01

    Global warming increases plant salt stress via evaporation after irrigation, but how plant cells sense salt stress remains unknown. Here, we searched for correlation-based targets of salt stress sensing in Chenopodium rubrum cell suspension cultures. We proposed a linkage between the sensing of salt stress and the sensing of distinct metabolites. Consequently, we analysed various extracellular pH signals in autotroph and heterotroph cell suspensions. Our search included signals after 52 treatments: salt and osmotic stress, ion channel inhibitors (amiloride, quinidine), salt-sensing modulators (proline), amino acids, carboxylic acids and regulators (salicylic acid, 2,4-dichlorophenoxyacetic acid). Multivariate analyses revealed hierarchical clusters of signals and five principal components of extracellular proton flux. The principal component correlated with salt stress was an antagonism of γ-aminobutyric and salicylic acid, confirming involvement of acid-sensing ion channels (ASICs) in salt stress sensing. Proline, short non-substituted mono-carboxylic acids (C2-C6), lactic acid and amiloride characterised the four uncorrelated principal components of proton flux. The proline-associated principal component included an antagonism of 2,4-dichlorophenoxyacetic acid and a set of amino acids (hydrophobic, polar, acidic, basic). The five principal components captured 100% of variance of extracellular proton flux. Thus, a bias-free, functional high-throughput screening was established to extract new clusters of response elements and potential signalling pathways, and to serve as a core for quantitative meta-analysis in plant biology. The eigenvectors reorient research, associating proline with development instead of salt stress, and the proof of existence of multiple components of proton flux can help to resolve controversy about the acid growth theory. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.

  15. [The application of the multidimensional statistical methods in the evaluation of the influence of atmospheric pollution on the population's health].

    PubMed

    Surzhikov, V D; Surzhikov, D V

    2014-01-01

    The search for and measurement of causal relationships between exposure to air pollution and the health of the population is based on system analysis and risk assessment, to improve the quality of research. For this purpose, modern statistical methods were applied: tests of independence, principal component analysis and discriminant function analysis. The analysis separated four main components from the atmospheric pollutants: for diseases of the circulatory system, the main principal component was associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component was closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and charcoal black. The discriminant function was shown to be usable as a measure of the level of air pollution.

  16. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on regional potential using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative analysis based on component reduction of secondary data. The method used is PCA, implemented with the Minitab statistical software. The results indicate that the areas with the lowest scores are the priorities for VHS construction, with majors chosen in accordance with the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value (416.92) in area 1, followed by Cihampelas with the lowest PCA value in area 2 and Padalarang with the lowest PCA value.

  17. Comparison of dimensionality reduction methods to predict genomic breeding values for carcass traits in pigs.

    PubMed

    Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S

    2015-10-09

    A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. Genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to propose an application of dimensionality reduction methods to GWS of carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal component and independent component methods, which provided the most accurate genomic breeding value estimates for most carcass traits in pigs.

  18. Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.

    PubMed

    Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu

    2018-03-10

    This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.

  19. Single sensor processing to obtain high resolution color component signals

    NASA Technical Reports Server (NTRS)

    Glenn, William E. (Inventor)

    2010-01-01

    A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.
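    One way to read the combining step is that each low-resolution color plane is multiplied by the high-band luminance ratio to regain detail. The sketch below uses a 1-D scanline and a moving average as a hypothetical stand-in for the sensor's low-resolution path; it is an illustration of the ratio idea, not the patented circuit:

```python
import numpy as np

# Hypothetical high-resolution luminance scanline.
rng = np.random.default_rng(6)
n = 64
y_high = rng.uniform(0.2, 1.0, size=n)

def lowpass(x, k=5):
    """Moving-average stand-in for the sensor's low-resolution signal path."""
    return np.convolve(x, np.ones(k) / k, mode="same")

y_low = lowpass(y_high)
detail = y_high / y_low                 # high-band luminance component (a ratio)
color_low = lowpass(0.6 * y_high)       # a low-resolution color component plane
color_high = color_low * detail         # restored high-resolution color plane
```

    Because the low-pass operation is linear, the ratio exactly cancels the blur in this idealized case; real sensors add noise and chroma that make the recovery approximate.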

  20. An efficient rhythmic component expression and weighting synthesis strategy for classifying motor imagery EEG in a brain computer interface

    NASA Astrophysics Data System (ADS)

    Wang, Tao; He, Bin

    2004-03-01

    The recognition of mental states during motor imagery tasks is crucial for EEG-based brain computer interface research. We have developed a new algorithm by means of a frequency decomposition and weighting synthesis strategy for recognizing imagined right- and left-hand movements. A frequency range from 5 to 25 Hz was divided into 20 band bins for each trial, and the corresponding envelopes of the filtered EEG signals for each trial were extracted as a measure of instantaneous power in each frequency band. The dimensionality of the feature space was reduced from 200 (corresponding to 2 s) to 3 by down-sampling the envelopes of the feature signals and subsequently applying principal component analysis. The linear discriminant analysis algorithm was then used to classify the features, owing to its generalization capability. Each frequency band bin was weighted by a function determined according to the classification accuracy during the training process. The present classification algorithm was applied to a dataset of nine human subjects, and achieved a classification success rate of 90% in training and 77% in testing. These promising results suggest that the present classification algorithm can be used to initiate general-purpose mental state recognition based on motor imagery tasks.
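    The pipeline of band envelopes, down-sampling, PCA and a linear discriminant can be sketched with synthetic trials in which one class carries extra 10 Hz power. The FFT-mask filtering, band width and Fisher discriminant below are simplifications, not the authors' exact implementation, and the per-band accuracy weighting step is omitted:

```python
import numpy as np

# Synthetic 2 s trials at 100 Hz; class 1 carries extra 10 Hz power.
rng = np.random.default_rng(3)
fs, n_trials, n_samples = 100, 60, 200
labels = np.repeat([0, 1], n_trials // 2)
t = np.arange(n_samples) / fs
trials = rng.normal(size=(n_trials, n_samples))
trials[labels == 1] += 1.5 * np.sin(2 * np.pi * 10 * t)

def band_envelope(x, f0, width=1.0, fs=100):
    """Envelope of one band via FFT masking and rectification, down-sampled by 10."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X[(f < f0 - width) | (f > f0 + width)] = 0
    band = np.fft.irfft(X, n=x.size)
    return np.abs(band).reshape(-1, 10).mean(axis=1)

# Feature matrix: envelopes of 20 band bins (5-24 Hz), concatenated per trial.
feats = np.array([np.concatenate([band_envelope(x, f0) for f0 in range(5, 25)])
                  for x in trials])

# PCA to 3 dimensions, then a Fisher linear discriminant.
feats -= feats.mean(axis=0)
_, _, vt = np.linalg.svd(feats, full_matrices=False)
z = feats @ vt[:3].T
m0, m1 = z[labels == 0].mean(axis=0), z[labels == 1].mean(axis=0)
sw = np.cov(z[labels == 0].T) + np.cov(z[labels == 1].T)
w = np.linalg.solve(sw, m1 - m0)
pred = (z @ w > (m0 + m1) @ w / 2).astype(int)
accuracy = float((pred == labels).mean())
```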

  1. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
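    The decomposition step can be illustrated with a simple linear leakage model; the leakage coefficients and the assumption of an ideal N channel are hypothetical simplifications of the spectral estimation the paper performs:

```python
import numpy as np

# Synthetic scene: true visible RGB and a true NIR intensity map.
rng = np.random.default_rng(4)
h, w = 4, 4
visible = rng.uniform(0.2, 0.8, size=(h, w, 3))   # true visible RGB
nir = rng.uniform(0.0, 0.5, size=(h, w))          # true NIR intensity
k = np.array([0.30, 0.25, 0.35])                  # hypothetical NIR leakage per channel

# Without the IRCF, each raw RGB channel picks up a NIR contribution.
raw_rgb = visible + nir[..., None] * k            # desaturated raw channels
raw_n = nir                                       # N channel (ideal here)

# Decomposition: subtract the estimated NIR contribution from each channel.
restored = np.clip(raw_rgb - raw_n[..., None] * k, 0.0, 1.0)
```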

  2. Adaptive sparsest narrow-band decomposition method and its applications to rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao

    2017-02-01

    Inspired by the ASTFA method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is first established; the parameters of the filter are determined by solving a nonlinear optimization problem, with a regulated differential operator used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed to solve problems that exist in ASTFA: the Gauss-Newton type method applied to solve the optimization problem in ASTFA cannot be replaced and is very sensitive to initial values, whereas a more appropriate optimization method, such as a genetic algorithm (GA), can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.

  3. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  4. Performance-Based Preparation of Principals: A Framework for Improvement. A Special Report of the NASSP Consortium for the Performance-Based Preparation of Principals.

    ERIC Educational Resources Information Center

    National Association of Secondary School Principals, Reston, VA.

    Preparation programs for principals should have excellent academic and performance based components. In examining the nature of performance based principal preparation this report finds that school administration programs must bridge the gap between conceptual learning in the classroom and the requirements of professional practice. A number of…

  5. Principal component greenness transformation in multitemporal agricultural Landsat data

    NASA Technical Reports Server (NTRS)

    Abotteen, R. A.

    1978-01-01

    A data compression technique for multitemporal Landsat imagery which extracts phenological growth pattern information for agricultural crops is described. The principal component greenness transformation was applied to multitemporal agricultural Landsat data for information retrieval. The transformation was favorable for applications in agricultural Landsat data analysis because of its physical interpretability and its relation to the phenological growth of crops. It was also found that the first and second greenness eigenvector components define a temporal small-grain trajectory and nonsmall-grain trajectory, respectively.

  6. Design and Analysis of a Micromachined LC Low Pass Filter For 2.4GHz Application

    NASA Astrophysics Data System (ADS)

    Saroj, Samruddhi R.; Rathee, Vishal R.; Pande, Rajesh S.

    2018-02-01

    This paper reports the design and analysis of a passive low pass filter with a cut-off frequency of 2.4 GHz using MEMS (Micro Electro-Mechanical Systems) technology. The passive components, suspended spiral inductors and a metal-insulator-metal (MIM) capacitor, are arranged in a T network to implement the LC low pass filter design. The design employs a simple suspension approach that reduces parasitic losses, eliminating the performance-degrading effects caused by integrating an off-chip inductor in the filter circuit, and is proposed to be developed on a low-cost silicon substrate using RF-MEMS components. The filter occupies only 2.1 mm x 0.66 mm of die area and is designed using a micro-strip transmission line placed on a silicon substrate. The design is implemented in the High Frequency Structural Simulator (HFSS) software, and a fabrication flow is proposed for its implementation. The simulated results show that the design has an insertion loss of -4.98 dB and a return loss of -2.60 dB.

  7. ASSESSMENT OF INNER FILTER EFFECTS IN FLUORESCENCE SPECTROSCOPY USING THE DUAL-PATHLENGTH METHOD- A STUDY OF THE JET FUEL JP-4. (U915376)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  8. Electron volt spectroscopy on a pulsed neutron source

    NASA Astrophysics Data System (ADS)

    Newport, R. J.; Penfold, J.; Williams, W. G.

    1984-07-01

    The principal design aspects of a pulsed-source neutron spectrometer, in which the scattered neutron energy is determined by a resonance absorption filter difference method, are discussed. Calculations of the accessible dynamic range and resolution, together with spectrum simulations, are given for the spectrometer on a high-intensity pulsed neutron source, such as the spallation neutron source (SNS) now being constructed at the Rutherford Appleton Laboratory. Special emphasis is placed on the advantage gained by placing coarse and fixed energy-sensitive filters before and after the scatterer; these enhance the inelastic/elastic discrimination of the method. A brief description is given of a double difference filter method which gives a superior difference peak shape, as well as better energy transfer resolution. Finally, some first results of scattering from zirconium hydride, obtained on a test spectrometer, are presented.

  9. High temperature charge amplifier for geothermal applications

    DOEpatents

    Lindblom, Scott C.; Maldonado, Frank J.; Henfling, Joseph A.

    2015-12-08

    An amplifier circuit in a multi-chip module includes a charge-to-voltage converter circuit, a voltage amplifier, a low-pass filter and a voltage-to-current converter. The charge-to-voltage converter receives a signal representing an electrical charge and generates a voltage signal proportional to the input signal. The voltage amplifier receives the voltage signal from the charge-to-voltage converter, then amplifies the voltage signal by a gain factor to output an amplified voltage signal. The low-pass filter passes low-frequency components of the amplified voltage signal and attenuates frequency components greater than a cutoff frequency. The voltage-to-current converter receives the output signal of the low-pass filter and converts it to a current output signal; the amplifier circuit output is selectable between the output signal of the low-pass filter and the current output signal.
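
The low-pass stage in such a chain follows the standard first-order magnitude response; a small illustration (the 10 kHz cutoff is a hypothetical value, not taken from the patent):

```python
import math

def rc_lowpass_gain_db(f_hz, fc_hz):
    """First-order low-pass magnitude response, |H| = 1/sqrt(1 + (f/fc)^2),
    expressed in dB."""
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + (f_hz / fc_hz) ** 2))

gain_at_cutoff = rc_lowpass_gain_db(10e3, 10e3)    # -3 dB at the cutoff
gain_decade_up = rc_lowpass_gain_db(100e3, 10e3)   # ~ -20 dB one decade above
```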

  10. Prediction of genomic breeding values for dairy traits in Italian Brown and Simmental bulls using a principal component approach.

    PubMed

    Pintus, M A; Gaspa, G; Nicolazzi, E L; Vicario, D; Rossoni, A; Ajmone-Marsan, P; Nardone, A; Dimauro, C; Macciotta, N P P

    2012-06-01

    The large number of markers available compared with phenotypes represents one of the main issues in genomic selection. In this work, principal component analysis was used to reduce the number of predictors for calculating genomic breeding values (GEBV). Bulls of 2 cattle breeds farmed in Italy (634 Brown and 469 Simmental) were genotyped with the 54K Illumina beadchip (Illumina Inc., San Diego, CA). After data editing, 37,254 and 40,179 single nucleotide polymorphisms (SNP) were retained for Brown and Simmental, respectively. Principal component analysis carried out on the SNP genotype matrix extracted 2,257 and 3,596 new variables in the 2 breeds, respectively. Bulls were sorted by birth year to create reference and prediction populations. The effect of principal components on deregressed proofs in reference animals was estimated with a BLUP model. Results were compared with those obtained by using SNP genotypes as predictors with either the BLUP or Bayes_A method. Traits considered were milk, fat, and protein yields, fat and protein percentages, and somatic cell score. The GEBV were obtained for the prediction population by blending direct genomic prediction and pedigree indexes. No substantial differences were observed in squared correlations between GEBV and EBV in prediction animals among the 3 methods in the 2 breeds. The principal component analysis method allowed for a reduction of about 90% in the number of independent variables when predicting direct genomic values, with a substantial decrease in calculation time and without loss of accuracy. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
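
The reduction step can be sketched in a few lines of numpy: extract principal component scores from a (toy) genotype matrix, then fit a BLUP-like ridge regression on the scores instead of the raw SNPs. All sizes and names here are illustrative stand-ins for the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-in: 200 "bulls" x 1,000 "SNPs" (the study used ~37-40k SNPs)
X = rng.integers(0, 3, size=(200, 1000)).astype(float)
y = X[:, :10] @ rng.normal(size=10) + rng.normal(size=200)  # toy phenotype

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.95)) + 1   # PCs covering 95% of variance
T = Xc @ Vt[:k].T                               # scores: 200 x k, not 200 x 1000

# BLUP-like ridge regression on principal component scores
lam = 1.0
beta = np.linalg.solve(T.T @ T + lam * np.eye(k), T.T @ (y - y.mean()))
dgv = T @ beta                                  # direct genomic values (toy)
```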

  11. Identifying sources of emerging organic contaminants in a mixed use watershed using principal components analysis.

    PubMed

    Karpuzcu, M Ekrem; Fairbairn, David; Arnold, William A; Barber, Brian L; Kaufenberg, Elizabeth; Koskinen, William C; Novak, Paige J; Rice, Pamela J; Swackhamer, Deborah L

    2014-01-01

    Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in Southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (PC1) was attributed to urban wastewater-derived sources, including municipal wastewater and residential septic tank effluents, while Principal Component 2 (PC2) was attributed to agricultural sources. The variances of the concentrations of cotinine, DEET and the prescription drugs carbamazepine, erythromycin and sulfamethoxazole were best explained by PC1, while the variances of the concentrations of the agricultural pesticides atrazine, metolachlor and acetochlor were best explained by PC2. Mixed use compounds carbaryl, iprodione and daidzein did not specifically group with either PC1 or PC2. Furthermore, despite the fact that caffeine and acetaminophen have been historically associated with human use, they could not be attributed to a single dominant land use category (e.g., urban/residential or agricultural). Contributions from septic systems did not clarify the source for these two compounds, suggesting that additional sources, such as runoff from biosolid-amended soils, may exist. Based on these results, PCA may be a useful way to broadly categorize the sources of new and previously uncharacterized emerging contaminants or may help to clarify transport pathways in a given area. Acetaminophen and caffeine were not ideal markers for urban/residential contamination sources in the study area and may need to be reconsidered as such in other areas as well.
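
A toy illustration of this kind of source attribution: two independent latent sources each drive a block of (hypothetical) analytes, and the PCA loadings group the analytes by source. This sketches the general technique, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
urban = rng.normal(size=n)                   # latent urban/wastewater signal
agri = rng.normal(size=n)                    # latent agricultural signal
# columns 0-2 track the urban source, columns 3-5 the agricultural one
X = np.column_stack([urban] * 3 + [agri] * 3)
X[:, :3] += 0.10 * rng.normal(size=(n, 3))   # low-noise urban tracers
X[:, 3:] += 0.80 * rng.normal(size=(n, 3))   # noisier agricultural tracers

Z = (X - X.mean(0)) / X.std(0)               # standardize concentrations
evals, evecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(evals)[::-1]
loadings = evecs[:, order[:2]]               # loadings on PC1 and PC2
pc_of = np.argmax(np.abs(loadings), axis=1)  # PC each analyte loads on most
```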

  12. Sparse modeling of spatial environmental variables associated with asthma

    PubMed Central

    Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.

    2014-01-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437

  13. Sparse modeling of spatial environmental variables associated with asthma.

    PubMed

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
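
The sparse PCA step itself can be sketched with a simple truncated power iteration, one of several ways to compute a sparse leading component (SASEA's actual implementation may differ). The variable layout below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# toy block-group data: 200 areas x 10 variables; the first 3 variables share
# a common factor (e.g. a household-composition component), the rest are noise
f = rng.normal(size=200)
X = 0.3 * rng.normal(size=(200, 10))
X[:, :3] += f[:, None]

def sparse_leading_pc(X, nonzeros=3, iters=100):
    """Leading sparse PC via the truncated power method: power iteration on
    the covariance, keeping only the `nonzeros` largest-|.| entries each step."""
    S = np.cov(X, rowvar=False)
    v = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(iters):
        v = S @ v
        keep = np.argsort(np.abs(v))[-nonzeros:]
        w = np.zeros_like(v)
        w[keep] = v[keep]
        v = w / np.linalg.norm(w)
    return v

v = sparse_leading_pc(X)   # nonzero only on the 3 correlated variables
```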

  14. FILTSoft: A computational tool for microstrip planar filter design

    NASA Astrophysics Data System (ADS)

    Elsayed, M. H.; Abidin, Z. Z.; Dahlan, S. H.; Cholan N., A.; Ngu, Xavier T. I.; Majid, H. A.

    2017-09-01

    Filters are key components of any communication system, controlling the spectrum and suppressing interference. Designing a filter involves a long process as well as a good understanding of the underlying hardware technology. Hence this paper introduces an automated design tool based on a Matlab GUI, called FILTSoft (an acronym for Filter Design Software), to ease the process. FILTSoft is a user-friendly filter design tool to aid, guide and expedite calculations from the lumped-element level to the microstrip structure. Users only have to provide the required filter specifications and the material description. FILTSoft calculates and displays the lumped-element details, the planar filter structure, and the expected filter response. An example of a low-pass filter design was calculated using FILTSoft and the results were validated against prototype measurements for comparison purposes.
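
The lumped-to-microstrip step that such a tool automates relies on closed-form impedance approximations. A sketch using simplified Hammerstad-style formulas (not FILTSoft's code):

```python
import math

def microstrip_z0(w_over_h, er):
    """Characteristic impedance of a microstrip line from simplified
    Hammerstad-style closed-form approximations, as used in quick
    synthesis tools. w_over_h is the strip width / substrate height."""
    u = w_over_h
    eeff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    if u <= 1:
        return 60 / math.sqrt(eeff) * math.log(8 / u + u / 4)
    return 120 * math.pi / (math.sqrt(eeff)
                            * (u + 1.393 + 0.667 * math.log(u + 1.444)))

# wider lines give lower impedance; w/h ~ 2 on FR4 (er ~ 4.4) is near 50 ohm
z = [microstrip_z0(u, 4.4) for u in (0.5, 1.0, 2.0)]
```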

  15. Experimental Investigation of Principal Residual Stress and Fatigue Performance for Turned Nickel-Based Superalloy Inconel 718.

    PubMed

    Hua, Yang; Liu, Zhanqiang

    2018-05-24

    Residual stresses of a turned Inconel 718 surface along its axial and circumferential directions affect the fatigue performance of machined components. However, it has not been clear whether the axial and circumferential directions coincide with the principal residual stress directions. The direction of the maximum principal residual stress is crucial for the machined component's service life. The present work focuses on determining the direction and magnitude of the principal residual stress and investigating its influence on the fatigue performance of turned Inconel 718. The turning experiments show that the principal residual stress magnitude is much higher than the surface residual stress. In addition, both the principal residual stress and the surface residual stress increase significantly as the feed rate increases. The fatigue tests show that when the direction of the maximum principal residual stress increased by 7.4%, the fatigue life decreased by 39.4%; when the maximum principal residual stress magnitude diminished by 17.9%, the fatigue life increased by 83.6%. The maximum principal residual stress has a preponderant influence on fatigue performance compared to the surface residual stress, and can be considered a prime indicator for evaluating the influence of residual stress on the fatigue performance of turned Inconel 718.
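
The plane-stress transformation behind this is compact enough to sketch directly; the stress values below are hypothetical, not the paper's measurements:

```python
import math

def principal_stresses(sx, sy, txy):
    """Plane-stress principal stresses and the angle (degrees) of the first
    principal direction, from the Mohr's circle relations."""
    c = (sx + sy) / 2.0                       # circle center
    r = math.hypot((sx - sy) / 2.0, txy)      # circle radius
    theta = 0.5 * math.degrees(math.atan2(2.0 * txy, sx - sy))
    return c + r, c - r, theta

# hypothetical turned-surface state: axial, circumferential, shear (MPa)
s1, s2, ang = principal_stresses(-400.0, -250.0, 150.0)
# |s2| exceeds both measured surface stresses: the principal residual stress
# magnitude can be higher than either the axial or circumferential value
```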

  16. Principal component analysis for designed experiments.

    PubMed

    Konishi, Tomokazu

    2015-01-01

    Principal component analysis is used to summarize matrix data, such as transcriptome, proteome or metabolome measurements and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components depend on the particular data set, the components are valid only within that data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. The effects of these options were observed in microarray experiments, showing an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of the groups in the experiments. As observed, the scaling of the components and the sharing of axes enabled comparisons of the components across experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. Together, these options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models specifying each of the axes.
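
A minimal numpy sketch of the shared-axes idea on toy data (not the paper's microarray sets): identify the axes, center and scale from a training set, then score an unknown sample on the pre-arranged axes:

```python
import numpy as np

rng = np.random.default_rng(3)
# training set: two toy groups with a mean shift, standing in for a design
gA = rng.normal(0.0, 1.0, size=(30, 50))
gB = rng.normal(0.0, 1.0, size=(30, 50)) + 2.0 * np.r_[np.ones(25), np.zeros(25)]
train = np.vstack([gA, gB])

center = train.mean(axis=0)              # rotation center from the training set
U, s, Vt = np.linalg.svd(train - center, full_matrices=False)
axis1 = Vt[0]                            # shared, pre-arranged principal axis
scale = s[0] / np.sqrt(len(train))       # unify the score's size unit

def score(x):
    """Scaled PC1 score of a sample on the fixed training axes."""
    return (x - center) @ axis1 / scale

# an unknown sample (drawn like group B) classified by its fixed-axis score
unknown = rng.normal(0.0, 1.0, size=50) + 2.0 * np.r_[np.ones(25), np.zeros(25)]
```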

  17. Coping with Multicollinearity: An Example on Application of Principal Components Regression in Dendroecology

    Treesearch

    B. Desta Fekedulegn; J.J. Colbert; R.R., Jr. Hicks; Michael E. Schuckers

    2002-01-01

    The theory and application of principal components regression, a method for coping with multicollinearity among independent variables in analyzing ecological data, is exhibited in detail. A concrete example of the complex procedures that must be carried out in developing a diagnostic growth-climate model is provided. We use tree radial increment data taken from breast...
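
The procedure can be sketched on a toy collinear data set: diagnose multicollinearity from the singular values, regress on the leading component scores, and back-transform the coefficients. Variable names are illustrative, not the study's tree-ring data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
t = rng.normal(size=n)
# three nearly collinear "climate" predictors (toy stand-ins)
X = np.column_stack([t + 0.05 * rng.normal(size=n) for _ in range(3)])
y = 2.0 * t + 0.1 * rng.normal(size=n)

Xc, yc = X - X.mean(0), y - y.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
cond = s[0] / s[-1]                       # large ratio -> multicollinearity

k = 1                                     # keep the well-determined component
T = Xc @ Vt[:k].T                         # principal component scores
gamma = np.linalg.lstsq(T, yc, rcond=None)[0]
beta_pcr = Vt[:k].T @ gamma               # coefficients in predictor space
```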

  18. Application of Principal Component Analysis (PCA) to Reduce Multicollinearity Exchange Rate Currency of Some Countries in Asia Period 2004-2014

    ERIC Educational Resources Information Center

    Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad

    2017-01-01

    This study aims to apply principal component analysis to reduce multicollinearity in the currency exchange rates of eight Asian countries against the US Dollar, including the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia) and Dollar (Singapore). It looks at yield…

  19. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
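
The compression idea can be sketched with toy spectra: a few principal component scores reproduce hundreds of correlated channels to within the noise. The channel and mode counts are illustrative, not the sensor's:

```python
import numpy as np

rng = np.random.default_rng(5)
# toy "radiances": 500 channels driven by 5 smooth spectral modes
chan = np.linspace(0.0, 1.0, 500)
modes = np.array([np.sin((i + 1) * np.pi * chan) for i in range(5)])
spectra = rng.normal(size=(300, 5)) @ modes \
          + 0.001 * rng.normal(size=(300, 500))     # small channel noise

mean = spectra.mean(0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 5
scores = (spectra - mean) @ Vt[:k].T    # 300 x 5 scores instead of 300 x 500
recon = scores @ Vt[:k] + mean          # channel radiances recovered from scores
err = np.abs(recon - spectra).max()     # reconstruction error ~ noise level
```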

  20. Principal component analysis of Raman spectra for TiO2 nanoparticle characterization

    NASA Astrophysics Data System (ADS)

    Ilie, Alina Georgiana; Scarisoareanu, Monica; Morjan, Ion; Dutu, Elena; Badiceanu, Maria; Mihailescu, Ion

    2017-09-01

    The Raman spectra of anatase/rutile mixed-phase Sn-doped TiO2 nanoparticles and undoped TiO2 nanoparticles, synthesised by laser pyrolysis, with nanocrystallite dimensions varying from 8 to 28 nm, were processed with self-written software that applies Principal Component Analysis (PCA) to the measured spectra to verify the possibility of objective auto-characterization of nanoparticles from their vibrational modes. The photo-excited process of Raman scattering is very sensitive to material characteristics, especially in the case of nanomaterials, where more properties become relevant to the vibrational behaviour. We used PCA, a statistical procedure that performs eigenvalue decomposition of the covariance of descriptive data, to automatically analyse each sample's measured Raman spectrum and to infer the correlation between nanoparticle dimensions, tin and carbon concentration, and their principal component values (PCs). This type of application allows an approximation of the crystallite size, or tin concentration, from a measurement of the sample's Raman spectrum alone. The study of the loadings of the principal components provides information on the way the vibrational modes are affected by the nanoparticle features and on the spectral regions relevant for the classification.

  1. Testing for Non-Random Mating: Evidence for Ancestry-Related Assortative Mating in the Framingham Heart Study

    PubMed Central

    Sebro, Ronnie; Hoffman, Thomas J.; Lange, Christoph; Rogus, John J.; Risch, Neil J.

    2013-01-01

    Population stratification leads to a predictable phenomenon—a reduction in the number of heterozygotes compared to that calculated assuming Hardy-Weinberg Equilibrium (HWE). We show that population stratification results in another phenomenon—an excess in the proportion of spouse-pairs with the same genotypes at all ancestrally informative markers, resulting in ancestrally related positive assortative mating. We use principal components analysis to show that there is evidence of population stratification within the Framingham Heart Study, and show that the first principal component correlates with a North-South European cline. We then show that the first principal component is highly correlated between spouses (r=0.58, p=0.0013), demonstrating that there is ancestrally related positive assortative mating among the Framingham Caucasian population. We also show that the single nucleotide polymorphisms loading most heavily on the first principal component show an excess of homozygotes within the spouses, consistent with similar ancestry-related assortative mating in the previous generation. This nonrandom mating likely affects genetic structure seen more generally in the North American population of European descent today, and decreases the rate of decay of linkage disequilibrium for ancestrally informative markers. PMID:20842694

  2. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    PubMed

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the near future with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production and to find out the attributes that govern much of the variation in sensory scores of this product using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant (p < 0.05) differences in sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4 % of the variation in the sensory data. Factor scores of each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the attributes of cham-cham that contribute most to its sensory acceptability.

  3. Statistical analysis of major ion and trace element geochemistry of water, 1986-2006, at seven wells transecting the freshwater/saline-water interface of the Edwards Aquifer, San Antonio, Texas

    USGS Publications Warehouse

    Mahler, Barbara J.

    2008-01-01

    The statistical analyses taken together indicate that the geochemistry at the freshwater-zone wells is more variable than that at the transition-zone wells. The geochemical variability at the freshwater-zone wells might result from dilution of ground water by meteoric water. This is indicated by relatively constant major ion molar ratios; a preponderance of positive correlations between SC, major ions, and trace elements; and a principal components analysis in which the major ions are strongly loaded on the first principal component. Much of the variability at three of the four transition-zone wells might result from the use of different laboratory analytical methods or reporting procedures during the period of sampling. This is reflected by a lack of correlation between SC and major ion concentrations at the transition-zone wells and by a principal components analysis in which the variability is fairly evenly distributed across several principal components. The statistical analyses further indicate that, although the transition-zone wells are less well connected to surficial hydrologic conditions than the freshwater-zone wells, there is some connection but the response time is longer. 

  4. Edge Principal Components and Squash Clustering: Using the Special Structure of Phylogenetic Placement Data for Sample Comparison

    PubMed Central

    Matsen IV, Frederick A.; Evans, Steven N.

    2013-01-01

    Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415

  5. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    Development of computer programs for component analysis and design aids used in production of microstrip components is discussed. System includes designs for couplers, filters, circulators, transformers, power splitters, diode switches, and attenuators.

  6. Time Management Ideas for Assistant Principals.

    ERIC Educational Resources Information Center

    Cronk, Jerry

    1987-01-01

    Prioritizing the use of time, effective communication, delegating authority, having detailed job descriptions, and good secretarial assistance are important components of time management for assistant principals. (MD)

  7. The principal components model: a model for advancing spirituality and spiritual care within nursing and health care practice.

    PubMed

    McSherry, Wilfred

    2006-07-01

    The aim of this study was to generate a deeper understanding of the factors and forces that may inhibit or advance the concepts of spirituality and spiritual care within both nursing and health care. This manuscript presents a model that emerged from a qualitative study using grounded theory. Implementation and use of this model may assist all health care practitioners and organizations to advance the concepts of spirituality and spiritual care within their own sphere of practice. The model has been termed the principal components model because participants identified six components as being crucial to the advancement of spiritual health care. Grounded theory was used, meaning that data collection and analysis were concurrent. Theoretical sampling was used to develop the emerging theory. These processes, along with data analysis, open, axial and theoretical coding, led to the identification of a core category and the construction of the principal components model. Fifty-three participants (24 men and 29 women) were recruited and all consented to be interviewed. The sample included nurses (n=24), chaplains (n=7), a social worker (n=1), an occupational therapist (n=1), physiotherapists (n=2), patients (n=14) and the public (n=4). The investigation was conducted in three phases to substantiate the emerging theory and the development of the model. The principal components model contained six components: individuality, inclusivity, integrated, inter/intra-disciplinary, innate and institution. A great deal has been written on the concepts of spirituality and spiritual care. However, rhetoric alone will not remove some of the intrinsic and extrinsic barriers that are inhibiting the advancement of the spiritual dimension in terms of theory and practice. An awareness of and adherence to the principal components model may assist nurses and health care professionals to engage with and overcome some of the structural, organizational, political and social variables that are impacting upon spiritual care.

  8. Planar Superconducting Millimeter-Wave/Terahertz Channelizing Filter

    NASA Technical Reports Server (NTRS)

    Ehsan, Negar; U-yen, Kongpop; Brown, Ari; Hsieh, Wen-Ting; Wollack, Edward; Moseley, Samuel

    2013-01-01

    This innovation is a compact, superconducting, channelizing bandpass filter on a single-crystal (0.45-µm-thick) silicon substrate, which operates from 300 to 600 GHz. This device consists of four channels with center frequencies of 310, 380, 460, and 550 GHz, with approximately 50-GHz bandwidth per channel. The filter concept is inspired by the mammalian cochlea, which is a channelizing filter that covers three decades of bandwidth and 3,000 channels in a very small physical space. By using a simplified physical cochlear model, and its electrical analog of a channelizing filter covering multiple octaves of bandwidth, a large number of output channels with high inter-channel isolation and high-order upper-stopband response can be designed. A channelizing filter is a critical component used in spectrometer instruments that measure the intensity of light at various frequencies. This embodiment was designed for MicroSpec in order to increase the resolution of the instrument (with four channels, the resolution will be increased by a factor of four). MicroSpec is a revolutionary wafer-scale spectrometer that is intended for the SPICA (Space Infrared Telescope for Cosmology and Astrophysics) mission. In addition to being a vital component of MicroSpec, the channelizing filter itself is a low-resolution spectrometer when integrated with only an antenna at its input and a detector at each channel's output. During the design process for this filter, the available characteristic impedances, possible lumped-element ranges, and fabrication tolerances were identified for design on a very thin silicon substrate. Iterations between full-wave and lumped-element circuit simulations were performed. Each channel's circuit was designed based on the availability of characteristic impedances and lumped-element ranges. This design was based on a tabular-type bandpass filter with no spurious harmonic response. Extensive electromagnetic modeling for each channel was performed. Four channels, with 50-GHz bandwidth, were designed, each using multiple transmission-line media such as microstrip, coplanar waveguide, and quasi-lumped components on 0.45-µm-thick silicon. In the design process, modeling issues had to be overcome. Due to the extremely high frequencies, the very thin Si substrate, and the superconducting metal layers, most commercially available software fails in various ways. These issues were mitigated by using alternative software that was capable of handling them at the expense of greater simulation time. The design of on-chip components for the filter characterization, such as a broadband antenna, Wilkinson power dividers, attenuators, detectors, and transitions, has been completed.

  9. [Application of ICP-MS to Identify the Botanic Source of Characteristic Honey in South Yunnan].

    PubMed

    Wei, Yue; Chen, Fang; Wang, Yong; Chen, Lan-zhen; Zhang, Xue-wen; Wang, Yan-hui; Wu, Li-ming; Zhou, Qun

    2016-01-01

    By adopting inductively coupled plasma mass spectrometry (ICP-MS) combined with chemometric analysis, 23 minerals in four kinds of characteristic honey derived from Yunnan province were analyzed. The results showed that 21 mineral elements, namely Na, Mg, K, Ca, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Sr, Mo, Cd, Sb, Ba, Tl and Pb, have significant differences among different varieties of honey. The results of principal component analysis (PCA) showed that the cumulative variance contribution of the first four principal components reached 77.74%; seven elements (Mg, Ca, Mn, Co, Sr, Cd, Ba) from the first principal component contained most of the honey information. Through stepwise discriminant analysis, seven elements (Mg, K, Ca, Cr, Mn, Sr, Pb) were filtered out and used to establish the discriminant function model, and the correct classification rates of the proposed model reached 90% and 86.7%, respectively, showing that element contents can be effective indicators to discriminate the four kinds of characteristic honey in southern Yunnan Province. Given that all the honey samples were harvested from apiaries located in southern Yunnan Province, which has similar climate, soil and other environmental conditions, the differences in mineral element contents of the honey samples are mainly due to their corresponding nectariferous plants. Therefore, it is feasible to identify the botanical source of honey through differences in mineral elements.
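
A sketch of the classification step, using a nearest-centroid rule as a simple stand-in for the paper's stepwise discriminant analysis; the element means below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# hypothetical mean element profiles (e.g. Mg, K, Ca, Cr, Mn, Sr, Pb; mg/kg)
mu_a = np.array([30.0, 800.0, 60.0, 0.1, 2.0, 0.4, 0.05])
mu_b = np.array([60.0, 400.0, 90.0, 0.2, 6.0, 0.9, 0.02])
A = mu_a * rng.lognormal(0.0, 0.15, size=(40, 7))   # honey variety A samples
B = mu_b * rng.lognormal(0.0, 0.15, size=(40, 7))   # honey variety B samples
X = np.vstack([A, B])
labels = np.r_[np.zeros(40), np.ones(40)]

Z = (X - X.mean(0)) / X.std(0)                      # standardize elements
cent = np.array([Z[labels == c].mean(0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - cent) ** 2).sum(-1), axis=1)
rate = (pred == labels).mean()                      # correct classification rate
```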

  10. The source-filter theory of whistle-like calls in marmosets: Acoustic analysis and simulation of helium-modulated voices.

    PubMed

    Koda, Hiroki; Tokuda, Isao T; Wakita, Masumi; Ito, Tsuyoshi; Nishimura, Takeshi

    2015-06-01

Whistle-like high-pitched "phee" calls are often used as long-distance vocal advertisements by small-bodied marmosets and tamarins in the dense forests of South America. While the source-filter theory proposes that vibration of the vocal fold is modified independently from the resonance of the supralaryngeal vocal tract (SVT) in human speech, a source-filter coupling that constrains the vibration frequency to SVT resonance effectively produces loud tonal sounds in some musical instruments. Here, a combined approach of acoustic analyses and simulation with helium-modulated voices was used to show that phee calls are produced principally with the same mechanism as in human speech. The animal keeps the fundamental frequency (f0) close to the first formant (F1) of the SVT, to amplify f0. Although f0 and F1 are primarily independent, the degree of their tuning can be strengthened further by a flexible source-filter interaction, the variable strength of which depends upon the cross-sectional area of the laryngeal cavity. The results highlight the evolutionary antiquity and universality of the source-filter model in primates, and the approach also opens the way to exploring the diversification of vocal physiology, including source-filter interaction and its anatomical basis, in non-human primates.

  11. Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

Bożek, Piotr

    2018-03-01

The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a double differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb + Pb collisions at √s_NN = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v₂² and v₄, of v₂v₃ and v₅, or of v₂³, v₃², and v₆. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.
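A toy numerical illustration of the kind of nonlinear coupling probed above: generate events in which v4 contains a term proportional to v2², then measure the correlation. The coupling constant and event distribution are invented, not hydrodynamic-model output:

```python
import math, random

# Toy model: per-event v4 carries a quadratic term chi * v2**2 plus
# noise.  chi and all distributions are invented for illustration.
random.seed(1)
chi = 1.2                                              # assumed coupling
v2 = [random.gauss(0.06, 0.015) for _ in range(5000)]
v4 = [chi * x * x + random.gauss(0, 0.0005) for x in v2]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b)) / n
    sa = math.sqrt(sum((p - ma) ** 2 for p in a) / n)
    sb = math.sqrt(sum((q - mb) ** 2 for q in b) / n)
    return cov / (sa * sb)

# Correlating v2**2 (not v2) with v4 exposes the quadratic coupling.
coupling_corr = pearson([x * x for x in v2], v4)
```

Building a full matrix of such correlations between products of flow magnitudes, and then diagonalizing it, is the spirit of the generalized correlation matrices in the paper.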

  12. Analysis and improvement measures of flight delay in China

    NASA Astrophysics Data System (ADS)

    Zang, Yuhang

    2017-03-01

Firstly, this paper establishes a principal component regression model to analyze the data quantitatively: principal component analysis yields three principal component factors of flight delays. The least squares method is then used to analyze these factors, and the regression equation is obtained by substitution; the analysis shows that the main cause of flight delays is the airlines themselves, followed by weather and traffic. Aiming at the controllable aspects among these causes, this paper focuses on improving traffic flow control. An adaptive genetic queuing model is established for the runway terminal area, and an optimization method is developed for fifteen planes landing simultaneously on three runways, based on Beijing Capital International Airport. Comparing the results with the existing FCFS (first-come, first-served) algorithm demonstrates the superiority of the model.
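The principal-component-regression step can be sketched as follows; the delay data and the (hand-fixed, unit-norm) component direction are invented stand-ins, not the paper's data:

```python
# Sketch of principal component regression: project the predictors onto
# a component direction, then fit the response by ordinary least squares
# on the resulting score.  Data and direction are invented.
X = [[3.0, 2.0], [4.0, 2.5], [6.0, 4.0], [8.0, 5.5], [9.0, 6.0]]
y = [10.0, 12.0, 17.0, 22.0, 24.0]
w = [0.8, 0.6]                       # assumed unit-norm component direction

scores = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]

# Least-squares fit of y on the PC score (closed form for one regressor)
n = len(y)
ms, my = sum(scores) / n, sum(y) / n
slope = (sum((s - ms) * (t - my) for s, t in zip(scores, y))
         / sum((s - ms) ** 2 for s in scores))
intercept = my - slope * ms
pred = [intercept + slope * s for s in scores]
```

In the paper, each of the three principal component factors gets such a least-squares coefficient, and the regression equation is recovered by substituting the component definitions back in terms of the original variables.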

  13. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization of the palmprint images are implemented by the blockwise bi-directional two-dimensional principal component analysis to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residual between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.
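The greedy sparse-representation step can be sketched with a minimal matching pursuit over a toy dictionary. The atoms here are orthonormal, so each coefficient is a plain dot product; a real overcomplete palmprint dictionary needs a least-squares update per iteration:

```python
# Minimal matching-pursuit sketch: repeatedly pick the dictionary atom
# most correlated with the residual and subtract its contribution.
# Atoms and signal are invented toy vectors.
atoms = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
signal = [0.1, 2.0, 0.0, 0.05]       # mostly explained by atom 1

def matching_pursuit(signal, atoms, k):
    residual = list(signal)
    coeffs = {}
    for _ in range(k):
        # atom most correlated with the current residual
        best = max(range(len(atoms)),
                   key=lambda i: abs(sum(a * r
                                         for a, r in zip(atoms[i], residual))))
        c = sum(a * r for a, r in zip(atoms[best], residual))
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

coeffs, residual = matching_pursuit(signal, atoms, k=1)
```

Classification then compares, per class, the residual left after reconstructing the test image from that class's atoms, as in the abstract.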

  14. Polyhedral gamut representation of natural objects based on spectral reflectance database and its application

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Sakuda, Yasunori; Honda, Toshio

    2002-06-01

The spectral reflectance of most reflective objects, such as natural objects and color hardcopy, is relatively smooth and can be approximated by a small number of principal components with high accuracy. Though the subspace spanned by those principal components represents a space in which reflective objects can exist, it does not provide the bound within which the samples distribute. In this paper we propose to represent the gamut of reflective objects in a more distinct form, i.e., as a polyhedron in the subspace spanned by several principal components. The concept of the polyhedral gamut representation and its application to the calculation of metamer ensembles are described. The color-mismatch volume caused by a different illuminant and/or observer for a metamer ensemble is also calculated and compared with the theoretical one.
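Approximating a reflectance spectrum in a low-dimensional principal-component subspace, as described above, can be sketched with toy vectors (the basis and spectrum below are invented, not measured data):

```python
import math

# Project a "spectrum" onto two orthonormal basis vectors standing in
# for principal components, reconstruct, and measure the error.
basis = [
    [0.5, 0.5, 0.5, 0.5],        # flat component
    [0.5, 0.5, -0.5, -0.5],      # low-vs-high wavelength contrast
]
spectrum = [0.82, 0.80, 0.31, 0.29]

coeffs = [sum(b * s for b, s in zip(vec, spectrum)) for vec in basis]
recon = [sum(c * vec[i] for c, vec in zip(coeffs, basis))
         for i in range(len(spectrum))]
err = math.sqrt(sum((r - s) ** 2 for r, s in zip(recon, spectrum)))
```

The coefficient vectors of many such samples live in the PCA subspace; the paper's polyhedral gamut is the convex region bounding where those coefficient vectors actually fall.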

  15. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, a hierarchy analysis method is utilized to construct the evaluation index model of the low-voltage distribution network. Based on principal component analysis and the logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm decorrelates and reduces the dimensions of the evaluation model, and the resulting comprehensive score has a better dispersion degree. Because the comprehensive scores of the courts are concentrated, a clustering method is adopted to analyse them, realizing a stratified evaluation of the courts. An example is given to verify the objectivity and scientificity of the evaluation method.
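The logarithmic centralization pre-processing can be sketched directly (the index values below are invented):

```python
import math

# Sketch of "logarithmic centralization": take logs of positively
# skewed index values, then subtract the mean before handing the data
# to PCA.  Values are invented illustration data.
raw = [1.2, 3.5, 11.0, 36.0, 120.0]      # roughly log-distributed indices
logs = [math.log(v) for v in raw]
mean = sum(logs) / len(logs)
centered = [v - mean for v in logs]
```

The log transform compresses the heavy right tail so that the subsequent PCA is not dominated by a few extreme index values, which is the motivation given in the abstract.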

  16. Online signature recognition using principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan

    2016-12-01

In this paper, we propose an algorithm for on-line signature recognition using fingertip points traced in the air, extracted from depth images acquired by a Kinect. We extract 10 statistical features from each of the X, Y and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, retaining 99.02% of the total variance. We implement the proposed algorithm and test it on actual on-line signatures. In experiments, the proposed method successfully classifies 15 different on-line signatures, achieving a recognition rate of 98.47% when using only 10 feature vectors.
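Selecting how many principal components to keep for a target variance share, as done above (10 of 30 dimensions at 99.02%), can be sketched as follows; the eigenvalues are invented:

```python
# Pick the smallest k whose cumulative eigenvalue share reaches a
# variance threshold.  Eigenvalues below are invented illustration data.
eigvals = [5.0, 2.5, 1.2, 0.2, 0.06, 0.04]

def components_for(eigvals, threshold=0.99):
    total = sum(eigvals)
    cum = 0.0
    for k, lam in enumerate(sorted(eigvals, reverse=True), start=1):
        cum += lam
        if cum / total >= threshold:
            return k
    return len(eigvals)

k99 = components_for(eigvals)    # components needed for 99% of variance
```

Each eigenvalue of the feature covariance matrix is the variance captured by its component, so the cumulative share is a direct reading of how much information the retained subspace preserves.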

  17. Extraction of user's navigation commands from upper body force interaction in walker assisted gait.

    PubMed

    Frizera Neto, Anselmo; Gallego, Juan A; Rocon, Eduardo; Pons, José L; Ceres, Ramón

    2010-08-05

The advances in technology make possible the incorporation of sensors and actuators in rollators, building safer robots and extending the use of walkers to a more diverse population. This paper presents a new method for the extraction of navigation-related components from upper-body force interaction data in walker-assisted gait. A filtering architecture is designed to cancel: (i) the high-frequency noise caused by vibrations on the walker's structure due to irregularities on the terrain or the walker's wheels and (ii) the cadence-related force components caused by the user's trunk oscillations during gait. As a result, a third component related to the user's navigation commands is distinguished. For the cancelation of high-frequency noise, a Benedict-Bordner g-h filter was designed, presenting very low values of kinematic tracking error ((2.035 ± 0.358)×10⁻² kgf) and delay ((1.897 ± 0.3697)×10¹ ms). A Fourier Linear Combiner filtering architecture was implemented for the adaptive attenuation of about 80% of the energy of the cadence-related components in the force data. This was done without compromising the information contained in the frequencies close to such notch filters. The presented methodology offers an effective cancelation of the undesired components from force data, allowing the system to extract the user's voluntary navigation commands in real time. Based on this real-time identification of the user's voluntary commands, a classical approach to the control architecture of the robotic walker is being developed, in order to obtain stable and safe user-assisted locomotion.
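A minimal sketch of a Benedict-Bordner g-h tracking filter, assuming the standard relation h = g²/(2 − g); the gain, time step, noise level and constant-velocity target below are illustrative values, not the paper's force data:

```python
import random

# g-h filter with the Benedict-Bordner gain relation, smoothing a noisy
# constant-velocity signal.  All numeric values are illustrative.
random.seed(2)
g = 0.5
h = g * g / (2 - g)            # Benedict-Bordner relation
dt = 0.01                      # time step (s)
truth_v = 3.0                  # true constant velocity

x_est, v_est = 0.0, 0.0
errors = []
for k in range(500):
    truth = truth_v * dt * k
    meas = truth + random.gauss(0, 0.05)    # noisy measurement
    x_pred = x_est + v_est * dt             # predict
    resid = meas - x_pred
    x_est = x_pred + g * resid              # g-h position update
    v_est = v_est + h * resid / dt          # g-h velocity update
    if k > 100:                             # skip the initial transient
        errors.append(abs(x_est - truth))

avg_err = sum(errors) / len(errors)
```

A g-h filter with this gain pairing tracks ramp (constant-velocity) inputs with zero steady-state lag, which is why the paper can report such a small kinematic tracking error.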

  18. A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations

    NASA Astrophysics Data System (ADS)

    Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng

    2018-01-01

In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, and multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. As with the Fx-Newton algorithm, good real-time performance is also achieved through the faster convergence brought by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm, installed in an active-passive vibration isolation system, in suppressing the vibration excited by an artificial source and by air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to that of the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.
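Not the MRFx-Newton algorithm itself, but the underlying narrowband adaptive-cancellation idea can be sketched with a two-weight LMS filter on quadrature references at an assumed known tone frequency:

```python
import math

# Basic narrowband adaptive canceller: LMS adapts the amplitude/phase
# of sine/cosine references to null one tonal component.  Frequency,
# step size and disturbance are illustrative values.
f = 50.0            # target tone (Hz), assumed known
fs = 1000.0         # sample rate (Hz)
mu = 0.01           # LMS step size
wa, wb = 0.0, 0.0
residuals = []
for n in range(4000):
    t = n / fs
    d = 1.5 * math.sin(2 * math.pi * f * t + 0.7)    # tonal disturbance
    ra = math.sin(2 * math.pi * f * t)               # quadrature references
    rb = math.cos(2 * math.pi * f * t)
    e = d - (wa * ra + wb * rb)                      # residual after cancelling
    wa += 2 * mu * e * ra                            # LMS weight updates
    wb += 2 * mu * e * rb
    residuals.append(e)

tail_rms = math.sqrt(sum(e * e for e in residuals[-500:]) / 500)
```

One such weight pair per tone is the simplest form of a narrowband controller; the paper's contribution is decoupling several such tones that sit too close together for band-pass separation, with Newton-type updates through the secondary path.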

  19. Least squares restoration of multi-channel images

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Galatsanos, Nikolas P.

    1989-01-01

    In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.
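Iterative least-squares restoration in the spirit described above can be sketched with a 1-D Landweber iteration; the blur kernel and signal are toy values, and the paper's filter additionally uses cross-channel constraints on multichannel images:

```python
# Landweber iteration x <- x + tau * H^T (y - H x) for a 1-D blur.
# The kernel is symmetric, so H^T equals H here.
kernel = [0.25, 0.5, 0.25]

def blur(x):
    out = []
    for i in range(len(x)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - 1
            if 0 <= idx < len(x):
                acc += k * x[idx]
        out.append(acc)
    return out

truth = [0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]
y = blur(truth)                        # observed blurred signal

x = [0.0] * len(truth)                 # start from a zero estimate
tau = 0.9                              # step size (< 2 / largest eigenvalue)
for _ in range(5000):
    r = [yi - ri for yi, ri in zip(y, blur(x))]      # data residual
    grad = blur(r)                                   # H^T residual
    x = [xi + tau * gi for xi, gi in zip(x, grad)]

err = max(abs(a - b) for a, b in zip(x, truth))
```

Each step moves the estimate along the gradient of the least-squares cost, so the iteration converges to the least-squares solution without ever forming the inverse matrix explicitly.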

  20. Degradation of electro-optic components aboard LDEF

    NASA Technical Reports Server (NTRS)

    Blue, M. D.

    1993-01-01

    Remeasurement of the properties of a set of electro-optic components exposed to the low-earth environment aboard the Long Duration Exposure Facility (LDEF) indicates that most components survived quite well. Typical components showed some effects related to the space environment unless well protected. The effects were often small but significant. Results for semiconductor infrared detectors, lasers, and LED's, as well as filters, mirrors, and black paints are described. Semiconductor detectors and emitters were scarred but reproduced their original characteristics. Spectral characteristics of multi-layer dielectric filters and mirrors were found to be altered and degraded. Increased absorption in black paints indicates an increase in absorption sites, giving rise to enhanced performance as coatings for baffles and sunscreens.

  1. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy.

    PubMed

    Jesse, Stephen; Kalinin, Sergei V

    2009-02-25

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.

  2. On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.

    PubMed

    Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael

    2015-01-01

    Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classifier method (MARA). As a pre-processing step for ICA, high-pass filtering between 1-2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of `near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to remove EOG artifacts.
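The high-pass pre-processing step (only this step, not the ICA itself) can be sketched with a first-order RC-style filter at a 1-Hz cutoff; real EEG pipelines use steeper FIR/IIR designs:

```python
import math

# First-order high-pass filter (standard RC discretization) removing a
# constant offset and slow drift from an EEG-like trace.  Sample rate,
# cutoff and the synthetic trace are illustrative values.
fs = 250.0                           # sample rate (Hz)
fc = 1.0                             # cutoff (Hz)
rc = 1.0 / (2 * math.pi * fc)
alpha = rc / (rc + 1.0 / fs)

# constant offset of 5 plus a 10 Hz oscillation standing in for EEG
signal = [5.0 + math.sin(2 * math.pi * 10 * n / fs) for n in range(2500)]

out = [0.0]
for n in range(1, len(signal)):
    out.append(alpha * (out[-1] + signal[n] - signal[n - 1]))

mean_tail = sum(out[-1000:]) / 1000      # DC should be gone
```

Removing DC and drift below ~1 Hz is exactly what the study found helpful before ICA: slow nonstationarities otherwise soak up independent components that should be modelling brain and artifact sources.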

  3. Demosaicking algorithm for the Kodak-RGBW color filter array

    NASA Astrophysics Data System (ADS)

    Rafinazari, M.; Dubois, E.

    2015-01-01

Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. During demosaicking, the two unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.
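The simplest non-adaptive (bilinear) interpolation step of demosaicking can be sketched for a single missing green value; the patch values are invented, and practical Bayer/RGBW pipelines use adaptive, edge-aware schemes instead:

```python
# Bilinear demosaicking sketch: estimate the missing green value at a
# non-green CFA site as the mean of its four green neighbours.
patch = [
    [None, 100, None],
    [104, None,  96],
    [None, 108, None],
]  # green samples surround the centre pixel, whose green is unknown

neighbours = [patch[0][1], patch[1][0], patch[1][2], patch[2][1]]
green_estimate = sum(neighbours) / len(neighbours)
```

Averaging across an edge is what blurs and fringes bilinear output; adaptive methods weight the neighbours by local gradients, and luma-chroma methods instead demodulate the CFA signal in the frequency domain.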

  4. The Artistic Nature of the High School Principal.

    ERIC Educational Resources Information Center

    Ritschel, Robert E.

The role of high school principals can be compared to that of composers of music. First, composers put musical components together into a coherent whole; similarly, principals organize high schools by establishing class schedules, assigning roles to subordinates, and maintaining a safe and orderly learning environment. Second, composers…

  5. Collaborative Relationships between Principals and School Counselors: Facilitating a Model for Developing a Working Alliance

    ERIC Educational Resources Information Center

    Odegard-Koester, Melissa A.; Watkins, Paul

    2016-01-01

The working relationship between principals and school counselors has received some attention in the literature; however, little empirical research exists that examines specifically the components that facilitate a collaborative working relationship between the principal and school counselor. This qualitative case study examined the unique…

  6. The Retention and Attrition of Catholic School Principals

    ERIC Educational Resources Information Center

    Durow, W. Patrick; Brock, Barbara L.

    2004-01-01

    This article reports the results of a study of the retention of principals in Catholic elementary and secondary schools in one Midwestern diocese. Findings revealed that personal needs, career advancement, support from employer, and clearly defined role expectations were key factors in principals' retention decisions. A profile of components of…

  7. Analytic expression for the giant fieldlike spin torque in spin-filter magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Tang, Y.-H.; Huang, Z.-W.; Huang, B.-H.

    2017-08-01

    We propose analytic expressions for fieldlike, T⊥, and spin-transfer, T∥, spin torque components in the spin-filter-based magnetic tunnel junction (SFMTJ), by using the single-band tight-binding model with the nonequilibrium Keldysh formalism. In consideration of multireflection processes between noncollinear magnetization of the spin-filter (SF) barrier and the ferromagnetic (FM) electrode, the central spin-selective SF barrier plays an active role in the striking discovery T⊥≫T∥ , which can be further identified by the unusual barrier thickness dependence of giant T⊥. Our general expressions reveal the sinusoidal angular dependence of both spin torque components, even in the presence of the SF barrier.

  8. Microwave signal processing with photorefractive dynamic holography

    NASA Astrophysics Data System (ADS)

    Fotheringham, Edeline B.

Have you ever found yourself listening to the music playing from the closest stereo rather than to the bromidic (uninspiring) person speaking to you? Your ears receive information from two sources but your brain listens to only one. What if your cell phone could distinguish among signals sharing the same bandwidth too? There would be no "full" channels to stop you from placing or receiving a call. This thesis presents a nonlinear optical circuit capable of distinguishing uncorrelated signals that have overlapping temporal bandwidths. This so-called autotuning filter is the size of a U.S. quarter dollar and requires less than 3 mW of optical power to operate. It is basically an oscillator in which the losses are compensated with dynamic holographic gain. The combination of two photorefractive crystals in the resonator governs the filter's winner-take-all dynamics through signal competition for gain. This physical circuit extracts what is mathematically referred to as the largest principal component of its spatio-temporal input space. The circuit's practicality is demonstrated by its incorporation in an RF-photonic system. An unknown mixture of unknown microwave signals, received by an antenna array, constitutes the input to the system. The output electronically returns one of the original microwave signals. The front-end of the system down-converts the 10 GHz microwave signals and amplifies them before the signals phase modulate optical beams. The optical carrier is suppressed from these beams so that it may not be considered as a signal itself by the autotuning filter. The suppression is achieved with two-beam coupling in a single photorefractive crystal. The filter extracts the more intense of the signals present on the carrier-suppressed input beams. The detection of the extracted signal restores the microwave signal to an electronic form. The system, without the receiving antenna array, is packaged in a 13 x 18 x 6″ briefcase.
Its power consumption equals that of a regular 50 W household light bulb. The system was shipped to different parts of the country for real-time demonstrations of signal separation thus also validating its claim to robustness.

  9. Hyperspectral Microwave Atmospheric Sounder (HyMAS) - New Capability in the CoSMIR-CoSSIR Scanhead

    NASA Technical Reports Server (NTRS)

    Hilliard, Lawrence; Racette, Paul; Blackwell, William; Galbraith, Christopher; Thompson, Erik

    2015-01-01

Lincoln Laboratory and NASA's Goddard Space Flight Center have teamed to re-use an existing instrument platform, the CoSMIR/CoSSIR system for atmospheric sounding, to develop a new capability in hyperspectral filtering, data collection, and display. The volume of the scanhead accommodated an intermediate frequency processor (IFP), which provides the filtering and digitization of the raw data, and the interoperable remote component (IRC), adapted to CoSMIR, CoSSIR, and HyMAS, which stores and archives the data with time-tagged calibration and navigation data. The first element of the work is the demonstration of a hyperspectral microwave receiver subsystem that was recently shown, using a comprehensive simulation study, to yield performance that substantially exceeds the current state of the art. Hyperspectral microwave sounders with approximately 100 channels offer temperature and humidity sounding improvements similar to those obtained when infrared sensors became hyperspectral, but with the relative insensitivity to clouds that characterizes microwave sensors. Hyperspectral microwave operation is achieved using independent RF antenna/receiver arrays that sample the same area/volume of the Earth's surface/atmosphere at slightly different frequencies and therefore synthesize a set of dense, finely spaced vertical weighting functions. The second, enabling element of the work is the development of a compact 52-channel intermediate frequency processor module. A principal challenge in the development of a hyperspectral microwave system is the size of the IF filter bank required for channelization. Large bandwidths are simultaneously processed, complicating the use of digital back-ends with their associated high complexity, cost, and power requirements. Our approach involves passive filters implemented using low-temperature co-fired ceramic (LTCC) technology to achieve an ultra-compact module that can be easily integrated with existing radio frequency front-end technology.
This IF processor is universally applicable to other microwave sensing missions requiring compact IF spectrometry. The module provides 52 operational channels with low volume (less than 100 cubic centimeters), low mass (less than 300 grams), and linearity better than 0.3 percent over a dynamic range of 330,000.

  10. Modeling Navigation System Performance of a Satellite-Observing Star Tracker Tightly Integrated with an Inertial Measurement Unit

    DTIC Science & Technology

    2015-03-26

tracker, an Inertial Measurement Unit (IMU), and a barometric altimeter using an Extended Kalman Filter (EKF). Models of each of these components are... [Remainder of record is table-of-contents residue: Detector Device Improvement; Kalman Filter; Extended Kalman Filter; System Properties; Sun Exitance.]

  11. Generation of Quality Pulses for Control of Qubit/Quantum Memory Spin States: Experimental and Simulation

    DTIC Science & Technology

    2016-09-01

as an example the integration of cryogenic superconductor components, including filters and amplifiers to improve the pulse quality and validate the... [Remainder of record is table-of-contents residue: Cryogenic Band-Pass Filters; Bibliography; Gain plot of DARPA SURF tunable band-pass filter tuned to 950 MHz; VSG at -50 dBm: Experimental.]

  12. The Psychometric Assessment of Children with Learning Disabilities: An Index Derived from a Principal Components Analysis of the WISC-R.

    ERIC Educational Resources Information Center

    Lawson, J. S.; Inglis, James

    1984-01-01

    A learning disability index (LDI) for the assessment of intellectual deficits on the Wechsler Intelligence Scale for Children-Revised (WISC-R) is described. The Factor II score coefficients derived from an unrotated principal components analysis of the WISC-R normative data, in combination with the individual's scaled scores, are used for this…

  13. Perturbation analyses of intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Koyama, Yohei M.; Kobayashi, Tetsuya J.; Ueda, Hiroki R.

    2011-08-01

    Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. For comparison of the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and PPII (polyproline II) + β states) for larger cutoff length, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, in the large cutoff value, DIPA eigenvalues converge faster than that for IPA and the top two DIPA principal components clearly identify the three states. By using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. 
Since the DIPA improves the state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method than the IPA. To test the feasibility of the DIPA for larger molecules, we apply the DIPA to the ten-residue chignolin folding in explicit water. The top three principal components identify the four states (native state, two misfolded states, and unfolded state) and their corresponding eigenfunctions identify important chignolin-water interactions for each state. Thus, the DIPA provides a practical method to identify conformational states and their corresponding important intermolecular interactions with distance information.

  14. Perturbation analyses of intermolecular interactions.

    PubMed

    Koyama, Yohei M; Kobayashi, Tetsuya J; Ueda, Hiroki R

    2011-08-01

    Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. For comparison of the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and PPII (polyproline II) + β states) for larger cutoff length, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, in the large cutoff value, DIPA eigenvalues converge faster than that for IPA and the top two DIPA principal components clearly identify the three states. By using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. 
Since the DIPA improves the state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method than the IPA. To test the feasibility of the DIPA for larger molecules, we apply the DIPA to the ten-residue chignolin folding in explicit water. The top three principal components identify the four states (native state, two misfolded states, and unfolded state) and their corresponding eigenfunctions identify important chignolin-water interactions for each state. Thus, the DIPA provides a practical method to identify conformational states and their corresponding important intermolecular interactions with distance information.

  15. [Micropore filters for measuring red blood cell deformability and their pore diameters].

    PubMed

    Niu, X; Yan, Z

    2001-09-01

Micropore filters are the most important components in micropore filtration tests for assessing red blood cell (RBC) deformability. With regard to their appearance and filtration behaviors, comparisons are made among the different kinds of filters currently in use. Nickel filters with regular geometric characteristics are found to be more sensitive to the effects of physical, chemical, and especially pathological factors on RBC deformability. We critically review the viewpoint that filters with a 3-micron pore diameter are more sensitive to cell volume than to internal viscosity, while filters with a 5-micron pore diameter behave in the opposite way. After analyzing experimental results with 3-micron and 5-micron filters, we point out that filters with smaller pore diameters are more suitable for assessing RBC deformability.

  16. Application of a Bank of Kalman Filters for Aircraft Engine Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2003-01-01

    In this paper, a bank of Kalman filters is applied to aircraft gas turbine engine sensor and actuator fault detection and isolation (FDI) in conjunction with the detection of component faults. This approach uses multiple Kalman filters, each of which is designed for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, thereby isolating the specific fault. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The proposed FDI approach is applied to a nonlinear engine simulation at nominal and aged conditions, and the evaluation results for various engine faults at cruise operating conditions are given. The ability of the proposed approach to reliably detect and isolate sensor and actuator faults is demonstrated.
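The bank-of-filters logic can be illustrated with a deliberately simplified scalar example. Everything below (the plant model, noise levels, and bias hypotheses) is invented for illustration and is unrelated to the paper's engine model: each Kalman filter assumes a different constant sensor bias, and the filter whose hypothesis matches the injected fault accumulates the smallest innovations, which isolates the fault.

```python
# Minimal 1-D sketch of a bank of Kalman filters for sensor-bias isolation.
# The scalar plant and all numbers here are illustrative only.
import random

def run_bank(measurements, bias_hypotheses, a=0.95, q=0.001, r=0.04):
    """Return the accumulated squared innovation for each bias hypothesis."""
    scores = []
    for bias in bias_hypotheses:
        x, p, score = 0.0, 1.0, 0.0
        for z in measurements:
            # predict step for the scalar plant x_{k+1} = a * x_k
            x, p = a * x, a * a * p + q
            # update against this filter's assumed sensor bias
            innov = (z - bias) - x
            k = p / (p + r)
            x += k * innov
            p *= (1 - k)
            score += innov * innov
        scores.append(score)
    return scores

random.seed(0)
true_bias = 0.5                       # injected sensor fault
x, zs = 1.0, []
for _ in range(200):
    x = 0.95 * x + random.gauss(0, 0.03)          # process noise
    zs.append(x + true_bias + random.gauss(0, 0.2))  # biased, noisy sensor

hypotheses = [0.0, 0.5, -0.5]         # no-fault and two fault hypotheses
scores = run_bank(zs, hypotheses)
isolated = hypotheses[scores.index(min(scores))]
```

As in the paper's scheme, all filters except the one using the correct hypothesis see a persistent innovation offset, so selecting the minimum accumulated innovation isolates the fault.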

  17. Characterization and Modeling of Dual Stage Quadruple Pass Configurations

    NASA Astrophysics Data System (ADS)

    Sellami, M.; Sellami, A.; Berrah, S.

    In this paper, the proposed system achieves a gain of 62 dB. It employs a dual stage (DS) to enhance amplification and a tunable band-pass filter (TBF) to filter out the backward amplified spontaneous emission (ASE) that degrades signal amplification at the input end of the EDFA. The technique thereby reduces the effect of ASE self-saturation [1]. This configuration is also useful in reducing the sensitivity of the EDFA to extraneous reflections caused by imperfections of the splices and other optical components [2], as well as in improving the noise figure and gain. The experimental work is built up using the active component silica-based EDF (Si-EDF) in a dual-stage quadruple-pass (DSQP) configuration, with the TBF placed between port 1 and port 2 of the circulators (CRT2, CRT3) to filter out the unwanted ASE.

  18. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) using a radial basis function kernel are employed to model the distortions measured for an Olympus E10 camera system with an aspherical zoom lens; the distortion model is later used in the geometric calibration process. The intent is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach in correcting image coordinates by modelling total distortions in an on-the-job calibration process using a limited number of images.
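The core idea of learning a distortion correction from data can be sketched without the paper's full pipeline. Note the substitutions: support vector regression needs a QP solver, so the closely related kernel ridge regression with the same RBF kernel stands in for it here, and the radial distortion data is synthetic (a first-order `dr = k1 * r**3` model with an assumed `k1`), not measurements from the Olympus E10.

```python
# Sketch: RBF-kernel ridge regression as a stand-in for SVR, fitted to
# synthetic radial lens distortion data (k1 and all values are assumptions).
import math

def rbf(a, b, gamma=50.0):
    return math.exp(-gamma * (a - b) ** 2)

def fit_krr(xs, ys, gamma=50.0, lam=1e-6):
    """Solve (K + lam*I) alpha = y by Gaussian elimination, return predictor."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    y = list(ys)
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(K[row][col]))
        K[col], K[piv] = K[piv], K[col]
        y[col], y[piv] = y[piv], y[col]
        for row in range(col + 1, n):
            f = K[row][col] / K[col][col]
            for c in range(col, n):
                K[row][c] -= f * K[col][c]
            y[row] -= f * y[col]
    # back substitution
    alpha = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = y[row] - sum(K[row][c] * alpha[c] for c in range(row + 1, n))
        alpha[row] = s / K[row][row]
    return lambda x: sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))

k1 = 0.05                                   # assumed first-order coefficient
radii = [i / 10 for i in range(11)]         # normalised image radius 0..1
distort = [k1 * r ** 3 for r in radii]      # synthetic radial displacement
model = fit_krr(radii, distort)
```

A fitted model of this kind is then evaluated at each measured image point to correct its coordinates, which is the role the SVM distortion model plays in the study's on-the-job calibration.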

  19. Static and dynamic force/moment measurements in the Eidetics water tunnel

    NASA Technical Reports Server (NTRS)

    Suarez, Carlos J.; Malcolm, Gerald N.

    1994-01-01

    Water tunnels have been utilized in one form or another to explore fluid mechanics and aerodynamics phenomena since the days of Leonardo da Vinci. Water tunnel testing is attractive because of the relatively low cost and quick turn-around time to perform flow visualization experiments and evaluate the results. The principal limitation of a water tunnel is that the low flow speed, which provides for detailed visualization, also results in very small hydrodynamic (aerodynamic) forces on the model, which, in the past, have proven difficult to measure accurately. However, the advent of semiconductor strain gage technology and of data acquisition devices such as low-noise amplifiers, electronic filters, and digital recording has made accurate measurement of very low strain levels feasible. The principal objective of this research effort was to develop a multi-component strain gage balance to measure forces and moments on models tested in flow visualization water tunnels. A balance was designed that measures normal and side forces, and pitching, yawing and rolling moments (but not axial force). The balance mounts internally in the model and is used in a manner typical of wind tunnel balances. The key differences between a water tunnel balance and a wind tunnel balance are the requirement for very high sensitivity since the loads are very low (a typical normal force is 0.2 lbs), the need to waterproof the gage elements, and the small size required to fit into typical water tunnel models.

  20. [Role of school lunch in primary school education: a trial analysis of school teachers' views using an open-ended questionnaire].

    PubMed

    Inayama, T; Kashiwazaki, H; Sakamoto, M

    1998-12-01

    We attempted a comprehensive analysis of teachers' views on health education and the roles of school lunch in primary education. For this purpose, a survey using an open-ended questionnaire consisting of eight items relating to health education in the school curriculum was carried out among 100 teachers at ten public primary schools. Subjects were asked to describe their views on the following eight items: 1) health and physical guidance education, 2) school lunch guidance education, 3) pupils' attitudes toward their own health and nutrition, 4) health education, 5) the role of school lunch in education, 6) future subjects of health education, 7) classroom lessons related to school lunch, and 8) guidance for pupils with unbalanced diets and food avoidance. Subjects described their own opinions on an open-ended questionnaire response sheet. Keywords in the individual descriptions were selected, rearranged and classified into categories according to their meanings, and each selected keyword was used as a dummy variable. To assess individual opinions comprehensively, a principal component analysis was then applied to the variables collected from the teachers' descriptions, and four factors were extracted. The results were as follows. 1) The four factors obtained from the repeated principal component analysis were summarized as: roles of health education and the school lunch program (the first principal component), cooperation with nurse-teachers and those in charge of lunch service (the second principal component), time allocation for health education in home-room activity and lunch time (the third principal component), and contents of health education and school lunch guidance and their future plan (the fourth principal component).
2) Teachers regarded the roles of school lunch in primary education as providing a daily supply of nutrients, teaching table manners, building friendships with classmates, providing health education and food and nutrition education, and developing food preferences through eating lunch together with classmates. 3) A significant positive correlation was observed between "the teachers' opinion that school lunch provides an opportunity to learn good behavior for food preferences through eating lunch together with classmates" and the first principal component, "roles of health education and the school lunch program" (r = 0.39, p < 0.01). The variable "the role of school lunch is health education and food and nutrition education" showed a positive correlation with the second principal component, "cooperation with nurse-teachers and those in charge of lunch service" (r = 0.27, p < 0.01). Interesting relationships were that teachers with longer educational experience tended to place importance on health education and food and nutrition education as roles of school lunch, and that male teachers attached more importance to the roles of school lunch in future primary education than female teachers did.

Top