Sample records for component analysis linear

  1. Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP

    NASA Astrophysics Data System (ADS)

    Russo, A.; Trigo, R. M.

    2003-04-01

    A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. Non-Linear Principal Component Analysis allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. This method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. Non-Linear Principal Component Analysis is first applied to a data set sampled from the Lorenz attractor (1963). It is found that the NLPCA approximations are more representative of the data than are the corresponding PCA approximations. The same methodology was applied to the less known Lorenz attractor (1984). However, the results obtained were not as good as those attained with the famous 'butterfly' attractor. Further work with this model is underway in order to assess whether NLPCA techniques can be more representative of the data characteristics than are the corresponding PCA approximations. The application of NLPCA to relatively 'simple' dynamical systems, such as those proposed by Lorenz, is well understood. However, the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the explained variance. Finally, directions for future work are presented.
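    The 5-layer bottleneck network behind Kramer-style NLPCA can be illustrated with a small autoencoder. The sketch below is only an illustration under assumed settings (layer sizes, integration span, and scikit-learn's MLPRegressor as the network), not the authors' implementation; it samples the Lorenz (1963) attractor and compares a one-component linear PCA reconstruction with a one-unit nonlinear bottleneck.

    ```python
    # Sketch: Kramer-style NLPCA via a 5-layer autoencoder (input, mapping,
    # bottleneck, de-mapping, output) compared with one-component linear PCA.
    # Illustrative only: layer sizes, integration settings and data are assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    def lorenz63(t, xyz, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = xyz
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    # Sample the Lorenz (1963) attractor and standardise the coordinates.
    sol = solve_ivp(lorenz63, (0, 50), [1.0, 1.0, 1.0],
                    t_eval=np.linspace(0, 50, 5000))
    X = StandardScaler().fit_transform(sol.y.T)

    # Linear PCA approximation with a single component.
    pca = PCA(n_components=1).fit(X)
    X_pca = pca.inverse_transform(pca.transform(X))

    # NLPCA: autoencoder X -> X with a one-unit bottleneck (hidden sizes assumed).
    ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation="tanh",
                      max_iter=5000, random_state=0)
    ae.fit(X, X)
    X_nlpca = ae.predict(X)

    print("PCA   reconstruction MSE:", np.mean((X - X_pca) ** 2))
    print("NLPCA reconstruction MSE:", np.mean((X - X_nlpca) ** 2))
    ```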

  2. New evidence and impact of electron transport non-linearities based on new perturbative inter-modulation analysis

    NASA Astrophysics Data System (ADS)

    van Berkel, M.; Kobayashi, T.; Igami, H.; Vandersteen, G.; Hogeweij, G. M. D.; Tanaka, K.; Tamura, N.; Zwart, H. J.; Kubo, S.; Ito, S.; Tsuchiya, H.; de Baar, M. R.; LHD Experiment Group

    2017-12-01

    A new methodology to analyze non-linear components in perturbative transport experiments is introduced. The methodology has been experimentally validated in the Large Helical Device for the electron heat transport channel. Electron cyclotron resonance heating with different modulation frequencies by two gyrotrons has been used to directly quantify the amplitude of the non-linear component at the inter-modulation frequencies. The measurements show significant quadratic non-linear contributions and also the absence of cubic and higher order components. The non-linear component is analyzed using the Volterra series, which is the non-linear generalization of transfer functions. This allows us to study the radial distribution of the non-linearity of the plasma and to reconstruct linear profiles where the measurements were not distorted by non-linearities. The reconstructed linear profiles are significantly different from the measured profiles, demonstrating the significant impact that non-linearity can have.

  3. Stability of Nonlinear Principal Components Analysis: An Empirical Study Using the Balanced Bootstrap

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.

    2007-01-01

    Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…

  4. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    ERIC Educational Resources Information Center

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…

  5. Tracing and separating plasma components causing matrix effects in hydrophilic interaction chromatography-electrospray ionization mass spectrometry.

    PubMed

    Ekdahl, Anja; Johansson, Maria C; Ahnoff, Martin

    2013-04-01

    Matrix effects on electrospray ionization were investigated for plasma samples analysed by hydrophilic interaction chromatography (HILIC) in gradient elution mode, and HILIC columns of different chemistries were tested for separation of plasma components and model analytes. By combining mass spectral data with post-column infusion traces, the following components of protein-precipitated plasma were identified and found to have a significant effect on ionization: urea, creatinine, phosphocholine, lysophosphocholine, sphingomyelin, sodium ion, chloride ion, choline and proline betaine. The observed effect on ionization was both matrix-component and analyte dependent. The separation of identified plasma components and model analytes on eight columns was compared, using pair-wise linear correlation analysis and principal component analysis (PCA). Large changes in selectivity could be obtained by change of column, while smaller changes were seen when the mobile phase buffer was changed from ammonium formate pH 3.0 to ammonium acetate pH 4.5. While results from PCA and linear correlation analysis were largely in accord, linear correlation analysis was judged to be more straightforward to carry out and interpret.

  6. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  7. Effect of removing the common mode errors on linear regression analysis of noise amplitudes in position time series of a regional GPS network & a case study of GPS stations in Southern California

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye

    2018-05-01

    The analysis of the correlations between the noise in different components of GPS stations is valuable to those trying to obtain more accurate uncertainties for station velocities. Previous research into noise in GPS position time series focused mainly on single-component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations in Southern California, using a combination of white noise, flicker noise, and random walk noise. The results show that, on the one hand, there are above-moderate degrees of correlation between the white noise amplitude vectors in all components of the stations both before and after removal of the CME, while the correlations between flicker noise amplitude vectors in the horizontal and vertical components are enhanced from uncorrelated to moderately correlated by removing the CME. On the other hand, the significance tests show that all of the obtained linear regression equations, each representing a unique function of the noise amplitude in any two components, are of practical value after removing the CME. From the noise amplitude estimates in two components and the linear regression equations, more accurate noise amplitudes can be acquired in those two components.

  8. Modeling hardwood crown radii using circular data analysis

    Treesearch

    Paul F. Doruska; Hal O. Liechty; Douglas J. Marshall

    2003-01-01

    Cylindrical data are bivariate data composed of a linear and an angular component. One can use uniform, first-order (one maximum and one minimum) or second-order (two maxima and two minima) models to relate the linear component to the angular component. Crown radii can be treated as cylindrical data when the azimuths at which the radii are measured are also recorded....

  9. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…

  10. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  11. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. T. Clark; M. J. Russell; R. E. Spears

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.

  12. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may essentially be linear when compared to the total assembled system. However, the joining of these linear subsystems using highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  13. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  14. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used to diagnose multicollinearity, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster, and accurate statistical analysis is achieved through principal component regression with SPSS.
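    As a point of reference for the workflow described, here is a minimal principal component regression sketch in Python rather than SPSS; the collinear data and the number of retained components are assumptions.

    ```python
    # Sketch: principal component regression (PCR) to mitigate multicollinearity.
    # The pipeline standardises predictors, keeps a few principal components
    # (the number retained here is an assumption), and regresses y on the scores.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)        # nearly collinear with x1
    x3 = rng.normal(size=n)
    X = np.column_stack([x1, x2, x3])
    y = 2 * x1 + 0.5 * x3 + rng.normal(scale=0.1, size=n)

    pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
    pcr.fit(X, y)
    print("R^2 of PCR fit:", pcr.score(X, y))

    # Back-transform the component coefficients to the original variable space.
    scaler, pca, ols = pcr[0], pcr[1], pcr[2]
    beta = pca.components_.T @ ols.coef_ / scaler.scale_
    print("back-transformed coefficients:", beta)
    ```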

  15. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  16. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
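    The central device in this record (and in the PubMed Central copy that follows) is re-expressing the functional linear model as a standard linear model in the functional principal component scores, after which the classical tests apply. The sketch below illustrates that reduction with simulated curves, ordinary PCA as a stand-in for FPCA, and an assumed number of retained components; the overall F test then plays the role of the no-association test.

    ```python
    # Sketch: test "no association" in a functional linear model by projecting the
    # functional covariate onto its leading principal component scores and
    # applying a classical F test (data and number of components are assumptions).
    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n, p = 150, 50                              # subjects, grid points
    t = np.linspace(0, 1, p)
    # Smooth random curves plus noise as the functional covariate X(t).
    scores_true = rng.normal(size=(n, 3))
    basis = np.vstack([np.sin(np.pi * t), np.sin(2 * np.pi * t), np.sin(3 * np.pi * t)])
    Xfun = scores_true @ basis + 0.1 * rng.normal(size=(n, p))
    beta_t = np.sin(np.pi * t)                  # true coefficient function
    y = Xfun @ beta_t / p + 0.2 * rng.normal(size=n)

    # FPCA approximated here by ordinary PCA on the discretised curves.
    K = 4
    scores = PCA(n_components=K).fit_transform(Xfun)

    # Standard linear model in the scores; the overall F test is the test of
    # "no association" between the scalar response and the functional covariate.
    fit = sm.OLS(y, sm.add_constant(scores)).fit()
    print("F statistic:", fit.fvalue, " p-value:", fit.f_pvalue)
    ```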

  17. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  18. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in the determination of the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  19. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    PubMed

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition in which some of the loading factors (such as jaw and larynx position) were selected, while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.

  20. Multilayer neural networks for reduced-rank approximation.

    PubMed

    Diamantaras, K I; Kung, S Y

    1994-01-01

    This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem) the authors are able to find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation, therefore they cannot be applied to this case.
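    For the reduced-rank linear approximation that the paper unifies with PCA and Wiener filtering, a minimal numerical sketch is the classical rank-r least-squares solution: fit the full-rank map, then project the fitted outputs onto their leading r directions. The rank, dimensions, and data below are assumptions, and the sketch does not cover the non-invertible-autocorrelation case that is the paper's focus.

    ```python
    # Sketch: reduced-rank least-squares approximation of a multivariate linear map.
    # For an invertible input covariance, a two-layer linear network with r hidden
    # units converges to this solution; rank r and the data here are assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n, p, q, r = 500, 8, 6, 2
    X = rng.normal(size=(n, p))
    W_true = rng.normal(size=(p, q))
    Y = X @ W_true + 0.1 * rng.normal(size=(n, q))

    # Full-rank ordinary least squares, then project the fitted values onto their
    # leading r principal directions to obtain the rank-r solution.
    W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Yhat = X @ W_ols
    _, _, Vt = np.linalg.svd(Yhat, full_matrices=False)
    P_r = Vt[:r].T @ Vt[:r]                     # projector onto top-r output directions
    W_rr = W_ols @ P_r

    print("rank of reduced map:", np.linalg.matrix_rank(W_rr))
    print("residual, full rank:", np.linalg.norm(Y - X @ W_ols))
    print("residual, rank", r, ":", np.linalg.norm(Y - X @ W_rr))
    ```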

  1. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent-in the corresponding growth phase-both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  2. Advanced analysis technique for the evaluation of linear alternators and linear motors

    NASA Technical Reports Server (NTRS)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  3. New Representation of Bearings in LS-DYNA

    NASA Technical Reports Server (NTRS)

    Carney, Kelly S.; Howard, Samuel A.; Miller, Brad A.; Benson, David J.

    2014-01-01

    Non-linear, dynamic, finite element analysis is used in various engineering disciplines to evaluate high-speed, dynamic impact and vibration events. Some of these applications require connecting rotating to stationary components. For example, bird impacts on rotating aircraft engine fan blades are a common analysis performed using this type of analysis tool. Traditionally, rotating machines utilize some type of bearing to allow rotation in one degree of freedom while offering constraints in the other degrees of freedom. Most times, bearings are modeled simply as linear springs with rotation. This is a simplification that is not necessarily accurate under the conditions of high-velocity, high-energy, dynamic events such as impact problems. For this reason, it is desirable to utilize a more realistic non-linear force-deflection characteristic of real bearings to model the interaction between rotating and non-rotating components during dynamic events. The present work describes a rolling element bearing model developed for use in non-linear, dynamic finite element analysis. This rolling element bearing model has been implemented in LS-DYNA as a new element, *ELEMENT_BEARING.

  4. Multi-component separation and analysis of bat echolocation calls.

    PubMed

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.

  5. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959 summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.

  6. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  7. [Preliminary study on effective components of Tripterygium wilfordii for liver toxicity based on spectrum-effect correlation analysis].

    PubMed

    Zhao, Xiao-Mei; Pu, Shi-Biao; Zhao, Qing-Guo; Gong, Man; Wang, Jia-Bo; Ma, Zhi-Jie; Xiao, Xiao-He; Zhao, Kui-Jun

    2016-08-01

    In this paper, the spectrum-effect correlation analysis method was used to explore the main components of Tripterygium wilfordii responsible for liver toxicity, and to provide a reference for improving the quality control of T. wilfordii. The Chinese medicine T. wilfordii was taken as the study object; LC-Q-TOF-MS was used to characterize the chemical components in T. wilfordii samples from different areas, and the main components were initially identified with reference to the literature. With normal human hepatocytes (LO2 cell line) as the carrier, acetaminophen as the positive control, and cell inhibition rate as the test index, simple correlation analysis and multivariate linear correlation analysis were used to screen the main components of T. wilfordii for liver toxicity. As a result, 10 main components were identified, and the spectrum-effect correlation analysis showed that triptolide may be the toxic component, which is consistent with previous results in the traditional literature. Meanwhile, the multivariate linear correlation analysis indicated that tripterine and demethylzeylasteral may also contribute greatly to liver toxicity. T. wilfordii samples of different varieties or origins showed large differences in quality; T. wilfordii from southwest China showed lower liver toxicity, while samples from Hunan and Anhui provinces showed higher liver toxicity. This study provides data support for the further rational use of T. wilfordii and for research on its hepatotoxic ingredients. Copyright© by the Chinese Pharmaceutical Association.

  8. Wind modeling and lateral control for automatic landing

    NASA Technical Reports Server (NTRS)

    Holley, W. E.; Bryson, A. E., Jr.

    1975-01-01

    For the purposes of aircraft control system design and analysis, the wind can be characterized by a mean component which varies with height and by turbulent components which are described by the von Karman correlation model. The aircraft aerodynamic forces and moments depend linearly on uniform and gradient gust components obtained by averaging over the aircraft's length and span. The correlations of the averaged components are then approximated by the outputs of linear shaping filters forced by white noise. The resulting model of the crosswind shear and turbulence effects is used in the design of a lateral control system for the automatic landing of a DC-8 aircraft.

  9. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    PubMed

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machines' methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal component based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machines' methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). Relative mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, SVR and PC-linANN, respectively. The results showed the superiority of supervised learning machines' methods over principal component based methods. Besides, the results suggested that linANN is the method of choice for determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
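    As a generic illustration of the kind of model comparison reported (RMSEP of principal component based and learning-machine regressions on overlapped spectra), the sketch below fits PCR, PLS, and a linear SVR to synthetic two-component spectra; the simulated data, component counts, and SVR settings are assumptions, not the paper's calibration models.

    ```python
    # Sketch: compare PCR, PLS and linear SVR on synthetic overlapped "spectra"
    # using root-mean-squared error of prediction (all settings are assumptions).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import LinearSVR

    rng = np.random.default_rng(3)
    wl = np.linspace(300, 450, 120)              # "wavelengths"
    band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
    conc = rng.uniform(0.1, 1.0, size=(200, 2))  # analyte + degradant concentrations
    spectra = conc @ np.vstack([band(360, 15), band(370, 18)]) \
              + 0.01 * rng.normal(size=(200, wl.size))
    y = conc[:, 0]                               # predict the analyte

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, y, random_state=0)
    models = {
        "PCR": make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression()),
        "PLS": PLSRegression(n_components=5),
        "SVR": make_pipeline(StandardScaler(), LinearSVR(max_iter=20000)),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = np.ravel(model.predict(X_te))
        print(name, "RMSEP:", mean_squared_error(y_te, pred) ** 0.5)
    ```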

  10. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 3: Literature surveys and technical reports

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The technical effort and computer code developed during the first year are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis.

  11. Towards Solving the Mixing Problem in the Decomposition of Geophysical Time Series by Independent Component Analysis

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2000-01-01

    The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
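    The mixing problem and its resolution can be reproduced in a few lines when the signal really is a linear sum of independent nongaussian sources: PCA returns decorrelated but still mixed components, while ICA recovers the sources. The simulated sources and mixing matrix below are assumptions for illustration, not the geophysical data of the study.

    ```python
    # Sketch: PCA mixes two independent nongaussian sources that ICA recovers.
    # Sources and mixing matrix are simulated assumptions for illustration.
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 2000)
    s1 = np.sign(np.sin(3 * t))                 # square wave (nongaussian)
    s2 = rng.laplace(size=t.size)               # heavy-tailed noise source
    S = np.column_stack([s1, s2])
    A = np.array([[1.0, 0.6], [0.4, 1.0]])      # linear mixing
    X = S @ A.T

    pca_sources = PCA(n_components=2).fit_transform(X)
    ica_sources = FastICA(n_components=2, random_state=0).fit_transform(X)

    # Correlation of each recovered component with the true sources:
    for name, R in [("PCA", pca_sources), ("ICA", ica_sources)]:
        C = np.corrcoef(np.hstack([S, R]).T)[:2, 2:]
        print(name, "\n", np.round(np.abs(C), 2))
    ```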

  12. Comparative study on fast classification of brick samples by combination of principal component analysis and linear discriminant analysis using stand-off and table-top laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Vítková, Gabriela; Prokeš, Lubomír; Novotný, Karel; Pořízka, Pavel; Novotný, Jan; Všianský, Dalibor; Čelko, Ladislav; Kaiser, Jozef

    2014-11-01

    From a historical perspective, during archaeological excavations or restoration work on buildings and other structures built from bricks, it is important to determine, preferably in situ and in real time, the locality of the bricks' origin. Fast classification of bricks on the basis of Laser-Induced Breakdown Spectroscopy (LIBS) spectra is possible using multivariate statistical methods. A combination of principal component analysis (PCA) and linear discriminant analysis (LDA) was applied in this case. LIBS was used to classify 29 brick samples from 7 different localities. A comparative study using two different LIBS setups, stand-off and table-top, shows that stand-off LIBS has considerable potential for archaeological in-field measurements.
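    A generic sketch of the PCA-then-LDA classification step described above follows; the placeholder spectra, class structure, and number of retained components are assumptions rather than the LIBS data of the study.

    ```python
    # Sketch: classify spectra by locality with PCA for dimension reduction
    # followed by linear discriminant analysis (component count is an assumption).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n_per_class, n_channels, classes = 30, 200, 3
    # Placeholder "LIBS spectra": each locality gets a slightly shifted mean spectrum.
    X = np.vstack([rng.normal(loc=k * 0.3, scale=1.0, size=(n_per_class, n_channels))
                   for k in range(classes)])
    y = np.repeat(np.arange(classes), n_per_class)

    clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                        LinearDiscriminantAnalysis())
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```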

  13. Linear model describing three components of flow in karst aquifers using 18O data

    USGS Publications Warehouse

    Long, Andrew J.; Putnam, L.D.

    2004-01-01

    The stable isotope of oxygen, 18O, is used as a naturally occurring ground-water tracer. Time-series data for δ18O are analyzed to model the distinct responses and relative proportions of the conduit, intermediate, and diffuse flow components in karst aquifers. This analysis also describes mathematically the dynamics of the transient fluid interchange between conduits and diffusive networks. Conduit and intermediate flow are described by linear-systems methods, whereas diffuse flow is described by mass-balance methods. An automated optimization process estimates parameters of lognormal, Pearson type III, and gamma distributions, which are used as transfer functions in linear-systems analysis. Diffuse flow and mixing parameters also are estimated by these optimization methods. Results indicate the relative proximity of a well to a main conduit flowpath and can help to predict the movement and residence times of potential contaminants. The three-component linear model is applied to five wells, which respond to changes in the isotopic composition of point recharge water from a sinking stream in the Madison aquifer in the Black Hills of South Dakota. Flow velocities as much as 540 m/d and system memories of as much as 71 years are estimated by this method. Also, the mean, median, and standard deviation of traveltimes; time to peak response; and the relative fraction of flow for each of the three components are determined for these wells. This analysis infers that flow may branch apart and rejoin as a result of an anastomotic (or channeled) karst network.
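    The linear-systems part of this approach convolves the recharge δ18O signal with a parametric transfer function (lognormal, Pearson type III, or gamma) whose parameters are estimated by optimization. The sketch below shows only that convolution step for a gamma transfer function, with assumed parameters and a synthetic input series rather than the calibrated Madison aquifer values.

    ```python
    # Sketch: linear-systems convolution of an input delta-18O series with a
    # gamma-distribution transfer function (parameters here are assumptions,
    # not the calibrated values for the Madison aquifer wells).
    import numpy as np
    from scipy.stats import gamma

    dt = 1.0                                    # months per step
    t = np.arange(0, 240, dt)                   # 20-year response window
    shape, scale = 2.0, 12.0                    # assumed gamma parameters (months)
    h = gamma.pdf(t, a=shape, scale=scale)
    h /= h.sum() * dt                           # normalise to unit area

    rng = np.random.default_rng(6)
    months = np.arange(600)
    d18o_in = (-16.0 + 1.5 * np.sin(2 * np.pi * months / 12)
               + 0.3 * rng.normal(size=months.size))

    # Output series at a well = convolution of the recharge signal with h.
    d18o_out = np.convolve(d18o_in - d18o_in.mean(), h, mode="full")[:months.size] * dt
    d18o_out += d18o_in.mean()
    print("input std :", np.round(d18o_in.std(), 3))
    print("output std:", np.round(d18o_out.std(), 3))   # damped, lagged response
    ```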

  14. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses.

    PubMed

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly distinguish the samples based on the differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated with the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation and the composition similarities have been calculated, LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard.

  15. Long Term Precipitation Pattern Identification and Derivation of Non Linear Precipitation Trend in a Catchment using Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Poornima; Jothiprakash, Vinayakam

    2017-04-01

    Precipitation is the major component of the hydrologic cycle. Awareness not only of the total amount of rainfall in a catchment, but also of the pattern of its spatial and temporal distribution, is important for managing water resources systems efficiently. The trend is the long-term direction of a time series; it determines its overall pattern. Singular Spectrum Analysis (SSA) is a time series analysis technique that decomposes a time series into small components (eigentriples). This property of SSA has been utilized to extract the trend component of the rainfall time series. In order to derive the trend from the rainfall time series, the components corresponding to the trend must be selected from the eigentriples. For this purpose, periodogram analysis of the eigentriples is proposed to be coupled with SSA in the present study. Seasonal England and Wales Precipitation (EWP) data for the period 1766-2013 have been analyzed and a non-linear trend has been derived from the precipitation data. In order to compare the performance of SSA in deriving the trend component, the Mann-Kendall (MK) test is also used to detect trends in the EWP seasonal series and the results are compared. The results show that the MK test can detect the presence of a positive or negative trend at a given significance level, whereas the proposed SSA methodology can extract the non-linear trend present in the rainfall series along with its shape. The comparison of both methodologies, along with the results, will be discussed further in the presentation.
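    The SSA step itself (embed the series in a trajectory matrix, take the SVD, and reconstruct selected eigentriples by diagonal averaging) can be sketched compactly. The window length, the choice of the first eigentriple as the trend, and the synthetic series below are assumptions; in the study the selection is guided by periodogram analysis of the eigentriples.

    ```python
    # Sketch: basic singular spectrum analysis (SSA) trend extraction.
    # Window length, the component kept as "trend", and the series are assumptions.
    import numpy as np

    def ssa_reconstruct(x, window, components):
        n = x.size
        k = n - window + 1
        # Trajectory (Hankel) matrix of lagged copies of the series.
        traj = np.column_stack([x[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(traj, full_matrices=False)
        # Rank-one pieces for the selected eigentriples, then diagonal averaging.
        approx = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
        recon = np.zeros(n)
        counts = np.zeros(n)
        for col in range(k):
            recon[col:col + window] += approx[:, col]
            counts[col:col + window] += 1
        return recon / counts

    rng = np.random.default_rng(7)
    t = np.arange(400)
    series = 0.01 * t + np.sin(2 * np.pi * t / 50) + 0.5 * rng.normal(size=t.size)
    trend = ssa_reconstruct(series, window=100, components=[0])
    print("last trend values:", np.round(trend[-5:], 2))
    ```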

  16. Relationship between rice yield and climate variables in southwest Nigeria using multiple linear regression and support vector machine analysis

    NASA Astrophysics Data System (ADS)

    Oguntunde, Philip G.; Lischeid, Gunnar; Dietrich, Ottfried

    2018-03-01

    This study examines the variations of climate variables and rice yield and quantifies the relationships among them using multiple linear regression, principal component analysis, and support vector machine (SVM) analysis in southwest Nigeria. The climate and yield data used covered a period of 36 years, from 1980 to 2015. Similar to the observed decrease (P < 0.001) in rice yield, pan evaporation, solar radiation, and wind speed declined significantly. Eight principal components exhibited an eigenvalue > 1 and explained 83.1% of the total variance of the predictor variables. The SVM regression function using the scores of the first principal component explained about 75% of the variance in rice yield data, and linear regression about 64%. SVM regression between annual solar radiation values and yield explained 67% of the variance. Only the first component of the principal component analysis (PCA) exhibited a clear long-term trend and sometimes short-term variance similar to that of rice yield. Short-term fluctuations of the scores of PC1 are closely coupled to those of rice yield during the 1986-1993 and the 2006-2013 periods, thereby revealing the inter-annual sensitivity of rice production to climate variability. Solar radiation stands out as the climate variable with the strongest influence on rice yield, and the influence was especially strong during the monsoon and post-monsoon periods, which correspond to the vegetative, booting, flowering, and grain-filling stages in the study area. The outcome is expected to provide a more in-depth, region-specific climate-rice linkage for screening better cultivars that can respond positively to future climate fluctuations, as well as information that may help optimize planting dates for improved radiation use efficiency in the study area.

  17. An accurate nonlinear finite element analysis and test correlation of a stiffened composite wing panel

    NASA Astrophysics Data System (ADS)

    Davis, D. D., Jr.; Krishnamurthy, T.; Stroud, W. J.; McCleary, S. L.

    1991-05-01

    State-of-the-art nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis.

  18. An accurate nonlinear finite element analysis and test correlation of a stiffened composite wing panel

    NASA Technical Reports Server (NTRS)

    Davis, D. D., Jr.; Krishnamurthy, T.; Stroud, W. J.; Mccleary, S. L.

    1991-01-01

    State-of-the-art nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis.

  19. Component-specific modeling

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1985-01-01

    A series of interdisciplinary modeling and analysis techniques that were specialized to address three specific hot section components are presented. These techniques will incorporate data as well as theoretical methods from many diverse areas including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air cooled turbine blades, and air cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress and strain histories throughout a complete flight mission.

  20. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
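    The estimation problem described, recovering the underlying rates of a multi-component linear model from Poisson counts, can be sketched as a Poisson maximum-likelihood fit. The component shapes, exposure, and true fluxes below are assumptions for illustration, not the gamma-ray instrument model of the report.

    ```python
    # Sketch: maximum-likelihood fit of a multi-component linear model to Poisson
    # counts (component shapes, binning and true rates are assumptions).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)
    n_bins = 64
    energy = np.linspace(0, 1, n_bins)
    # Two spectral components: a flat background and a line-like feature.
    components = np.vstack([np.ones(n_bins),
                            np.exp(-0.5 * ((energy - 0.4) / 0.05) ** 2)])
    true_flux = np.array([3.0, 10.0])
    counts = rng.poisson(true_flux @ components)

    def neg_log_like(flux):
        rate = np.clip(flux @ components, 1e-12, None)
        return np.sum(rate - counts * np.log(rate))

    fit = minimize(neg_log_like, x0=np.array([1.0, 1.0]),
                   bounds=[(0, None), (0, None)])
    print("estimated fluxes:", np.round(fit.x, 2))
    ```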

  1. Linearized blade row compression component model. Stability and frequency response analysis of a J85-13 compressor

    NASA Technical Reports Server (NTRS)

    Tesch, W. A.; Moszee, R. H.; Steenken, W. G.

    1976-01-01

    NASA-developed stability and frequency response analysis techniques were applied to a dynamic blade row compression component stability model to provide a more economic approach to surge line and frequency response determination than that provided by time-dependent methods. This blade row model was linearized and the Jacobian matrix was formed. The clean-inlet-flow stability characteristics of the compressors of two J85-13 engines were predicted by applying the alternate Routh-Hurwitz stability criterion to the Jacobian matrix. The predicted surge line agreed with the clean-inlet-flow surge line predicted by the time-dependent method to a high degree except for one engine at 94% corrected speed. No satisfactory explanation of this discrepancy was found. The frequency response of the linearized system was determined by evaluating its Laplace transfer function. The results of the linearized-frequency-response analysis agree with the time-dependent results when the time-dependent inlet total-pressure and exit-flow function amplitude boundary conditions are less than 1 percent and 3 percent, respectively. The stability analysis technique was extended to a two-sector parallel compressor model with and without interstage crossflow and predictions were carried out for total-pressure distortion extents of 180 deg, 90 deg, 60 deg, and 30 deg.

  2. Application of linear mixed-effects model with LASSO to identify metal components associated with cardiac autonomic responses among welders: a repeated measures study

    PubMed Central

    Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C

    2017-01-01

    Background Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, –0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
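    A hedged sketch of the selection-then-refit idea is given below as a two-stage approximation (LASSO screening of the metal exposures, then a linear mixed-effects refit with a per-subject random intercept); the penalized mixed model in the paper fits both parts jointly, and the simulated exposures, outcome, and penalty here are assumptions rather than the welder data.

    ```python
    # Sketch: two-stage approximation of LASSO variable selection followed by a
    # linear mixed-effects refit with a per-subject random intercept.
    # Data, penalty and the selected set are assumptions for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(9)
    subjects, visits, n_metals = 54, 4, 16
    n = subjects * visits
    metals = pd.DataFrame(rng.lognormal(size=(n, n_metals)),
                          columns=[f"metal_{i}" for i in range(n_metals)])
    subject = np.repeat(np.arange(subjects), visits)
    # Outcome (e.g. a cardiac autonomic index) driven by two metals plus subject effects.
    y = (-0.6 * metals["metal_0"] + 0.4 * metals["metal_3"]
         + rng.normal(size=subjects)[subject] + rng.normal(scale=0.5, size=n))

    # Stage 1: LASSO with a cross-validated penalty screens candidate metals.
    Xs = StandardScaler().fit_transform(metals)
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
    selected = [c for c, b in zip(metals.columns, lasso.coef_) if abs(b) > 1e-6]
    print("selected:", selected)

    # Stage 2: mixed-effects refit of the selected metals only.
    data = metals[selected].copy()
    data["y"], data["subject"] = y, subject
    formula = "y ~ " + " + ".join(selected)
    fit = smf.mixedlm(formula, data, groups=data["subject"]).fit()
    print(fit.summary())
    ```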

  3. MRM assay for quantitation of complement components in human blood plasma - a feasibility study on multiple sclerosis.

    PubMed

    Rezeli, Melinda; Végvári, Akos; Ottervald, Jan; Olsson, Tomas; Laurell, Thomas; Marko-Varga, György

    2011-12-10

    As a proof-of-principle study, a multiple reaction monitoring (MRM) assay was developed for quantitation of proteotypic peptides, representing seven plasma proteins associated with inflammation (complement components and C-reactive protein). The assay development and the sample analysis were performed on a linear ion trap mass spectrometer. We were able to quantify 5 of the 7 target proteins in depleted plasma digests with reasonable reproducibility over a 2 orders of magnitude linear range (RSD≤25%). The assay panel was utilized for the analysis of a small multiple sclerosis sample cohort with 10 diseased and 8 control patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Dynamic analysis of a flexible spacecraft with rotating components. Volume 1: Analytical developments

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, A. D.; Park, A. C.

    1975-01-01

    Analytical procedures and digital computer code are presented for the dynamic analysis of a flexible spacecraft with rotating components. Topics considered include: (1) nonlinear response in the time domain, and (2) linear response in the frequency domain. The spacecraft is assumed to consist of an assembly of connected rigid or flexible subassemblies. The total system is not restricted to a topological connection arrangement and may be acting under the influence of passive or active control systems and external environments. The analytics and associated digital code provide the user with the capability to establish spacecraft system nonlinear total response for specified initial conditions, linear perturbation response about a calculated or specified nominal motion, general frequency response and graphical display, and spacecraft system stability analysis.

  5. [A novel method of multi-channel feature extraction combining multivariate autoregression and multiple-linear principal component analysis].

    PubMed

    Wang, Jinjia; Zhang, Yanna

    2015-02-01

    Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines the multivariate autoregressive (MVAR) model with multiple-linear principal component analysis (MPCA), used for the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. Firstly, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality using MPCA. Finally, we recognized brain signals with a Bayes classifier. The key innovation of our investigation is that we extended the traditional single-channel feature extraction method to the multi-channel case. We then carried out experiments using the data groups IV-III and IV-I. The experimental results showed that the method proposed in this paper is feasible.

  6. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    PubMed

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, in fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its use as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a pure model-based method when estimating the activation induced by each task as well as by both tasks.

  7. Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
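
    A minimal sketch of the variance-inflation-factor check described above, assuming NumPy; the load schedule and the nearly dependent column are invented for illustration, and the VIF is computed from auxiliary regressions rather than with any balance-specific software:

        import numpy as np

        def max_vif(X):
            # X: (n_points, n_components) matrix of applied loads or bridge outputs.
            # Returns the largest variance inflation factor over the columns.
            Xc = (X - X.mean(axis=0)) / X.std(axis=0)
            vifs = []
            for j in range(Xc.shape[1]):
                y = Xc[:, j]
                others = np.delete(Xc, j, axis=1)
                beta, *_ = np.linalg.lstsq(others, y, rcond=None)
                r2 = 1.0 - (y - others @ beta).var() / y.var()
                vifs.append(1.0 / max(1.0 - r2, 1e-12))
            return max(vifs)

        rng = np.random.default_rng(1)
        loads = rng.standard_normal((200, 6))                        # hypothetical 6-component load schedule
        loads[:, 5] = loads[:, 0] + 0.05 * rng.standard_normal(200)  # nearly dependent sixth component
        print("max VIF:", round(max_vif(loads), 1), "(a unique mapping requires max VIF < 5)")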

  8. Analysis of complex elastic structures by a Rayleigh-Ritz component modes method using Lagrange multipliers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Klein, L. R.

    1974-01-01

    The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.

  9. Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization

    PubMed Central

    Lyu, Siwei; Simoncelli, Eero P.

    2011-01-01

    We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons. PMID:19191599
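
    A minimal sketch of radial gaussianization under the assumptions above (whitening followed by an empirical remapping of the radius to the chi distribution), assuming NumPy and SciPy; the elliptical source, sample size and dimensionality are illustrative:

        import numpy as np
        from scipy.stats import chi

        rng = np.random.default_rng(2)
        d, n = 2, 20000

        # Elliptically symmetric, non-Gaussian source: Gaussian samples with a common random scale
        x = rng.standard_normal((n, d)) * rng.exponential(1.0, size=(n, 1))

        # 1) Whiten to remove second-order (elliptical) structure
        evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
        w = (x - x.mean(0)) @ evecs @ np.diag(evals ** -0.5)

        # 2) Remap radii so their distribution matches that of a d-dimensional Gaussian
        r = np.linalg.norm(w, axis=1)
        ranks = (np.argsort(np.argsort(r)) + 0.5) / n        # empirical radial CDF values in (0, 1)
        y = w * (chi.ppf(ranks, df=d) / r)[:, None]

        # Dependence between squared coordinates should drop to near zero after RG
        print("before RG:", round(np.corrcoef(w[:, 0] ** 2, w[:, 1] ** 2)[0, 1], 3))
        print("after  RG:", round(np.corrcoef(y[:, 0] ** 2, y[:, 1] ** 2)[0, 1], 3))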

  10. The influence of acceleration loading curve characteristics on traumatic brain injury.

    PubMed

    Post, Andrew; Blaine Hoshizaki, T; Gilchrist, Michael D; Brien, Susan; Cusimano, Michael D; Marshall, Shawn

    2014-03-21

    To prevent brain trauma, understanding the mechanism of injury is essential. Once the mechanism of brain injury has been identified, prevention technologies can be developed. The incidence of brain injury is linked to how the kinematics of a brain injury event affect the internal structures of the brain, so it is essential to describe how the characteristics of the linear and rotational acceleration influence specific traumatic brain injury lesions. The purpose of this study was therefore to examine the influence of the characteristics of linear and rotational acceleration pulses and how they account for the variance in predicting the outcome of TBI lesions, namely contusion, subdural hematoma (SDH), subarachnoid hemorrhage (SAH), and epidural hematoma (EDH), using principal components analysis (PCA). Monorail impacts were conducted to reconstruct the falls that caused the TBI lesions. From these reconstructions, the characteristics of the linear and rotational acceleration were determined and used in the PCA. The results indicated that peak resultant acceleration variables did not account for any of the variance in predicting TBI lesions. The majority of the variance was accounted for by the duration of the resultant and component linear and rotational acceleration. In addition, the components of the linear and rotational acceleration characteristics on the x, y, and z axes accounted for the majority of the remaining variance after duration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Extending the accuracy of the SNAP interatomic potential form

    NASA Astrophysics Data System (ADS)

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-06-01

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similar to artificial neural network potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting. The quality of this new potential form is measured through a robust cross-validation analysis.
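
    A minimal sketch of the quadratic extension, assuming NumPy and synthetic data: augmenting each atom's bispectrum vector with all pairwise products keeps the model linear in its coefficients, so the same least-squares fit used for the linear form still applies.

        import numpy as np
        from itertools import combinations_with_replacement

        rng = np.random.default_rng(3)
        n_atoms, n_bispec = 500, 5                       # toy sizes; real SNAP uses many more components
        B = rng.standard_normal((n_atoms, n_bispec))     # stand-in per-atom bispectrum components

        def quadratic_features(B):
            quad = np.stack([B[:, i] * B[:, j]
                             for i, j in combinations_with_replacement(range(B.shape[1]), 2)], axis=1)
            return np.hstack([np.ones((B.shape[0], 1)), B, quad])  # [constant, linear, quadratic] terms

        # Synthetic "QM" atom energies with a genuinely quadratic dependence plus noise
        true_lin = rng.standard_normal(n_bispec)
        E = 0.1 + B @ true_lin + 0.05 * B[:, 0] * B[:, 1] + 0.01 * rng.standard_normal(n_atoms)

        coef, *_ = np.linalg.lstsq(quadratic_features(B), E, rcond=None)
        print("fitted B0*B1 cross term:", round(coef[1 + n_bispec + 1], 3), "(target 0.05)")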

  12. Independent component analysis for automatic note extraction from musical trills

    NASA Astrophysics Data System (ADS)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
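
    A minimal sketch of the comparison, assuming scikit-learn, with two synthetic periodic "notes" standing in for the recorded piano sources:

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        t = np.linspace(0, 1, 4000)
        S = np.c_[np.sign(np.sin(2 * np.pi * 7 * t)),    # toy "note" 1 (square wave)
                  np.sin(2 * np.pi * 11 * t + 0.4)]      # toy "note" 2
        X = S @ np.array([[0.7, 0.4], [0.3, 0.8]]).T     # two mixed "recordings"

        S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
        S_pca = PCA(n_components=2).fit_transform(X)

        def recovery(est, src):
            # best absolute correlation between each true source and any estimated component
            c = np.abs(np.corrcoef(np.c_[src, est], rowvar=False)[:2, 2:])
            return c.max(axis=1).round(2)

        print("ICA recovery:", recovery(S_ica, S))   # close to 1 for both sources
        print("PCA recovery:", recovery(S_pca, S))   # typically lower: decorrelation alone is not enough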

  13. Principle component analysis and linear discriminant analysis of multi-spectral autofluorescence imaging data for differentiating basal cell carcinoma and healthy skin

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.

    2016-09-01

    In the present paper, the ability to differentiate basal cell carcinoma (BCC) and healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup, which includes excitation and detection branches, was assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of spectral transparency of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup was used to study three samples of freshly excised BCC. PCA and LDA were implemented to analyze the multi-spectral fluorescence imaging data. The observed results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
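
    A minimal sketch of the analysis chain, assuming scikit-learn, in which each measurement contributes a four-band autofluorescence intensity vector that is compressed by PCA and then separated by LDA; the spectra and class means are invented:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        # Toy intensities at 400/450/500/550 nm for two tissue classes (arbitrary units)
        healthy = rng.normal([1.0, 0.8, 0.6, 0.5], 0.1, size=(300, 4))
        bcc = rng.normal([0.8, 0.9, 0.7, 0.4], 0.1, size=(300, 4))
        X = np.vstack([healthy, bcc])
        y = np.r_[np.zeros(300), np.ones(300)]

        model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))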

  14. Three dimensional radiative flow of magnetite-nanofluid with homogeneous-heterogeneous reactions

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed

    2018-03-01

    The present communication deals with the effects of homogeneous-heterogeneous reactions in the flow of a nanofluid over a non-linear stretching sheet. A water-based nanofluid containing magnetite nanoparticles is considered. Non-linear radiation and non-uniform heat sink/source effects are examined. The non-linear differential systems are computed by the optimal homotopy analysis method (OHAM). Convergent solutions of the nonlinear systems are established and the optimal values of the auxiliary variables are obtained. The impact of several non-dimensional parameters on the velocity components, temperature, and concentration fields is examined. Graphs are plotted for the analysis of the surface drag force and heat transfer rate.

  15. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses

    PubMed Central

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly distinguish the samples based on differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated to the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation by immediate detection and the composition similarities have been calculated, LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard. PMID:27529425

  16. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in the static as well as in the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition refers to identifying a person from facial images, and in some sense it resembles factor analysis, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images jointly in the space and frequency domains. The experimental results indicate that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
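
    A minimal sketch of the approach, assuming the PyWavelets and scikit-learn packages: a single-level 2-D wavelet decomposition keeps only the low-frequency LL subband of each face image, which is projected onto principal components and classified by nearest neighbour; the "faces" here are synthetic prototypes plus noise.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(6)
        n_ids, per_id, size = 5, 8, 32
        prototypes = rng.random((n_ids, size, size))                 # toy "identities"
        images = np.array([p + 0.15 * rng.standard_normal((size, size))
                           for p in prototypes for _ in range(per_id)])
        labels = np.repeat(np.arange(n_ids), per_id)

        ll = np.array([pywt.dwt2(img, "haar")[0].ravel() for img in images])  # LL subband features
        feats = PCA(n_components=10).fit_transform(ll)

        train = np.arange(len(labels)) % per_id != 0                 # hold out one image per identity
        clf = KNeighborsClassifier(n_neighbors=1).fit(feats[train], labels[train])
        print("held-out accuracy:", clf.score(feats[~train], labels[~train]))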

  17. Dynamics of short-pulse generation via spectral filtering from intensely excited gain-switched 1.55-μm distributed-feedback laser diodes.

    PubMed

    Chen, Shaoqiang; Yoshita, Masahiro; Sato, Aya; Ito, Takashi; Akiyama, Hidefumi; Yokoyama, Hiroyuki

    2013-05-06

    Picosecond-pulse-generation dynamics and pulse-width limiting factors via spectral filtering from intensely pulse-excited gain-switched 1.55-μm distributed-feedback laser diodes were studied. The spectral and temporal characteristics of the spectrally filtered pulses indicated that the short-wavelength component stems from the initial part of the gain-switched main pulse and has a nearly linear down-chirp of 5.2 ps/nm, whereas long-wavelength components include chirped pulse-lasing components and steady-state-lasing components. Rate-equation calculations with a model of linear change in refractive index with carrier density explained the major features of the experimental results. The analysis of the expected pulse widths with optimum spectral widths was also consistent with the experimental data.

  18. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spent before being admitted as an inpatient, before being sent home, and before being seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming the set of performance measures collected at a site into a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis and show that the independent components are more suitable for understanding the data sets through visualizations.
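
    A minimal sketch of this decomposition view, assuming scikit-learn, with the five observed measures generated as mixtures of two invented, heavy-tailed hospital-level sources:

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        rng = np.random.default_rng(7)
        n_hospitals = 3086
        sources = np.c_[rng.laplace(size=n_hospitals), rng.laplace(size=n_hospitals)]
        X = sources @ rng.random((5, 2)).T + 0.05 * rng.standard_normal((n_hospitals, 5))

        ica_comp = FastICA(n_components=2, random_state=0).fit_transform(X)
        pca_comp = PCA(n_components=2).fit_transform(X)

        # Absolute correlations of each generating source with the recovered components
        corr = np.abs(np.corrcoef(np.c_[sources, ica_comp, pca_comp], rowvar=False))
        print("ICA vs sources:\n", corr[:2, 2:4].round(2))
        print("PCA vs sources:\n", corr[:2, 4:6].round(2))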

  19. Assessment of Change in Green Infrastructure Components Using Morphological Spatial Pattern Analysis for the Conterminous United States

    EPA Science Inventory

    Green infrastructure is a widely used framework for conservation planning in the United States and elsewhere. The main components of green infrastructure are hubs and corridors. Hubs are large areas of natural vegetation, and corridors are linear features that connect hubs. W...

  20. Pattern classification of fMRI data: applications for analysis of spatially distributed cortical networks.

    PubMed

    Yourganov, Grigori; Schmah, Tanya; Churchill, Nathan W; Berman, Marc G; Grady, Cheryl L; Strother, Stephen C

    2014-08-01

    The field of fMRI data analysis is rapidly growing in sophistication, particularly in the domain of multivariate pattern classification. However, the interaction between the properties of the analytical model and the parameters of the BOLD signal (e.g. signal magnitude, temporal variance and functional connectivity) is still an open problem. We addressed this problem by evaluating a set of pattern classification algorithms on simulated and experimental block-design fMRI data. The set of classifiers consisted of linear and quadratic discriminants, a linear support vector machine, and linear and nonlinear Gaussian naive Bayes classifiers. For the linear discriminant, we used two methods of regularization: principal component analysis, and ridge regularization. The classifiers were used (1) to classify the volumes according to the behavioral task that was performed by the subject, and (2) to construct spatial maps that indicated the relative contribution of each voxel to classification. Our evaluation metrics were: (1) accuracy of out-of-sample classification and (2) reproducibility of spatial maps. In simulated data sets, we performed an additional evaluation of spatial maps with ROC analysis. We varied the magnitude, temporal variance and connectivity of the simulated fMRI signal and identified the optimal classifier for each simulated environment. Overall, the best performers were linear and quadratic discriminants (operating on principal components of the data matrix) and, in some rare situations, a nonlinear Gaussian naive Bayes classifier. The results from the simulated data were supported by within-subject analysis of experimental fMRI data collected in a study of aging. This is the first study that systematically characterizes the effect of interactions between the analysis model and signal parameters (such as magnitude, variance and correlation) on the performance of pattern classifiers for fMRI. Copyright © 2014 Elsevier Inc. All rights reserved.
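
    A minimal sketch comparing a few of the evaluated classifier families, assuming scikit-learn; the "volumes" are random vectors with a weak class-dependent signal, so only the relative behaviour of the models is illustrated:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.svm import LinearSVC
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(8)
        n_vols, n_vox = 120, 2000
        X = rng.standard_normal((n_vols, n_vox))
        y = np.repeat([0, 1], n_vols // 2)
        X[y == 1, :50] += 0.4                                  # weak task-related signal in 50 voxels

        models = {
            "PCA + LDA": make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis()),
            "linear SVM": LinearSVC(C=1.0),
            "Gaussian NB": GaussianNB(),
        }
        for name, model in models.items():
            print(name, cross_val_score(model, X, y, cv=5).mean().round(2))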

  1. Kinetics of the formation of chromosome aberrations in X-irradiated human lymphocytes: analysis by premature chromosome condensation with delayed fusion.

    PubMed

    Greinert, R; Detzler, E; Volkmer, B; Harder, D

    1995-11-01

    Human lymphocytes irradiated with graded doses of up to 5 Gy of 150 kV X rays were fused with mitotic CHO cells after delay times ranging from 0 to 14 h after irradiation. The yields of dicentrics seen under PCC conditions, using C-banding for centromere detection, and of excess acentric fragments observed in the PCC experiment were determined by image analysis. At 4 Gy the time course of the yield of dicentrics shows an early plateau for delay times up to 2 h, then an S-shaped rise and a final plateau which is reached after a delay time of about 8 to 10 h. Whereas the dose-yield curve measured at zero delay time is strictly linear, the shape of the curve obtained for 8 h delay time is linear-quadratic. The linear yield component, alpha D, is formed entirely in the fast process manifested in the early plateau, while component beta D2 is developed slowly in the subsequent hours. Analysis of the kinetics of the rise of the S-shaped curve for yield as a function of time leads to the postulate of an "intermediate product" of pairwise DNA lesion interaction, still fragile when subjected to the stress of PCC, but gradually processed into a stable dicentric chromosome. It is concluded that the observed difference in the kinetics of the alpha and beta components explains a number of earlier results, especially the disappearance of the beta component at high LET, and opens possibilities for chemical and physical modification of the beta component during the extended formation process after irradiation observed here.
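
    A short worked example of the linear-quadratic yield model described above, Y(D) = alpha*D + beta*D^2, assuming NumPy; the dicentric yields are invented for illustration only:

        import numpy as np

        doses = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])            # Gy
        yields_8h = np.array([0.0, 0.05, 0.14, 0.27, 0.44, 0.65])   # toy dicentrics per cell at 8 h delay

        A = np.c_[doses, doses ** 2]
        (alpha, beta), *_ = np.linalg.lstsq(A, yields_8h, rcond=None)
        print(f"alpha = {alpha:.3f} per Gy, beta = {beta:.3f} per Gy^2")
        # A fit to yields measured at zero delay would return beta close to 0, i.e. a purely linear curve.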

  2. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    NASA Astrophysics Data System (ADS)

    Thompson, A. P.; Swiler, L. P.; Trott, C. R.; Foiles, S. M.; Tucker, G. J.

    2015-03-01

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
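
    A minimal sketch of the linear fitting step, assuming NumPy and entirely synthetic data (not the LAMMPS implementation): per-configuration energies are modelled as sums of per-atom linear functions of bispectrum components, and the coefficients come from weighted least-squares regression.

        import numpy as np

        rng = np.random.default_rng(9)
        n_configs, n_atoms, n_bispec = 200, 16, 8

        B = rng.standard_normal((n_configs, n_atoms, n_bispec))     # stand-in bispectrum components
        true_beta = rng.standard_normal(n_bispec)
        E_qm = B.sum(axis=1) @ true_beta + 0.01 * rng.standard_normal(n_configs)  # toy "QM" energies

        X = B.sum(axis=1)                      # summing over atoms keeps the model linear in the coefficients
        w = rng.uniform(0.5, 2.0, n_configs)   # per-configuration weights (e.g. by group or trusted accuracy)
        beta_hat, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None], E_qm * np.sqrt(w), rcond=None)
        print("max coefficient error:", np.abs(beta_hat - true_beta).max().round(4))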

  3. Automatic classification of artifactual ICA-components for artifact removal in EEG signals.

    PubMed

    Winkler, Irene; Haufe, Stefan; Tangermann, Michael

    2011-08-02

    Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources, and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used different channel setups and new subjects. Based on six features only, the optimized linear classifier performed on par with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable to different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data of different EEG studies.

  4. Missing Data Treatments at the Second Level of Hierarchical Linear Models

    ERIC Educational Resources Information Center

    St. Clair, Suzanne W.

    2011-01-01

    The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed-effects and variance components for data missing at the second level of an hierarchical linear model (HLM) model across 24 different study conditions. Variables manipulated in the analysis included, (a) number of Level-2 variables with missing…

  5. Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.

    ERIC Educational Resources Information Center

    Alston, Richard M.; Chi, Wan Fu

    1989-01-01

    Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…

  6. Double Linear Damage Rule for Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Halford, G.; Manson, S.

    1985-01-01

    The Double Linear Damage Rule (DLDR) is a method for use by structural designers to determine fatigue-crack-initiation life when a structure is subjected to unsteady, variable-amplitude cyclic loadings. The method calculates, in advance of service, how many loading cycles can be imposed on a structural component before a macroscopic crack initiates. The approach is intended for use in the design of high-performance systems and for incorporation into design handbooks and codes.

  7. Rotation of EOFs by the Independent Component Analysis: Towards A Solution of the Mixing Problem in the Decomposition of Geophysical Time Series

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2001-01-01

    The Independent Component Analysis is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. This rotation uses no localization criterion like other Rotation Techniques (RT), only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution seems to be able to solve the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.

  8. Lattice Independent Component Analysis for Mobile Robot Localization

    NASA Astrophysics Data System (ADS)

    Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz

    This paper introduces an approach to appearance based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be Affine Independent, and therefore candidates to be the endmembers of the data. Selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show on a sample path experiment that our approach can recognise the localization of the robot and we compare the results with the Independent Component Analysis (ICA).

  9. Equilibrium Phase Behavior of the Square-Well Linear Microphase-Forming Model.

    PubMed

    Zhuang, Yuan; Charbonneau, Patrick

    2016-07-07

    We have recently developed a simulation approach to calculate the equilibrium phase diagram of particle-based microphase formers. Here, this approach is used to calculate the phase behavior of the square-well linear model for different strengths and ranges of the linear long-range repulsive component. The results are compared with various theoretical predictions for microphase formation. The analysis further allows us to better understand the mechanism for microphase formation in colloidal suspensions.

  10. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  11. [Simultaneous determination of five main index components and specific chromatograms analysis in Xiaochaihu granules].

    PubMed

    Zhuang, Yan-Shuang; Cai, Hao; Liu, Xiao; Cai, Bao-Chang

    2012-01-01

    Reversed phase high performance liquid chromatography with a diode array detector was employed for the simultaneous determination of five main index components and specific chromatograms analysis in Xiaochaihu granules, with a linear gradient elution of acetonitrile-water (containing 0.1% phosphoric acid) as mobile phase. The results showed that the five main index components (baicalin, baicalein, wogonoside, wogonin, enoxolone) were well separated under the analytical conditions. The linear ranges of the five components were 0.518-16.576, 0.069-2.197, 0.167-5.333, 0.009-0.297 and 0.006-0.270 mg x g(-1), respectively. The correlation coefficients were 0.9999, and the average recoveries ranged from 95% to 105%. Twelve common peaks were selected for the specific chromatograms of Xiaochaihu granules, with baicalin as the reference peak. There were good similarities between the reference and the ten batches of samples; the similarity coefficients were no less than 0.9. The analytical method established is highly sensitive with strong specificity, and it can be used efficiently in the quality control of Xiaochaihu granules.

  12. Principal components analysis in clinical studies.

    PubMed

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity in regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
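
    The tutorial itself works in the R environment; a minimal analogous sketch in Python, assuming scikit-learn and an invented pair of nearly collinear covariates, illustrates the same idea of regressing on principal components instead of the raw variables:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(10)
        n = 500
        x1 = rng.standard_normal(n)
        x2 = x1 + 0.05 * rng.standard_normal(n)          # nearly collinear with x1
        X = np.c_[x1, x2]
        y = 2.0 * x1 + 1.0 * x2 + rng.standard_normal(n)

        pcs = PCA(n_components=2).fit_transform(X)
        print("correlation of raw covariates:", np.corrcoef(x1, x2)[0, 1].round(3))
        print("correlation of PCs:", np.corrcoef(pcs[:, 0], pcs[:, 1])[0, 1].round(3))
        print("R^2 of regression on PCs:", LinearRegression().fit(pcs, y).score(pcs, y).round(3))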

  13. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates to be known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087

  14. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates to be known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  15. Qualitative and Quantitative Analysis of Volatile Components of Zhengtian Pills Using Gas Chromatography Mass Spectrometry and Ultra-High Performance Liquid Chromatography.

    PubMed

    Liu, Cui-Ting; Zhang, Min; Yan, Ping; Liu, Hai-Chan; Liu, Xing-Yun; Zhan, Ruo-Ting

    2016-01-01

    Zhengtian pills (ZTPs) are a traditional Chinese medicine (TCM) commonly used to treat headaches. Volatile components of ZTPs extracted by ethyl acetate with an ultrasonic method were analyzed by gas chromatography mass spectrometry (GC-MS). Twenty-two components were identified, accounting for 78.884% of the total volatile oil components. The three main volatile components, protocatechuic acid, ferulic acid, and ligustilide, were simultaneously determined using ultra-high performance liquid chromatography coupled with diode array detection (UHPLC-DAD). Baseline separation was achieved on an XB-C18 column with linear gradient elution of methanol-0.2% acetic acid aqueous solution. The UHPLC-DAD method provided good linearity (R² ≥ 0.9992), precision (RSD < 3%), accuracy (100.68-102.69%), and robustness. The UHPLC-DAD/GC-MS method was successfully utilized to analyze the volatile components protocatechuic acid, ferulic acid, and ligustilide in 13 batches of ZTPs, and is suitable for discrimination and quality assessment of ZTPs.

  16. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  17. Detailed analysis and test correlation of a stiffened composite wing panel

    NASA Technical Reports Server (NTRS)

    Davis, D. Dale, Jr.

    1991-01-01

    Nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings supplied by the Bell Helicopter Textron Corporation, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain (ANS) elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain displacements relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis. Strain predictions from both the linear and nonlinear stress analyses are shown to compare well with experimental data up through the Design Ultimate Load (DUL) of the panel. However, due to the extreme nonlinear response of the panel, the linear analysis was not accurate at loads above the DUL. The nonlinear analysis more accurately predicted the strain at high values of applied load, and even predicted complicated nonlinear response characteristics, such as load reversals, at the observed failure load of the test panel. In order to understand the failure mechanism of the panel, buckling and first ply failure analyses were performed. The buckling load was 17 percent above the observed failure load while first ply failure analyses indicated significant material damage at and below the observed failure load.

  18. Discovery of Empirical Components by Information Theory

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-VA-TR-2016-0289. Discovery of Empirical Components by Information Theory. Amit Singer, Trustees of Princeton University, 1 Nassau Hall. Dates covered: 15 Feb 2013 to 14 Feb 2016. Abstract fragment: the methods draw not only from traditional linear-algebra-based numerical analysis and approximation theory, but also from information theory and graph theory.

  19. Mid-frequency Band Dynamics of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.; Adams, Douglas S.

    2004-01-01

    High and low intensity dynamic environments experienced by a spacecraft during launch and on-orbit operations, respectively, induce structural loads and motions, which are difficult to reliably predict. Structural dynamics in low- and mid-frequency bands are sensitive to component interface uncertainty and non-linearity as evidenced in laboratory testing and flight operations. Analytical tools for prediction of linear system response are not necessarily adequate for reliable prediction of mid-frequency band dynamics and analysis of measured laboratory and flight data. A new MATLAB toolbox, designed to address the key challenges of mid-frequency band dynamics, is introduced in this paper. Finite-element models of major subassemblies are defined following rational frequency-wavelength guidelines. For computational efficiency, these subassemblies are described as linear, component mode models. The complete structural system model is composed of component mode subassemblies and linear or non-linear joint descriptions. Computation and display of structural dynamic responses are accomplished employing well-established, stable numerical methods, modern signal processing procedures and descriptive graphical tools. Parametric sensitivity and Monte-Carlo based system identification tools are used to reconcile models with experimental data and investigate the effects of uncertainties. Models and dynamic responses are exported for employment in applications, such as detailed structural integrity and mechanical-optical-control performance analyses.

  20. Extending the accuracy of the SNAP interatomic potential form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Mitchell A.; Thompson, Aidan P.

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.

  1. Extending the accuracy of the SNAP interatomic potential form

    DOE PAGES

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-03-28

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.

  2. Alignment of the Stanford Linear Collider Arcs: Concepts and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitthan, R.; Bell, B.; Friedsam, H.

    1987-02-01

    The alignment of the Arcs for the Stanford Linear Collider at SLAC has posed problems in accelerator survey and alignment not encountered before. These problems come less from the tight tolerances of 0.1 mm, although reaching such a tight statistically defined accuracy in a controlled manner is difficult enough, but from the absence of a common reference plane for the Arcs. Traditional circular accelerators, including HERA and LEP, have been designed in one plane referenced to local gravity. For the SLC Arcs no such single plane exists. Methods and concepts developed to solve these and other problems, connected with the unique design of SLC, range from the first use of satellites for accelerator alignment, use of electronic laser theodolites for placement of components, computer control of the manual adjustment process, complete automation of the data flow incorporating the most advanced concepts of geodesy, strict separation of survey and alignment, to linear principal component analysis for the final statistical smoothing of the mechanical components.

  3. Searching for the main anti-bacterial components in artificial Calculus bovis using UPLC and microcalorimetry coupled with multi-linear regression analysis.

    PubMed

    Zang, Qing-Ce; Wang, Jia-Bo; Kong, Wei-Jun; Jin, Cheng; Ma, Zhi-Jie; Chen, Jing; Gong, Qian-Feng; Xiao, Xiao-He

    2011-12-01

    The fingerprints of artificial Calculus bovis extracts from different solvents were established by ultra-performance liquid chromatography (UPLC) and the anti-bacterial activities of artificial C. bovis extracts on Staphylococcus aureus (S. aureus) growth were studied by microcalorimetry. The UPLC fingerprints were evaluated using hierarchical clustering analysis. Some quantitative parameters obtained from the thermogenic curves of S. aureus growth affected by artificial C. bovis extracts were analyzed using principal component analysis. The spectrum-effect relationships between UPLC fingerprints and anti-bacterial activities were investigated using multi-linear regression analysis. The results showed that peak 1 (taurocholate sodium), peak 3 (unknown compound), peak 4 (cholic acid), and peak 6 (chenodeoxycholic acid) are more significant than the other peaks with the standard parameter estimate 0.453, -0.166, 0.749, 0.025, respectively. So, compounds cholic acid, taurocholate sodium, and chenodeoxycholic acid might be the major anti-bacterial components in artificial C. bovis. Altogether, this work provides a general model of the combination of UPLC chromatography and anti-bacterial effect to study the spectrum-effect relationships of artificial C. bovis extracts, which can be used to discover the main anti-bacterial components in artificial C. bovis or other Chinese herbal medicines with anti-bacterial effects. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
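
    A minimal sketch of the spectrum-effect step, assuming NumPy: an anti-bacterial response is regressed on standardized fingerprint peak areas, and the standardized coefficients rank the peaks' contributions. The peak areas, effect values and peak labels are invented, not the actual C. bovis measurements.

        import numpy as np

        rng = np.random.default_rng(11)
        n_samples, n_peaks = 20, 6
        peaks = rng.random((n_samples, n_peaks))                    # peak areas per extract
        effect = 0.45 * peaks[:, 0] + 0.75 * peaks[:, 3] + 0.03 * rng.standard_normal(n_samples)

        Z = (peaks - peaks.mean(0)) / peaks.std(0)                  # standardized predictors
        z_effect = (effect - effect.mean()) / effect.std()
        coef, *_ = np.linalg.lstsq(Z, z_effect, rcond=None)
        for k, c in enumerate(coef, start=1):
            print(f"peak {k}: standardized estimate {c:+.3f}")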

  4. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Aidan P.; Swiler, Laura P.; Trott, Christian R.

    2015-03-15

    Here, we present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.

  5. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, A.P., E-mail: athomps@sandia.gov; Swiler, L.P., E-mail: lpswile@sandia.gov; Trott, C.R., E-mail: crtrott@sandia.gov

    2015-03-15

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.

  6. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or they may not even be applicable. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that the proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method and the results are encouraging.

  7. Deep Independence Network Analysis of Structural Brain Imaging: Application to Schizophrenia

    PubMed Central

    Castro, Eduardo; Hjelm, R. Devon; Plis, Sergey M.; Dinh, Laurent; Turner, Jessica A.; Calhoun, Vince D.

    2016-01-01

    Linear independent component analysis (ICA) is a standard signal processing technique that has been extensively used on neuroimaging data to detect brain networks with coherent brain activity (functional MRI) or covarying structural patterns (structural MRI). However, its formulation assumes that the measured brain signals are generated by a linear mixture of the underlying brain networks and this assumption limits its ability to detect the inherent nonlinear nature of brain interactions. In this paper, we introduce nonlinear independent component estimation (NICE) to structural MRI data to detect abnormal patterns of gray matter concentration in schizophrenia patients. For this biomedical application, we further addressed the issue of model regularization of nonlinear ICA by performing dimensionality reduction prior to NICE, together with an appropriate control of the complexity of the model and the usage of a proper approximation of the probability distribution functions of the estimated components. We show that our results are consistent with previous findings in the literature, but we also demonstrate that the incorporation of nonlinear associations in the data enables the detection of spatial patterns that are not identified by linear ICA. Specifically, we show networks including basal ganglia, cerebellum and thalamus that show significant differences in patients versus controls, some of which show distinct nonlinear patterns. PMID:26891483

  8. Stability analysis of the Peregrine solution via squared eigenfunctions

    NASA Astrophysics Data System (ADS)

    Schober, C. M.; Strawn, M.

    2017-10-01

    A preliminary numerical investigation involving ensembles of perturbed initial data for the Peregrine soliton (the lowest order rational solution of the nonlinear Schrödinger equation) indicates that it is unstable [16]. In this paper we analytically investigate the linear stability of the Peregrine soliton, appealing to the fact that the Peregrine solution can be viewed as the singular limit of a single mode spatially periodic breathers (SPB). The "squared eigenfunction" connection between the Zakharov-Shabat (Z-S) system and the linearized NLS equation is employed in the stability analysis. Specifically, we determine the eigenfunctions of the Z-S system associated with the Peregrine soliton and construct a family of solutions of the associated linearized NLS (about the Peregrine) in terms of quadratic products of components of the eigenfunctions (i.e., the squared eigenfunction). We find there exist solutions of the linearization that grow exponentially in time, thus showing the Peregrine soliton is linearly unstable.

  9. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  10. Genetic covariance components within and among linear type traits differ among contrasting beef cattle breeds.

    PubMed

    Doyle, Jennifer L; Berry, Donagh P; Walsh, Siobhan W; Veerkamp, Roel F; Evans, Ross D; Carthy, Tara R

    2018-05-04

    Linear type traits describing the skeletal, muscular, and functional characteristics of an animal are routinely scored on live animals in both the dairy and beef cattle industries. Previous studies have demonstrated that genetic parameters for certain performance traits may differ between breeds; no study, however, has attempted to determine if differences exist in genetic parameters of linear type traits among breeds or sexes. Therefore, the objective of the present study was to determine if genetic covariance components for linear type traits differed among five contrasting cattle breeds, and also to investigate whether these components differed by sex. A total of 18 linear type traits scored on 3,356 Angus (AA), 31,049 Charolais (CH), 3,004 Hereford (HE), 35,159 Limousin (LM), and 8,632 Simmental (SI) were used in the analysis. Data were analyzed using animal linear mixed models which included the fixed effects of sex of the animal (except in the investigation into the presence of sexual dimorphism), age at scoring, parity of the dam, and contemporary group of herd-date of scoring. Differences (P < 0.05) in heritability estimates, between at least two breeds, existed for 13 out of 18 linear type traits. Differences (P < 0.05) also existed between breeds in the pairwise within-breed genetic correlations among the linear type traits. Overall, the linear type traits in the continental breeds (i.e., CH, LM, SI) tended to have similar heritability estimates to each other as well as similar genetic correlations among the same pairwise traits, as did the traits in the British breeds (i.e., AA, HE). The correlation between a linear function of breeding values computed conditional on covariance parameters estimated from the CH breed and the corresponding linear function computed conditional on covariance parameters estimated from each of the other breeds was also estimated. Replacing the genetic covariance components estimated in the CH breed with those of the LM had the least effect, but the impact was considerable when the genetic covariance components of the AA were used. Genetic correlations between the same linear type traits in the two sexes were all close to unity (≥0.90), suggesting little advantage in considering these as separate traits for males and females. Results of the present study indicate a potential increase in the accuracy of estimated breeding value prediction from considering, at least, the British breed traits separately from the continental breed traits.

  11. Seasonal characterization of CDOM for lakes in semiarid regions of Northeast China using excitation-emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC)

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Song, Kaishan; Wen, Zhidan; Li, Lin; Zang, Shuying; Shao, Tiantian; Li, Sijia; Du, Jia

    2016-03-01

    The seasonal characteristics of fluorescent components in chromophoric dissolved organic matter (CDOM) for lakes in the semiarid region of Northeast China were examined by excitation-emission matrix (EEM) spectra and parallel factor analysis (PARAFAC). Two humic-like (C1 and C2) and two protein-like (C3 and C4) components were identified using PARAFAC. The average fluorescence intensity of the four components varied seasonally across the sampling campaigns of June and August 2013 and February and April 2014. Components 1 and 2 exhibited a strong linear correlation (R2 = 0.628). Significant positive linear relationships were observed between the CDOM absorption coefficients a(254) (R2 = 0.72, 0.46, p < 0.01), a(280) (R2 = 0.77, 0.47, p < 0.01), and a(350) (R2 = 0.76, 0.78, p < 0.01) and Fmax for the two humic-like components (C1 and C2), respectively. A significant relationship (R2 = 0.930) was found between salinity and dissolved organic carbon (DOC). However, almost no correlation was found between salinity and the EEM-PARAFAC-derived components except for C3 (R2 = 0.469). Results from this investigation demonstrate that the EEM-PARAFAC technique can be used to evaluate the seasonal dynamics of CDOM fluorescent components for inland waters in the semiarid regions of Northeast China, and to quantify CDOM components for other waters with similar environmental conditions.

  12. Enhanced Spectral Anisotropies Near the Proton-Cyclotron Scale: Possible Two-Component Structure in Hall-FLR MHD Turbulence Simulations

    NASA Technical Reports Server (NTRS)

    Ghosh, Sanjoy; Goldstein, Melvyn L.

    2011-01-01

    Recent analysis of the magnetic correlation function of solar wind fluctuations at 1 AU suggests the existence of two-component structure near the proton-cyclotron scale. Here we use two-and-one-half-dimensional and three-dimensional compressible MHD models to look for two-component structure adjacent to the proton-cyclotron scale. Our MHD system incorporates both Hall and Finite Larmor Radius (FLR) terms. We find that strong spectral anisotropies appear adjacent to the proton-cyclotron scales depending on the choice of initial condition and plasma beta. These anisotropies are enhancements on top of related anisotropies that appear in standard MHD turbulence in the presence of a mean magnetic field, and are suggestive of one turbulence component along the inertial scales and another component adjacent to the dissipative scales. We compute the relative strengths of linear and nonlinear accelerations on the velocity and magnetic fields to gauge the relative influence of terms that drive the system with wave-like (linear) versus turbulent (nonlinear) dynamics.

  13. Susceptibility of linear and nonlinear otoacoustic emission components to low-dose styrene exposure.

    PubMed

    Tognola, G; Chiaramello, E; Sisto, R; Moleti, A

    2015-03-01

    To investigate potential susceptibility of active cochlear mechanisms to low-level styrene exposure by comparing TEOAEs in workers and controls. Two advanced analysis techniques were applied to detect sub-clinical changes in linear and nonlinear cochlear mechanisms of OAE generation: the wavelet transform to decompose TEOAEs into time-frequency components and extract signal-to-noise ratio and latency of each component, and the bispectrum to detect and extract nonlinear TEOAE contributions as quadratic frequency couplings (QFCs). Two cohorts of workers were examined: subjects exposed exclusively to styrene (N = 9), and subjects exposed to styrene and noise (N = 6). The control group was perfectly matched by age and sex to the exposed group. Exposed subjects showed significantly lowered SNR in TEOAE components at mid-to-high frequencies (above 1.6 kHz) and a shift of QFC distribution towards lower frequencies than controls. No systematic differences were observed in latency. Low-level styrene exposure may have induced a modification of cochlear functionality as concerns linear and nonlinear OAE generation mechanisms. The lack of change in latency seems to suggest that the OAE components, where generation region and latency are tightly coupled, may not have been affected by styrene and noise exposure levels considered here.

  14. Differential adaptation of the linear and nonlinear components of the horizontal vestibuloocular reflex in squirrel monkeys

    NASA Technical Reports Server (NTRS)

    Clendaniel, Richard A.; Lasker, David M.; Minor, Lloyd B.; Shelhamer, M. J. (Principal Investigator)

    2002-01-01

    Previous work in squirrel monkeys has demonstrated the presence of linear and nonlinear components to the horizontal vestibuloocular reflex (VOR) evoked by high-acceleration rotations. The nonlinear component is seen as a rise in gain with increasing velocity of rotation at frequencies above 2 Hz (a velocity-dependent gain enhancement). We have shown that there are greater changes in the nonlinear than the linear component of the response after spectacle-induced adaptation. The present study was conducted to determine if the two components of the response share a common adaptive process. The gain of the VOR, in the dark, to sinusoidal stimuli at 4 Hz (peak velocities: 20-150 degrees/s) and 10 Hz (peak velocities: 20 and 100 degrees/s) was measured pre- and postadaptation. Adaptation was induced over 4 h with ×0.45 minimizing spectacles. Sum-of-sines stimuli were used to induce adaptation, and the parameters of the stimuli were adjusted to invoke only the linear, or both the linear and nonlinear, components of the response. Preadaptation, there was a velocity-dependent gain enhancement at 4 and 10 Hz. Postadaptation, with the paradigms that only recruited the linear component, there was a decrease in gain and a persistent velocity-dependent gain enhancement (indicating adaptation of only the linear component). After adaptation with the paradigm designed to recruit both the linear and nonlinear components, there was a decrease in gain and no velocity-dependent gain enhancement (indicating adaptation of both components). There were comparable changes in the response to steps of acceleration. We interpret these results to indicate that separate processes drive the adaptation of the linear and nonlinear components of the response.

  15. [Discrimination of Red Tide algae by fluorescence spectra and principal component analysis].

    PubMed

    Su, Rong-guo; Hu, Xu-peng; Zhang, Chuan-song; Wang, Xiu-lin

    2007-07-01

    Fluorescence discrimination technology for 11 species of Red Tide algae at the genus level was constructed using principal component analysis and non-negative least squares. Rayleigh and Raman scattering peaks of the 3D fluorescence spectra were eliminated by the Delaunay triangulation method. According to the results of Fisher linear discrimination, the first and second principal component scores of the 3D fluorescence spectra were chosen as the discriminant features and the feature base was established. The 11 algae species were tested, and more than 85% of the samples were accurately discriminated; for Prorocentrum donghaiense, Skeletonema costatum, and Gymnodinium sp., which have frequently caused Red Tides in the East China Sea, more than 95% of the samples were correctly discriminated. The results showed that the genus-level discriminant features of the 3D fluorescence spectra of Red Tide algae given by principal component analysis work well.
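
    A minimal sketch of this discrimination chain (with the Delaunay scatter-removal step omitted) might project unfolded 3D fluorescence spectra onto the first two principal components and unmix an unknown sample against per-genus mean scores with non-negative least squares; the spectra, labels, and sizes below are placeholders, not the study's data.

```python
# Hedged sketch of PCA feature base + non-negative least squares unmixing.
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import PCA

def build_feature_base(spectra, labels, n_pc=2):
    """spectra: (n_samples, n_points) unfolded EEM spectra; labels: genus names."""
    labels = np.asarray(labels)
    pca = PCA(n_components=n_pc).fit(spectra)
    scores = pca.transform(spectra)
    # genus feature base: mean PC scores per genus
    base = {g: scores[labels == g].mean(axis=0) for g in np.unique(labels)}
    return pca, base

def classify(sample, pca, base):
    score = pca.transform(sample.reshape(1, -1))[0]
    A = np.column_stack([base[g] for g in base])       # (n_pc, n_genera)
    coeffs, _ = nnls(A, score)                          # non-negative weights
    genera = list(base)
    return genera[int(np.argmax(coeffs))], dict(zip(genera, coeffs))
```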

  16. Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals

    PubMed Central

    2011-01-01

    Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources, and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used different channel setups and new subjects. Results Based on six features only, the optimized linear classifier performed on a par with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable to different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies. PMID:21810266

  17. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    PubMed

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In filtering ERP recordings with an OF, however, the ERP's topography should not be changed by the filter, and the output should still be describable by a linear transformation. Moreover, an OF designed for a specific ERP source or component may remove noise, reduce the overlap of sources, and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone did, and that the OF removed some non-targeted sources and made the underdetermined model of the EEG recordings approach a determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
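
    The following sketch shows the general "filter first, then decompose" pattern: a zero-phase band-pass filter (a stand-in for the ERP-specific optimal filter, which the paper designs from the ERP's own properties) is applied to the multichannel recording before FastICA. The sampling rate, pass band, and shapes are illustrative assumptions.

```python
# Rough sketch of band-pass filtering followed by ICA; assumes eeg has at
# least n_components channels. Parameters are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def filter_then_ica(eeg, fs=500.0, band=(1.0, 15.0), n_components=10):
    """eeg: (n_channels, n_samples) recording."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)          # zero-phase filtering
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(filtered.T).T       # (n_components, n_samples)
    return sources, ica.mixing_                     # mixing_ holds the topographies
```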

  18. Kernel PLS-SVC for Linear and Nonlinear Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan

    2003-01-01

    A new methodology for discrimination is proposed, based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS with Fisher's approach to linear discrimination, or equivalently with canonical correlation analysis, is described. This motivates the preference for orthonormalized PLS over principal component analysis. The good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram recordings.

  19. Identification of Piecewise Linear Uniform Motion Blur

    NASA Astrophysics Data System (ADS)

    Patanukhom, Karn; Nishihara, Akinori

    A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models which consist of more than one linear motion component. The proposed scheme includes three modules: a motion direction estimator, a motion length estimator, and a motion combination selector. To identify the motion directions, the proposed scheme relies on trial restorations using directional forward ramp motion blurs along different directions and an analysis of directional information in the frequency domain using a Radon transform. Autocorrelation functions of image derivatives along several directions are employed to estimate the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of the non-flat component of the trial restoration results. Experimental examples of simulated and real-world blurred images are given to demonstrate the promising performance of the proposed scheme.

  20. School Health Promotion Policies and Adolescent Risk Behaviors in Israel: A Multilevel Analysis

    ERIC Educational Resources Information Center

    Tesler, Riki; Harel-Fisch, Yossi; Baron-Epel, Orna

    2016-01-01

    Background: Health promotion policies targeting risk-taking behaviors are being implemented across schools in Israel. This study identified the most effective components of these policies influencing cigarette smoking and alcohol consumption among adolescents. Methods: Logistic hierarchical linear model (HLM) analysis of data for 5279 students in…

  1. Observation of Polarization-Locked Vector Solitons in an Optical Fiber

    NASA Astrophysics Data System (ADS)

    Cundiff, S. T.; Collings, B. C.; Akhmediev, N. N.; Soto-Crespo, J. M.; Bergman, K.; Knox, W. H.

    1999-05-01

    We observe polarization-locked vector solitons in a mode-locked fiber laser. Temporal vector solitons have components along both birefringent axes. Despite different phase velocities due to linear birefringence, the relative phase of the components is locked at +/-π/2. The value of +/-π/2 and component magnitudes agree with a simple analysis of the Kerr nonlinearity. These fragile phase-locked vector solitons have been the subject of much theoretical conjecture, but have previously eluded experimental observation.

  2. Nonautonomous dark soliton solutions in two-component Bose—Einstein condensates with a linear time-dependent potential

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Yan; Wang, Shuang-Jin; Li, Zai-Dong

    2014-06-01

    We report analytical nonautonomous soliton solutions (NSSs) for two-component Bose-Einstein condensates in the presence of a time-dependent potential. These solutions show that the time-dependent potential can affect the velocity of the NSS: the velocity can both increase and oscillate with time. A detailed analysis of the asymptotic behavior of the NSSs demonstrates that the collision of two NSSs of each component is elastic.

  3. [Research on the method of interference correction for nondispersive infrared multi-component gas analysis].

    PubMed

    Sun, You-Wen; Liu, Wen-Qing; Wang, Shi-Mei; Huang, Shu-Hua; Yu, Xiao-Man

    2011-10-01

    A method of interference correction for nondispersive infrared (NDIR) multi-component gas analysis is described. Based on integrated gas absorption models and methods, the influence of temperature and air pressure on the integrated line strengths and line shapes was considered, and, using Lorentz line shapes, the absorption cross sections and response coefficients of H2O, CO2, CO, and NO on each filter channel were obtained. Four-dimensional linear regression equations for interference correction were established from the response coefficients, the cross-absorption interference was corrected by solving these multi-dimensional linear regression equations, and, after interference correction, the pure absorbance signal on each filter channel was controlled only by the corresponding target gas concentration. When the sample cell was filled with a gas mixture containing known proportions of CO, NO, and CO2, the pure absorbance after interference correction was used for concentration inversion; the inversion concentration error was 2.0% for CO2, 1.6% for CO, and 1.7% for NO. Both theory and experiment show that the proposed interference correction method for NDIR multi-component gas analysis is feasible.
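
    To make the correction step concrete, the sketch below treats the measured channel absorbances as a small linear system in the per-gas concentrations and solves it by least squares; the response-coefficient matrix holds placeholder numbers, not calibration data from the paper.

```python
# Illustrative interference correction: response[i, j] is the absorbance on
# filter channel i per unit concentration of gas j (H2O, CO2, CO, NO).
# All numbers are placeholders.
import numpy as np

response = np.array([
    [0.80, 0.05, 0.02, 0.01],   # H2O channel
    [0.03, 0.90, 0.04, 0.02],   # CO2 channel
    [0.02, 0.06, 0.85, 0.03],   # CO channel
    [0.01, 0.03, 0.05, 0.88],   # NO channel
])

measured = np.array([0.41, 0.52, 0.33, 0.27])         # raw channel absorbances

# least-squares solution of the four-dimensional linear regression equations
concentrations, *_ = np.linalg.lstsq(response, measured, rcond=None)
pure = response * concentrations                       # per-gas contribution per channel
print(dict(zip(["H2O", "CO2", "CO", "NO"], concentrations.round(3))))
```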

  4. Numerical simulation of the wave-induced non-linear bending moment of ships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, J.; Wang, Z.; Gu, X.

    1995-12-31

    Ships traveling in moderate or rough seas may experience non-linear bending moments due to flare effects and slamming loads. The numerical simulation of the total wave-induced bending moment, contributed both by the wave-frequency component induced by wave forces and by the high-frequency whipping component induced by slamming actions, is very important for predicting the responses and ensuring the safety of the ship in rough seas. The time simulation is also useful for the reliability analysis of ship girder strength. The present paper discusses four different methods for the numerical simulation of the wave-induced non-linear vertical bending moment of ships recently developed at CSSRC, including the hydroelastic integral-differential method (HID), the hydroelastic differential analysis method (HDA), the combined seakeeping and structural forced vibration method (CSFV), and the modified CSFV method (MCSFV). Numerical predictions are compared with the experimental results obtained from the elastic ship model test of the S-175 container ship in regular and irregular waves presented by Watanabe, Ueno, and Sawada (1989).

  5. Wireless acceleration sensor of moving elements for condition monitoring of mechanisms

    NASA Astrophysics Data System (ADS)

    Sinitsin, Vladimir V.; Shestakov, Aleksandr L.

    2017-09-01

    Comprehensive analysis of the angular and linear accelerations of moving elements (shafts, gears) allows an increase in the quality of the condition monitoring of mechanisms. However, existing tools and methods measure either linear or angular acceleration with postprocessing. This paper suggests a new construction design of an angular acceleration sensor for moving elements. The sensor is mounted on a moving element and, among other things, the data transfer and electric power supply are carried out wirelessly. In addition, the authors introduce a method for processing the received information which makes it possible to divide the measured acceleration into the angular and linear components. The design has been validated by the results of laboratory tests of an experimental model of the sensor. The study has shown that this method provides a definite separation of the measured acceleration into linear and angular components, even in noise. This research contributes an advance in the range of methods and tools for condition monitoring of mechanisms.

  6. Sparse principal component analysis in medical shape modeling

    NASA Astrophysics Data System (ADS)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
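
    A small sketch contrasting the two routes mentioned above, ordinary PCA with naive thresholding of small loadings versus sparse PCA, could look like the following; the shape matrix, component count, and sparsity parameter are arbitrary placeholders, and scikit-learn's SparsePCA is used as a stand-in that may differ from the paper's own algorithm.

```python
# Ordinary PCA with thresholded loadings vs. sparse PCA on a placeholder
# matrix of aligned landmark coordinates (one shape per row).
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
shapes = rng.normal(size=(60, 128))                   # placeholder shape matrix
shapes -= shapes.mean(axis=0)                         # remove the mean shape

# Baseline: PCA followed by naive thresholding of small loadings
pca = PCA(n_components=5).fit(shapes)
thresholded = np.where(np.abs(pca.components_) < 0.05, 0.0, pca.components_)
print("PCA loadings zeroed by thresholding:", int((thresholded == 0).sum()))

# Sparse PCA: sparsity is built into the estimation itself
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(shapes)
print("nonzero loadings per sparse mode:", (spca.components_ != 0).sum(axis=1))
```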

  7. Time and frequency domain analysis of sampled data controllers via mixed operation equations

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1981-01-01

    Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations, namely the operations of zero-order hold and digital delay. It is shown that by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It is also shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations which are in a format compatible with follow-on time and frequency domain analysis methods.
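
    A short sketch of the zero-order-hold idea: discretizing a continuous linear plant with a ZOH yields exactly the kind of finite difference equations described above. The plant matrices and sample period below are arbitrary examples, not taken from the paper.

```python
# Continuous plant x_dot = A x + B u, y = C x + D u, sampled with a
# zero-order hold; the result is the difference equation x[k+1] = Ad x[k] + Bd u[k].
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1                                               # sample period (s)

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method="zoh")

x = np.zeros((2, 1))
for k in range(5):
    u = np.array([[1.0]])                             # input held constant over each period
    x = Ad @ x + Bd @ u                               # finite difference update
print(x.ravel())
```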

  8. Quantification of frequency-components contributions to the discharge of a karst spring

    NASA Astrophysics Data System (ADS)

    Taver, V.; Johannet, A.; Vinches, M.; Borrell, V.; Pistre, S.; Bertin, D.

    2013-12-01

    Karst aquifers represent important underground resources for water supply, providing water to 25% of the population. Nevertheless, such systems are currently underexploited because of their heterogeneity and complexity, which make field work and physical measurements expensive and frequently not representative of the whole aquifer. The systemic paradigm thus appears as a complementary approach to study and model karst aquifers in the framework of non-linear system analysis. The input and output signals, namely rainfall and discharge, contain information about the function performed by the physical process. Therefore, knowledge about the karst system can be improved using time series analysis, for example Fourier analysis or orthogonal decomposition [1]. Another level of analysis consists in building non-linear models to identify the rainfall/discharge relation, component by component [2]. In this context, this communication proposes to use neural networks first to model the rainfall-runoff relation using frequency components, and second to analyze the models, using the KnoX method [3], in order to quantify the importance of each component. Two different neural models were designed: (i) a recurrent model, which implements a non-linear recurrent model fed by rainfall, ETP, and previously estimated discharge, and (ii) a feed-forward model, which implements a non-linear static model fed by rainfall, ETP, and previously observed discharges. The first model is known to better represent the rainfall-runoff relation; the second to better predict the discharge based on previous discharge observations. The KnoX method is based on a variable selection approach that simply considers the values of the parameters after training, without taking into account the non-linear behavior of the model during operation. An improvement of the KnoX method is thus proposed in order to overcome this inadequacy. The proposed method leads to both a ranking and a quantification of the contributions of the input variables, here the frequency components, to the output signal. Applied to the Lez karst aquifer, the combination of frequency decomposition and knowledge extraction improves knowledge of the hydrological behavior. Both models and both extraction methods were applied and assessed using a fictitious reference model. A discussion is proposed in order to analyze the efficiency of the methods compared to in situ measurements and tracing. [1] D. Labat et al., "Rainfall-runoff relations for karst springs. Part II: continuous wavelet and discrete orthogonal multiresolution," Journal of Hydrology, Vol. 238, 2000, pp. 149-178. [2] A. Johannet et al., "Prediction of Lez Spring Discharge (Southern France) by Neural Networks using Orthogonal Wavelet Decomposition," IJCNN Proceedings, Brisbane, 2012. [3] L. Kong A Siou et al., "Modélisation hydrodynamique des karsts par réseaux de neurones : Comment dépasser la boîte noire. (Karst hydrodynamic modelling using artificial neural networks: how to surpass the black box?)," Proceedings of the 9th Conference on Limestone Hydrogeology, 2011, Besançon, France.

  9. SDP_mharwit_1: Demonstration of HIFI Linear Polarization Analysis of Spectral Features

    NASA Astrophysics Data System (ADS)

    Harwit, M.

    2010-03-01

    We propose to observe the polarization of the 621 GHz water vapor maser in VY Canis Majoris to demonstrate the capability of HIFI to make polarization observations of Far-Infrared/Submillimeter spectral lines. The proposed Demonstration Phase would: - Show that HIFI is capable of interesting linear polarization measurements of spectral lines; - Test out the highest spectral resolving power to sort out closely spaced Doppler components; - Determine whether the relative intensities predicted by Neufeld and Melnick are correct; - Record the degree and direction of linear polarization for the closely-Doppler shifted peaks.

  10. Pattern recognition and genetic algorithms for discrimination of orange juices and reduction of significant components from headspace solid-phase microextraction.

    PubMed

    Rinaldi, Maurizio; Gindro, Roberto; Barbeni, Massimo; Allegrone, Gianna

    2009-01-01

    Orange (Citrus sinensis L.) juice comprises a complex mixture of volatile components that are difficult to identify and quantify. Classification and discrimination of the varieties on the basis of the volatile composition could help to guarantee the quality of a juice and to detect possible adulteration of the product. To provide information on the amounts of volatile constituents in fresh-squeezed juices from four orange cultivars and to establish suitable discrimination rules to differentiate orange juices using new chemometric approaches. Fresh juices of four orange cultivars were analysed by headspace solid-phase microextraction (HS-SPME) coupled with GC-MS. Principal component analysis, linear discriminant analysis and heuristic methods, such as neural networks, allowed clustering of the data from HS-SPME analysis while genetic algorithms addressed the problem of data reduction. To check the quality of the results the chemometric techniques were also evaluated on a sample. Thirty volatile compounds were identified by HS-SPME and GC-MS analyses and their relative amounts calculated. Differences in composition of orange juice volatile components were observed. The chosen orange cultivars could be discriminated using neural networks, genetic relocation algorithms and linear discriminant analysis. Genetic algorithms applied to the data were also able to detect the most significant compounds. SPME is a useful technique to investigate orange juice volatile composition and a flexible chemometric approach is able to correctly separate the juices.

  11. Improved estimation of parametric images of cerebral glucose metabolic rate from dynamic FDG-PET using volume-wise principal component analysis

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoqian; Tian, Jie; Chen, Zhe

    2010-03-01

    Parametric images can represent both the spatial distribution and the quantification of biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs), which propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, the dynamic PET data are first pre-transformed to standardize the noise variance, since PCA is a data-driven technique and cannot by itself separate signal from noise. Secondly, volume-wise PCA is applied to the PET data. The signal can mostly be represented by the first few principal components (PCs), while the noise is left in the subsequent PCs. The noise-reduced data are then obtained from the first few PCs by applying the 'inverse PCA', and are transformed back according to the pre-transformation used in the first step to maintain the scale of the original data set. Finally, the resulting data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal methods, the proposed method can achieve high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulations and with a clinical dynamic FDG-PET study.
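
    A minimal sketch of the two main steps, volume-wise PCA denoising of the dynamic data followed by a voxel-wise linear least-squares fit, is given below; the pre-transformation is omitted, and the design matrix X stands in for the study-specific linearized compartment-model regressors. All names and shapes are illustrative.

```python
# PCA (truncated SVD) denoising of voxel time-activity curves, then a linear
# least-squares fit per voxel against a placeholder design matrix.
import numpy as np

def pca_denoise(tacs, n_pc=3):
    """tacs: (n_voxels, n_frames) tissue time-activity curves."""
    mean = tacs.mean(axis=0)
    centered = tacs - mean
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    # keep the first few principal components, discard the noise subspace
    approx = (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc]
    return approx + mean

def fit_parametric(tacs, X):
    """X: (n_frames, n_params) linearized model regressors."""
    beta, *_ = np.linalg.lstsq(X, tacs.T, rcond=None)   # (n_params, n_voxels)
    return beta.T

# usage: params = fit_parametric(pca_denoise(tacs), X)
```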

  12. Weighted functional linear regression models for gene-based association analysis.

    PubMed

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10⁻⁶) when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.

  13. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Summary Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  14. Co-pyrolysis characteristics and kinetic analysis of organic food waste and plastic.

    PubMed

    Tang, Yijing; Huang, Qunxing; Sun, Kai; Chi, Yong; Yan, Jianhua

    2018-02-01

    In this work, a typical organic food waste (soybean protein, SP) and a typical chlorine-enriched plastic waste (polyvinyl chloride, PVC) were chosen as principal MSW components and their interaction during co-pyrolysis was investigated. Results indicate that the interaction accelerated the reaction during co-pyrolysis. The activation energies needed for the decomposition of the mixture were 2-13% lower than the linear (additive) calculation, while the maximum reaction rates were 12-16% higher than calculated. In the fixed-bed experiments, the interaction was observed to reduce the yield of tar by 2-69% and to increase the yield of char by 13-39% compared with the linear calculation. In addition, 2-6 times more heavy components and 61-93% fewer nitrogen-containing components were formed in the tar derived from the mixtures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Predictors of burnout among correctional mental health professionals.

    PubMed

    Gallavan, Deanna B; Newman, Jody L

    2013-02-01

    This study focused on the experience of burnout among a sample of correctional mental health professionals. We examined the relationship of a linear combination of optimism, work-family conflict, and attitudes toward prisoners with two dimensions derived from the Maslach Burnout Inventory and the Professional Quality of Life Scale. Initially, three subscales from the Maslach Burnout Inventory and two subscales from the Professional Quality of Life Scale were subjected to principal components analysis with oblimin rotation in order to identify underlying dimensions among the subscales. This procedure resulted in two components accounting for approximately 75% of the variance (r = -.27). The first component was labeled Negative Experience of Work because it seemed to tap the experience of being emotionally spent, detached, and socially avoidant. The second component was labeled Positive Experience of Work and seemed to tap a sense of competence, success, and satisfaction in one's work. Two multiple regression analyses were subsequently conducted, in which Negative Experience of Work and Positive Experience of Work, respectively, were predicted from a linear combination of optimism, work-family conflict, and attitudes toward prisoners. In the first analysis, 44% of the variance in Negative Experience of Work was accounted for, with work-family conflict and optimism accounting for the most variance. In the second analysis, 24% of the variance in Positive Experience of Work was accounted for, with optimism and attitudes toward prisoners accounting for the most variance.

  16. Non-linear Min protein interactions generate harmonics that signal mid-cell division in Escherichia coli

    PubMed Central

    Walsh, James C.; Angstmann, Christopher N.; Duggin, Iain G.

    2017-01-01

    The Min protein system creates a dynamic spatial pattern in Escherichia coli cells where the proteins MinD and MinE oscillate from pole to pole. MinD positions MinC, an inhibitor of FtsZ ring formation, contributing to the mid-cell localization of cell division. In this paper, Fourier analysis is used to decompose experimental and model MinD spatial distributions into time-dependent harmonic components. In both experiment and model, the second harmonic component is responsible for producing a mid-cell minimum in MinD concentration. The features of this harmonic are robust in both experiment and model. Fourier analysis reveals a close correspondence between the time-dependent behaviour of the harmonic components in the experimental data and model. Given this, each molecular species in the model was analysed individually. This analysis revealed that membrane-bound MinD dimer shows the mid-cell minimum with the highest contrast when averaged over time, carrying the strongest signal for positioning the cell division ring. This concurs with previous data showing that the MinD dimer binds to MinC inhibiting FtsZ ring formation. These results show that non-linear interactions of Min proteins are essential for producing the mid-cell positioning signal via the generation of second-order harmonic components in the time-dependent spatial protein distribution. PMID:29040283
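
    The harmonic decomposition can be illustrated with a one-dimensional spatial FFT: each harmonic of a concentration profile along the cell axis is reconstructed from a single Fourier coefficient, and a negative second-harmonic term produces a mid-cell minimum. The synthetic profile below is purely illustrative, not experimental data.

```python
# Fourier (harmonic) decomposition of a synthetic 1-D concentration profile.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)          # normalized position along the cell
profile = 1.0 + 0.5 * np.cos(2 * np.pi * x) - 0.3 * np.cos(4 * np.pi * x)

coeffs = np.fft.rfft(profile) / n                      # one complex coefficient per harmonic

def harmonic(k):
    """Reconstruct the k-th spatial harmonic of the profile."""
    return 2.0 * np.real(coeffs[k] * np.exp(2j * np.pi * k * x))

second = harmonic(2)
print("mid-cell value of the 2nd harmonic:", round(second[n // 2], 3))  # -0.3, a minimum
```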

  17. Locally linear embedding: dimension reduction of massive protostellar spectra

    NASA Astrophysics Data System (ADS)

    Ward, J. L.; Lumsden, S. L.

    2016-09-01

    We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in classifying spectra based on the presence or absence of emission lines, and provides a valuable tool for the classification and analysis of large spectral data sets.
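
    A short sketch of how such a comparison might be set up with scikit-learn is shown below; the spectra matrix is a random placeholder, and the neighbor and component counts are illustrative choices rather than the values used in the paper.

```python
# PCA, standard LLE, and Hessian LLE applied to a placeholder spectra matrix
# (one row per object, one column per wavelength bin).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

spectra = np.random.default_rng(1).normal(size=(300, 500))   # placeholder data

embed_pca = PCA(n_components=2).fit_transform(spectra)
embed_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                   method="standard").fit_transform(spectra)
embed_hlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                    method="hessian").fit_transform(spectra)
```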

  18. Analysis of mutagenic DNA repair in a thermoconditional mutant of Saccharomyces cerevisiae. III. Dose-response pattern of mutation induction in UV-irradiated rev2ts cells.

    PubMed

    Siede, W; Eckardt, F

    1986-01-01

    Recent studies regarding the influence of cycloheximide on the temperature-dependent increase in survival and mutation frequencies of a thermoconditional rev2 mutant led to the suggestion that the REV2-coded mutagenic repair function is UV-inducible. In the present study we show that stationary-phase rev2ts cells are characterized by a biphasic linear-quadratic dose dependence of mutation induction ("mutation kinetics") of ochre alleles at 23 degrees C (permissive temperature), but by linear kinetics at the restrictive temperature of 36 degrees C. Mathematical analysis using a model based on Poisson statistics, together with a further mathematical procedure, the calculation of "apparent survival", supports the assumption that the quadratic component of the reverse mutation kinetics investigated can be attributed to a UV-inducible component of mutagenic DNA repair controlled by the REV2 gene.
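
    In the usual shorthand, the biphasic dose response described above corresponds to a linear-quadratic form whose quadratic term is attributed to the inducible repair component; the coefficients below are generic symbols for illustration, not values fitted in the paper.

```latex
% Generic linear-quadratic dose dependence of the induced mutation frequency M
% at UV dose D; alpha and beta are empirical coefficients (illustrative only).
M(D) \;=\; \alpha D \;+\; \beta D^{2},
\qquad
\beta \to 0 \ \text{at the restrictive temperature (purely linear kinetics).}
```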

  19. Cost decomposition of linear systems with application to model reduction

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1980-01-01

    A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.

  20. Wavelet packets for multi- and hyper-spectral imagery

    NASA Astrophysics Data System (ADS)

    Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.

    2010-01-01

    State of the art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packets coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.

  1. Spatial variation analyses of Thematic Mapper data for the identification of linear features in agricultural landscapes

    NASA Technical Reports Server (NTRS)

    Pelletier, R. E.

    1984-01-01

    A need exists for digitized information pertaining to linear features such as roads, streams, water bodies, and agricultural field boundaries as component parts of a database. For many areas where these data may not yet exist or are in need of updating, such features may be extracted from remotely sensed digital data. This paper examines two approaches for identifying linear features, one utilizing raw data and the other classified data. Each approach uses a series of data enhancement procedures including the derivation of standard deviation values, principal component analysis, and filtering procedures using a high-pass window matrix. Just as certain bands better classify different land covers, so too do these bands exhibit the high spectral contrast by which boundaries between land covers can be delineated. A few applications for this kind of data are briefly discussed, including its potential in a Universal Soil Loss Equation model.
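
    Two of the enhancement steps named above, a local standard-deviation image and a high-pass window filter, can be sketched with scipy.ndimage as follows; the band array, window size, and kernel values are placeholders chosen for illustration.

```python
# Local standard deviation and a 3x3 high-pass window applied to a placeholder band.
import numpy as np
from scipy import ndimage

band = np.random.default_rng(2).normal(size=(256, 256))      # placeholder TM band

# local standard deviation in a 3x3 window
mean = ndimage.uniform_filter(band, size=3)
mean_sq = ndimage.uniform_filter(band * band, size=3)
local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

# high-pass 3x3 window (Laplacian-style kernel) to emphasize boundaries
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)
high_pass = ndimage.convolve(band, kernel, mode="reflect")
```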

  2. Correcting for population structure and kinship using the linear mixed model: theory and extensions.

    PubMed

    Hoffman, Gabriel E

    2013-01-01

    Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.

  3. Continuous functional magnetic resonance imaging reveals dynamic nonlinearities of "dose-response" curves for finger opposition.

    PubMed

    Berns, G S; Song, A W; Mao, H

    1999-07-15

    Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the approach assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (fMRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics, with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation with tapping frequency was identified, but the spatiotemporal dynamics were not apparent.

  4. Extracting Independent Local Oscillatory Geophysical Signals by Geodetic Tropospheric Delay

    NASA Technical Reports Server (NTRS)

    Botai, O. J.; Combrinck, L.; Sivakumar, V.; Schuh, H.; Bohm, J.

    2010-01-01

    Zenith Tropospheric Delay (ZTD) due to water vapor derived from space geodetic techniques and numerical weather prediction simulated-reanalysis data exhibits non-linear and non-stationary properties akin to those in the crucial geophysical signals of interest to the research community. These time series, once decomposed into additive (and stochastic) components, carry information about long-term global change (the trend) and other interpretable (quasi-)periodic components such as seasonal cycles and noise. Such stochastic components could be functions that exhibit at most one extremum within a data span, or monotonic functions within a certain temporal span. In this contribution, we examine the use of the combined Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA), the EEMD-ICA algorithm, to extract the independent local oscillatory stochastic components in the tropospheric delay derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) over six geodetic sites (HartRAO, Hobart26, Wettzell, Gilcreek, Westford, and Tsukub32). The proposed methodology allows independent geophysical processes to be extracted and assessed. Analysis of the quality index of the Independent Components (ICs) derived for each cluster of local oscillatory components (also called Intrinsic Mode Functions, IMFs) for all the geodetic stations considered in the study demonstrates that they are strongly site-dependent. Such strong dependency seems to suggest that the localized geophysical signals embedded in the ZTD over the geodetic sites are not correlated. Further, from the viewpoint of non-linear dynamical systems, four geophysical signals, namely the Quasi-Biennial Oscillation (QBO) index derived from the NCEP/NCAR reanalysis, the Southern Oscillation Index (SOI) anomaly from NCEP, the SIDC monthly Sun Spot Number (SSN), and the Length of Day (LoD), are linked to the extracted signal components from the ZTD. Results from the synchronization analysis show that the ZTD and the geophysical signals exhibit (albeit subtle) site-dependent phase synchronization indices.

  5. Semi-blind Bayesian inference of CMB map and power spectrum

    NASA Astrophysics Data System (ADS)

    Vansyngel, Flavien; Wandelt, Benjamin D.; Cardoso, Jean-François; Benabed, Karim

    2016-04-01

    We present a new blind formulation of the cosmic microwave background (CMB) inference problem. The approach relies on a phenomenological model of the multifrequency microwave sky without the need for physical models of the individual components. For all-sky and high resolution data, it unifies parts of the analysis that had previously been treated separately such as component separation and power spectrum inference. We describe an efficient sampling scheme that fully explores the component separation uncertainties on the inferred CMB products such as maps and/or power spectra. External information about individual components can be incorporated as a prior giving a flexible way to progressively and continuously introduce physical component separation from a maximally blind approach. We connect our Bayesian formalism to existing approaches such as Commander, spectral mismatch independent component analysis (SMICA), and internal linear combination (ILC), and discuss possible future extensions.

  6. Differential Lipid Profiles of Normal Human Brain Matter and Gliomas by Positive and Negative Mode Desorption Electrospray Ionization – Mass Spectrometry Imaging

    PubMed Central

    Pirro, Valentina; Hattab, Eyas M.; Cohen-Gadol, Aaron A.; Cooks, R. Graham

    2016-01-01

    Desorption electrospray ionization mass spectrometry (DESI-MS) imaging was used to analyze unmodified human brain tissue sections from 39 subjects sequentially in the positive and negative ionization modes. Acquisition of both MS polarities allowed a more complete analysis of the human brain tumor lipidome, as some phospholipids ionize preferentially in the positive and others in the negative ion mode. Normal brain parenchyma, comprised of grey matter and white matter, was differentiated from glioma using positive and negative ion mode DESI-MS lipid profiles with the aid of principal component analysis along with linear discriminant analysis. Principal component–linear discriminant analysis of the positive mode lipid profiles was able to distinguish grey matter, white matter, and glioma with an average sensitivity of 93.2% and specificity of 96.6%, while the negative mode lipid profiles gave an average sensitivity of 94.1% and specificity of 97.4%. The positive and negative mode lipid profiles provided complementary information. Principal component–linear discriminant analysis of the combined positive and negative mode lipid profiles, via data fusion, resulted in approximately the same average sensitivity (94.7%) and specificity (97.6%) as the positive and negative modes used individually. However, the two modes complemented each other by improving the sensitivity and specificity of all classes (grey matter, white matter, and glioma) beyond 90% when used in combination. Further principal component analysis using the fused data resulted in the subgrouping of glioma into two groups associated with grey and white matter, respectively, a separation not apparent in the principal component analysis score plots of the separate positive and negative mode data. The interrelationship of tumor cell percentage and the lipid profiles is discussed, as is how such a measure could be used to assess residual tumor at surgical margins. PMID:27658243
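
    The classification workflow can be sketched as a PCA-LDA pipeline evaluated by cross-validation, with a naive column-wise concatenation standing in for the data fusion of positive- and negative-mode profiles; all arrays, labels, and dimensions below are placeholders, not the study's data.

```python
# PCA followed by LDA, evaluated separately on positive-mode, negative-mode,
# and concatenated ("fused") placeholder lipid profiles.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
pos = rng.normal(size=(120, 400))                     # positive-mode profiles
neg = rng.normal(size=(120, 400))                     # negative-mode profiles
labels = rng.integers(0, 3, size=120)                 # grey matter / white matter / glioma

pca_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
for name, X in [("positive", pos), ("negative", neg),
                ("fused", np.hstack([pos, neg]))]:
    acc = cross_val_score(pca_lda, X, labels, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.2f}")
```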

  7. Use of multivariate analysis for determining sources of solutes found in wet atmospheric deposition in the United States

    USGS Publications Warehouse

    Hooper, R.P.; Peters, N.E.

    1989-01-01

    A principal-components analysis was performed on the major solutes in wet deposition collected from 194 stations in the United States and its territories. Approximately 90% of the components derived could be interpreted as falling into one of three categories - acid, salt, or an agricultural/soil association. The total mass, or the mass of any one solute, was apportioned among these components by multiple linear regression techniques. The use of multisolute components for determining trends or spatial distribution represents a substantial improvement over single-solute analysis in that these components are more directly related to the sources of the deposition. The geographic patterns displayed by the components in this analysis indicate a far more important role for acid deposition in the Southeast and intermountain regions of the United States than would be indicated by maps of sulfate or nitrate deposition alone. In the Northeast and Midwest, the acid component is not declining at most stations, as would be expected from trends in sulfate deposition, but is holding constant or increasing. This is due, in part, to a decline in the agriculture/soil factor throughout this region, which would help to neutralize the acidity.
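
    A rough sketch of the apportionment step: principal components are derived from the solute concentrations, and a solute's deposition mass is then regressed on the component scores so that its mass is split among the acid, salt, and agricultural/soil-type components. The data, transformation, and component count below are placeholders, not the study's procedure in detail.

```python
# PCA of solute concentrations followed by multiple linear regression to
# apportion one solute's deposition mass among the components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
solutes = rng.lognormal(size=(194, 8))                # stations x solute concentrations
sulfate_mass = solutes[:, 0] * rng.uniform(0.5, 1.5, size=194)   # placeholder target

scores = PCA(n_components=3).fit_transform(np.log(solutes))
reg = LinearRegression().fit(scores, sulfate_mass)

# contribution of each component to the solute's deposition at each station
contributions = scores * reg.coef_                    # (stations, components)
print("mean share per component:", contributions.mean(axis=0).round(3))
```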

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of the feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than those of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, and the classification rates were 98.75% and 100%, respectively.
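
    A minimal kernel entropy component analysis (KECA) routine of the kind named above might look like the following; the RBF kernel width, sample size and feature count are assumptions for illustration only.

```python
# A minimal kernel entropy component analysis (KECA) sketch for reducing
# e-nose features before a linear classifier. Data shapes and the RBF width
# are illustrative assumptions, not the values used in the paper.
import numpy as np
from scipy.spatial.distance import cdist

def keca(X, n_components=12, sigma=1.0):
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma**2))  # RBF kernel
    eigval, eigvec = np.linalg.eigh(K)                           # ascending order
    # Renyi-entropy contribution of each eigenpair: lambda_i * (1^T e_i)^2
    contrib = eigval * (eigvec.sum(axis=0) ** 2)
    idx = np.argsort(contrib)[::-1][:n_components]               # top contributors
    return eigvec[:, idx] * np.sqrt(np.abs(eigval[idx]))         # projected data

rng = np.random.default_rng(2)
X = rng.random((120, 41))        # 120 liquor samples x 41 selected features
Z = keca(X, n_components=12)
print(Z.shape)                   # (120, 12) reduced feature space
```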

  9. Comparison of GPS tropospheric delays derived from two consecutive EPN reprocessing campaigns from the point of view of climate monitoring

    NASA Astrophysics Data System (ADS)

    Baldysz, Zofia; Nykiel, Grzegorz; Araszkiewicz, Andrzej; Figurski, Mariusz; Szafranek, Karolina

    2016-09-01

    The main purpose of this research was to acquire information about the consistency of ZTD (zenith total delay) linear trends and seasonal components between two consecutive GPS reprocessing campaigns. The analysis concerned two sets of ZTD time series which were estimated during EUREF (Reference Frame Sub-Commission for Europe) EPN (Permanent Network) reprocessing campaigns according to the 2008 and 2015 MUT AC (Military University of Technology Analysis Centre) scenarios. Firstly, Lomb-Scargle periodograms were generated for 57 EPN stations to characterise the oscillations occurring in the ZTD time series. Then, the values of seasonal components and linear trends were estimated using the LSE (least squares estimation) approach. The Mann-Kendall trend test was also carried out to verify the presence of linear long-term ZTD changes. Finally, differences in seasonal signals and linear trends between these two data sets were investigated. All these analyses were conducted for ZTD time series of two lengths: a shortened 16-year series and a full 18-year one. In the case of spectral analysis, amplitudes of the annual and semi-annual periods were almost exactly the same for both reprocessing campaigns. Exceptions were found for only a few stations and they did not exceed 1 mm. The estimated trends were also similar. However, for the reprocessing performed in 2008, the trend values were usually higher. In general, shortening the analysed time period by 2 years resulted in a decrease in the linear trend values of about 0.07 mm yr⁻¹. This was confirmed by analyses based on both data sets.
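
    The periodogram-plus-least-squares workflow described above can be sketched as below on a synthetic ZTD-like series; the sampling, amplitudes and trend are placeholders, not EPN results.

```python
# Sketch of the two analysis steps described above: a Lomb-Scargle periodogram
# to check for annual/semi-annual power, and a least-squares fit of a linear
# trend plus seasonal harmonics. The synthetic ZTD series is an assumption.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 18.0, 2000))            # years, irregular sampling
ztd = 2400 + 0.5 * t + 30 * np.cos(2 * np.pi * t) + rng.normal(0, 5, t.size)

# Periodogram over periods of 0.4-3 years (scipy expects angular frequencies)
periods = np.linspace(0.4, 3.0, 500)
power = lombscargle(t, ztd - ztd.mean(), 2 * np.pi / periods)
print("dominant period ~ %.2f yr" % periods[np.argmax(power)])

# LSE fit: offset + trend + annual + semi-annual terms
A = np.column_stack([np.ones_like(t), t,
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                     np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, ztd, rcond=None)
print("linear trend: %.3f mm/yr" % coef[1])
```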

  10. Systematic Analysis of Absorbed Anti-Inflammatory Constituents and Metabolites of Sarcandra glabra in Rat Plasma Using Ultra-High-Pressure Liquid Chromatography Coupled with Linear Trap Quadrupole Orbitrap Mass Spectrometry.

    PubMed

    Li, Xiong; Zhao, Jin; Liu, Jianxing; Li, Geng; Zhao, Ya; Zeng, Xing

    2016-01-01

    Ultra-high-pressure liquid chromatography (UHPLC) was coupled with linear ion trap quadrupole Orbitrap mass spectrometry (LTQ-Orbitrap) and was used for the first time to systematically analyze the absorbed components and metabolites in rat plasma after oral administration of the water extract of Sarcandra glabra. This extract is a well-known Chinese herbal medicine for the treatment of inflammation and immunity-related diseases. The anti-inflammatory activities of the absorbed components were evaluated by measuring nitric oxide (NO) production and proinflammatory gene expression in lipopolysaccharide (LPS)-stimulated murine RAW 264.7 macrophages. As a result, 54 components of Sarcandra glabra were detected in dosed rat plasma, and 36 of them were positively identified. Moreover, 23 metabolites were characterized and their origins were traced. Furthermore, 20 of the 24 studied components showed anti-inflammatory activities. These results provide evidence that this method efficiently detected constituents in plasma, consistent with an anti-inflammatory mechanism involving multiple components, and would be a useful technique for screening multiple targets in natural medicine research.

  11. Latent effects decision analysis

    DOEpatents

    Cooper, J Arlin [Albuquerque, NM; Werner, Paul W [Albuquerque, NM

    2004-08-24

    Latent effects on a system are broken down into components ranging from those far removed in time from the system under study (latent) to those which closely effect changes in the system. Each component is provided with weighted inputs either by a user or from outputs of other components. A non-linear mathematical process known as `soft aggregation` is performed on the inputs to each component to provide information relating to the component. This information is combined in decreasing order of latency to the system to provide a quantifiable measure of an attribute of a system (e.g., safety) or to test hypotheses (e.g., for forensic deduction or decisions about various system design options).

  12. Discriminative components of data.

    PubMed

    Peltonen, Jaakko; Kaski, Samuel

    2005-01-01

    A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.

  13. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which would then, for example, have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
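
    A stripped-down version of this pipeline (ICA decomposition, one per-component feature, LDA labelling, reconstruction without the flagged components) is sketched below. The EEG array, the single feature and the training labels are placeholders; the actual features would come from the image-processing step described in the paper.

```python
# Minimal sketch: ICA decomposition, a simple per-component feature (here the
# range of each mixing column, as a stand-in for image-based features), LDA
# classification, and reconstruction without the flagged components.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
eeg = rng.normal(size=(25, 5000))                 # 25 channels x samples (placeholder)
ica = FastICA(n_components=25, random_state=0)
sources = ica.fit_transform(eeg.T)                # (samples, components)

# One feature per component; real features would come from image processing
features = (ica.mixing_.max(axis=0) - ica.mixing_.min(axis=0)).reshape(-1, 1)
labels = (features.ravel() > np.median(features)).astype(int)  # placeholder labels

lda = LinearDiscriminantAnalysis().fit(features, labels)
is_artifact = lda.predict(features).astype(bool)

sources[:, is_artifact] = 0.0                     # drop artifact components
cleaned = ica.inverse_transform(sources).T        # back to channel space
print(cleaned.shape)
```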

  14. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    NASA Astrophysics Data System (ADS)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to model the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to remove extraneous variation, such as spectral overlap and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both RBF-PLS and PLS calibration models. Also, where some nonlinear relation existed between spectra and components, OSC-RBF-PLS gave more satisfactory results than the OSC-PLS model, which indicated that OSC helped to remove extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.
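
    A simplified, non-iterative orthogonal signal correction step followed by PLS regression might be written as below; the spectra and concentrations are synthetic, and the OSC variant shown is only one of several published formulations.

```python
# Hedged sketch: a single-component orthogonal signal correction (OSC) step
# followed by PLS regression. Simplified, non-iterative variant for
# illustration; the spectra and concentrations are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def osc_correct(X, y, n_remove=1):
    Xc = X - X.mean(axis=0)
    yc = (y - y.mean()).ravel()
    for _ in range(n_remove):
        # leading principal component score of Xc
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        t = Xc @ vt[0]
        # make the score orthogonal to y
        t = t - yc * (yc @ t) / (yc @ yc)
        p = Xc.T @ t / (t @ t)                 # loading of the orthogonal score
        Xc = Xc - np.outer(t, p)               # remove y-orthogonal variation
    return Xc

rng = np.random.default_rng(5)
X = rng.random((60, 200))                      # 60 FT-IR spectra x 200 wavenumbers
y = rng.random(60)                             # penicillin G concentration (placeholder)

X_osc = osc_correct(X, y)
pls = PLSRegression(n_components=4).fit(X_osc, y)
print("calibration R^2: %.3f" % pls.score(X_osc, y))
```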

  15. Fracture mechanics concepts in reliability analysis of monolithic ceramics

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.; Gyekenyesi, John P.

    1987-01-01

    Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation require that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two-parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine statistical parameters which describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
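
    The role of the threshold strength (the third Weibull parameter) can be illustrated with a short reliability calculation; the Weibull modulus, scale and threshold values below are arbitrary assumptions, not material data.

```python
# Illustrative two- vs three-parameter Weibull reliability for a ceramic
# component under a uniaxial stress; parameter values are assumptions chosen
# only to show the effect of a non-zero threshold strength.
import numpy as np

def weibull_reliability(sigma, m, sigma0, sigma_u=0.0):
    """Survival probability at applied stress sigma (same units as sigma0)."""
    s = np.maximum(sigma - sigma_u, 0.0)       # no failure below the threshold
    return np.exp(-(s / sigma0) ** m)

stress = np.linspace(0.0, 600.0, 7)            # MPa
print(weibull_reliability(stress, m=10.0, sigma0=400.0))                  # threshold = 0
print(weibull_reliability(stress, m=10.0, sigma0=400.0, sigma_u=150.0))   # with threshold
```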

  16. Efficient computational nonlinear dynamic analysis using modal modification response technique

    NASA Astrophysics Data System (ADS)

    Marinone, Timothy; Avitabile, Peter; Foley, Jason; Wolfson, Janet

    2012-08-01

    Structural systems often contain nonlinear characteristics. These nonlinear systems require significant computational resources for solution of the equations of motion. Much of the model, however, is linear, with the nonlinearity resulting from discrete local elements connecting different components together. Using a component mode synthesis approach, a nonlinear model can be developed by interconnecting these linear components with highly nonlinear connection elements. The approach presented in this paper, the Modal Modification Response Technique (MMRT), is a very efficient technique that has been created to address this specific class of nonlinear problem. By utilizing a Structural Dynamics Modification (SDM) approach in conjunction with mode superposition, a significantly smaller set of matrices is required for use in the direct integration of the equations of motion. The approach is compared to traditional analytical approaches to make evident the usefulness of the technique for a variety of test cases.

  17. Genetic mixed linear models for twin survival data.

    PubMed

    Ha, Il Do; Lee, Youngjo; Pawitan, Yudi

    2007-07-01

    Twin studies are useful for assessing the relative importance of the genetic or heritable component versus the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.

  18. SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Collioud, A.; Charlot, P.

    2018-02-01

    We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human intervention is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.

  19. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2017-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws and geometric features were inspected using a 2-megavolt linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  20. Rotordynamic Characteristics of the HPOTP (High Pressure Oxygen Turbopump) of the SSME (Space Shuttle Main Engine)

    NASA Technical Reports Server (NTRS)

    Childs, D. W.

    1984-01-01

    Rotational stability of turbopump components in the space shuttle main engine was studied via analysis of component and structural dynamic models. Subsynchronous vibration caused unacceptable migration of the rotor/housing unit with unequal load sharing of the synchronous bearings that resulted in the failure of the High Pressure Oxygen Turbopump. Linear analysis shows that a shrouded inducer eliminates the second critical speed and the stability problem, a stiffened rotor improves the rotordynamic characteristics of the turbopump, and installing damper boost/impeller seals reduces bearing loads. Nonlinear analysis shows that by increasing the "dead band" clearances, a marked reduction in peak bearing loads occurs.

  1. gpICA: A Novel Nonlinear ICA Algorithm Using Geometric Linearization

    NASA Astrophysics Data System (ADS)

    Nguyen, Thang Viet; Patra, Jagdish Chandra; Emmanuel, Sabu

    2006-12-01

    A new geometric approach for nonlinear independent component analysis (ICA) is presented in this paper. The nonlinear environment is modeled by the popular post nonlinear (PNL) scheme. To eliminate the nonlinearity in the observed signals, a novel linearizing method named geometric post nonlinear ICA (gpICA) is introduced. Thereafter, a basic linear ICA is applied to these linearized signals to estimate the unknown sources. The proposed method is motivated by the fact that in a multidimensional space, a nonlinear mixture is represented by a nonlinear surface while a linear mixture is represented by a plane, a special form of the surface. Therefore, by geometrically transforming the surface representing a nonlinear mixture into a plane, the mixture can be linearized. Through simulations on different data sets, the superior performance of the gpICA algorithm has been shown with respect to other algorithms.

  2. On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman

    2016-04-01

    The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or any combination thereof. In this study we have simulated 20 time series of 23 years in length with different stochastic characteristics such as white, flicker or random walk noise. The noise amplitude was assumed at 1 mm/y-/4. Then, we added the deterministic part consisting of a linear trend of 20 mm/y (which represents the average horizontal velocity) and accelerations ranging from minus 0.6 to plus 0.6 mm/y². For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking the non-linear term into account. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y² and -4.5±3.3 mm/y² for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term has significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
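
    The deterministic model discussed above (offset, velocity, optional acceleration and seasonal terms) can be fitted by least squares as sketched below on a simulated daily coordinate series; the amplitudes and noise are placeholders, not the IGS data or the Hector/MLE noise analysis.

```python
# Sketch of the deterministic model: offset, velocity, optional acceleration
# (quadratic term) and annual/semi-annual signals, fitted by least squares.
# The simulated series stands in for one GNSS coordinate component.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(0.0, 23.0, 1.0 / 365.25)                     # 23 years, daily
pos = 20.0 * t + 0.3 * t**2 + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size)

def design(t, with_acceleration):
    cols = [np.ones_like(t), t,
            np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
            np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)]
    if with_acceleration:
        cols.insert(2, t**2)
    return np.column_stack(cols)

for acc in (False, True):
    A = design(t, acc)
    coef, res, *_ = np.linalg.lstsq(A, pos, rcond=None)
    rms = np.sqrt(res[0] / t.size) if res.size else float("nan")
    print("acceleration term %s: velocity = %.2f mm/yr, residual RMS = %.2f mm"
          % ("on" if acc else "off", coef[1], rms))
```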

  3. Development and Integration of an Advanced Stirling Convertor Linear Alternator Model for a Tool Simulating Convertor Performance and Creating Phasor Diagrams

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2013-01-01

    A simple model of the Advanced Stirling Convertors (ASC) linear alternator and an AC bus controller has been developed and combined with a previously developed thermodynamic model of the convertor for a more complete simulation and analysis of the system performance. The model was developed using Sage, a 1-D thermodynamic modeling program that now includes electro-magnetic components. The convertor, consisting of a free-piston Stirling engine combined with a linear alternator, has sufficiently sinusoidal steady-state behavior to allow for phasor analysis of the forces and voltages acting in the system. A MATLAB graphical user interface (GUI) has been developed to interface with the Sage software for simplified use of the ASC model, calculation of forces, and automated creation of phasor diagrams. The GUI allows the user to vary convertor parameters while fixing different input or output parameters and observe the effect on the phasor diagrams or system performance. The new ASC model and GUI help create a better understanding of the relationship between the electrical component voltages and mechanical forces. This allows better insight into the overall convertor dynamics and performance.

  4. Application of third molar development and eruption models in estimating dental age in Malay sub-adults.

    PubMed

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc

    2015-08-01

    Third molar development (TMD) has been widely utilized as one of the radiographic methods for dental age estimation. By using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation for the individual method models and the combined model (TMD and TME) based on the classic regressions of multiple linear and principal component analysis. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and Olze were employed to stage the TMD and TME, respectively. The data were divided to develop three respective models based on the two regressions of multiple linear and principal component analysis. The trained models were then validated on the test sample and the accuracy of age prediction was compared between models. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² increased in the linear regressions of the combined model as compared to the individual models. An overall decrease in RMSE was detected for the combined model as compared to TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE, except in males. Dental age is therefore better predicted using the combined model in multiple linear regression models. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  5. Understanding software faults and their role in software reliability modeling

    NASA Technical Reports Server (NTRS)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analyses such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis.
There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
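
    The collinearity argument can be made concrete with a small principal components calculation on simulated, strongly interrelated metrics, as in the sketch below; the metric values and their relationships are invented for illustration.

```python
# Minimal illustration of the collinearity argument above: two strongly
# correlated metrics (simulated LOC and statement count) collapse onto one
# dominant principal component, whose orthogonal scores can then replace the
# raw metrics in a regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
loc = rng.integers(50, 5000, size=200).astype(float)    # lines of code
stmts = 0.8 * loc + rng.normal(0, 40, size=200)          # statement count, collinear
cyclomatic = rng.integers(1, 80, size=200).astype(float) # a roughly independent metric

metrics = np.column_stack([loc, stmts, cyclomatic])
Z = StandardScaler().fit_transform(metrics)
pca = PCA().fit(Z)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# Orthogonal component scores usable as regressors in place of the raw metrics
domain_scores = pca.transform(Z)[:, :2]
print(domain_scores.shape)
```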

  6. The Seismic Tool-Kit (STK): an open source software for seismology and signal processing.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique

    2016-04-01

    We present an open source software project (GNU public license), named STK: Seismic ToolKit, that is dedicated mainly to seismology and signal processing. The STK project, which started in 2007, is hosted by SourceForge.net and counts more than 19,500 downloads at the time of writing. The STK project is composed of two main branches. First, a graphical interface dedicated to signal processing in the SAC format (SAC_ASCII and SAC_BIN), where the signal can be plotted, zoomed, filtered, integrated, differentiated, etc. (a large variety of IIR and FIR filters is provided). The estimation of the spectral density of the signal is performed via the Fourier transform, with visualization of the Power Spectral Density (PSD) in linear or log scale, and also the evolutive time-frequency representation (or sonagram). The 3-component signals can also be processed for estimating their polarization properties, either for a given window or for evolutive windows along the time axis. This polarization analysis is useful for extracting polarized noise and for differentiating P waves, Rayleigh waves, Love waves, etc. Second, a panel of utility programs is provided for working in a terminal mode, with basic programs for computing azimuth and distance in spherical geometry, inter/auto-correlation, spectral density, time-frequency for an entire directory of signals, focal planes and main component axes, radiation patterns of P waves, polarization analysis of different waves (including noise), under/over-sampling of signals, cubic-spline smoothing, and linear/non-linear regression analysis of data sets. A MINimum library of Linear AlGebra (MIN-LINAG) is also provided for computing the main matrix processes, such as QR/QL decomposition, Cholesky solution of linear systems, finding eigenvalues/eigenvectors, and QR-solve/Eigen-solve of linear equation systems. STK is developed in C/C++, mainly under Linux OS, and it has also been partially implemented under MS-Windows. Useful links: http://sourceforge.net/projects/seismic-toolkit/ http://sourceforge.net/p/seismic-toolkit/wiki/browse_pages/

  7. Characterization of CDOM from urban waters in Northern-Northeastern China using excitation-emission matrix fluorescence and parallel factor analysis.

    PubMed

    Zhao, Ying; Song, Kaishan; Li, Sijia; Ma, Jianhang; Wen, Zhidan

    2016-08-01

    Chromophoric dissolved organic matter (CDOM) plays an important role in aquatic systems, but high concentrations of organic materials are considered pollutants. The fluorescent component characteristics of CDOM in urban waters sampled from Northern and Northeastern China were examined by excitation-emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC) to investigate the source and compositional changes of CDOM on both space and pollution levels. One humic-like (C1), one tryptophan-like component (C2), and one tyrosine-like component (C3) were identified by PARAFAC. Mean fluorescence intensities of the three CDOM components varied spatially and by pollution level in cities of Northern and Northeastern China during July-August, 2013 and 2014. Principal components analysis (PCA) was conducted to identify the relative distribution of all water samples. Cluster analysis (CA) was also used to categorize the samples into groups of similar pollution levels within a study area. Strong positive linear relationships were revealed between the CDOM absorption coefficients a(254) (R² = 0.89, p < 0.01); a(355) (R² = 0.94, p < 0.01); and the fluorescence intensity (Fmax) for the humic-like C1 component. A positive linear relationship (R² = 0.77) was also exhibited between dissolved organic carbon (DOC) and the Fmax for the humic-like C1 component, but a relatively weak correlation (R² = 0.56) was detected between DOC and the Fmax for the tryptophan-like component (C2). A strong positive correlation was observed between the Fmax for the tryptophan-like component (C2) and total nitrogen (TN) (R² = 0.78), but moderate correlations were observed with ammonium-N (NH4-N) (R² = 0.68), and chemical oxygen demand (CODMn) (R² = 0.52). Therefore, the fluorescence intensities of CDOM components can be applied to monitor water quality in real time compared to that of traditional approaches. These results demonstrate that EEM-PARAFAC is useful to evaluate the dynamics of CDOM fluorescent components in urban waters from Northern and Northeastern China and this method has potential applications for monitoring urban water quality in different regions with various hydrological conditions and pollution levels.

  8. A Modern Picture of Barred Galaxy Dynamics

    NASA Astrophysics Data System (ADS)

    Petersen, Michael; Weinberg, Martin; Katz, Neal

    2018-01-01

    Observations of disk galaxies suggest that bars are responsible for altering global galaxy parameters (e.g. structures, gas fraction, star formation rate). The canonical understanding of the mechanisms underpinning bar-driven secular dynamics in disk galaxies has been largely built upon the analysis of linear theory, despite galactic bars being clearly demonstrated to be nonlinear phenomena in n-body simulations. We present simulations of barred Milky Way-like galaxy models designed to elucidate nonlinear barred galaxy dynamics. We have developed two new methodologies for analyzing n-body simulations that combine the strengths of powerful analytic linear theory and brute-force simulation analysis: orbit family identification and multicomponent torque analysis. The software will be offered publicly to the community for their own simulation analysis. The orbit classifier reveals that the details of kinematic components in galactic disks (e.g. the bar, bulge, thin disk, and thick disk components) are powerful discriminators of evolutionary paradigms (i.e. violent instabilities and secular evolution) as well as the basic parameters of the dark matter halo (mass distribution, angular momentum distribution). Multicomponent torque analysis provides a thorough accounting of the transfer of angular momentum between orbits, global patterns, and distinct components in order to better explain the underlying physics which govern the secular evolution of barred disk galaxies. Using these methodologies, we are able to identify the successes and failures of linear theory and traditional n-body simulations en route to a detailed understanding of the control bars exhibit over secular evolution in galaxies. We present explanations for observed physical and velocity structures in observations of barred galaxies alongside predictions for how structures will vary with dynamical properties from galaxy to galaxy as well as over the lifetime of a galaxy, finding that the transfer of angular momentum through previously unidentified channels can more fully explain the observed dynamics.

  9. Least-dependent-component analysis based on mutual information

    NASA Astrophysics Data System (ADS)

    Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter

    2004-12-01

    We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand, it has the advantage, compared to other implementations of “independent” component analysis (ICA), some of which are based on crude approximations for MI, that the numerical values of the MI can be used for (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output by comparing the pairwise MIs with those of remixed components; and (iii) clustering the output according to the residual interdependencies. For the MI estimator, we use a recently proposed k-nearest-neighbor-based algorithm. For time sequences, we combine this with delay embedding, in order to take into account nontrivial time correlations. After several tests with artificial data, we apply the resulting MILCA (mutual-information-based least dependent component analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.

  10. Virtual directions in paleomagnetism: A global and rapid approach to evaluate the NRM components.

    NASA Astrophysics Data System (ADS)

    Ramón, Maria J.; Pueyo, Emilio L.; Oliva-Urcia, Belén; Larrasoaña, Juan C.

    2017-02-01

    We introduce a method and software to process demagnetization data for a rapid and integrative estimation of characteristic remanent magnetization (ChRM) components. The virtual directions (VIDI) of a paleomagnetic site are "all" possible directions that can be calculated from a given demagnetization routine of "n" steps (with m being the number of specimens in the site). If the ChRM can be defined for a site, it will be represented in the VIDI set. Directions can be calculated for successive steps using principal component analysis, both anchored to the origin (resultant virtual directions RVD; m * (n²+n)/2) and not anchored (difference virtual directions DVD; m * (n²-n)/2). The number of directions per specimen (of order n²) is very large and will enhance all ChRM components, with noisy regions where two components were fitted together (mixing their unblocking intervals). In the same way, resultant and difference virtual circles (RVC, DVC) are calculated. Virtual directions and circles are a global and objective approach to unravel different natural remanent magnetization (NRM) components for a paleomagnetic site without any assumption. To better constrain the stable components, some filters can be applied, such as establishing an upper boundary for the MAD, removing samples with anomalous intensities, or stating a minimum number of demagnetization steps (objective filters), or selecting a given unblocking interval (subjective, but based on expertise). On the other hand, the VPD program also allows the application of standard approaches (classic PCA fitting of directions and circles) and other ancillary methods (stacking routine, linearity spectrum analysis), giving an objective, global and robust idea of the demagnetization structure with minimal assumptions. Application of the VIDI method to natural cases (outcrops in the Pyrenees and u-channel data from a Roman dam infill in northern Spain) and their comparison to other approaches (classic end-point, demagnetization circle analysis, stacking routine and linearity spectrum analysis) allows validation of this technique. VIDI is a global approach and it is especially useful for large data sets and rapid estimation of the NRM components.

  11. Identifying Plant Part Composition of Forest Logging Residue Using Infrared Spectral Data and Linear Discriminant Analysis

    PubMed Central

    Acquah, Gifty E.; Via, Brian K.; Billor, Nedret; Fasina, Oladiran O.; Eckhardt, Lori G.

    2016-01-01

    As new markets, technologies and economies evolve in the low carbon bioeconomy, forest logging residue, a largely untapped renewable resource, will play a vital role. The feedstock can however be variable depending on plant species and plant part component. This heterogeneity can influence the physical, chemical and thermochemical properties of the material, and thus the final yield and quality of products. Although it is challenging to control the compositional variability of a batch of feedstock, it is feasible to monitor this heterogeneity and make the necessary changes in process parameters. Such a system will be a first step towards optimization, quality assurance and cost-effectiveness of processes in the emerging biofuel/chemical industry. The objective of this study was therefore to qualitatively classify forest logging residue made up of different plant parts using both near infrared spectroscopy (NIRS) and Fourier transform infrared spectroscopy (FTIRS) together with linear discriminant analysis (LDA). Forest logging residue harvested from several Pinus taeda (loblolly pine) plantations in Alabama, USA, was classified into three plant part components: clean wood, wood and bark, and slash (i.e., limbs and foliage). Five-fold cross-validated linear discriminant functions had classification accuracies of over 96% for both NIRS and FTIRS based models. An extra factor/principal component (PC) was, however, needed to achieve this in FTIRS modeling. Analysis of factor loadings of both NIR and FTIR spectra showed that the statistically different amounts of cellulose in the three plant part components of logging residue contributed to their initial separation. This study demonstrated that NIR or FTIR spectroscopy coupled with PCA and LDA has the potential to be used as a high throughput tool in classifying the plant part makeup of a batch of forest logging residue feedstock. Thus, NIR/FTIR could be employed as a tool to rapidly probe/monitor the variability of forest biomass so that the appropriate online adjustments to parameters can be made in time to ensure process optimization and product quality. PMID:27618901

  12. Trueness, Precision, and Detectability for Sampling and Analysis of Organic Species in Airborne Particulate Matter

    EPA Science Inventory

    Recovery, precision, limits of detection and quantitation, blank levels, calibration linearity, and agreement with certified reference materials were determined for two classes of organic components of airborne particulate matter, polycyclic aromatic hydrocarbons and hopanes usin...

  13. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    Summary We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10⁸ simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  14. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10⁸ simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  15. A systematic study on the influencing parameters and improvement of quantitative analysis of multi-component with single marker method using notoginseng as research subject.

    PubMed

    Wang, Chao-Qun; Jia, Xiu-Hong; Zhu, Shu; Komatsu, Katsuko; Wang, Xuan; Cai, Shao-Qing

    2015-03-01

    A new quantitative analysis of multi-component with single marker (QAMS) method for 11 saponins (ginsenosides Rg1, Rb1, Rg2, Rh1, Rf, Re and Rd; notoginsenosides R1, R4, Fa and K) in notoginseng was established, in which 6 of these saponins were individually used as internal reference substances to investigate the influences of chemical structure, concentrations of the quantitative components, and purities of the standard substances on the accuracy of the QAMS method. The results showed that the concentration of the analyte in the sample solution was the major influencing parameter, whereas the other parameters had minimal influence on the accuracy of the QAMS method. A new method for calculating the relative correction factors by linear regression was established (linear regression method), which decreased the differences between the QAMS method and the external standard method from 1.20%±0.02% - 23.29%±3.23% to 0.10%±0.09% - 8.84%±2.85% in comparison with the previous method. The differences between the external standard method and the QAMS method using relative correction factors calculated by the linear regression method were below 5% in the quantitative determination of Rg1, Re, R1, Rd and Fa in 24 notoginseng samples and of Rb1 in 21 notoginseng samples, and mostly below 10% in the quantitative determination of Rf, Rg2, R4 and N-K in all 24 notoginseng samples (the differences for these 4 constituents were bigger because their contents were lower). The results indicated that the contents assayed by the new QAMS method could be considered as accurate as those assayed by the external standard method. In addition, a method for determining applicable concentration ranges of the quantitative components assayed by the QAMS method was established for the first time, which could ensure its high accuracy and could be applied to QAMS methods of other TCMs. The present study demonstrated the practicability of the QAMS method for the quantitative analysis of multiple components and the quality control of TCMs and TCM prescriptions. Copyright © 2014 Elsevier B.V. All rights reserved.
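
    One way to compute a relative correction factor from calibration lines, in the spirit of the linear regression method described above, is sketched below. The peak areas, slopes and sample value are invented for illustration and do not reproduce the paper's exact procedure.

```python
# Hedged sketch: estimate a relative correction factor (RCF) from the slopes
# of two calibration lines (internal marker vs target saponin), then quantify
# the target using only the marker's calibration. All numbers are invented.
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])              # ug/mL standards
area_marker = 120.0 * conc + np.array([2, -3, 4, -1, 2])     # marker peak areas
area_target = 95.0 * conc + np.array([-1, 2, -2, 3, 1])      # target peak areas

slope_marker = np.polyfit(conc, area_marker, 1)[0]
slope_target = np.polyfit(conc, area_target, 1)[0]
rcf = slope_marker / slope_target                            # relative correction factor

# Quantify the target in a sample from its peak area and the marker's slope
area_target_sample = 3050.0
c_target = rcf * area_target_sample / slope_marker
print("RCF = %.3f, estimated target concentration = %.1f ug/mL" % (rcf, c_target))
```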

  16. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  17. Circuit-based versus full-wave modelling of active microwave circuits

    NASA Astrophysics Data System (ADS)

    Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.

    2018-03-01

    Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of measured parameters de-embedding, appropriate modelling of discrete components and giving specific recipes for good modelling practices.

  18. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Kmeans-ICA based automatic method for ocular artifacts removal in a motorimagery classification.

    PubMed

    Bou Assi, Elie; Rihana, Sandy; Sawan, Mohamad

    2014-01-01

    Electroencephalogram (EEG) recordings are used as inputs of a motor imagery based BCI system. Eye blinks contaminate the spectral frequency of the EEG signals. Independent Component Analysis (ICA) has already been proven useful for removing these artifacts, whose frequency band overlaps with the EEG of interest. However, previously developed ICA methods use a reference lead such as the ElectroOculoGram (EOG) to identify the ocular artifact components. In this study, artifactual components were identified using an adaptive thresholding by means of Kmeans clustering. The denoised EEG signals were fed into a feature extraction algorithm extracting the band power, the coherence and the phase locking value, and then into a linear discriminant analysis classifier for motor imagery classification.

  20. A practically unconditionally gradient stable scheme for the N-component Cahn-Hilliard system

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Geun; Choi, Jeong-Whan; Kim, Junseok

    2012-02-01

    We present a practically unconditionally gradient stable conservative nonlinear numerical scheme for the N-component Cahn-Hilliard system modeling the phase separation of an N-component mixture. The scheme is based on a nonlinear splitting method and is solved by an efficient and accurate nonlinear multigrid method. The scheme allows us to convert the N-component Cahn-Hilliard system into a system of N-1 binary Cahn-Hilliard equations and significantly reduces the required computer memory and CPU time. We observe that our numerical solutions are consistent with the linear stability analysis results. We also demonstrate the efficiency of the proposed scheme with various numerical experiments.

  1. Linear and angular control of circular walking in healthy older adults and subjects with cerebellar ataxia.

    PubMed

    Goodworth, Adam D; Paquette, Caroline; Jones, Geoffrey Melvill; Block, Edward W; Fletcher, William A; Hu, Bin; Horak, Fay B

    2012-05-01

    Linear and angular control of trunk and leg motion during curvilinear navigation was investigated in subjects with cerebellar ataxia and age-matched control subjects. Subjects walked with eyes open around a 1.2-m circle. The relationship of linear to angular motion was quantified by determining the ratios of trunk linear velocity to trunk angular velocity and foot linear position to foot angular position. Errors in walking radius (the ratio of linear to angular motion) also were quantified continuously during the circular walk. Relative variability of linear and angular measures was compared using coefficients of variation (CoV). Patterns of variability were compared using power spectral analysis for the trunk and auto-covariance analysis for the feet. Errors in radius were significantly increased in patients with cerebellar damage as compared to controls. Cerebellar subjects had significantly larger CoV of feet and trunk in angular, but not linear, motion. Control subjects also showed larger CoV in angular compared to linear motion of the feet and trunk. Angular and linear components of stepping differed in that angular, but not linear, foot placement had a negative correlation from one stride to the next. Thus, walking in a circle was associated with more, and a different type of, variability in angular compared to linear motion. Results are consistent with increased difficulty of, and role of the cerebellum in, control of angular trunk and foot motion for curvilinear locomotion.

  2. Interpretation of a compositional time series

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; van den Boogaart, K. G.

    2012-04-01

    Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), then linear operations suffer from several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on a raw scale, but after a log-ratio transformation (Aitchison, 1986: The statistical analysis of compositional data. Chapman and Hall). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates allows any sort of multivariate analysis to be applied to a log-ratio transformed composition, as long as the transformation is invertible. This principle applies fully to time series analysis. We will discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which allows the results to be expressed as D(D-1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA. In this data set, the proportion of annual precipitation falling in winter, spring, summer and autumn is considered a 4-component time series. Three invertible log-ratios are defined for calculations, balancing rainfall in autumn vs. winter, in summer vs. spring, and in autumn-winter vs. spring-summer. Results suggest a 2-year correlation range, and a certain oscillatory behaviour in the last balance, which does not occur in the other two.
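
    A minimal example of the "working in coordinates" principle, using an additive log-ratio transform and its inverse on a made-up four-season composition, is given below; the numbers are not the rain-gauge data.

```python
# Small sketch of the "working in coordinates" principle: transform a 4-part
# seasonal composition with additive log-ratios, analyse in that space, and
# invert back to proportions. The precipitation shares are made up.
import numpy as np

def alr(parts):
    """Additive log-ratio with respect to the last part (invertible)."""
    parts = np.asarray(parts, dtype=float)
    return np.log(parts[..., :-1] / parts[..., -1:])

def alr_inv(coords):
    padded = np.concatenate([coords, np.zeros(coords.shape[:-1] + (1,))], axis=-1)
    expanded = np.exp(padded)
    return expanded / expanded.sum(axis=-1, keepdims=True)

# winter, spring, summer, autumn shares of annual precipitation (one station)
comp = np.array([[0.30, 0.25, 0.20, 0.25],
                 [0.28, 0.27, 0.22, 0.23],
                 [0.33, 0.24, 0.18, 0.25]])
coords = alr(comp)                   # safe space for correlations, trends, ARMA...
print(np.round(alr_inv(coords), 2))  # round-trips back to the original shares
```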

  3. Nonlinear vs. linear biasing in Trp-cage folding simulations

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-01

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  4. Nonlinear vs. linear biasing in Trp-cage folding simulations.

    PubMed

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  5. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  6. Linear analysis of a force reflective teleoperator

    NASA Technical Reports Server (NTRS)

    Biggers, Klaus B.; Jacobsen, Stephen C.; Davis, Clark C.

    1989-01-01

    Complex force reflective teleoperation systems are often very difficult to analyze due to the large number of components and control loops involved. One mode of a force reflective teleoperator is described. An analysis of the performance of the system based on a linear analysis of the general full order model is presented. Reduced order models are derived and correlated with the full order models. Basic effects of force feedback and position feedback are examined, and the effects of time delays between the master and slave are studied. The results show that with symmetrical position-position control of teleoperators, a basic trade-off must be made between the intersystem stiffness of the teleoperator and the impedance felt by the operator in free space.

  7. Linear dynamic coupling in geared rotor systems

    NASA Technical Reports Server (NTRS)

    David, J. W.; Mitchell, L. D.

    1986-01-01

    The effects of high-frequency oscillations caused by the gear mesh on components of a geared system that can be modeled as rigid discs are analyzed using linear dynamic coupling terms. The coupled, nonlinear equations of motion for a disc attached to a rotating shaft are presented. The results of a trial problem analysis show that the inclusion of the linear dynamic coupling terms can produce significant changes in the predicted response of geared rotor systems, and that the produced sideband responses are greater than the unbalanced response. The method is useful in designing gear drives for heavy-lift helicopters, industrial speed reducers, naval propulsion systems, and heavy off-road equipment.

  8. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computation of turbine engine hot section components. The general framework of the variational formulation and the solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for the quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution, and its details are presented. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of strain and to project the element discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.

  9. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA-assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator, with a significantly more expensive optical encoder used as a comparison.
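    A minimal sketch of the PCA-then-ANN mapping idea using scikit-learn; the 9-sensor geometry, noise level and network size below are illustrative assumptions with synthetic data, not the authors' setup.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline

      # Synthetic stand-in for a 9-sensor magnetic field array: each row holds the
      # 9 simultaneous sensor readings, pos is the (scalar) actuator position in mm.
      rng = np.random.default_rng(1)
      pos = rng.uniform(0.0, 50.0, size=500)
      sensors = np.column_stack([np.exp(-((pos - 5.0 * k) / 10.0) ** 2) for k in range(9)])
      sensors += 0.01 * rng.standard_normal(sensors.shape)      # Gaussian noise corruption

      # PCA acts as the pseudo-linear filter reducing the 9-D output space before ANN mapping.
      model = make_pipeline(PCA(n_components=3),
                            MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
      model.fit(sensors, pos)
      rmse = np.sqrt(np.mean((model.predict(sensors) - pos) ** 2))
      print(f"training RMSE: {rmse:.3f} mm")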

  10. HgCdTe APD-based linear-mode photon counting components and ladar receivers

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.

    2011-05-01

    Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger mode (GM) photon counting, including the absence of after-pulsing, nanosecond pulse-to-pulse temporal resolution, and robust operation in the presence of high-density obscurants or variable-reflectivity objects. For this reason Raytheon has developed, and previously reported on, unique linear mode photon counting components and modules based on combining advanced APDs and advanced high-gain circuits. By using HgCdTe APDs we enable Poisson-number-preserving photon counting. Key metrics of photon counting technology are dark count rate and detection probability. In this paper we report on a performance breakthrough resulting from improvements in design, process and readout operation, enabling a >10x reduction in dark count rate to ~10,000 cps and a >10^4x reduction in surface dark current, which enables long 10 ms integration times. Our analysis of key dark current contributors suggests that a substantial further reduction in DCR to ~1/sec or less can be achieved by optimizing wavelength, operating voltage and temperature.

  11. Systematic Analysis of Absorbed Anti-Inflammatory Constituents and Metabolites of Sarcandra glabra in Rat Plasma Using Ultra-High-Pressure Liquid Chromatography Coupled with Linear Trap Quadrupole Orbitrap Mass Spectrometry

    PubMed Central

    Li, Xiong; Zhao, Jin; Liu, Jianxing; Li, Geng; Zhao, Ya; Zeng, Xing

    2016-01-01

    Ultra-high-pressure liquid chromatography (UHPLC) was coupled with linear ion trap quadrupole Orbitrap mass spectrometry (LTQ-Orbitrap) and was used for the first time to systematically analyze the absorbed components and metabolites in rat plasma after oral administration of the water extract of Sarcandra glabra. This extract is a well-known Chinese herbal medicine for the treatment of inflammation and immunity related diseases. The anti-inflammatory activities of the absorbed components were evaluated by measuring nitric oxide (NO) production and proinflammatory gene expression in lipopolysaccharide (LPS)-stimulated murine RAW 264.7 macrophages. As a result, 54 components in Sarcandra glabra were detected in dosed rat plasma, and 36 of them were positively identified. Moreover, 23 metabolites were characterized and their origins were traced. Furthermore, 20 of the 24 studied components showed anti-inflammatory activities. These results provide evidence that this method efficiently detected constituents in plasma based on the anti-inflammatory mechanism of multiple components and would be a useful technique for screening multiple targets in natural medicine research. PMID:26974321

  12. [An ultra-high-pressure liquid chromatography/linear ion trap-Orbitrap mass spectrometry method coupled with a diagnostic fragment ions-searching-based strategy for rapid identification and characterization of chemical components in Polygonum cuspidatum].

    PubMed

    Pan, Zhiran; Liang, Hailong; Liang, Chabhufi; Xu, Wen

    2015-01-01

    A method for the qualitative analysis of constituents in Polygonum cuspidatum by ultra-high-pressure liquid chromatography coupled with linear ion trap-Orbitrap mass spectrometry (UHPLC-LTQ-Orbitrap MS) has been established. The methanol extract of Polygonum cuspidatum was separated on a Waters UPLC C18 column using an acetonitrile-water (containing formic acid) eluting system and detected by an LTQ-Orbitrap hybrid mass spectrometer in negative mode. The targeted components were further fragmented in the LTQ and high-accuracy data were acquired by the Orbitrap MS. The summarized fragmentation pathways of typical reference components and a diagnostic fragment ions-searching-based strategy were used for detection and identification of the main phenolic components in Polygonum cuspidatum. Other clues such as the nitrogen rule, the even-electron rule, the degree-of-unsaturation rule and isotopic peak data were included for the structural elucidation as well. The whole analytical procedure took less than 10 min and more than 30 components were identified or tentatively identified. This method is helpful for further phytochemical research and quality control of Polygonum cuspidatum and related preparations.

  13. Polarizabilities of Impurity Doped Quantum Dots Under Pulsed Field: Role of Multiplicative White Noise

    NASA Astrophysics Data System (ADS)

    Saha, Surajit; Ghosh, Manas

    2016-02-01

    We perform a rigorous analysis of the profiles of a few diagonal and off-diagonal components of the linear ( α xx , α yy , α xy , and α yx ), first nonlinear ( β xxx , β yyy , β xyy , and β yxx ), and second nonlinear ( γ xxxx , γ yyyy , γ xxyy , and γ yyxx ) polarizabilities of quantum dots exposed to an external pulsed field. The simultaneous presence of multiplicative white noise has also been taken into account. The quantum dot contains a dopant represented by a Gaussian potential. The number of pulses and the dopant location have been found to shape the said profiles through their interplay. Moreover, a variation in the noise strength also contributes noticeably to the design of the profiles of the above polarizability components. In general, the off-diagonal components have been found to be somewhat more responsive to a variation of noise strength. However, we have found some exception to the above for the off-diagonal β yxx component. The study projects some pathways for achieving stable, enhanced, and often maximized output of linear and nonlinear polarizabilities of doped quantum dots driven by multiplicative noise.

  14. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

    In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory, based upon an estimate of the linear component of the non-linear density field, significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter-free; it is independent of statistical-geometrical optimization, and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true one from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior to the lognormal model in the low-density regime.

  15. Binding affinity toward human prion protein of some anti-prion compounds - Assessment based on QSAR modeling, molecular docking and non-parametric ranking.

    PubMed

    Kovačević, Strahinja; Karadžić, Milica; Podunavac-Kuzmanović, Sanja; Jevrić, Lidija

    2018-01-01

    The present study is based on the quantitative structure-activity relationship (QSAR) analysis of binding affinity toward human prion protein (huPrP(C)) of quinacrine, pyridine dicarbonitrile, diphenylthiazole and diphenyloxazole analogs applying different linear and non-linear chemometric regression techniques, including univariate linear regression, multiple linear regression, partial least squares regression and artificial neural networks. The QSAR analysis distinguished molecular lipophilicity as an important factor that contributes to the binding affinity. Principal component analysis was used in order to reveal similarities or dissimilarities among the studied compounds. The analysis of in silico absorption, distribution, metabolism, excretion and toxicity (ADMET) parameters was conducted. The ranking of the studied analogs on the basis of their ADMET parameters was done applying the sum of ranking differences, as a relatively new chemometric method. The main aim of the study was to reveal the most important molecular features whose changes lead to the changes in the binding affinities of the studied compounds. Another point of view on the binding affinity of the most promising analogs was established by application of molecular docking analysis. The results of the molecular docking were proven to be in agreement with the experimental outcome. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Comparison of three-dimensional fluorescence analysis methods for predicting formation of trihalomethanes and haloacetic acids.

    PubMed

    Peleato, Nicolás M; Andrews, Robert C

    2015-01-01

    This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-h disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance, as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error on cross-folded test sets (THMs: 43.7 (μg/L)(2), HAAs: 233.3 (μg/L)(2)). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to those of traditional NOM surrogates as well as fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix-principal component analysis as a suitable NOM indicator in predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.
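    A compact illustration of multi-linear modelling on principal component scores, using hypothetical fluorescence and THM data; the matrix sizes, noise levels and relationship below are invented for the example, not the study's data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      # Hypothetical data: rows are water samples, columns are unfolded
      # fluorescence excitation-emission intensities; thm is measured THM formation (ug/L).
      rng = np.random.default_rng(2)
      eem = rng.random((60, 200))
      thm = 40 + 3.0 * eem[:, :20].sum(axis=1) + rng.normal(0, 5, 60)

      # Multi-linear model built on principal component scores of the fluorescence data.
      scores = PCA(n_components=5).fit_transform(eem)
      mse = -cross_val_score(LinearRegression(), scores, thm,
                             scoring="neg_mean_squared_error", cv=5)
      print(f"cross-validated MSE: {mse.mean():.1f} (ug/L)^2")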

  17. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.

  18. Assessing the Liquidity of Firms: Robust Neural Network Regression as an Alternative to the Current Ratio

    NASA Astrophysics Data System (ADS)

    de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia

    Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least absolute deviation-based) neural networks in measuring the liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios and neural networks) is compared. An empirical analysis is conducted on a representative database from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved upon by the flexible non-linear structures provided by neural networks.

  19. Application of power spectrum, cepstrum, higher order spectrum and neural network analyses for induction motor fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liang, B.; Iwnicki, S. D.; Zhao, Y.

    2013-08-01

    The power spectrum is defined as the square of the magnitude of the Fourier transform (FT) of a signal. The advantage of FT analysis is that it allows the decomposition of a signal into individual periodic frequency components and establishes the relative intensity of each component. It is the most commonly used signal processing technique today. If the same principle is applied to detect periodic components in a Fourier spectrum, the process is called cepstrum analysis. Cepstrum analysis is a very useful tool for detecting families of harmonics with uniform spacing, or the families of sidebands commonly found in gearbox, bearing and engine vibration fault spectra. Higher order spectra (HOS), also known as polyspectra, consist of higher-order moments of spectra which are able to detect non-linear interactions between frequency components. The most commonly used HOS is the bispectrum. The bispectrum is a third-order frequency domain measure, which contains information that standard power spectral analysis techniques cannot provide. It is well known that neural networks can represent complex non-linear relationships, and therefore they are extremely useful for fault identification and classification. This paper presents an application of power spectrum, cepstrum, bispectrum and neural network analyses for fault pattern extraction in induction motors. The potential for using the power spectrum, cepstrum, bispectrum and neural network as a means of differentiating between healthy and faulty induction motor operation is examined. A series of experiments is done and the advantages and disadvantages of the methods are discussed. It has been found that a combination of power spectrum, cepstrum and bispectrum plus neural network analyses could be a very useful tool for condition monitoring and fault diagnosis of induction motors.
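    A small NumPy sketch of the power spectrum and (real) cepstrum computations described above, applied to a synthetic signal with uniformly spaced sidebands; all signal parameters are illustrative.

      import numpy as np

      # Synthetic vibration-like signal: a 1 kHz carrier amplitude-modulated at 30 Hz,
      # which produces a family of sidebands spaced 30 Hz apart.
      fs = 10_000.0
      t = np.arange(0, 1.0, 1.0 / fs)
      x = (1.0 + 0.5 * np.cos(2 * np.pi * 30 * t)) * np.cos(2 * np.pi * 1000 * t)
      x += 0.1 * np.random.default_rng(3).standard_normal(t.size)

      # Power spectrum: squared magnitude of the Fourier transform.
      spectrum = np.abs(np.fft.rfft(x)) ** 2

      # Real cepstrum: inverse FT of the log power spectrum; a peak in the "quefrency"
      # domain reveals uniformly spaced spectral components (here expected near 1/30 s).
      cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
      quefrency = np.arange(cepstrum.size) / fs
      lo = 50                                             # skip the low-quefrency envelope
      peak = quefrency[lo + np.argmax(cepstrum[lo:cepstrum.size // 2])]
      print(f"dominant quefrency: {peak * 1e3:.1f} ms")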

  20. Computer-Aided Design of Low-Noise Microwave Circuits

    NASA Astrophysics Data System (ADS)

    Wedge, Scott William

    1991-02-01

    Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.

  1. Application of kernel principal component analysis and computational machine learning to exploration of metabolites strongly associated with diet.

    PubMed

    Shiokawa, Yuka; Date, Yasuhiro; Kikuchi, Jun

    2018-02-21

    Computer-based technological innovation provides advancements in sophisticated and diverse analytical instruments, enabling massive amounts of data collection with relative ease. This is accompanied by a fast-growing demand for technological progress in data mining methods for analysis of big data derived from chemical and biological systems. From this perspective, use of a general "linear" multivariate analysis alone limits interpretations due to "non-linear" variations in metabolic data from living organisms. Here we describe a kernel principal component analysis (KPCA)-incorporated analytical approach for extracting useful information from metabolic profiling data. To overcome the limitation of important variable (metabolite) determinations, we incorporated a random forest conditional variable importance measure into our KPCA-based analytical approach to demonstrate the relative importance of metabolites. Using a market basket analysis, hippurate, the most important variable detected in the importance measure, was associated with high levels of some vitamins and minerals present in foods eaten the previous day, suggesting a relationship between increased hippurate and intake of a wide variety of vegetables and fruits. Therefore, the KPCA-incorporated analytical approach described herein enabled us to capture input-output responses, and should be useful not only for metabolic profiling but also for profiling in other areas of biological and environmental systems.
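    An approximate sketch of the KPCA-plus-importance-measure idea using scikit-learn; note that the impurity-based random-forest importance used here is only a stand-in for the conditional variable importance measure mentioned in the abstract, and the data and kernel settings are invented.

      import numpy as np
      from sklearn.decomposition import KernelPCA
      from sklearn.ensemble import RandomForestRegressor

      # Hypothetical metabolite profile matrix (samples x metabolites).
      rng = np.random.default_rng(4)
      X = rng.random((80, 30))

      # Kernel PCA captures non-linear structure that plain (linear) PCA would miss.
      kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1.0)
      scores = kpca.fit_transform(X)

      # A random-forest importance measure ranks metabolites by how strongly they
      # explain the first kernel principal component.
      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, scores[:, 0])
      top = np.argsort(rf.feature_importances_)[::-1][:5]
      print("metabolites most associated with the first kernel PC:", top)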

  2. Simultaneous analysis of 11 main active components in Cirsium setosum based on HPLC-ESI-MS/MS and combined with statistical methods.

    PubMed

    Sun, Qian; Chang, Lu; Ren, Yanping; Cao, Liang; Sun, Yingguang; Du, Yingfeng; Shi, Xiaowei; Wang, Qiao; Zhang, Lantong

    2012-11-01

    A novel method based on high-performance liquid chromatography coupled with electrospray ionization tandem mass spectrometry was developed for simultaneous determination of the 11 major active components including ten flavonoids and one phenolic acid in Cirsium setosum. Separation was performed on a reversed-phase C(18) column with gradient elution of methanol and 0.1‰ acetic acid (v/v). The identification and quantification of the analytes were achieved on a hybrid quadrupole linear ion trap mass spectrometer. Multiple-reaction monitoring scanning was employed for quantification with switching electrospray ion source polarity between positive and negative modes in a single run. Full validation of the assay was carried out including linearity, precision, accuracy, stability, limits of detection and quantification. The results demonstrated that the method developed was reliable, rapid, and specific. The 25 batches of C. setosum samples from different sources were first determined using the developed method and the total contents of 11 analytes ranged from 1717.460 to 23028.258 μg/g. Among them, the content of linarin was highest, and its mean value was 7340.967 μg/g. Principal component analysis and hierarchical clustering analysis were performed to differentiate and classify the samples, which is helpful for comprehensive evaluation of the quality of C. setosum. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. MECHANICAL PROPERTIES OF BLENDS OF PAMAM DENDRIMERS WITH POLY(VINYL CHLORIDE) AND POLY(VINYL ACETATE)

    EPA Science Inventory

    Hybrid blends of poly(amidoamine) PAMAM dendrimers with two linear high polymers, poly(vinyl chloride), PVC, and poly(vinyl acetate), PVAc, are reported. The interaction between the blend components was studied using dynamic mechanical analysis, xenon nuclear magnetic resonacne ...

  4. Linear unmixing of multidate hyperspectral imagery for crop yield estimation

    USDA-ARS?s Scientific Manuscript database

    In this paper, we have evaluated an unsupervised unmixing approach, vertex component analysis (VCA), for the application of crop yield estimation. The results show that abundance maps of the vegetation extracted by the approach are strongly correlated to the yield data (the correlation coefficients ...

  5. Autonomic cardiovascular modulation with three different anesthetic strategies during neurosurgical procedures.

    PubMed

    Guzzetti, S; Bassani, T; Latini, R; Masson, S; Barlera, S; Citerio, G; Porta, A

    2015-01-01

    Autonomic cardiovascular modulation during surgery might be affected by different anesthetic strategies. The aim of the present study was to assess autonomic control under three different anesthetic strategies in the course of neurosurgical procedures by linear and non-linear analysis of two cardiovascular signals. Heart rate (EKG-RR intervals) and systolic arterial pressure (SAP) signals were analyzed in 93 patients during elective neurosurgical procedures at fixed points: anesthetic induction, dura mater opening, first and second hour of surgery, dura mater and skin closure. Patients were randomly assigned to three anesthetic strategies: sevoflurane+fentanyl (S-F), sevoflurane+remifentanil (S-R) and propofol+remifentanil (P-R). All three anesthetic strategies were characterized by a reduction of RR and SAP variability. A more active autonomic sympathetic modulation, assessed as the ratio of low- to high-frequency spectral components of RR variability (LF/HF), was present in the P-R group vs. the S-R group. This was confirmed by non-linear symbolic analysis of the RR series and by SAP variability analysis. In addition, increased parasympathetic modulation was suggested by symbolic analysis of the RR series during the second hour of surgery in the S-F group. Despite an important reduction of cardiovascular signal variability, the analysis of RR and SAP signals was able to detect information about autonomic control during anesthesia. Symbolic analysis (non-linear) seems able to highlight differences in both sympathetic (slow) and vagal (fast) modulation among anesthetics, while spectral analysis (linear) underlines the same differences but only in terms of the balance between the two neural control systems.
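    For reference, the linear (spectral) part of such an analysis, the LF/HF ratio of RR variability, can be sketched with SciPy as follows; the RR series, band limits and resampling rate below are illustrative choices rather than the study's processing pipeline.

      import numpy as np
      from scipy.signal import welch

      # Illustrative RR-interval series (s), assumed already resampled onto a uniform 4 Hz grid,
      # with an LF (0.1 Hz) and an HF (0.25 Hz) oscillation plus noise.
      fs = 4.0
      t = np.arange(0, 300, 1.0 / fs)
      rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * t) + 0.02 * np.sin(2 * np.pi * 0.25 * t)
      rr += 0.005 * np.random.default_rng(5).standard_normal(t.size)

      f, psd = welch(rr - rr.mean(), fs=fs, nperseg=256)
      lf = psd[(f >= 0.04) & (f < 0.15)].sum()    # low-frequency band power
      hf = psd[(f >= 0.15) & (f < 0.40)].sum()    # high-frequency band power
      print(f"LF/HF ratio: {lf / hf:.2f}")        # >1 suggests relatively stronger sympathetic modulation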

  6. Time-frequency analysis of neuronal populations with instantaneous resolution based on noise-assisted multivariate empirical mode decomposition.

    PubMed

    Alegre-Cortés, J; Soto-Sánchez, C; Pizá, Á G; Albarracín, A L; Farfán, F D; Felice, C J; Fernández, E

    2016-07-15

    Linear analysis has classically provided powerful tools for understanding the behavior of neural populations, but neuronal responses to real-world stimulation are nonlinear under some conditions, and many neuronal components demonstrate strong nonlinear behavior. In spite of this, the temporal and frequency dynamics of neural populations under sensory stimulation have usually been analyzed with linear approaches. In this paper, we propose the use of Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD), a data-driven, template-free algorithm, plus the Hilbert transform as a suitable tool for analyzing population oscillatory dynamics in a multi-dimensional space with instantaneous frequency (IF) resolution. The proposed approach was able to extract oscillatory information from neurophysiological data of deep vibrissal nerve and visual cortex multiunit recordings that was not evidenced using linear approaches with fixed bases such as Fourier analysis. Texture discrimination performance was increased when NA-MEMD plus the Hilbert transform was implemented, compared to linear techniques, and cortical oscillatory population activity was analyzed with increased time-frequency resolution. Noise-Assisted Multivariate Empirical Mode Decomposition plus the Hilbert transform is an improved method to analyze neuronal population oscillatory dynamics, overcoming the linear and stationary assumptions of classical methods. Copyright © 2016 Elsevier B.V. All rights reserved.
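    The Hilbert-transform step that yields instantaneous frequency (applied after NA-MEMD has extracted an intrinsic mode function) can be sketched with SciPy as follows; the chirp below merely stands in for a real IMF.

      import numpy as np
      from scipy.signal import hilbert

      # A chirp-like test signal stands in for one intrinsic mode function (IMF):
      # its phase is 2*pi*(5t + 2t^2), so the frequency sweeps from 5 Hz to 13 Hz.
      fs = 1000.0
      t = np.arange(0, 2.0, 1.0 / fs)
      imf = np.cos(2 * np.pi * (5 * t + 2 * t ** 2))

      analytic = hilbert(imf)                              # analytic signal via Hilbert transform
      phase = np.unwrap(np.angle(analytic))
      inst_freq = np.diff(phase) / (2 * np.pi) * fs        # instantaneous frequency in Hz
      core = inst_freq[100:-100]                           # drop edge artefacts
      print(f"instantaneous frequency range: {core.min():.1f} - {core.max():.1f} Hz")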

  7. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
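    A minimal NumPy sketch of DMD-based background/foreground separation along the lines described above; the video here is a tiny synthetic pixels-by-frames matrix, and the tolerance for "near-zero" mode frequencies is an illustrative choice, not the patented implementation.

      import numpy as np

      def dmd_background(frames, dt=1.0, tol=1e-2):
          """Split a (pixels x time) video matrix into low-rank background and sparse
          foreground using one SVD plus an eigendecomposition of the reduced operator."""
          X1, X2 = frames[:, :-1], frames[:, 1:]
          U, s, Vh = np.linalg.svd(X1, full_matrices=False)
          r = int(np.sum(s > 1e-10 * s[0]))                 # drop numerically zero singular values
          U, s, Vh = U[:, :r], s[:r], Vh[:r]
          Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
          lam, W = np.linalg.eig(Atilde)
          Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W     # DMD modes
          omega = np.log(lam.astype(complex)) / dt          # continuous-time mode frequencies
          b = np.linalg.lstsq(Phi, frames[:, 0].astype(complex), rcond=None)[0]
          t = np.arange(frames.shape[1]) * dt
          keep = np.abs(omega) < tol                        # near-zero modes = static background
          background = (Phi[:, keep] * b[keep]) @ np.exp(np.outer(omega[keep], t))
          return background.real, frames - background.real

      # Tiny synthetic example: a static gradient background plus a moving bright blob.
      rng = np.random.default_rng(6)
      n_pix, n_frames = 100, 40
      bg = np.linspace(0, 1, n_pix)[:, None] * np.ones((1, n_frames))
      fg = np.zeros_like(bg)
      for k in range(n_frames):
          fg[(2 * k) % n_pix:(2 * k) % n_pix + 5, k] = 1.0
      video = bg + fg + 0.01 * rng.standard_normal(bg.shape)
      est_bg, est_fg = dmd_background(video)
      print("mean absolute background error:", float(np.abs(est_bg - bg).mean()))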

  8. Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong

    2010-03-01

    The functional networks, extracted from fMRI images using independent component analysis, have been demonstrated to be informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of the temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
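    The principal-angle-based subspace distance at the core of this approach can be sketched with SciPy; the random orthonormal matrices below simply stand in for two subjects' sets of spatial independent components.

      import numpy as np
      from scipy.linalg import orth, subspace_angles

      # Two "functional connectivity patterns": subspaces spanned by each subject's
      # spatial independent components (10 components over 500 voxels, synthetic here).
      rng = np.random.default_rng(7)
      ics_a = orth(rng.standard_normal((500, 10)))
      ics_b = orth(rng.standard_normal((500, 10)))

      # Principal angles between the two subspaces; one common Grassmann distance is
      # the 2-norm of the angle vector (other metrics, e.g. the projection metric, also work).
      theta = subspace_angles(ics_a, ics_b)
      print("largest principal angle (rad):", round(float(theta.max()), 3))
      print("Grassmann (geodesic) distance:", round(float(np.linalg.norm(theta)), 3))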

  9. Biostatistics Series Module 10: Brief Overview of Multivariate Methods.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2017-01-01

    Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, that make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with wider availability, and increasing sophistication of statistical software and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.

  10. Dynamic analysis of space-related linear and non-linear structures

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.; Shaker, Francis J.; Fertis, Demeter G.

    1990-01-01

    In order to be cost effective, space structures must be extremely light weight, and subsequently, very flexible structures. The power system for Space Station Freedom is such a structure. Each array consists of a deployable truss mast and a split blanket of photo-voltaic solar collectors. The solar arrays are deployed in orbit, and the blanket is stretched into position as the mast is extended. Geometric stiffness due to the preload makes this an interesting non-linear problem. The space station will be subjected to various dynamic loads, during shuttle docking, solar tracking, attitude adjustment, etc. Accurate prediction of the natural frequencies and mode shapes of the space station components, including the solar arrays, is critical for determining the structural adequacy of the components, and for designing a dynamic control system. The process used in developing and verifying the finite element dynamic model of the photo-voltaic arrays is documented. Various problems were identified, such as grounding effects due to geometric stiffness, large displacement effects, and pseudo-stiffness (grounding) due to lack of required rigid body modes. Analysis techniques, such as development of rigorous solutions using continuum mechanics, finite element solution sequence altering, equivalent systems using a curvature basis, Craig-Bampton superelement approach, and modal ordering schemes were utilized. The grounding problems associated with the geometric stiffness are emphasized.

  11. Dynamic analysis of space-related linear and non-linear structures

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.; Shaker, Francis J.; Fertis, Demeter G.

    1990-01-01

    In order to be cost effective, space structures must be extremely light weight, and subsequently, very flexible structures. The power system for Space Station Freedom is such a structure. Each array consists of a deployable truss mast and a split blanket of photovoltaic solar collectors. The solar arrays are deployed in orbit, and the blanket is stretched into position as the mast is extended. Geometric stiffness due to the preload makes this an interesting non-linear problem. The space station will be subjected to various dynamic loads, during shuttle docking, solar tracking, attitude adjustment, etc. Accurate prediction of the natural frequencies and mode shapes of the space station components, including the solar arrays, is critical for determining the structural adequacy of the components, and for designing a dynamic control system. The process used in developing and verifying the finite element dynamic model of the photo-voltaic arrays is documented. Various problems were identified, such as grounding effects due to geometric stiffness, large displacement effects, and pseudo-stiffness (grounding) due to lack of required rigid body modes. Analysis techniques, such as development of rigorous solutions using continuum mechanics, finite element solution sequence altering, equivalent systems using a curvature basis, Craig-Bampton superelement approach, and modal ordering schemes were utilized. The grounding problems associated with the geometric stiffness are emphasized.

  12. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE PAGES

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2017-07-10

    We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.

  13. Gender classification of running subjects using full-body kinematics

    NASA Astrophysics Data System (ADS)

    Williams, Christina M.; Flora, Jeffrey B.; Iftekharuddin, Khan M.

    2016-05-01

    This paper proposes novel automated gender classification of subjects engaged in running activity. The machine learning techniques include preprocessing with principal component analysis followed by classification with linear discriminant analysis, nonlinear support vector machines, and decision stumps with AdaBoost. The dataset consists of 49 subjects (25 males, 24 females, 2 trials each), all equipped with approximately 80 retroreflective markers. The trials capture the subject's entire body moving unrestrained through a capture volume at a self-selected running speed, thus producing highly realistic data. The classification accuracy using leave-one-out cross validation for the 49 subjects is improved from 66.33% using linear discriminant analysis to 86.74% using the nonlinear support vector machine. Results are further improved to 87.76% by means of a nonlinear decision stump with AdaBoost classifier. The experimental findings suggest that the linear classification approaches are inadequate for classifying gender in a large dataset with subjects running in a moderately uninhibited environment.
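    A sketch of the PCA-preprocessing-plus-classifier comparison with leave-one-out cross-validation using scikit-learn; the feature matrix, component count and classifier settings are placeholders, not the study's actual pipeline.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      # Stand-in kinematic feature matrix (49 subjects x marker-derived features)
      # with binary gender labels; real trials would be summarized into such features.
      rng = np.random.default_rng(8)
      X = rng.standard_normal((49, 240))
      y = rng.integers(0, 2, 49)

      loo = LeaveOneOut()
      for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                        ("RBF SVM", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
          pipe = make_pipeline(PCA(n_components=20), clf)
          acc = cross_val_score(pipe, X, y, cv=loo).mean()
          print(f"{name}: leave-one-out accuracy = {acc:.2%}")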

  14. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.

  15. Rapid identification and comparative analysis of chemical constituents in herbal medicine Fufang decoction by ultra-high-pressure liquid chromatography coupled with a hybrid linear ion trap-high-resolution mass spectrometry.

    PubMed

    Cao, Gang; Chen, Xiaocheng; Wu, Xin; Li, Qinglin; Zhang, Hongyan

    2015-05-01

    This study was conducted to reveal the relation between herbal medicine Fufang decoction and a single drug in terms of material base. Da-Cheng-Qi decoction (DCQD) was used as a model. Ultrahigh-pressure liquid chromatography coupled with a hybrid linear ion trap-high-resolution mass spectrometry (UHPLC-LTQ-Orbitrap) was applied to detect and identify the main chemical compounds. This technique was also employed to determine the different chemical components. Under optimized liquid chromatography and mass spectrometry conditions, 64 components, including iridoids, flavonoids, anthraquinones and coumarins, were separated and tentatively characterized in Da-Cheng-Qi decoction. After decoction, the contents of 18 compounds were markedly changed, and two components were no longer detected in Fufang decoction compared with single-medicine decoction. The established method provided a good example for the rapid identification of complicated polar constituents in herbal medicine prescriptions. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Application of principal component analysis to distinguish patients with schizophrenia from healthy controls based on fractional anisotropy measurements.

    PubMed

    Caprihan, A; Pearlson, G D; Calhoun, V D

    2008-08-15

    Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of data into two groups, however, this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error, and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the lowest classification error. In addition, with six optimally chosen tracts the classification error was zero.
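    A rough sketch of the DPCA idea, ranking PCA eigenvectors by their contribution to the between-group Mahalanobis distance instead of by eigenvalue; this diagonal approximation on synthetic data is illustrative, not the authors' implementation.

      import numpy as np

      def dpca_components(X, labels, n_keep=10):
          """Rank PCA eigenvectors by their contribution to the between-group
          (diagonal) Mahalanobis distance rather than by eigenvalue."""
          Xc = X - X.mean(axis=0)
          _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
          scores = Xc @ Vt.T                                    # PCA scores
          g0, g1 = scores[labels == 0], scores[labels == 1]
          pooled_var = ((len(g0) - 1) * g0.var(axis=0, ddof=1) +
                        (len(g1) - 1) * g1.var(axis=0, ddof=1)) / (len(g0) + len(g1) - 2)
          contrib = (g0.mean(axis=0) - g1.mean(axis=0)) ** 2 / pooled_var
          order = np.argsort(contrib)[::-1]
          return Vt[order[:n_keep]], contrib[order[:n_keep]]

      # Toy example: the group difference is hidden along a low-variance direction,
      # which plain largest-eigenvalue PCA would tend to discard.
      rng = np.random.default_rng(9)
      labels = np.repeat([0, 1], 50)
      X = rng.standard_normal((100, 40)) * np.linspace(5, 0.5, 40)
      X[labels == 1, -1] += 1.0
      vecs, contrib = dpca_components(X, labels, n_keep=5)
      print("Mahalanobis contributions of the selected components:", contrib.round(2))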

  17. New insights into the folding of a β-sheet miniprotein in a reduced space of collective hydrogen bond variables: application to a hydrodynamic analysis of the folding flow.

    PubMed

    Kalgin, Igor V; Caflisch, Amedeo; Chekmarev, Sergei F; Karplus, Martin

    2013-05-23

    A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted principal component analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed, and it is complemented by a hydrodynamic analysis. The latter, making use of "passive tracers" in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this "hydrogen atom" of protein folding.

  18. A first application of independent component analysis to extracting structure from stock returns.

    PubMed

    Back, A D; Weigend, A S

    1997-08-01

    This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).
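    A small scikit-learn sketch contrasting ICA and PCA on synthetic return series built from one shock-like and one noise-like driver; the mixing weights, sample size and kurtosis diagnostic are invented for the example.

      import numpy as np
      from sklearn.decomposition import FastICA, PCA

      # Synthetic daily returns of six "stocks" as linear mixtures of two hidden drivers:
      # infrequent large shocks and frequent small fluctuations.
      rng = np.random.default_rng(10)
      n_days = 750                                              # roughly three years of trading days
      shocks = rng.standard_t(df=2, size=n_days) * 0.02 * (rng.random(n_days) < 0.05)
      noise = rng.standard_normal(n_days) * 0.005
      returns = np.column_stack([shocks, noise]) @ rng.random((2, 6))

      ics = FastICA(n_components=2, random_state=0).fit_transform(returns)
      pcs = PCA(n_components=2).fit_transform(returns)

      # Excess kurtosis separates the shock-like component from the small-fluctuation one;
      # the ICs typically isolate the shocks more cleanly than the PCs do.
      kurt = lambda z: float(((z - z.mean()) ** 4).mean() / z.var() ** 2 - 3)
      print("IC excess kurtoses:", [round(kurt(c), 1) for c in ics.T])
      print("PC excess kurtoses:", [round(kurt(c), 1) for c in pcs.T])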

  19. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  20. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

    The objective is to develop analytical tools capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A and M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.

  1. Preliminary analysis of the effects of non-linear creep and flange contact forces on truck performance in curves

    DOT National Transportation Integrated Search

    1975-05-31

    Prediction of wheel displacements and wheel-rail forces is a prerequisite to the evaluation of the curving performance of rail vehicles. This information provides part of the basis for the rational design of wheels and suspension components, for esta...

  2. [HPLC fingerprint of flavonoids in Sophora flavescens and determination of five components].

    PubMed

    Ma, Hong-Yan; Zhou, Wan-Shan; Chu, Fu-Jiang; Wang, Dong; Liang, Sheng-Wang; Li, Shao

    2013-08-01

    A simple and reliable method using high-performance liquid chromatography with photodiode array detection (HPLC-DAD) was developed to evaluate the quality of the traditional Chinese medicine Sophora flavescens through establishing a chromatographic fingerprint and the simultaneous determination of five flavonoids, including trifolirhizin, maackiain, kushenol I, kurarinone and sophoraflavanone G. The optimal conditions of separation and detection were achieved on an ULTIMATE XB-C18 column (4.6 mm x 250 mm, 5 microm) with a gradient of acetonitrile and water, detected at 295 nm. In the chromatographic fingerprint, 13 peaks were selected as the characteristic peaks to assess the similarities of samples collected from different origins in China according to the Similarity Evaluation for Chromatographic Fingerprint of Traditional Chinese Medicine (2004AB), and principal component analysis (PCA) was used in the data analysis. There were significant differences in the fingerprint chromatograms between S. flavescens and S. tonkinensis. Principal component analysis showed that kurarinone and sophoraflavanone G were the most important components. In the quantitative analysis, the five components showed good linearity (R > 0.999) within their linear ranges, and their recoveries were in the range of 96.3% - 102.3%. This study indicated that the combination of quantitative analysis and chromatographic fingerprint analysis can be readily utilized as a quality control method for S. flavescens and its related traditional Chinese medicinal preparations.

  3. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for the elimination of fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration reduces the multivariate linear regression functions to a univariate data set. The model was validated by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results than the classical HPLC method.
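
    As a hedged illustration of the calibration idea described above (not the authors' code), the sketch below fits one regression line per wavelength for a single analyte and then averages the per-wavelength concentration estimates, reducing the multivariate calibration to a univariate one. All concentrations, peak areas and the five-wavelength set are invented.

```python
# Hypothetical sketch: five-wavelength chromatographic calibration reduced to
# univariate regressions; all numbers are invented for illustration.
import numpy as np

conc = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # standard concentrations (ug/mL)
areas = np.array([[12.1, 10.4,  9.0,  7.7,  6.1],       # peak areas, one column
                  [24.3, 20.9, 18.2, 15.3, 12.4],       # per detection wavelength
                  [36.2, 31.1, 27.0, 23.1, 18.3],
                  [48.5, 41.8, 36.1, 30.8, 24.6],
                  [60.4, 52.0, 45.2, 38.4, 30.7],
                  [72.8, 62.7, 54.3, 46.2, 36.9]])

# One least-squares line (area = slope * conc + intercept) per wavelength.
slopes, intercepts = [], []
for j in range(areas.shape[1]):
    slope, intercept = np.polyfit(conc, areas[:, j], 1)
    slopes.append(slope)
    intercepts.append(intercept)
slopes, intercepts = np.array(slopes), np.array(intercepts)

# Predict an unknown sample: invert each univariate line, then average the five
# estimates -- the reduction of the multivariate problem to a univariate data set.
unknown_areas = np.array([42.0, 36.3, 31.4, 26.7, 21.4])
estimates = (unknown_areas - intercepts) / slopes
print("per-wavelength estimates:", np.round(estimates, 2))
print("averaged concentration:  ", round(float(estimates.mean()), 2))
```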

  4. Analysis of the transient behavior of rubbing components

    NASA Technical Reports Server (NTRS)

    Quezdou, M. B.; Mullen, R. L.

    1986-01-01

    Finite element equations are developed for studying the deformations and temperatures resulting from frictional heating in a sliding system. The formulation is for linear steady-state motion in two dimensions. The equations include the effect of the velocity of the moving components, which gives rise to spurious oscillations in Galerkin finite element solutions. A streamline upwind scheme is used to address this deficiency. The finite element program is then used to investigate frictional heating in a gas path seal.

  5. The Performance of A Sampled Data Delay Lock Loop Implemented with a Kalman Loop Filter.

    DTIC Science & Technology

    1980-01-01

    ...technique for analysis is computer simulation. Other techniques include state variable techniques and z-transform methods. Since the Kalman filter is linear... [Figure 2: block diagram of the sampled data delay lock loop (SDDLL). Figure 3: sampled error voltage (Es) as a function of...] ...from a sum of two components. The first component is the previous filtered estimate advanced one step forward by the state transition matrix.

  6. Three dimensional tracking with misalignment between display and control axes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Tyler, Mitchell; Kim, Won S.; Stark, Lawrence

    1992-01-01

    Human operators confronted with misaligned display and control frames of reference performed three-dimensional pursuit tracking in virtual environment and virtual space simulations. Analysis of the components of the tracking errors in the perspective displays presenting virtual space showed that components of the error due to visual motor misalignment may be linearly separated from those associated with the mismatch between display and control coordinate systems. Tracking performance improved with several hours of practice, despite previous reports that such improvement did not take place.

  7. Using Advanced Analysis Approaches to Complete Long-Term Evaluations of Natural Attenuation Processes on the Remediation of Dissolved Chlorinated Solvent Contamination

    DTIC Science & Technology

    2008-10-01

    ...and UTCHEM (Clement et al., 1998). While all four of these software packages use conservation of mass as the basic principle for tracking NAPL...simulate dissolution of a single NAPL component. UTCHEM can be used to simulate dissolution of multiple NAPL components using either linear or first...parameters. [Table excerpt: UTCHEM - a 3D, general-purpose NAPL simulator; Virulo - a probabilistic model for predicting leaching of viruses in unsaturated...]

  8. Calculation of cogging force in a novel slotted linear tubular brushless permanent magnet motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Z.Q.; Hor, P.J.; Howe, D.

    1997-09-01

    There is an increasing requirement for controlled linear motion over short and long strokes, in the factory automation and packaging industries, for example. Linear brushless PM motors could offer significant advantages over conventional actuation technologies, such as motor driven cams and linkages and pneumatic rams--in terms of efficiency, operating bandwidth, speed and thrust control, stroke and positional accuracy, and indeed over other linear motor technologies, such as induction motors. Here, a finite element/analytical based technique for the prediction of cogging force in a novel topology of slotted linear brushless permanent magnet motor has been developed and validated. The various force components which influence cogging are pre-calculated by finite element analysis of some basic magnetic structures, facilitating the analytical synthesis of the resultant cogging force. The technique can be used to aid design for the minimization of cogging.

  9. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-05-13

    Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
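
    The sketch below is a rough illustration, not the authors' implementation: synthetic turn-by-turn BPM data containing one betatron normal mode are decomposed with FastICA, and the spatial patterns in the mixing matrix are combined to recover relative beta values and phase advances. The tune, phase advances, BPM count and noise level are assumed values.

```python
# Rough sketch: isolating a betatron normal mode from synthetic turn-by-turn
# BPM data with FastICA; all machine parameters below are assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_turns, n_bpms, tune = 1024, 30, 0.22                 # assumed ring parameters
turns = np.arange(n_turns)
phi_bpm = np.linspace(0.0, 2 * np.pi * 6.6, n_bpms)    # assumed phase advances
beta = 10.0 + 3.0 * np.sin(phi_bpm)                    # assumed beta beating

# Synthetic horizontal TbT data: one betatron mode plus BPM noise.
x = (np.sqrt(beta)[None, :] *
     np.cos(2 * np.pi * tune * turns[:, None] + phi_bpm[None, :]))
x += 0.05 * rng.standard_normal(x.shape)

# ICA separates the sine-like and cosine-like temporal modes; their spatial
# patterns (columns of the mixing matrix) carry beta and phase information.
ica = FastICA(n_components=2, random_state=0)
ica.fit(x)                       # rows = turns, columns = BPMs
a = ica.mixing_                  # shape (n_bpms, 2)

amplitude = np.hypot(a[:, 0], a[:, 1])          # proportional to sqrt(beta)
phase = np.unwrap(np.arctan2(a[:, 1], a[:, 0])) # betatron phase advance (up to offset)
print("relative beta estimate:", np.round((amplitude / amplitude[0]) ** 2, 2))
```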

  10. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.

  12. Simultaneous quantification of coumarins, flavonoids and limonoids in Fructus Citri Sarcodactylis by high performance liquid chromatography coupled with diode array detector.

    PubMed

    Chu, Jun; Li, Song-Lin; Yin, Zhi-Qi; Ye, Wen-Cai; Zhang, Qing-Wen

    2012-07-01

    A high performance liquid chromatography coupled with diode array detector (HPLC-DAD) method was developed for simultaneous quantification of eleven major bioactive components including six coumarins, three flavonoids and two limonoids in Fructus Citri Sarcodactylis. The analysis was performed on a Cosmosil 5 C(18)-MS-II column (4.6 mm × 250 mm, 5 μm) with water-acetonitrile gradient elution. The method was validated in terms of linearity, sensitivity, precision, stability and accuracy. It was found that the calibration curves for all analytes showed good linearity (R(2)>0.9993) within the test ranges. The overall limit of detection (LOD) and limit of quantification (LOQ) were less than 3.0 and 10.2 ng. The relative standard deviations (RSDs) for intra- and inter-day repeatability were not more than 4.99% and 4.92%, respectively. The sample was stable for at least 48 h. The spike recoveries of eleven components were 95.1-104.9%. The established method was successfully applied to determine eleven components in three samples from different locations. The results showed that the newly developed HPLC-DAD method was linear, sensitive, precise and accurate, and could be used for quality control of Fructus Citri Sarcodactylis. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Lattice modeling and application of independent component analysis to high power, long bunch beams in the Los Alamos Proton Storage Ring

    NASA Astrophysics Data System (ADS)

    Kolski, Jeffrey

    The linear lattice properties of the Proton Storage Ring (PSR) at the Los Alamos Neutron Science Center (LANSCE) in Los Alamos, NM were measured and used to determine an improved linear model of the accelerator. We found that the initial model was deficient in predicting the vertical focusing strength. The additional vertical focusing was located through a fundamental understanding of the experiment and statistically rigorous analysis. An improved model was constructed and compared against the initial model and against measurements at operational set points and at set points far from nominal, and was shown to indeed be an enhanced model. Independent component analysis (ICA) is a tool for data mining in many fields of science. Traditionally, ICA is applied to turn-by-turn beam position data as a means to measure the lattice functions of the real machine. Due to the diagnostic setup for the PSR, this method is not applicable. A new application method for ICA is derived: ICA applied along the length of the bunch. The ICA modes represent motions within the beam pulse. Several of the dominant ICA modes are experimentally identified.

  14. Statistical methods and regression analysis of stratospheric ozone and meteorological variables in Isfahan

    NASA Astrophysics Data System (ADS)

    Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.

    2008-04-01

    Data of seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, temperature dependent variables were highly correlated, but were all negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model. In 1999, 2001 and 2002 the ozone values were weakly, but predominantly, influenced by one of the meteorological variables. However, the model did not predict the ozone values for the year 2000 well; they were not predominantly influenced by the meteorological variables, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
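
    A minimal sketch of this kind of workflow, on synthetic data and with the varimax rotation step omitted, might look as follows; the variable names, coefficients and sample size are hypothetical.

```python
# Illustrative sketch only: selecting predictors by their PCA loadings and
# regressing ozone on the selected subset (varimax rotation omitted).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
names = ["rel_hum", "wet_T", "dry_T", "max_T", "min_T", "ground_T", "sun_hours"]
n = 300
met = rng.standard_normal((n, len(names)))             # fake meteorological data
ozone = 2.0 * met[:, 3] - 1.5 * met[:, 0] + 0.5 * rng.standard_normal(n)

z = StandardScaler().fit_transform(met)
pca = PCA(n_components=3).fit(z)

# For each retained component, keep the variable with the largest |loading|.
selected = sorted({int(np.argmax(np.abs(c))) for c in pca.components_})
print("selected predictors:", [names[i] for i in selected])

model = LinearRegression().fit(met[:, selected], ozone)
print("R^2 on training data:", round(model.score(met[:, selected], ozone), 3))
```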

  15. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
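
    The common core of these representations can be sketched numerically: diagonalize the sample covariance, truncate the eigenvector expansion, and check that the mean squared reconstruction error roughly equals the sum of the discarded eigenvalues. The data below are synthetic and the dimensions are arbitrary.

```python
# Minimal sketch of the shared idea: represent data vectors in the eigenvectors
# of their covariance operator and truncate, which minimizes mean squared error.
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 500, 8, 3                       # samples, dimension, retained terms
latent = rng.standard_normal((n, m))
mixing = rng.standard_normal((m, d))
x = latent @ mixing + 0.1 * rng.standard_normal((n, d))

xc = x - x.mean(axis=0)
cov = xc.T @ xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)    # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncated expansion (Karhunen-Loeve / PCA / EOF, depending on the field).
coeffs = xc @ eigvecs[:, :m]
x_hat = coeffs @ eigvecs[:, :m].T

mse = np.mean(np.sum((xc - x_hat) ** 2, axis=1))
print("reconstruction MSE:        ", round(mse, 4))
print("sum of discarded eigenvalues:", round(eigvals[m:].sum(), 4))
```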

  16. Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex

    NASA Technical Reports Server (NTRS)

    Shelhamer, M.

    2001-01-01

    It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.

  17. Nonlinear vs. linear biasing in Trp-cage folding simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in 200 ns simulations with both linear and non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  18. The Relationship of Social Engagement and Social Support With Sense of Community.

    PubMed

    Tang, Fengyan; Chi, Iris; Dong, Xinqi

    2017-07-01

    We aimed to investigate the relationship of engagement in social and cognitive activities and social support with the sense of community (SOC) and its components among older Chinese Americans. The Sense of Community Index (SCI) was used to measure SOC and its four component factors: membership, influence, needs fulfillment, and emotional connection. Social engagement was assessed with 16 questions. Social support included positive support and negative strain. Principal component analysis was used to identify the SCI components. Linear regression analysis was used to detect the contribution of social engagement and social support to SOC and its components. After controlling for sociodemographics and self-rated health, social activity engagement and positive social support were positively related to SOC and its components. This study points to the importance of social activity engagement and positive support from family and friends in increasing the sense of community. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
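
    A hedged sketch of the analysis chain on simulated data (not the study's survey items): PCA extracts component scores from hypothetical SCI items, and each score is regressed on engagement and support.

```python
# Hedged illustration: PCA on made-up sense-of-community items followed by a
# linear regression of the component scores on two predictors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 400
engagement = rng.normal(size=n)           # assumed social engagement score
support = rng.normal(size=n)              # assumed positive social support score

# 12 hypothetical SCI items driven partly by engagement and support.
loadings = rng.uniform(0.3, 0.8, size=12)
items = (np.outer(0.6 * engagement + 0.4 * support, loadings)
         + rng.normal(scale=1.0, size=(n, 12)))

scores = PCA(n_components=4).fit_transform(items)    # SOC component scores
X = np.column_stack([engagement, support])
for k in range(scores.shape[1]):
    beta = LinearRegression().fit(X, scores[:, k]).coef_
    print(f"component {k + 1}: engagement beta={beta[0]:+.2f}, "
          f"support beta={beta[1]:+.2f}")
```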

  19. Stability of matter-wave solitons in optical lattices

    NASA Astrophysics Data System (ADS)

    Ali, Sk. Golam; Roy, S. K.; Talukdar, B.

    2010-08-01

    We consider localized states of both single- and two-component Bose-Einstein condensates (BECs) confined in a potential resulting from the superposition of linear and nonlinear optical lattices and make use of Vakhitov-Kolokolov criterion to investigate the effect of nonlinear lattice on the stability of the soliton solutions in the linear optical lattice (LOL). For the single-component case we show that a weak nonlinear lattice has very little effect on the stability of such solitons while sufficiently strong nonlinear optical lattice (NOL) squeezes them to produce narrow bound states. For two-component condensates we find that when the strength of the NOL (γ1) is less than that of the LOL (V0) a relatively weak intra-atomic interaction (IAI) has little effect on the stability of the component solitons. This is true for both attractive and repulsive IAI. A strong attractive IAI, however, squeezes the BEC solitons while a similar repulsive IAI makes the component solitons wider. For γ1 > V0, only a strong attractive IAI squeezes the BEC solitons but the squeezing effect is less prominent than that found for γ1 < V0. We make useful checks on the results of our semianalytical stability analysis by solving the appropriate Gross-Pitaevskii equations numerically.

  20. Evolution of the cosmic web

    NASA Astrophysics Data System (ADS)

    Cautun, Marius; van de Weygaert, Rien; Jones, Bernard J. T.; Frenk, Carlos S.

    2014-07-01

    The cosmic web is the largest scale manifestation of the anisotropic gravitational collapse of matter. It represents the transitional stage between linear and non-linear structures and contains easily accessible information about the early phases of structure formation processes. Here we investigate the characteristics and the time evolution of morphological components. Our analysis involves the application of the NEXUS Multiscale Morphology Filter technique, predominantly its NEXUS+ version, to high resolution and large volume cosmological simulations. We quantify the cosmic web components in terms of their mass and volume content, their density distribution and halo populations. We employ new analysis techniques to determine the spatial extent of filaments and sheets, such as their total length and local width. This analysis identifies clusters and filaments as the most prominent components of the web. In contrast, while voids and sheets take most of the volume, they correspond to underdense environments and are devoid of group-sized and more massive haloes. At early times the cosmos is dominated by tenuous filaments and sheets, which, during subsequent evolution, merge together, such that the present-day web is dominated by fewer, but much more massive, structures. The analysis of the mass transport between environments clearly shows how matter flows from voids into walls, and then via filaments into cluster regions, which form the nodes of the cosmic web. We also study the properties of individual filamentary branches, to find long, almost straight, filaments extending to distances larger than 100 h⁻¹ Mpc. These constitute the bridges between massive clusters, which seem to form along approximately straight lines.

  1. Magnetoencephalogram blind source separation and component selection procedure to improve the diagnosis of Alzheimer's disease patients.

    PubMed

    Escudero, Javier; Hornero, Roberto; Abásolo, Daniel; Fernández, Alberto; Poza, Jesús

    2007-01-01

    The aim of this study was to improve the diagnosis of Alzheimer's disease (AD) patients applying a blind source separation (BSS) and component selection procedure to their magnetoencephalogram (MEG) recordings. MEGs from 18 AD patients and 18 control subjects were decomposed with the algorithm for multiple unknown signals extraction. MEG channels and components were characterized by their mean frequency, spectral entropy, approximate entropy, and Lempel-Ziv complexity. Using Student's t-test, the components which accounted for the most significant differences between groups were selected. Then, these relevant components were used to partially reconstruct the MEG channels. By means of a linear discriminant analysis, we found that the BSS-preprocessed MEGs classified the subjects with an accuracy of 80.6%, whereas 72.2% accuracy was obtained without the BSS and component selection procedure.
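
    The core preprocessing idea can be sketched as follows; FastICA stands in for the AMUSE algorithm used in the study, and the multichannel signals, channel count and choice of retained component are synthetic assumptions.

```python
# Simplified sketch of the preprocessing idea: decompose multichannel data with
# a BSS method (FastICA as a stand-in), keep a subset of components, and
# partially reconstruct the channels from that subset.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_samples, n_channels = 2000, 8
t = np.arange(n_samples) / 250.0                      # assumed 250 Hz sampling

sources = np.vstack([np.sin(2 * np.pi * 10 * t),      # "relevant" rhythm
                     np.sign(np.sin(2 * np.pi * 1 * t)),
                     rng.standard_normal(n_samples)]).T
mixing = rng.standard_normal((3, n_channels))
channels = sources @ mixing + 0.05 * rng.standard_normal((n_samples, n_channels))

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(channels)              # shape (samples, components)

# Suppose a statistical test flagged only component 0 as discriminative.
keep = [0]
partial = np.zeros_like(components)
partial[:, keep] = components[:, keep]
reconstructed = ica.inverse_transform(partial)        # partially rebuilt channels
print("reconstructed shape:", reconstructed.shape)
```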

  2. SU-F-J-138: An Extension of PCA-Based Respiratory Deformation Modeling Via Multi-Linear Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

    Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc, which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated. Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.

  3. Assessing spatial coupling in complex population dynamics using mutual prediction and continuity statistics

    USGS Publications Warehouse

    Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.

    2005-01-01

    A number of important questions in ecology involve the possibility of interactions or ‘coupling’ among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.

  4. Decomposition of ECG by linear filtering.

    PubMed

    Murthy, I S; Niranjan, U C

    1992-01-01

    A simple method is developed for the delineation of a given electrocardiogram (ECG) signal into its component waves. The properties of discrete cosine transform (DCT) are exploited for the purpose. The transformed signal is convolved with appropriate filters and the component waves are obtained by computing the inverse transform (IDCT) of the filtered signals. The filters are derived from the time signal itself. Analysis of continuous strips of ECG signals with various arrhythmias showed that the performance of the method is satisfactory both qualitatively and quantitatively. The small amplitude P wave usually had a high percentage rms difference (PRD) compared to the other large component waves.

  5. Iron-dextran complex: geometrical structure and magneto-optical features.

    PubMed

    Graczykowski, Bartłomiej; Dobek, Andrzej

    2011-11-15

    Molecular mass of the iron-dextran complex (Mw = 1133 kDa), diameter of its particles (∼8.3 nm) and the content of iron ions in the complex core (NFe = 6360) were determined by static light scattering, measurements of refractive index increment and the Cotton-Mouton effect in solution. The known number of iron ions permitted the calculation of the permanent magnetic dipole moment value to be μFe = 3.17×10⁻¹⁸ erg Oe⁻¹ and the determination of the anisotropy of the linear magneto-optical polarizability components as Δχ = 9.2×10⁻²¹ cm³. Knowing both values and the value of the mean linear optical polarizability α = 7.3×10⁻²⁰ cm³, it was possible to show that the total measured CM effect was due to the reorientation of the permanent and the induced magnetic dipole moments of the complex. Analysis of the measured magneto-optical birefringence indicated very small optical anisotropy of the linear optical polarizability components, κα, which suggested a homogeneous structure of particles of spherical symmetry. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Reliability of Radioisotope Stirling Convertor Linear Alternator

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin; Korovaichuk, Igor; Geng, Steven M.; Schreiber, Jeffrey G.

    2006-01-01

    Onboard radioisotope power systems being developed and planned for NASA's deep-space missions would require reliable design lifetimes of up to 14 years. Critical components and materials of Stirling convertors have been undergoing extensive testing and evaluation in support of a reliable performance for the specified life span. Of significant importance to the successful development of the Stirling convertor is the design of a lightweight and highly efficient linear alternator. Alternator performance could vary due to small deviations in the permanent magnet properties, operating temperature, and component geometries. Durability prediction and reliability of the alternator may be affected by these deviations from nominal design conditions. Therefore, it is important to evaluate the effect of these uncertainties in predicting the reliability of the linear alternator performance. This paper presents a study in which a reliability-based methodology is used to assess alternator performance. The response surface characterizing the induced open-circuit voltage performance is constructed using 3-D finite element magnetic analysis. Fast probability integration method is used to determine the probability of the desired performance and its sensitivity to the alternator design parameters.

  7. Power Analysis for Models of Change in Cluster Randomized Designs

    ERIC Educational Resources Information Center

    Li, Wei; Konstantopoulos, Spyros

    2017-01-01

    Field experiments in education frequently assign entire groups such as schools to treatment or control conditions. These experiments sometimes incorporate a longitudinal component where, for example, students are followed over time to assess differences in the average rate of linear change, or the rate of acceleration. In this study, we provide methods…

  8. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    PubMed

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR model for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
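
    A rough sketch of the hybrid pipeline is given below, with the PSO parameter-tuning stage omitted and a synthetic price series standing in for the WTI data; it assumes the PyWavelets (pywt) package is available for the wavelet decomposition.

```python
# Rough sketch of the wavelet + PCA + linear regression pipeline (PSO omitted);
# the price series and all settings are invented, and pywt is assumed available.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 512
price = np.cumsum(rng.normal(scale=0.5, size=n)) + 80.0   # synthetic WTI-like series

# Decompose into subseries by reconstructing each coefficient level separately.
wavelet, level = "db4", 3
coeffs = pywt.wavedec(price, wavelet, level=level)
subseries = []
for i in range(len(coeffs)):
    sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(sel, wavelet)[:n])
subseries = np.column_stack(subseries)             # columns: A3, D3, D2, D1

# One-step-ahead regression: today's subseries values -> tomorrow's price.
X = PCA(n_components=3).fit_transform(subseries[:-1])
y = price[1:]
model = LinearRegression().fit(X[:400], y[:400])
print("out-of-sample R^2:", round(model.score(X[400:], y[400:]), 3))
```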

  9. Filter-based multiscale entropy analysis of complex physiological time series.

    PubMed

    Xu, Yuesheng; Zhao, Liang

    2013-08-01

    Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
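
    A toy sketch of the underlying idea (not the authors' FME code): coarse-grain a series with a piecewise-constant (moving-average) filter at several scales and compute the sample entropy of each coarse-grained series. The tolerance and template length below are typical default choices, and the input series is white noise.

```python
# Toy sketch of multiscale entropy with a piecewise-constant filter.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Plain sample entropy with tolerance r given as a fraction of the SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        # Number of template pairs (i < j) within tolerance, Chebyshev distance.
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(templates)) / 2   # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(6)
signal = rng.standard_normal(1000)                    # stand-in physiological series

for scale in (1, 2, 4, 8):
    # Piecewise-constant filtering = averaging non-overlapping windows.
    n = len(signal) // scale
    coarse = signal[:n * scale].reshape(n, scale).mean(axis=1)
    print(f"scale {scale}: SampEn = {sample_entropy(coarse):.3f}")
```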

  10. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2014-09-15

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference of pharmaceutical additives. The results manifest the problem of nonlinearity and how models like ANN can handle it. Analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods indicate the ability of the previously mentioned multivariate calibration models to handle and solve UV spectra of mixtures of the four components using an easy and widely used UV spectrophotometer. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    PubMed Central

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR model for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  12. Cavity-enhanced Raman spectroscopy with optical feedback cw diode lasers for gas phase analysis and spectroscopy.

    PubMed

    Salter, Robert; Chu, Johnny; Hippler, Michael

    2012-10-21

    A variant of cavity-enhanced Raman spectroscopy (CERS) is introduced, in which diode laser radiation at 635 nm is coupled into an external linear optical cavity composed of two highly reflective mirrors. Using optical feedback stabilisation, build-up of circulating laser power by 3 orders of magnitude occurs. Strong Raman signals are collected in forward scattering geometry. Gas phase CERS spectra of H(2), air, CH(4) and benzene are recorded to demonstrate the potential for analytical applications and fundamental molecular studies. Noise equivalent limits of detection in the ppm by volume range (1 bar sample) can be achieved with excellent linearity with a 10 mW excitation laser, with sensitivity increasing with laser power and integration time. The apparatus can be operated with battery powered components and can thus be very compact and portable. Possible applications include safety monitoring of hydrogen gas levels, isotope tracer studies (e.g., (14)N/(15)N ratios), observing isotopomers of hydrogen (e.g., radioactive tritium), and simultaneous multi-component gas analysis. CERS has the potential to become a standard method for sensitive gas phase Raman spectroscopy.

  13. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combinations of images lead to visually meaningful images. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and to visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.

  14. Stationary-phase optimized selectivity liquid chromatography: development of a linear gradient prediction algorithm.

    PubMed

    De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-03-01

    Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.

  15. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
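
    The parameter saving quoted above is easy to check numerically; the (k, m) pairs below are arbitrary examples.

```python
# Quick check of the parameter counts for k genetic effects modelled with m
# principal components, as quoted in the abstract above.
def full_params(k):
    return k * (k + 1) // 2          # unstructured covariance matrix

def reduced_params(k, m):
    return m * (2 * k - m + 1) // 2  # reduced-rank (m-component) model

for k, m in [(8, 3), (8, 5), (20, 4)]:
    print(f"k={k:2d}, m={m}: full={full_params(k):3d}, "
          f"reduced={reduced_params(k, m):3d}")
```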

  16. Assessment of mechanical properties of isolated bovine intervertebral discs from multi-parametric magnetic resonance imaging.

    PubMed

    Recuerda, Maximilien; Périé, Delphine; Gilbert, Guillaume; Beaudoin, Gilles

    2012-10-12

    The treatment planning of spine pathologies requires information on the rigidity and permeability of the intervertebral discs (IVDs). Magnetic resonance imaging (MRI) offers great potential as a sensitive and non-invasive technique for describing the mechanical properties of IVDs. However, the literature reported small correlation coefficients between mechanical properties and MRI parameters. Our hypothesis is that the compressive modulus and the permeability of the IVD can be predicted by a linear combination of MRI parameters. Sixty IVDs were harvested from bovine tails, and randomly separated in four groups (in-situ, digested-6h, digested-18h, digested-24h). Multi-parametric MRI acquisitions were used to quantify the relaxation times T1 and T2, the magnetization transfer ratio MTR, the apparent diffusion coefficient ADC and the fractional anisotropy FA. Unconfined compression, confined compression and direct permeability measurements were performed to quantify the compressive moduli and the hydraulic permeabilities. Differences between groups were evaluated from a one-way ANOVA. Multilinear regressions were performed between dependent mechanical properties and independent MRI parameters to verify our hypothesis. A principal component analysis was used to convert the set of possibly correlated variables into a set of linearly uncorrelated variables. Agglomerative Hierarchical Clustering was performed on the 3 principal components. Multilinear regressions showed that 45 to 80% of the Young's modulus E, the aggregate modulus in absence of deformation HA0, the radial permeability kr and the axial permeability in absence of deformation k0 can be explained by the MRI parameters within both the nucleus pulposus and the annulus fibrosus. The principal component analysis reduced our variables to two principal components with a cumulative variability of 52-65%, which increased to 70-82% when considering the third principal component. The dendrograms showed a natural division into four clusters for the nucleus pulposus and into three or four clusters for the annulus fibrosus. The compressive moduli and the permeabilities of isolated IVDs can be assessed mostly by MT and diffusion sequences. However, the relationships have to be improved with the inclusion of MRI parameters more sensitive to IVD degeneration. Before the use of this technique to quantify the mechanical properties of IVDs in vivo on patients suffering from various diseases, the relationships have to be defined for each degeneration state of the tissue that mimics the pathology. Our MRI protocol, associated with principal component analysis and agglomerative hierarchical clustering, provides promising tools to classify degenerated intervertebral discs and further find biomarkers and predictive factors of the evolution of the pathologies.
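
    A schematic sketch of such an analysis chain on simulated data (not the study's MRI measurements): a multiple linear regression of a mechanical property on MRI parameters, followed by PCA and agglomerative hierarchical clustering of the principal component scores.

```python
# Hedged sketch of the analysis chain; all data and coefficients are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 60                                              # e.g. 60 discs
mri = rng.normal(size=(n, 5))                       # stand-ins for T1, T2, MTR, ADC, FA
modulus = 0.8 * mri[:, 2] - 0.5 * mri[:, 3] + 0.3 * rng.normal(size=n)

# Multiple linear regression: mechanical property explained by MRI parameters.
fit = LinearRegression().fit(mri, modulus)
print("variance explained (R^2):", round(fit.score(mri, modulus), 2))

# PCA to de-correlate the variables, then hierarchical clustering of the scores.
z = StandardScaler().fit_transform(mri)
pcs = PCA(n_components=3).fit_transform(z)
labels = AgglomerativeClustering(n_clusters=4).fit_predict(pcs)
print("cluster sizes:", np.bincount(labels))
```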

  17. Flyby Error Analysis Based on Contour Plots for the Cassini Tour

    NASA Technical Reports Server (NTRS)

    Stumpf, P. W.; Gist, E. M.; Goodson, T. D.; Hahn, Y.; Wagner, S. V.; Williams, P. N.

    2008-01-01

    The maneuver cancellation analysis consists of cost contour plots employed by the Cassini maneuver team. The plots are two-dimensional linear representations of a larger six-dimensional solution to a multi-maneuver, multi-encounter mission at Saturn. By using contours plotted in terms of the B·R and B·T components (the dot products of the B vector with the R and T axes), it is possible to view the delta-V effects for various encounter positions in the B-plane. The plot is used in operations to help determine if the Approach Maneuver (ensuing encounter minus three days) and/or the Cleanup Maneuver (ensuing encounter plus three days) can be cancelled, and also serves as a linear check of an integrated solution.

  18. Multivariate Analysis of Solar Spectral Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Pilewskie, P.; Rabbette, M.

    2001-01-01

    Principal component analysis is used to characterize approximately 7000 downwelling solar irradiance spectra retrieved at the Southern Great Plains site during an Atmospheric Radiation Measurement (ARM) shortwave intensive operating period. This analysis technique has proven to be very effective in reducing a large set of variables into a much smaller set of independent variables while retaining the information content. It is used to determine the minimum number of parameters necessary to characterize atmospheric spectral irradiance or the dimensionality of atmospheric variability. It was found that well over 99% of the spectral information was contained in the first six mutually orthogonal linear combinations of the observed variables (flux at various wavelengths). Rotation of the principal components was effective in separating various components by their independent physical influences. The majority of the variability in the downwelling solar irradiance (380-1000 nm) was explained by the following fundamental atmospheric parameters (in order of their importance): cloud scattering, water vapor absorption, molecular scattering, and ozone absorption. In contrast to what has been proposed as a resolution to a clear-sky absorption anomaly, no unexpected gaseous absorption signature was found in any of the significant components.

  19. Approximate analytical solutions in the analysis of elastic structures of complex geometry

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

    A method of analytical decomposition for the analysis of plane structures of complex configuration is presented. For each part of the structure in the form of a rectangle, all the components of the stress-strain state are constructed by the superposition method. The method is based on two solutions derived in the form of trigonometric series with unknown coefficients using the method of initial functions. The coefficients are determined from the system of linear algebraic equations obtained while satisfying the boundary conditions and the conditions for joining the structure parts. The components of the stress-strain state of a bent plate with holes are calculated using the analytical decomposition method.

  20. Study on Web-Based Tool for Regional Agriculture Industry Structure Optimization Using Ajax

    NASA Astrophysics Data System (ADS)

    Huang, Xiaodong; Zhu, Yeping

    According to the research status of regional agriculture industry structure adjustment information systems and the current development of information technology, this paper takes a web-based regional agriculture industry structure optimization tool as its research target. The paper introduces Ajax technology and related application frameworks to build an auxiliary toolkit of a decision support system for agricultural policy makers and economy researchers. The toolkit includes a “one page” style component for regional agriculture industry structure optimization, which provides an agile argument-setting method that enables sensitivity analysis and the use of data and comparative advantage analysis results, and a component that can solve the linear programming model and its dual problem by the simplex method.

  1. Quantitative analysis of the mixtures of illicit drugs using terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Jiang, Dejun; Zhao, Shusen; Shen, Jingling

    2008-03-01

    A method was proposed to quantitatively inspect mixtures of illicit drugs with the terahertz time-domain spectroscopy technique. The mass percentages of all components in a mixture can be obtained by linear regression analysis, on the assumption that all components in the mixture and their absorption features are known. Because illicit drugs are scarce and expensive, we first used common chemicals, Benzophenone, Anthraquinone, Pyridoxine hydrochloride and L-Ascorbic acid, in the experiment. Then an illicit drug and a common adulterant, methamphetamine and flour, were selected for the experiment. The experimental results were in good agreement with the actual content, which suggests that this could be an effective method for the quantitative identification of illicit drugs.
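
    The linear-unmixing step can be sketched as below with synthetic reference spectra; a non-negative least squares fit is used here as one reasonable way to implement the linear regression analysis, and all spectra, components and fractions are made up.

```python
# Illustrative sketch: recovering mass fractions of known components from a
# mixture spectrum by non-negative linear least squares; data are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
n_freq = 200
freqs = np.linspace(0.2, 2.6, n_freq)               # THz axis (arbitrary units)

# Assumed reference absorption spectra of three pure components.
def peak(center, width):
    return np.exp(-((freqs - center) / width) ** 2)

references = np.column_stack([peak(0.8, 0.10) + 0.2,
                              peak(1.4, 0.15) + 0.1,
                              peak(2.0, 0.12) + 0.3])

true_fractions = np.array([0.5, 0.3, 0.2])
mixture = references @ true_fractions + 0.01 * rng.standard_normal(n_freq)

weights, _ = nnls(references, mixture)              # non-negative fit
fractions = weights / weights.sum()                 # normalise to mass fractions
print("estimated fractions:", np.round(fractions, 3))
```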

  2. Independent components analysis to increase efficiency of discriminant analysis methods (FDA and LDA): Application to NMR fingerprinting of wine.

    PubMed

    Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N

    2015-08-15

    Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principal components analysis (PCA) is used as the first step to generate orthogonal eigenvectors and the corresponding sample scores are utilized to generate discriminant features for the discrimination. Independent components analysis (ICA) based on the minimization of mutual information can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed regarding grape variety, year of vintage and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure to determine the number of extracted features (PCs, ICs) for the optimum DA models was discussed. The use of independent components (ICs) instead of principal components (PCs) resulted in improved classification performance of DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements. Copyright © 2015 Elsevier B.V. All rights reserved.
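
    A schematic comparison in the spirit of the study, on synthetic class-structured data rather than the wine NMR spectra, might look like this (the number of extracted features is fixed at 10 for both reducers, an arbitrary choice):

```python
# Schematic PCA/LDA versus ICA/LDA comparison on synthetic "spectra".
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_per_class, n_vars = 40, 120
means = [np.zeros(n_vars),
         0.3 * rng.standard_normal(n_vars),
         0.3 * rng.standard_normal(n_vars)]
X = np.vstack([m + rng.standard_normal((n_per_class, n_vars)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

for name, reducer in [("PCA/LDA", PCA(n_components=10)),
                      ("ICA/LDA", FastICA(n_components=10, random_state=0))]:
    features = reducer.fit_transform(X)             # extracted PCs or ICs
    scores = cross_val_score(LinearDiscriminantAnalysis(), features, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```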

  3. Identification of the isomers using principal component analysis (PCA) method

    NASA Astrophysics Data System (ADS)

    Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur

    2016-03-01

    In this work, we have carried out a detailed statistical analysis of experimental mass spectra from xylene isomers. Principal Component Analysis (PCA) was used to identify the isomers, which cannot be distinguished using conventional statistical methods for the interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as an energy source for the ionisation processes. We have performed experiments and collected data which have been analysed and interpreted using PCA as a multivariate analysis of these spectra. This demonstrates the strength of the method to gain insight for distinguishing isomers which cannot be identified using conventional mass analysis obtained through dissociative ionisation processes on these molecules. The PCA results depending on the laser pulse energy and the background pressure in the spectrometer are presented in this work.

  4. Face recognition using an enhanced independent component analysis approach.

    PubMed

    Kwak, Keun-Chang; Pedrycz, Witold

    2007-03-01

    This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of the generic ICA by augmenting this method by the Fisher linear discriminant analysis (LDA); hence, its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in low-dimension subspace and is endowed with a great deal of insensitivity to large variation in illumination and facial expression. The comprehensive experiments are completed for the facial-recognition technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and the ICA itself.

  5. Near-infrared Raman spectroscopy for estimating biochemical changes associated with different pathological conditions of cervix

    NASA Astrophysics Data System (ADS)

    Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu

    2018-02-01

    The molecular level changes associated with oncogenesis precede the morphological changes in cells and tissues. Hence, molecular level diagnosis would promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of the various biomolecules present in cells and tissues under various pathological conditions. The aim of this work is to develop a non-linear multi-class statistical methodology for the discrimination of normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by Artificial Neural Network (PC-ANN). The overall accuracy achieved was 99%. Further, to get an insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of the major biochemicals was fit to the measured Raman spectra of the tissues by the non-negative least squares technique. This technique confirms the changes in major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we have utilized Principal Component analysis followed by Linear Discriminant Analysis (PC-LDA) to discriminate well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma with an accuracy of 94.0%. The results demonstrated that Raman spectroscopy has the potential to complement the well-established technique of histopathology.

  6. Digital histologic analysis reveals morphometric patterns of age-related involution in breast epithelium and stroma.

    PubMed

    Sandhu, Rupninder; Chollet-Hinton, Lynn; Kirk, Erin L; Midkiff, Bentley; Troester, Melissa A

    2016-02-01

    Complete age-related regression of mammary epithelium, often termed postmenopausal involution, is associated with decreased breast cancer risk. However, most studies have qualitatively assessed involution. We quantitatively analyzed epithelium, stroma, and adipose tissue from histologically normal breast tissue of 454 patients in the Normal Breast Study. High-resolution digital images of normal breast hematoxylin and eosin-stained slides were partitioned into epithelium, adipose tissue, and nonfatty stroma. Percentage area and nuclei per unit area (nuclear density) were calculated for each component. Quantitative data were evaluated in association with age using linear regression and cubic spline models. Stromal area decreased (P = 0.0002), and adipose tissue area increased (P < 0.0001), with an approximate 0.7% change in area for each component, until age 55 years when these area measures reached a steady state. Although epithelial area did not show linear changes with age, epithelial nuclear density decreased linearly beginning in the third decade of life. No significant age-related trends were observed for stromal or adipose nuclear density. Digital image analysis offers a high-throughput method for quantitatively measuring tissue morphometry and for objectively assessing age-related changes in adipose tissue, stroma, and epithelium. Epithelial nuclear density is a quantitative measure of age-related breast involution that begins to decline in the early premenopausal period. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Ten-year in vivo wear measurement of a fully congruent mobile bearing unicompartmental knee arthroplasty.

    PubMed

    Price, A J; Short, A; Kellett, C; Beard, D; Gill, H; Pandit, H; Dodd, C A F; Murray, D W

    2005-11-01

    Polyethylene particulate wear debris continues to be implicated in the aetiology of aseptic loosening following knee arthroplasty. The Oxford unicompartmental knee arthroplasty employs a spherical femoral component and a fully congruous meniscal bearing to increase contact area and theoretically reduce the potential for polyethylene wear. This study measures the in vivo ten-year linear wear of the device, using a roentgenstereophotogrammetric technique. In this in vivo study, seven medial Oxford unicompartmental prostheses, which had been implanted ten years previously were studied. Stereo pairs of radiographs were acquired for each patient and the films were analysed using a roentgen stereophotogrammetric analysis calibration and a computer-aided design model silhouette-fitting technique. Penetration of the femoral component into the original volume of the bearing was our estimate of linear wear. In addition, eight control patients were examined less than three weeks post-insertion of an Oxford prosthesis, where no wear would be expected. The control group showed no measured wear and suggested a system accuracy of 0.1 mm. At ten years, the mean linear wear rate was 0.02 mm/year. The results from this in vivo study confirm that the device has low ten-year linear wear in clinical practice. This may offer the device a survival advantage in the long term.

  8. Simultaneous quantitation of 14 active components in Yinchenhao decoction by using ultra high performance liquid chromatography with diode array detection: Method development and ingredient analysis of different commonly prepared samples.

    PubMed

    Yi, YaXiong; Zhang, Yong; Ding, Yue; Lu, Lu; Zhang, Tong; Zhao, Yuan; Xu, XiaoJun; Zhang, YuXin

    2016-11-01

    We developed a novel quantitative analysis method based on ultra high performance liquid chromatography coupled with diode array detection for the simultaneous determination of the 14 main active components in Yinchenhao decoction. All components were separated on an Agilent SB-C18 column by using a gradient solvent system of acetonitrile/0.1% phosphoric acid solution at a flow rate of 0.4 mL/min for 35 min. Subsequently, linearity, precision, repeatability, and accuracy tests were performed to validate the method. Furthermore, the method was applied to a compositional difference analysis of the 14 components in eight normal-extraction Yinchenhao decoction samples, complemented by hierarchical clustering analysis and similarity analysis. The samples separated into three groups according to their component contents, demonstrating that the extraction method (decocting, refluxing, or ultrasonication) and the extraction solvent (water or ethanol) affect the composition, which should be taken into account in clinical applications. The results also indicated that a sample prepared by patients at home by water extraction in a casserole was almost the same as one prepared in a stainless-steel kettle, as mostly used in pharmaceutical factories. This research should help patients select the best and most convenient method for preparing Yinchenhao decoction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Statistical techniques applied to aerial radiometric surveys (STAARS): principal components analysis user's manual. [NURE program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, C.D.; Pirkle, F.L.; Schmidt, J.S.

    1981-01-01

    A Principal Components Analysis (PCA) program has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From this analysis a geologist may begin to interpret the data structure. Insight into geological processes underlying the data may be obtained.

  10. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
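    As a brief illustration of two of the reviewed feature-extraction families (PCA as a linear method and LLE as a nonlinear one), the sketch below reduces a hypothetical gene expression matrix with scikit-learn; the reviewed software packages themselves are not used, and all data and dimensions are placeholders.

    ```python
    # Hedged sketch of feature extraction from a gene expression matrix of
    # shape (n_samples, n_genes) using PCA and Locally Linear Embedding.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import LocallyLinearEmbedding

    rng = np.random.default_rng(3)
    expression = rng.normal(size=(80, 2000))      # placeholder expression data

    pca_features = PCA(n_components=10).fit_transform(expression)
    lle_features = LocallyLinearEmbedding(n_components=10,
                                          n_neighbors=15).fit_transform(expression)

    print(pca_features.shape, lle_features.shape)  # both (80, 10)
    ```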

  11. Raman structural study of melt-mixed blends of isotactic polypropylene with polyethylene of various densities

    NASA Astrophysics Data System (ADS)

    Prokhorov, K. A.; Nikolaeva, G. Yu; Sagitova, E. A.; Pashinin, P. P.; Guseva, M. A.; Shklyaruk, B. F.; Gerasin, V. A.

    2018-04-01

    We report a Raman structural study of melt-mixed blends of isotactic polypropylene with two grades of polyethylene: linear high-density and branched low-density polyethylenes. Raman methods, which had been suggested for the analysis of neat polyethylene and isotactic polypropylene, were modified in this study for quantitative analysis of polyethylene/polypropylene blends. We revealed the dependence of the degree of crystallinity and of the conformational composition of macromolecules in the blends on the relative content of the blend components and on the preparation conditions (quenching or annealing). We suggest a simple Raman method for evaluating the relative content of the components in polyethylene/polypropylene blends. The degree of crystallinity of our samples, evaluated by Raman spectroscopy, is in good agreement with the results of differential scanning calorimetry.

  12. Discrimination of rectal cancer through human serum using surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Xiaozhou; Yang, Tianyue; Li, Siqi; Zhang, Su; Jin, Lili

    2015-05-01

    In this paper, surface-enhanced Raman spectroscopy (SERS) was used to detect the changes in blood serum components that accompany rectal cancer. The differences in serum SERS data between rectal cancer patients and healthy controls were examined. Postoperative rectal cancer patients also participated in the comparison to monitor the effects of cancer treatments. The results show significant variations at certain wavenumbers, which indicate alterations of the corresponding biological substances. Principal component analysis (PCA) and parameters of intensity ratios were used on the original SERS spectra for the extraction of featured variables. These featured variables then underwent linear discriminant analysis (LDA) and classification and regression tree (CART) analysis for discrimination. Accuracies of 93.5% and 92.4% were obtained for PCA-LDA and parameter-CART, respectively.
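    A minimal sketch of the PCA-LDA step described above, assuming a hypothetical matrix of serum SERS spectra and group labels: the spectra are compressed to a few principal components and then classified by linear discriminant analysis under cross-validation.

    ```python
    # Minimal PCA-LDA discrimination sketch on placeholder SERS spectra.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    spectra = rng.normal(size=(90, 600))                  # placeholder serum SERS spectra
    groups = np.array(["cancer"] * 30 + ["postop"] * 30 + ["control"] * 30)

    pca_lda = make_pipeline(PCA(n_components=8), LinearDiscriminantAnalysis())
    accuracy = cross_val_score(pca_lda, spectra, groups, cv=5).mean()
    print(f"cross-validated PCA-LDA accuracy: {accuracy:.2f}")
    ```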

  13. Rapid and sensitive determination of major polyphenolic components in Euphoria longana Lam. seeds using matrix solid-phase dispersion extraction and UHPLC with hybrid linear ion trap triple quadrupole mass spectrometry.

    PubMed

    Rathore, Atul S; Sathiyanarayanan, L; Deshpande, Shreekant; Mahadik, Kakasaheb R

    2016-11-01

    A rapid and sensitive method for the extraction and determination of four major polyphenolic components in Euphoria longana Lam. seeds is presented for the first time, based on matrix solid-phase dispersion extraction followed by ultra high performance liquid chromatography with hybrid triple quadrupole linear ion trap mass spectrometry. The matrix solid-phase dispersion method was designed for the extraction of Euphoria longana seed constituents and compared with microwave-assisted and ultrasonic-assisted extraction methods. An ultra high performance liquid chromatography method with hybrid triple quadrupole linear ion trap mass spectrometry was developed for quantitative analysis in multiple-reaction monitoring mode with negative electrospray ionization. The chromatographic separation was accomplished on an ACQUITY UPLC BEH C18 (2.1 mm × 50 mm, 1.7 μm) column with gradient elution of 0.1% aqueous formic acid and 0.1% formic acid in acetonitrile. The developed method was validated with acceptable linearity (r² > 0.999), precision (RSD ≤ 2.22%) and recovery (RSD ≤ 2.35%). The results indicated that matrix solid-phase dispersion produced extraction efficiency comparable to the other methods, but was more convenient and time-saving, with reduced sample and solvent requirements. The proposed method is rapid and sensitive, providing a promising alternative for the extraction and comprehensive determination of active components for quality control of Euphoria longana products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Architectural measures of the cancellous bone of the mandibular condyle identified by principal components analysis.

    PubMed

    Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J

    2003-09-01

    As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.

  15. Nonlinear multivariate and time series analysis by neural network methods

    NASA Astrophysics Data System (ADS)

    Hsieh, William W.

    2004-03-01

    Methods in multivariate statistical analysis are essential for working with large amounts of geophysical data, data from observational arrays, from satellites, or from numerical model output. In classical multivariate statistical analysis, there is a hierarchy of methods, starting with linear regression at the base, followed by principal component analysis (PCA) and finally canonical correlation analysis (CCA). A multivariate time series method, the singular spectrum analysis (SSA), has been a fruitful extension of the PCA technique. The common drawback of these classical methods is that only linear structures can be correctly extracted from the data. Since the late 1980s, neural network methods have become popular for performing nonlinear regression and classification. More recently, neural network methods have been extended to perform nonlinear PCA (NLPCA), nonlinear CCA (NLCCA), and nonlinear SSA (NLSSA). This paper presents a unified view of the NLPCA, NLCCA, and NLSSA techniques and their applications to various data sets of the atmosphere and the ocean (especially for the El Niño-Southern Oscillation and the stratospheric quasi-biennial oscillation). These data sets reveal that the linear methods are often too simplistic to describe real-world systems, with a tendency to scatter a single oscillatory phenomenon into numerous unphysical modes or higher harmonics, which can be largely alleviated in the new nonlinear paradigm.
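    The autoencoder view of NLPCA can be illustrated with a toy sketch: a feed-forward network with a one-unit bottleneck is trained to reproduce its input, and the bottleneck activation serves as the nonlinear principal component. This uses scikit-learn on synthetic data and is not the network configuration used in the studies above.

    ```python
    # Toy autoencoder illustration of NLPCA: input -> mapping layer ->
    # 1-unit bottleneck -> demapping layer -> output, trained to reproduce X.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    t = rng.uniform(0, 2 * np.pi, 500)
    X = np.column_stack([np.cos(t), np.sin(t), 0.5 * np.cos(2 * t)])  # curved 1-D manifold in 3-D
    X = X + 0.02 * rng.normal(size=X.shape)

    ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                      max_iter=5000, random_state=0)
    ae.fit(X, X)

    # Recover the bottleneck activation (the nonlinear principal component)
    # by a manual forward pass through the first two layers.
    h = np.tanh(X @ ae.coefs_[0] + ae.intercepts_[0])
    nlpc = np.tanh(h @ ae.coefs_[1] + ae.intercepts_[1])
    print("nonlinear PC shape:", nlpc.shape)       # (500, 1)
    ```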

  16. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  17. Development of a railway wagon-track interaction model: Case studies on excited tracks

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Chen, Xianmai; Li, Xuwei; He, Xianglin

    2018-02-01

    In this paper, a theoretical framework for modeling railway wagon-ballast track interactions is presented, in which the dynamic equations of motion of wagon-track systems are constructed by coupling the linear and nonlinear dynamic characteristics of the system components. For the linear components, the energy-variational principle is used directly to derive their dynamic matrices, while for the nonlinear components the dynamic equilibrium method is used to deduce the load vectors. On this basis, a novel railway wagon-ballast track interaction model is developed and validated by comparison with experimental data measured on a heavy haul railway and with another advanced model. By integrating the dynamic simulation model, a track irregularity probabilistic model and time-frequency analysis methods, the study contributes to determining the critical speed of instability and the limits and localization of track irregularities associated with derailment accidents. The proposed approaches can provide crucial information to guarantee the running safety and stability of the wagon-track system for given track geometries and various running speeds.

  18. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    NASA Astrophysics Data System (ADS)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) has become the predominant technique; however, in some special cases PLSR produces considerable errors. To address this problem, the traditional principal component regression (PCR) method was improved in this paper by using the principle of PLSR. The experimental results show that for some special experimental data sets the improved PCR method performs better than PLSR. The PCR and PLSR methods are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, optimized principal components, which carry most of the original data information, are extracted using the principle of PLSR. Secondly, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components are obtained. Finally, the same water spectral data set is analysed with both PLSR and the improved PCR; the two methods give similar results for most data, but the improved PCR is better for data near the detection limit. Both PLSR and the improved PCR can be used in ultraviolet spectral analysis of water, but for data near the detection limit the improved PCR gives better results than PLSR.
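    The generic PCR-versus-PLSR comparison (not the paper's improved PCR variant) can be sketched as follows, assuming a hypothetical matrix of UV spectra and a reference concentration vector.

    ```python
    # Minimal comparison of principal component regression (PCR) and partial
    # least squares regression (PLSR) on placeholder spectra/concentration data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    spectra = rng.normal(size=(120, 200))                 # placeholder UV absorbance spectra
    concentration = spectra[:, :5].sum(axis=1) + 0.1 * rng.normal(size=120)

    pcr = make_pipeline(PCA(n_components=5), LinearRegression())
    plsr = PLSRegression(n_components=5)

    for name, model in [("PCR", pcr), ("PLSR", plsr)]:
        r2 = cross_val_score(model, spectra, concentration, cv=5, scoring="r2").mean()
        print(f"{name}: cross-validated R^2 = {r2:.3f}")
    ```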

  19. Metabolic syndrome: An independent risk factor for erectile dysfunction

    PubMed Central

    Sanjay, Saran; Bharti, Gupta Sona; Manish, Gutch; Rajeev, Philip; Pankaj, Agrawal; Puspalata, Agroiya; Keshavkumar, Gupta

    2015-01-01

    Objective: The objective was to determine the role of various components of metabolic syndrome (MetS) as independent risk factors for erectile dysfunction (ED). Materials and Methods: A total of 113 subjects with MetS, defined according to the recent IDF and AHA/NHLBI joint interim statement, who presented with ED were selected for the study. After anthropometric examination, fasting laboratory assays for fasting plasma glucose (FPG), fasting insulin, hemoglobin A1c, triglyceride (TG), high-density lipoprotein (HDL), low-density lipoprotein (LDL), and a 2 h oral glucose tolerance test (OGTT) were done. Erectile function was assessed by completing questions one through five of the International Index of Erectile Function (IIEF-5). A multiple linear regression analysis was carried out on 66 subjects with the IIEF-5 score as the dependent variable and the MetS components FPG, 2 h OGTT, TG, HDL, and waist circumference as independent variables. Results: Using the multiple linear regression analysis, we observed that the presence of the various components of MetS was associated with ED and a decreased IIEF-5 score, and this effect was greater than the effect associated with any of the individual components. Of the individual components of the MetS, HDL (B = 0.136; P = 0.004) and FPG (B = −0.069; P = 0.007) conferred the strongest effect on the IIEF-5 score. However, overall, age had the most significant effect on the IIEF-5 score. Conclusion: It is crucial to formulate and implement strategies to prevent or control the epidemic of MetS and its consequences. The early identification and treatment of risk factors, including diet and lifestyle interventions, might help prevent ED and secondary cardiovascular disease. PMID:25729692

  20. Metabolic syndrome: An independent risk factor for erectile dysfunction.

    PubMed

    Sanjay, Saran; Bharti, Gupta Sona; Manish, Gutch; Rajeev, Philip; Pankaj, Agrawal; Puspalata, Agroiya; Keshavkumar, Gupta

    2015-01-01

    The objective was to determine the role of various components of metabolic syndrome (MetS) as independent risk factors for erectile dysfunction (ED). A total of 113 subjects with MetS, defined according to the recent IDF and AHA/NHLBI joint interim statement, who presented with ED were selected for the study. After anthropometric examination, fasting laboratory assays for fasting plasma glucose (FPG), fasting insulin, hemoglobin A1c, triglyceride (TG), high-density lipoprotein (HDL), low-density lipoprotein (LDL), and a 2 h oral glucose tolerance test (OGTT) were done. Erectile function was assessed by completing questions one through five of the International Index of Erectile Function (IIEF-5). A multiple linear regression analysis was carried out on 66 subjects with the IIEF-5 score as the dependent variable and the MetS components FPG, 2 h OGTT, TG, HDL, and waist circumference as independent variables. Using the multiple linear regression analysis, we observed that the presence of the various components of MetS was associated with ED and a decreased IIEF-5 score, and this effect was greater than the effect associated with any of the individual components. Of the individual components of the MetS, HDL (B = 0.136; P = 0.004) and FPG (B = -0.069; P = 0.007) conferred the strongest effect on the IIEF-5 score. However, overall, age had the most significant effect on the IIEF-5 score. It is crucial to formulate and implement strategies to prevent or control the epidemic of MetS and its consequences. The early identification and treatment of risk factors, including diet and lifestyle interventions, might help prevent ED and secondary cardiovascular disease.

  1. New Optical Transforms For Statistical Image Recognition

    NASA Astrophysics Data System (ADS)

    Lee, Sing H.

    1983-12-01

    In the optical implementation of statistical image recognition, new optical transforms on large images for real-time recognition are of special interest. Several important linear transformations frequently used in statistical pattern recognition have now been optically implemented, including the Karhunen-Loeve transform (KLT), the Fukunaga-Koontz transform (FKT) and the least-squares linear mapping technique (LSLMT) [1-3]. The KLT performs principal components analysis on one class of patterns for feature extraction. The FKT performs feature extraction for separating two classes of patterns. The LSLMT separates multiple classes of patterns by maximizing the interclass differences and minimizing the intraclass variations.

  2. Restoration of recto-verso colour documents using correlated component analysis

    NASA Astrophysics Data System (ADS)

    Tonazzini, Anna; Bedini, Luigi

    2013-12-01

    In this article, we consider the problem of removing see-through interferences from pairs of recto-verso documents acquired either in grayscale or RGB modality. The see-through effect is a typical degradation of historical and archival documents or manuscripts, and is caused by transparency or seeping of ink from the reverse side of the page. We formulate the problem as one of separating two individual texts, overlapped in the recto and verso maps of the colour channels through a linear convolutional mixing operator, where the mixing coefficients are unknown, while the blur kernels are assumed known a priori or estimated off-line. We exploit statistical techniques of blind source separation to estimate both the unknown model parameters and the ideal, uncorrupted images of the two document sides. We show that recently proposed correlated component analysis techniques improve on the already satisfactory performance of independent component analysis techniques and colour decorrelation, even when the two texts are appreciably correlated.

  3. Discrimination of radiation quality through second harmonic out-of-phase cw-ESR detection.

    PubMed

    Marrale, Maurizio; Longo, Anna; Brai, Maria; Barbon, Antonio; Brustolon, Marina

    2014-02-01

    The ability to discriminate the quality of ionizing radiation is important because the biological effects produced in tissue depend strongly on both the absorbed dose and the linear energy transfer (LET) of the ionizing particles. Here we present an experimental electron spin resonance (ESR) analysis aimed at discriminating the effective LETs of various radiation beams (e.g., 19.3 MeV protons, (60)Co photons and thermal neutrons). Measurements of the intensities of the first harmonic in-phase and second harmonic out-of-phase components of the continuous-wave spectrometer signal channel are used to distinguish the radiation quality. A computational analysis was carried out to evaluate the dependence of the first harmonic in-phase and second harmonic out-of-phase components on microwave power, modulation amplitude and relaxation times; it highlights that these components can be used to reveal differences in the relaxation times. The experimental results are discussed on the basis of this numerical analysis. The methodology described in this study has the potential to provide information on radiation quality.

  4. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; the multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.

  5. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because the conductivity distribution varies inside the body, EIT produces multi-channel data. To obtain all the information contained at different tissue locations, it is necessary to image the individual conductivity distributions. In this paper we apply ICA to EIT on the signal subspace (the individual conductivity distributions). Using ICA, the signal subspace is decomposed into statistically independent components. The individual conductivity distributions are then reconstructed using the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.

  6. Authentication of virgin olive oil by a novel curve resolution approach combined with visible spectroscopy.

    PubMed

    Ferreiro-González, Marta; Barbero, Gerardo F; Álvarez, José A; Ruiz, Antonio; Palma, Miguel; Ayuso, Jesús

    2017-04-01

    Adulteration of olive oil is not only a major economic fraud but can also have major health implications for consumers. In this study, a combination of visible spectroscopy with a novel multivariate curve resolution method (CR), principal component analysis (PCA) and linear discriminant analysis (LDA) is proposed for the authentication of virgin olive oil (VOO) samples. VOOs are well-known products with the typical properties of a two-component system due to the two main groups of compounds that contribute to the visible spectra (chlorophylls and carotenoids). Application of the proposed CR method to VOO samples provided the two pure-component spectra for the aforementioned families of compounds. A correlation study of the real spectra and the resolved component spectra was carried out for different types of oil samples (n=118). LDA using the correlation coefficients as variables to discriminate samples allowed the authentication of 95% of virgin olive oil samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Computational model for the analysis of cartilage and cartilage tissue constructs

    PubMed Central

    Smith, David W.; Gardiner, Bruce S.; Davidson, John B.; Grodzinsky, Alan J.

    2013-01-01

    We propose a new non-linear poroelastic model that is suited to the analysis of soft tissues. In this paper the model is tailored to the analysis of cartilage and the engineering design of cartilage constructs. The proposed continuum formulation of the governing equations enables the strain of the individual material components within the extracellular matrix (ECM) to be followed over time, as the individual material components are synthesized, assembled and incorporated within the ECM or lost through passive transport or degradation. The material component analysis developed here naturally captures the effect of time-dependent changes of ECM composition on the deformation and internal stress states of the ECM. For example, it is shown that increased synthesis of aggrecan by chondrocytes embedded within a decellularized cartilage matrix initially devoid of aggrecan results in osmotic expansion of the newly synthesized proteoglycan matrix and tension within the structural collagen network. Specifically, we predict that the collagen network experiences a tensile strain, with a maximum of ~2% at the fixed base of the cartilage. The analysis of an example problem demonstrates the temporal and spatial evolution of the stresses and strains in each component of a self-equilibrating composite tissue construct, and the role played by the flux of water through the tissue. PMID:23784936

  8. Esophageal cancer detection based on tissue surface-enhanced Raman spectroscopy and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan

    2013-01-01

    The capability of using silver nanoparticle based near-infrared surface enhanced Raman scattering (SERS) spectroscopy combined with principal component analysis (PCA) and linear discriminant analysis (LDA) to differentiate esophageal cancer tissue from normal tissue was presented. Significant differences in Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved a diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.

  9. Resolving the percentage of component terrains within single resolution elements

    NASA Technical Reports Server (NTRS)

    Marsh, S. E.; Switzer, P.; Kowalik, W. S.; Lyon, R. J. P.

    1980-01-01

    An approximate maximum likelihood technique, employing a widely available discriminant analysis program, has been developed for resolving the percentage of component terrains within single resolution elements. The method uses all four channels of Landsat data simultaneously and does not require prior knowledge of the percentage of components in mixed pixels. It was tested in five cases chosen to represent mixtures of outcrop, soil and vegetation that would typically be encountered in geologic studies with Landsat data. In all five cases, the method proved superior to single-band weighted average and linear regression techniques and permitted an estimate of the total area occupied by component terrains to within plus or minus 6% of the true area covered. Its major drawback is a consistent overestimation of the pixel component percentage of the darker materials (vegetation) and an underestimation of the pixel component percentage of the brighter materials (sand).
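    For illustration only, the sketch below solves the related problem of estimating terrain fractions in a mixed pixel by non-negative least squares with known endmember signatures; this is a simple linear unmixing, not the discriminant-analysis-based maximum likelihood technique of the paper, and all band values are hypothetical.

    ```python
    # Simple linear unmixing of one mixed pixel via non-negative least squares
    # (a related but different technique, shown only as an illustration).
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical 4-band endmember signatures (columns: outcrop, soil, vegetation).
    endmembers = np.array([[0.40, 0.30, 0.05],
                           [0.45, 0.35, 0.08],
                           [0.50, 0.40, 0.30],
                           [0.55, 0.45, 0.60]])
    mixed_pixel = endmembers @ np.array([0.2, 0.5, 0.3])   # synthetic mixture

    fractions, _ = nnls(endmembers, mixed_pixel)
    fractions /= fractions.sum()                            # approximate sum-to-one constraint
    print("estimated terrain fractions:", np.round(fractions, 2))
    ```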

  10. Open architectures for formal reasoning and deductive technologies for software development

    NASA Technical Reports Server (NTRS)

    Mccarthy, John; Manna, Zohar; Mason, Ian; Pnueli, Amir; Talcott, Carolyn; Waldinger, Richard

    1994-01-01

    The objective of this project is to develop an open architecture for formal reasoning systems. One goal is to provide a framework with a clear semantic basis for specification and instantiation of generic components; construction of complex systems by interconnecting components; and for making incremental improvements and tailoring to specific applications. Another goal is to develop methods for specifying component interfaces and interactions to facilitate use of existing and newly built systems as 'off the shelf' components, thus helping bridge the gap between producers and consumers of reasoning systems. In this report we summarize results in several areas: our data base of reasoning systems; a theory of binding structures; a theory of components of open systems; a framework for specifying components of open reasoning system; and an analysis of the integration of rewriting and linear arithmetic modules in Boyer-Moore using the above framework.

  11. On reliable time-frequency characterization and delay estimation of stimulus frequency otoacoustic emissions

    NASA Astrophysics Data System (ADS)

    Biswal, Milan; Mishra, Srikanta

    2018-05-01

    The limited information on the origin and nature of stimulus frequency otoacoustic emissions (SFOAEs) necessitates a thorough reexamination of SFOAE analysis procedures, which will lead to a better understanding of the generation of SFOAEs. The SFOAE response waveform in the time domain can be interpreted as a summation of amplitude-modulated and frequency-modulated component waveforms, and the efficiency of a technique in segregating these components is critical for describing the nature of SFOAEs. Recent advances in robust time-frequency analysis algorithms claim more accurate extraction of such components from composite signals buried in noise, but their potential has not been fully explored for SFOAE analysis; insensitivity to distinct information, owing to the nature of these analysis techniques, may affect the scientific conclusions. This paper attempts to bridge this gap in the literature by evaluating the performance of three linear time-frequency analysis algorithms, the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the S-transform (ST), and two nonlinear algorithms, the Hilbert-Huang transform (HHT) and the synchrosqueezed wavelet transform (SWT). We revisit the extraction of the constituent components and the estimation of their magnitude and delay, carefully evaluating the impact of variation in the analysis parameters. The performance of HHT and SWT, from the perspective of time-frequency filtering and delay estimation, was found to be relatively less efficient for analyzing SFOAEs. The intrinsic mode functions of the HHT do not completely characterize the reflection components, and hence IMF-based filtering alone is not recommended for segregating the principal emission from multiple reflection components. We found the STFT, CWT, and ST to be suitable for cancelling multiple internal reflection components with marginal alteration of the SFOAE.
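    One of the linear analyses discussed above, the short-time Fourier transform, can be sketched on a synthetic two-component signal standing in for an SFOAE waveform; the sampling rate, component definitions, and window settings are placeholders.

    ```python
    # Minimal STFT sketch on a synthetic signal containing an AM/FM component
    # and a steady tone (placeholders for SFOAE components).
    import numpy as np
    from scipy.signal import stft

    fs = 8000.0
    t = np.arange(0, 0.5, 1 / fs)
    component1 = np.exp(-3 * t) * np.sin(2 * np.pi * (1000 + 200 * t) * t)
    component2 = 0.5 * np.sin(2 * np.pi * 2500 * t)
    signal = component1 + component2 + 0.01 * np.random.default_rng(7).normal(size=t.size)

    f, tau, Z = stft(signal, fs=fs, nperseg=256, noverlap=192)
    magnitude = np.abs(Z)                       # time-frequency magnitude map
    peak_track = f[magnitude.argmax(axis=0)]    # dominant frequency in each frame
    print("frequency bins:", f.shape, "frames:", tau.shape)
    ```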

  12. Local linear discriminant analysis framework using sample neighbors.

    PubMed

    Fan, Zizhu; Xu, Yong; Zhang, David

    2011-07-01

    The linear discriminant analysis (LDA) is a very popular linear feature extraction approach. The algorithms of LDA usually perform well under the following two assumptions. The first assumption is that the global data structure is consistent with the local data structure. The second assumption is that the input data classes are Gaussian distributions. However, in real-world applications, these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, the local LDA (LLDA), which can perform well without needing to satisfy the above two assumptions. Our LLDA framework can effectively capture the local structure of samples. According to different types of local data structure, our LLDA framework incorporates several different forms of linear feature extraction approaches, such as the classical LDA and principal component analysis. The proposed framework includes two LLDA algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition, such as face recognition. Our algorithms need to train only a small portion of the whole training set before testing a sample. They are suitable for learning large-scale databases especially when the input data dimensions are very high and can achieve high classification accuracy. Extensive experiments show that the proposed algorithms can obtain good classification results.

  13. Embedding of multidimensional time-dependent observations.

    PubMed

    Barnard, J P; Aldrich, C; Gerber, M

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.

  14. Embedding of multidimensional time-dependent observations

    NASA Astrophysics Data System (ADS)

    Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
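    A hedged sketch of the embedding idea described in the two records above: a scalar observation is delay-embedded following Takens and then rotated with ICA into independent phase variables. The time series, embedding dimension, and delay below are placeholders rather than the case-study data.

    ```python
    # Takens delay embedding of a scalar series followed by ICA rotation.
    import numpy as np
    from sklearn.decomposition import FastICA

    def delay_embed(x, dim, tau):
        """Return the (len(x) - (dim-1)*tau, dim) delay-embedding matrix."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    rng = np.random.default_rng(8)
    t = np.linspace(0, 60, 6000)
    x = np.sin(t) + 0.5 * np.sin(2.3 * t) + 0.02 * rng.normal(size=t.size)  # observed scalar

    embedded = delay_embed(x, dim=3, tau=10)
    phase_vars = FastICA(n_components=3, random_state=0).fit_transform(embedded)
    print("embedded:", embedded.shape, "independent phase variables:", phase_vars.shape)
    ```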

  15. Multiple pass laser amplifier system

    DOEpatents

    Brueckner, Keith A.; Jorna, Siebe; Moncur, N. Kent

    1977-01-01

    A laser amplification method for increasing the energy extraction efficiency of laser amplifiers while reducing the energy flux that passes through a flux-limited system. The apparatus decomposes a linearly polarized light beam into multiple components, passes the components through an amplifier in delayed time sequence, and recombines the amplified components into an in-phase linearly polarized beam.

  16. New Insights into the Folding of a β-Sheet Miniprotein in a Reduced Space of Collective Hydrogen Bond Variables: Application to a Hydrodynamic Analysis of the Folding Flow

    PubMed Central

    Kalgin, Igor V.; Caflisch, Amedeo; Chekmarev, Sergei F.; Karplus, Martin

    2013-01-01

    A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted Principal Component Analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed and it is complemented by a hydrodynamic analysis. The latter, making use of “passive tracers” in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface, but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this “hydrogen atom” of protein folding. PMID:23621790

  17. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The objective of the Linear Test Bed program was to design, fabricate, and evaluate an advanced aerospike test bed employing the segmented combustor concept. The system is designated as a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches high. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at 1200-psia chamber pressure and a mixture ratio of 5.5. At the design conditions, the sea level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component test, system test, supporting analysis and posttest hardware inspection, is described.

  18. The neural basis of attaining conscious awareness of sad mood.

    PubMed

    Smith, Ryan; Braden, B Blair; Chen, Kewei; Ponce, Francisco A; Lane, Richard D; Baxter, Leslie C

    2015-09-01

    The neural processes associated with becoming aware of sad mood are not fully understood. We examined the dynamic process of becoming aware of sad mood and recovery from sad mood. Sixteen healthy subjects underwent fMRI while participating in a sadness induction task designed to allow for variable mood induction times. Individualized regressors linearly modeled the time periods during the attainment of self-reported sad and baseline "neutral" mood states, and the validity of the linearity assumption was further tested using independent component analysis. During sadness induction the dorsomedial and ventrolateral prefrontal cortices, and anterior insula exhibited a linear increase in the blood oxygen level-dependent (BOLD) signal until subjects became aware of a sad mood and then a subsequent linear decrease as subjects transitioned from sadness back to the non-sadness baseline condition. These findings extend understanding of the neural basis of conscious emotional experience.

  19. Quantitative structure-activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods.

    PubMed

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure-activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model suggested is robust and satisfactory.

  20. Quantitative structure–activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods

    PubMed Central

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure–activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7−7−1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure–activity relationship model suggested is robust and satisfactory. PMID:26600858
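    A hedged sketch of the PCA-plus-neural-network modelling step described in the two records above, assuming hypothetical descriptor and activity arrays: the descriptors are compressed to seven principal components that feed a small feed-forward network. The genetic-algorithm selection of components is simplified here to keeping the leading PCs.

    ```python
    # PCA + small feed-forward network for a QSAR-style regression
    # (placeholder descriptors/activities; GA selection replaced by leading PCs).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)
    descriptors = rng.normal(size=(49, 120))        # placeholder molecular descriptors
    activity = rng.normal(size=49)                  # placeholder activities

    model = make_pipeline(
        PCA(n_components=7),                        # seven PCs, as in the abstract
        MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=0),
    )
    q2 = cross_val_score(model, descriptors, activity, cv=5, scoring="r2").mean()
    print(f"cross-validated R^2: {q2:.2f}")
    ```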

  1. Chemometric investigation of light-shade effects on essential oil yield and morphology of Moroccan Myrtus communis L.

    PubMed

    Fadil, Mouhcine; Farah, Abdellah; Ihssane, Bouchaib; Haloui, Taoufik; Lebrazi, Sara; Zghari, Badreddine; Rachiq, Saâd

    2016-01-01

    To investigate the effect of environmental factors such as light and shade on the essential oil yield and morphological traits of Moroccan Myrtus communis, a chemometric study was conducted on 20 individuals growing under two contrasting light environments. Principal component analysis of the individuals' parameters showed that essential oil yield, altitude, and leaf thickness were positively correlated with each other and negatively correlated with plant height, leaf length and leaf width. Principal component analysis and hierarchical cluster analysis also showed that the individuals of each sampling site were grouped separately. A one-way ANOVA test confirmed the effect of light and shade on essential oil yield and morphological parameters by showing a statistically significant difference between the shaded side and the sunny one. Finally, a multiple linear model containing main, interaction and quadratic terms was chosen for modeling essential oil yield in terms of the morphological parameters. Sun plants have a small height and small leaf length and width, but they are thicker and richer in essential oil than shade plants, which show almost the opposite. The highlighted multiple linear model can be used to predict essential oil yield in the studied area.

  2. Classification of breast tissue in mammograms using efficient coding.

    PubMed

    Costa, Daniel D; Campos, Lúcio F; Barros, Allan K

    2011-06-24

    Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made to improve the diagnostic accuracy of radiologists. Some methods of lesion diagnosis in mammogram images have been developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, used in computer vision applications and in modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the efficient coding model presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis successfully performed the efficient coding needed to discriminate mass from non-mass tissues. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets, providing significant support for a more detailed clinical investigation.

  3. Linear score tests for variance components in linear mixed models and applications to genetic association studies.

    PubMed

    Qu, Long; Guennel, Tobias; Marshall, Scott L

    2013-12-01

    Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.

  4. Implementation of an integrating sphere for the enhancement of noninvasive glucose detection using quantum cascade laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.

    2018-05-01

    An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy levels. Using a smaller TE-cooled detector reduces form factor, creating a mobile sensor. Principal component analysis has predicted principal components of spectra taken from human subjects that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.

  5. Air-coupled laser vibrometry: analysis and applications.

    PubMed

    Solodov, Igor; Döring, Daniel; Busse, Gerd

    2009-03-01

    Acousto-optic interaction between a narrow laser beam and acoustic waves in air is analyzed theoretically. The photoelastic relation in air is used to derive the phase modulation of laser light in air-coupled reflection vibrometry induced by the angular spatial spectral components comprising the acoustic beam. Maximum interaction was found for the zero spatial acoustic component propagating normal to the laser beam. The angular dependence of the imaging efficiency is determined for the axial and nonaxial acoustic components, with regard to the laser beam steering in the scanning mode. The sensitivity of air-coupled vibrometry is compared with conventional "Doppler" reflection vibrometry. Applications of the methodology for visualization of linear and nonlinear air-coupled fields are demonstrated.

  6. SEM analysis of ionizing radiation effects in linear integrated circuits. [Scanning Electron Microscope

    NASA Technical Reports Server (NTRS)

    Stanley, A. G.; Gauthier, M. K.

    1977-01-01

    A successful diagnostic technique was developed using a scanning electron microscope (SEM) as a precision tool to determine ionization effects in integrated circuits. Previous SEM methods irradiated the entire semiconductor chip or major areas of it; such large-area exposure methods do not reveal the exact components that are sensitive to radiation. To locate these sensitive components, a new method was developed, which consisted of successively irradiating selected components on the device chip with equal doses of electrons [10^6 rad (Si)], while the whole device was subjected to representative bias conditions. A suitable device parameter was measured in situ after each successive irradiation with the beam off.

  7. Polarization analysis of diamond subwavelength gratings acting as space-variant birefringent elements

    NASA Astrophysics Data System (ADS)

    Piron, P.; Vargas Catalan, E.; Karlsson, M.

    2018-02-01

    Subwavelength gratings are gratings with a period smaller than the incident wavelength. They only allow the zeroth order of diffraction, they possess form birefringence and they can be modeled as birefringent plates. In this paper, we present the first results of an experimental method designed to measure their polarization properties. The method consists of measuring the variation of the light transmitted through two linear polarizers, with the subwavelength component between them, for several orientations of the polarizers. In this paper, the basic principles of the method are introduced and the experimental setup is presented. Several types of components are numerically studied and the optical measurements of one component are presented.

  8. SpectralNET – an application for spectral graph analysis and visualization

    PubMed Central

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-01-01

    Background Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request. PMID:16236170

  9. SpectralNET--an application for spectral graph analysis and visualization.

    PubMed

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-10-19

    Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.
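    A minimal sketch of the Laplacian-eigenvector embedding that the abstract describes as SpectralNET's non-linear dimensionality reduction; this is a generic SciPy reimplementation of the idea on a toy graph, not SpectralNET's own code.

```python
# Embed a small graph using the low-order eigenvectors of its normalized Laplacian.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian

# Adjacency matrix of a small undirected example graph (assumed for illustration).
A = csr_matrix(np.array([[0, 1, 1, 0],
                         [1, 0, 1, 0],
                         [1, 1, 0, 1],
                         [0, 0, 1, 0]], dtype=float))

L = laplacian(A, normed=True)                 # normalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L.toarray())
embedding = eigvecs[:, 1:3]                   # skip the trivial first eigenvector
print(embedding)                              # one 2-D coordinate per vertex
```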

  10. A Component-Centered Meta-Analysis of Family-Based Prevention Programs for Adolescent Substance Use

    PubMed Central

    Roseth, Cary J.; Fosco, Gregory M.; Lee, You-kyung; Chen, I-Chien

    2016-01-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. PMID:27064553

  11. Linear and nonlinear subspace analysis of hand movements during grasping.

    PubMed

    Cui, Phil Hengjun; Visell, Yon

    2014-01-01

    This study investigated nonlinear patterns of coordination, or synergies, underlying whole-hand grasping kinematics. Prior research has shed considerable light on roles played by such coordinated degrees-of-freedom (DOF), illuminating how motor control is facilitated by structural and functional specializations in the brain, peripheral nervous system, and musculoskeletal system. However, existing analyses suppose that the patterns of coordination can be captured by means of linear analyses, as linear combinations of nominally independent DOF. In contrast, hand kinematics is itself highly nonlinear in nature. To address this discrepancy, we sought to determine whether nonlinear synergies might serve to more accurately and efficiently explain human grasping kinematics than is possible with linear analyses. We analyzed motion capture data acquired from the hands of individuals as they grasped an array of common objects, using four of the most widely used linear and nonlinear dimensionality reduction algorithms. We compared the results using a recently developed algorithm-agnostic quality measure, which enabled us to assess the quality of the resulting dimensional reductions by measuring the extent to which local neighborhood information in the data was preserved. Although qualitative inspection of this data suggested that nonlinear correlations between kinematic variables were present, we found that linear modeling, in the form of Principal Components Analysis, could perform better than any of the nonlinear techniques we applied.
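    A minimal sketch of comparing one linear and one nonlinear dimensionality reduction with a neighborhood-preservation score; scikit-learn's trustworthiness is used here only as a stand-in for the algorithm-agnostic quality measure cited above, and the hand-posture data are synthetic.

```python
# Compare PCA (linear) and Isomap (nonlinear) reductions of synthetic joint-angle data
# using a neighborhood-preservation score.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, trustworthiness

rng = np.random.default_rng(1)
joint_angles = rng.normal(size=(200, 20))   # synthetic stand-in for hand DOF recordings

for name, reducer in [("PCA", PCA(n_components=3)),
                      ("Isomap", Isomap(n_components=3))]:
    low_dim = reducer.fit_transform(joint_angles)
    score = trustworthiness(joint_angles, low_dim, n_neighbors=10)
    print(name, round(score, 3))
```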

  12. BEST3D user's manual: Boundary Element Solution Technology, 3-Dimensional Version 3.0

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The theoretical basis and programming strategy utilized in the construction of the computer program BEST3D (boundary element solution technology - three dimensional) and detailed input instructions are provided for the use of the program. An extensive set of test cases and sample problems is included in the manual and is also available for distribution with the program. The BEST3D program was developed under the 3-D Inelastic Analysis Methods for Hot Section Components contract (NAS3-23697). The overall objective of this program was the development of new computer programs allowing more accurate and efficient three-dimensional thermal and stress analysis of hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The BEST3D program allows both linear and nonlinear analysis of static and quasi-static elastic problems and transient dynamic analysis for elastic problems. Calculation of elastic natural frequencies and mode shapes is also provided.

  13. SCBUCKLE user's manual: Buckling analysis program for simple supported and clamped panels

    NASA Technical Reports Server (NTRS)

    Cruz, Juan R.

    1993-01-01

    The program SCBUCKLE calculates the buckling loads and mode shapes of cylindrically curved, rectangular panels. The panel is assumed to have no imperfections. SCBUCKLE is capable of analyzing specially orthotropic symmetric panels (i.e., A_16 = A_26 = 0.0, D_16 = D_26 = 0.0, B_ij = 0.0). The analysis includes first-order transverse shear theory and is capable of modeling sandwich panels. The analysis supports two types of boundary conditions: either simply supported or clamped on all four edges. The panel can be subjected to linearly varying normal loads N_x and N_y in addition to a constant shear load N_xy. The applied loads can be divided into two parts: a preload component; and a variable (eigenvalue-dependent) component. The analysis is based on the modified Donnell's equations for shallow shells. The governing equations are solved by Galerkin's method.

  14. Classification of adulterated honeys by multivariate analysis.

    PubMed

    Amiry, Saber; Esmaiili, Mohsen; Alizadeh, Mohammad

    2017-06-01

    In this research, honey samples were adulterated with date syrup (DS) and invert sugar syrup (IS) at three concentrations (7%, 15% and 30%). 102 adulterated samples were prepared in six batches with 17 replications for each batch. For each sample, 32 parameters including color indices and rheological, physical, and chemical parameters were determined. To classify the samples based on the type and concentration of adulterant, a multivariate analysis was applied using principal component analysis (PCA) followed by a linear discriminant analysis (LDA). Then, 21 principal components (PCs) were selected in five sets. Approximately two-thirds of the samples were identified correctly using color indices (62.75%) or rheological properties (67.65%). Powerful discrimination was obtained using physical properties (97.06%), and the best separations were achieved using two sets of chemical properties (set 1: lactone, diastase activity, sucrose - 100%) (set 2: free acidity, HMF, ash - 95%). Copyright © 2016 Elsevier Ltd. All rights reserved.
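    A minimal sketch of the PCA-followed-by-LDA classification workflow described above; the feature matrix, class labels, and number of retained components are synthetic placeholders rather than the honey data.

```python
# PCA for dimensionality reduction, LDA for class discrimination, evaluated by cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(102, 32))        # 102 samples x 32 measured parameters (synthetic)
y = rng.integers(0, 6, size=102)      # six adulteration batches (synthetic labels)

clf = make_pipeline(PCA(n_components=21), LinearDiscriminantAnalysis())
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"Cross-validated accuracy: {accuracy:.2%}")
```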

  15. Analysis of friction and instability by the centre manifold theory for a non-linear sprag-slip model

    NASA Astrophysics Data System (ADS)

    Sinou, J.-J.; Thouverez, F.; Jezequel, L.

    2003-08-01

    This paper presents research devoted to the study of instability phenomena in a non-linear model with a constant brake friction coefficient. The impact of unstable oscillations can be catastrophic: it can cause vehicle control problems and component degradation. Accordingly, a thorough stability analysis is required. This paper outlines the stability analysis and the centre manifold approach for studying instability problems. More precisely, brake vibrations are considered, and specifically heavy-truck judder, where the dynamic characteristics of the whole front axle assembly are concerned, even if the source of judder is located in the brake system. The modelling introduces the sprag-slip mechanism based on dynamic coupling due to buttressing. The non-linearity is expressed as a polynomial with quadratic and cubic terms. This model does not require a negative brake friction coefficient in order to predict the instability phenomena. Finally, the centre manifold approach is used to obtain equations for the limit cycle amplitudes. The centre manifold theory allows the reduction of the number of equations of the original system in order to obtain a simplified system, without losing the dynamics of the original system or the contributions of the non-linear terms. The goal is the stability analysis and the validation of the centre manifold approach for a complex non-linear model, by comparing results obtained by solving the full system with those obtained using the centre manifold approach. The brake friction coefficient is used as an unfolding parameter of the fundamental Hopf bifurcation point.

  16. Discriminating the Mineralogical Composition in Drill Cuttings Based on Absorption Spectra in the Terahertz Range.

    PubMed

    Miao, Xinyang; Li, Hao; Bao, Rima; Feng, Chengjing; Wu, Hang; Zhan, Honglei; Li, Yizhang; Zhao, Kun

    2017-02-01

    Understanding the geological units of a reservoir is essential to the development and management of the resource. In this paper, drill cuttings from several depths from an oilfield were studied using terahertz time domain spectroscopy (THz-TDS). Cluster analysis (CA) and principal component analysis (PCA) were employed to classify and analyze the cuttings. The cuttings were clearly classified based on CA and PCA methods, and the results were in agreement with the lithology. Moreover, calcite and dolomite have stronger absorption of a THz pulse than any other minerals, based on an analysis of the PC1 scores. Quantitative analyses of minor minerals were also realized by building a series of linear and non-linear models between contents and PC2 scores. The results prove THz technology to be a promising means for determining reservoir lithology as well as other properties, which will be a significant supplementary method in oil fields.

  17. Geographical identification of saffron (Crocus sativus L.) by linear discriminant analysis applied to the UV-visible spectra of aqueous extracts.

    PubMed

    D'Archivio, Angelo Antonio; Maggi, Maria Anna

    2017-03-15

    We attempted geographical classification of saffron using UV-visible spectroscopy, conventionally adopted for quality grading according to the ISO 3632 standard. We investigated 81 saffron samples produced in L'Aquila, Città della Pieve, Cascia, and Sardinia (Italy) and commercial products purchased in various supermarkets. Exploratory principal component analysis applied to the UV-vis spectra of saffron aqueous extracts revealed a clear differentiation of the samples belonging to different quality categories, but a poor separation according to the geographical origin of the spices. On the other hand, linear discriminant analysis based on 8 selected absorbance values, concentrated near 279, 305 and 328 nm, allowed a good distinction of the spices coming from different sites. Under severe validation conditions (30% and 50% of saffron samples in the evaluation set), correct predictions were 85% and 83%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Hierarchical Bayes approach for subgroup analysis.

    PubMed

    Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C

    2017-01-01

    In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.

  19. A powerful local shear instability in weakly magnetized disks. I - Linear analysis. II - Nonlinear evolution

    NASA Technical Reports Server (NTRS)

    Balbus, Steven A.; Hawley, John F.

    1991-01-01

    A broad class of astronomical accretion disks is presently shown to be dynamically unstable to axisymmetric disturbances in the presence of a weak magnetic field, an insight with consequently broad applicability to gaseous, differentially-rotating systems. In the first part of this work, a linear analysis is presented of the instability, which is local and extremely powerful; the maximum growth rate, which is of the order of the angular rotation velocity, is independent of the strength of the magnetic field. Fluid motions associated with the instability directly generate both poloidal and toroidal field components. In the second part of this investigation, the scaling relation between the instability's wavenumber and the Alfven velocity is demonstrated, and the independence of the maximum growth rate from magnetic field strength is confirmed.

  20. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
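    A minimal sketch of the reconstruction idea behind the FPCA light-curve model described above: each curve is a mean function plus a few principal component functions weighted by per-object scores. Ordinary PCA on densely, regularly sampled synthetic curves is used here as a stand-in; the paper's method additionally handles sparse, irregular sampling.

```python
# Represent each light curve as mean(phase) + sum_k score_k * PC_k(phase).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
phase = np.linspace(-10, 40, 120)             # days relative to peak (synthetic grid)
curves = np.array([np.exp(-0.5 * ((phase - s) / 12) ** 2)
                   + 0.02 * rng.normal(size=phase.size)
                   for s in rng.normal(0, 2, size=80)])   # 80 synthetic light curves

pca = PCA(n_components=3).fit(curves)
scores = pca.transform(curves)                          # "principal component scores"
reconstructed = pca.mean_ + scores @ pca.components_    # mean + weighted component functions
print("RMS reconstruction error:", np.sqrt(np.mean((curves - reconstructed) ** 2)))
```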

  1. Arrhythmia recognition and classification using combined linear and nonlinear features of ECG signals.

    PubMed

    Elhaj, Fatin A; Salim, Naomie; Harris, Arief R; Swee, Tan Tian; Ahmed, Taqwa

    2016-04-01

    Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, and an electrocardiogram (ECG) is the non-invasive method used to detect arrhythmias or heart abnormalities. Due to the presence of noise, the non-stationary nature of the ECG signal (i.e. the changing morphology of the ECG signal with respect to time) and the irregularity of the heartbeat, physicians face difficulties in the diagnosis of arrhythmias. The computer-aided analysis of ECG results assists physicians to detect cardiovascular diseases. The development of many existing arrhythmia systems has depended on findings from linear experiments on ECG data, which achieve high performance on noise-free data. However, nonlinear methods characterize the ECG signal more effectively, extract hidden information in the ECG signal, and achieve good performance under noisy conditions. This paper investigates the representation ability of linear and nonlinear features and proposes a combination of such features in order to improve the classification of ECG data. In this study, five types of beat classes of arrhythmia as recommended by the Association for the Advancement of Medical Instrumentation are analyzed: non-ectopic beats (N), supra-ventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F) and unclassifiable and paced beats (U). The characterization ability of nonlinear features such as high order statistics and cumulants and nonlinear feature reduction methods such as independent component analysis are combined with linear features, namely, the principal component analysis of discrete wavelet transform coefficients. The features are tested for their ability to differentiate different classes of data using different classifiers, namely, the support vector machine and neural network methods with tenfold cross-validation. Our proposed method is able to classify the N, S, V, F and U arrhythmia classes with high accuracy (98.91%) using a combined support vector machine and radial basis function method. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
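    A minimal sketch of combining linear features (PCA of discrete-wavelet coefficients) with a nonlinear feature reduction (ICA) ahead of an SVM with ten-fold cross-validation, loosely following the pipeline described above; the beat segments, labels, wavelet choice, and feature counts are illustrative assumptions, not the study's settings or data.

```python
# Combine PCA-of-DWT (linear) and ICA (nonlinear reduction) features, then classify with an SVM.
import numpy as np
import pywt
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
beats = rng.normal(size=(300, 180))       # 300 beat segments x 180 samples (synthetic)
labels = rng.integers(0, 5, size=300)     # N, S, V, F, U (synthetic labels)

# Linear features: PCA of level-4 DWT approximation coefficients.
dwt = np.array([pywt.wavedec(b, "db4", level=4)[0] for b in beats])
linear_feat = PCA(n_components=6).fit_transform(dwt)

# Nonlinear feature reduction: independent components of the raw beats.
nonlinear_feat = FastICA(n_components=6, random_state=0).fit_transform(beats)

features = np.hstack([linear_feat, nonlinear_feat])
accuracy = cross_val_score(SVC(kernel="rbf"), features, labels, cv=10).mean()
print(f"10-fold accuracy: {accuracy:.2%}")
```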

  2. Optimization benefits analysis in production process of fabrication components

    NASA Astrophysics Data System (ADS)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as Liner Plates), which influences the profit obtained by the company. The Liner Plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. Liner plate sales from January to December 2016 fluctuated, and no direct conclusion could be drawn about the optimal production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows, to obtain the optimal combination of fabrication components. At the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total production of 71 units across the product variants per month.
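    A minimal sketch of a primal product-mix linear program of the kind described above, solved with SciPy rather than QM for Windows; the profit coefficients, resource-usage rows, and capacities are invented for illustration and are not PT. UTPE's figures.

```python
# Maximize total profit subject to resource capacities (linprog minimizes, so negate profits).
from scipy.optimize import linprog

profit = [-1.5, -2.0, -1.2]      # negated profit per unit for three liner-plate variants (assumed)
usage = [[2, 3, 1],              # cutting hours per unit (assumed)
         [4, 1, 2]]              # welding hours per unit (assumed)
capacity = [120, 160]            # available hours per month (assumed)

result = linprog(c=profit, A_ub=usage, b_ub=capacity,
                 bounds=[(0, None)] * 3, method="highs")
print("Optimal production mix:", result.x)
print("Maximum profit:", -result.fun)
```

    Dual values and sensitivity information, analogous to the dual analysis mentioned in the abstract, can be read from the solver's marginals when the HiGHS method is used.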

  3. Non-linear dynamic analysis of geared systems, part 2

    NASA Technical Reports Server (NTRS)

    Singh, Rajendra; Houser, Donald R.; Kahraman, Ahmet

    1990-01-01

    A good understanding of the steady state dynamic behavior of a geared system is required in order to design reliable and quiet transmissions. This study focuses on a system containing a spur gear pair with backlash and periodically time-varying mesh stiffness, and rolling element bearings with clearance type non-linearities. A dynamic finite element model of the linear time-invariant (LTI) system is developed. Effects of several system parameters, such as torsional and transverse flexibilities of the shafts and prime mover/load inertias, on free and forced vibration characteristics are investigated. Several reduced order LTI models are developed and validated by comparing their eigensolutions with the finite element model results. Several key system parameters such as mean load and damping ratio are identified and their effects on the non-linear frequency response are evaluated quantitatively. Other fundamental issues such as the dynamic coupling between non-linear modes, dynamic interactions between component non-linearities and time-varying mesh stiffness, and the existence of subharmonic and chaotic solutions including routes to chaos have also been examined in depth.

  4. Dual-energy x-ray image decomposition by independent component analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang

    2001-09-01

    The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. This method can be applied because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) the two processes take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorption mechanisms. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energy used for imaging, while the results of the separation agree well with those of the conventional method.
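    A minimal sketch of the separation idea under the stated linear-mixture assumption: two acquired images are treated as unknown linear combinations of bone and soft-tissue distributions and unmixed with FastICA. The images and mixing coefficients below are synthetic placeholders.

```python
# Unmix two linearly mixed "images" into independent source distributions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
bone = rng.random(size=64 * 64)            # flattened synthetic bone map
soft_tissue = rng.random(size=64 * 64)     # flattened synthetic soft-tissue map

mixing = np.array([[0.7, 0.3],             # assumed mixing of the two acquisitions
                   [0.4, 0.6]])
measured = mixing @ np.vstack([bone, soft_tissue])   # two "measured" dual-energy images

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(measured.T).T   # recovered maps, up to scale and ordering
print(sources.shape)                        # (2, 4096)
```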

  5. Vestibular coriolis effect differences modeled with three-dimensional linear-angular interactions.

    PubMed

    Holly, Jan E

    2004-01-01

    The vestibular coriolis (or "cross-coupling") effect is traditionally explained by cross-coupled angular vectors, which, however, do not explain the differences in perceptual disturbance under different acceleration conditions. For example, during head roll tilt in a rotating chair, the magnitude of perceptual disturbance is affected by a number of factors, including acceleration or deceleration of the chair rotation or a zero-g environment. Therefore, it has been suggested that linear-angular interactions play a role. The present research investigated whether these perceptual differences and others involving linear coriolis accelerations could be explained under one common framework: the laws of motion in three dimensions, which include all linear-angular interactions among all six components of motion (three angular and three linear). The results show that the three-dimensional laws of motion predict the differences in perceptual disturbance. No special properties of the vestibular system or nervous system are required. In addition, simulations were performed with angular, linear, and tilt time constants inserted into the model, giving the same predictions. Three-dimensional graphics were used to highlight the manner in which linear-angular interaction causes perceptual disturbance, and a crucial component is the Stretch Factor, which measures the "unexpected" linear component.

  6. Structural analysis of gluten-free doughs by fractional rheological model

    NASA Astrophysics Data System (ADS)

    Orczykowska, Magdalena; Dziubiński, Marek; Owczarz, Piotr

    2015-02-01

    This study examines the effects of various components of the tested gluten-free doughs, such as corn starch, amaranth flour, pea protein isolate, and cellulose in the form of plantain fibers, on the rheological properties of such doughs. The rheological properties of the gluten-free doughs were assessed using the rheological fractional standard linear solid model (FSLSM). Parameter analysis of the Maxwell-Wiechert fractional derivative rheological model shows that gluten-free doughs present the typical behavior of viscoelastic quasi-solid bodies. We determined the contribution of each component used in the preparation of the gluten-free doughs to either a hard-gel or soft-gel structure. A detailed analysis of the mechanical structure of gluten-free dough was carried out by applying the FSLSM, which explains quite precisely the effects of the individual ingredients of the dough on its rheological properties.

  7. Designing a mixture experiment when the components are subject to a nonlinear multiple-component constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Greg F.; Cooley, Scott K.; Vienna, John D.

    This article presents a case study of developing an experimental design for a constrained mixture experiment when the experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this article. The case study involves a 15-component nuclear waste glass example in which SO3 is one of the components. SO3 has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture (PQM) model expressed in the relative proportions of the 14 other components. The PQM model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This article discusses the waste glass example and how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study.

  8. Many-core graph analytics using accelerated sparse linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric

    2016-05-01

    Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
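    A minimal sketch, in SciPy rather than a GraphBLAS library, of the linear-algebra view of graph traversal mentioned above: one breadth-first-search level expansion is a sparse matrix-vector product (ordinary arithmetic here stands in for the Boolean semiring). The toy graph is an assumption for illustration.

```python
# BFS levels expressed as repeated sparse matrix-vector products.
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed example graph (edge i -> j means A[i, j] = 1).
A = csr_matrix(np.array([[0, 1, 0, 0],
                         [0, 0, 1, 1],
                         [0, 0, 0, 1],
                         [0, 0, 0, 0]], dtype=float))

frontier = np.array([1.0, 0.0, 0.0, 0.0])   # start BFS from vertex 0
visited = frontier.copy()
while frontier.any():
    frontier = (A.T @ frontier > 0).astype(float)   # vertices reachable in one step
    frontier[visited > 0] = 0.0                     # drop already-visited vertices
    visited += frontier
print("Visited vertices:", np.nonzero(visited)[0])
```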

  9. Assessment of human exposure doses received by activation of medical linear accelerator components

    NASA Astrophysics Data System (ADS)

    Lee, D.-Y.; Kim, J.-H.; Park, E.-T.

    2017-08-01

    This study analyzes the radiation exposure dose that an operator can receive from radioactive components during maintenance or repair of a linear accelerator. This study further aims to evaluate radiological safety. Simulations are performed on 10 MV and 15 MV photon beams, which are the most frequently used high-energy beams in clinics. The simulation analyzes the components in order of activity and the human exposure dose based on the amount of neutrons received. As a result, the neutron dose, radiation dose, and human exposure dose are ranked in the order of target, primary collimator, flattening filter, multi-leaf collimator, and secondary collimator, where the minimum dose is 9.34E-07 mSv/h and the maximum is 1.71E-02 mSv/h. When applying the general dose limit (radiation worker 20 mSv/year, public 1 mSv/year) in accordance with the Nuclear Safety Act, all components of a linear accelerator are evaluated as below the threshold value. Therefore, the results suggest that there is no serious safety issue for operators in maintaining and repairing a linear accelerator. Nevertheless, if an operator recognizes an exposure from the components of a linear accelerator during operation and considers the operating time and shielding against external exposure, exposure of the operator is expected to be minimized.

  10. [Quantitative analysis of nucleotide mixtures with terahertz time domain spectroscopy].

    PubMed

    Zhang, Zeng-yan; Xiao, Ti-qiao; Zhao, Hong-wei; Yu, Xiao-han; Xi, Zai-jun; Xu, Hong-jie

    2008-09-01

    Adenosine, thymidine, guanosine, cytidine and uridine form the building blocks of ribonucleic acid (RNA) and deoxyribonucleic acid (DNA). Nucleosides and their derivatives all have biological activities; some of them can be used directly as medicines or as materials to synthesize other medicines. It is therefore meaningful to identify the components and their contents in nucleoside mixtures. In the present paper, the components and contents of mixtures of adenosine, thymidine, guanosine, cytidine and uridine were analyzed. THz absorption spectra of the pure nucleosides were set as standard spectra. The mixtures' absorption spectra were analyzed by linear regression with a non-negative constraint to identify the components and their relative contents. The experimental and analytical results show that it is simple and effective to obtain the components and their relative percentages in the mixtures by terahertz time domain spectroscopy, with a relative error of less than 10%. Components that are absent can be excluded exactly by this method, and the error sources were also analyzed. All the experiments and analyses confirm that this method causes no damage or contamination to the sample. It is thus a simple, effective and new method for biochemical materials analysis, which extends the application field of THz-TDS.
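    A minimal sketch of the non-negative linear regression step described above; the five "standard" spectra and the mixture are synthetic stand-ins, with SciPy's NNLS solver playing the role of the constrained fit.

```python
# Fit a mixture spectrum as a non-negative combination of pure-component standard spectra.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
n_points = 200
standards = np.abs(rng.normal(size=(n_points, 5)))   # columns: synthetic standard spectra

true_fractions = np.array([0.5, 0.3, 0.0, 0.2, 0.0])
mixture = standards @ true_fractions + 0.01 * rng.normal(size=n_points)

fractions, residual = nnls(standards, mixture)
print("Estimated relative contents:", np.round(fractions / fractions.sum(), 3))
```

    Components whose fitted coefficient is (near) zero are effectively excluded from the mixture, mirroring the exclusion behaviour noted in the abstract.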

  11. Levelized cost-benefit analysis of proposed diagnostics for the Ammunition Transfer Arm of the US Army's Future Armored Resupply Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, V.K.; Young, J.M.

    1995-07-01

    The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.

  12. Structured penalties for functional linear models-partially empirical eigenvectors for regression.

    PubMed

    Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding

    2012-01-01

    One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
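    The generic penalized form discussed above, minimizing ||y - Xb||^2 + lambda ||Lb||^2, can be sketched by stacking X over sqrt(lambda) L and solving an ordinary least-squares problem. This illustrates only the penalty structure, not the paper's GSVD-based implementation; the data and the second-difference penalty operator are illustrative assumptions.

```python
# Penalized estimation of a smooth coefficient function via augmented least squares.
import numpy as np

rng = np.random.default_rng(7)
n, p = 50, 120                                 # 50 spectra x 120 wavelengths (synthetic)
X = rng.normal(size=(n, p))
beta_true = np.sin(np.linspace(0, np.pi, p))   # smooth "true" coefficient function
y = X @ beta_true + 0.1 * rng.normal(size=n)

L = np.diff(np.eye(p), n=2, axis=0)            # second-difference penalty encourages smoothness
lam = 10.0
X_aug = np.vstack([X, np.sqrt(lam) * L])
y_aug = np.concatenate([y, np.zeros(L.shape[0])])

beta_hat, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
print("Max estimation error:", np.abs(beta_hat - beta_true).max())
```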

  13. Stirling System Modeling for Space Nuclear Power Systems

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Johnson, Paul K.

    2008-01-01

    A dynamic model of a high-power Stirling convertor has been developed for space nuclear power systems modeling. The model is based on the Component Test Power Convertor (CTPC), a 12.5-kWe free-piston Stirling convertor. The model includes the fluid heat source, the Stirling convertor, output power, and heat rejection. The Stirling convertor model includes the Stirling cycle thermodynamics, heat flow, mechanical mass-spring damper systems, and the linear alternator. The model was validated against test data. Both nonlinear and linear versions of the model were developed. The linear version algebraically couples two separate linear dynamic models; one model of the Stirling cycle and one model of the thermal system, through the pressure factors. Future possible uses of the Stirling system dynamic model are discussed. A pair of commercially available 1-kWe Stirling convertors is being purchased by NASA Glenn Research Center. The specifications of those convertors may eventually be incorporated into the dynamic model and analysis compared to the convertor test data. Subsequent potential testing could include integrating the convertors into a pumped liquid metal hot-end interface. This test would provide more data for comparison to the dynamic model analysis.

  14. Modeling vertebrate diversity in Oregon using satellite imagery

    NASA Astrophysics Data System (ADS)

    Cablk, Mary Elizabeth

    Vertebrate diversity was modeled for the state of Oregon using a parametric approach to regression tree analysis. This exploratory data analysis effectively modeled the non-linear relationships between vertebrate richness and phenology, terrain, and climate. Phenology was derived from time-series NOAA-AVHRR satellite imagery for the year 1992 using two methods: principal component analysis and derivation of EROS Data Center greenness metrics. These two measures of spatial and temporal vegetation condition incorporated the critical temporal element in this analysis. The first three principal components were shown to contain spatial and temporal information about the landscape and discriminated phenologically distinct regions in Oregon. Principal components 2 and 3, six greenness metrics, elevation, slope, aspect, annual precipitation, and annual seasonal temperature difference were investigated as correlates of richness for amphibians, birds, all vertebrates, reptiles, and mammals. The variation explained by the regression tree for each taxon was: amphibians (91%), birds (67%), all vertebrates (66%), reptiles (57%), and mammals (55%). Spatial statistics were used to quantify the pattern of each taxon and assess the validity of the resulting predictions from the regression tree models. Regression tree analysis was relatively robust against spatial autocorrelation in the response data, and graphical results indicated the models were well fit to the data.
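    A minimal sketch of a regression tree relating species richness to environmental predictors, in the spirit of the analysis described above; the predictor table and response are synthetic, and scikit-learn's tree is used rather than the original parametric implementation.

```python
# Fit a regression tree to synthetic richness data and report variation explained.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(8)
# Columns stand in for: PC2, PC3, a greenness metric, elevation, slope, annual precipitation.
predictors = rng.normal(size=(500, 6))
richness = 30 + 5 * predictors[:, 3] - 3 * predictors[:, 5] + rng.normal(size=500)

tree = DecisionTreeRegressor(max_depth=4).fit(predictors, richness)
print("Variation explained (R^2):", round(tree.score(predictors, richness), 2))
```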

  15. Definition of Contravariant Velocity Components

    NASA Technical Reports Server (NTRS)

    Hung, Ching-Mao; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This is an old issue in computational fluid dynamics (CFD). What is the so-called contravariant velocity or contravariant velocity component? In this article, we review the basics of tensor analysis and give the contravariant velocity component a rigorous explanation. For a given coordinate system, there exist two uniquely determined sets of base vector systems - one is the covariant and the other is the contravariant base vector system. The two base vector systems are reciprocal. The so-called contravariant velocity component is really the contravariant component of a velocity vector, for a time-independent coordinate system, or the contravariant component of the relative velocity between fluid and coordinates, for a time-dependent coordinate system. The contravariant velocity components are not physical quantities of the velocity vector. Their magnitudes, dimensions, and associated directions are controlled by their corresponding covariant base vectors. Several 2-D (two-dimensional) linear examples and the 2-D mass-conservation equation are used to illustrate the details of expressing a vector with respect to the covariant and contravariant base vector systems, respectively.
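    A small numeric illustration of the reciprocal base-vector relationship described above, for an arbitrarily chosen skewed 2-D coordinate system; the contravariant components are the coefficients that reconstruct the vector from the covariant base vectors.

```python
# Covariant vs. contravariant components of a velocity vector in a skewed 2-D system.
import numpy as np

g1 = np.array([1.0, 0.0])             # covariant base vectors (chosen for demonstration)
g2 = np.array([0.5, 1.0])
G = np.column_stack([g1, g2])         # columns = covariant base vectors

v = np.array([2.0, 1.0])              # a velocity vector in Cartesian components

# Contravariant (reciprocal) base vectors are the rows of G^-1.
G_inv = np.linalg.inv(G)

# Contravariant components v^i = g^i . v ; they satisfy v = v^1 g1 + v^2 g2.
v_contra = G_inv @ v
reconstructed = v_contra[0] * g1 + v_contra[1] * g2

print("contravariant components:", v_contra)
print("reconstructed vector:", reconstructed)   # equals v
```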

  16. Formulation of a Nonlinear, Compatible Finite Element for the Analysis of Laminated Composites.

    DTIC Science & Technology

    1982-12-01

    ... to be gained through weight savings are obvious. The other advantage, which is being exploited in the design of the forward swept wing (2), is ... The components of strain can also be represented in matrix form, where (3.211) the L-operator matrix can be broken down into linear (L_0) and ...

  17. Molecular Design and Evaluation of Biodegradable Polymers Using a Statistical Approach

    PubMed Central

    Lewitus, Dan; Rios, Fabian; Rojas, Ramiro; Kohn, Joachim

    2013-01-01

    The challenging paradigm of bioresorbable polymers, whether in drug delivery or tissue engineering, states that a fine-tuning of the interplay between polymer properties (e.g., thermal, degradation) and the degree of cell/tissue replacement and remodeling is required. In this paper we describe how changes in the molecular architecture of a series of terpolymers allow for the design of polymers with varying glass transition temperatures and degradation rates. The effect of each component in the terpolymers is quantified via design of experiment (DoE) analysis. A linear relationship between terpolymer components and the resulting Tg (ranging from 34 to 86 °C) was demonstrated. These findings were further supported with mass-per-flexible-bond (MPFB) analysis. The effect of terpolymer composition on the in vitro degradation of these polymers revealed molecular weight loss ranging from 20 to 60% within the first 24 hours. DoE modeling further illustrated the linear (but reciprocal) relationship between structure elements and degradation for these polymers. Thus, we describe a simple technique to provide insight into the structure-property relationship of degradable polymers, specifically applied using a new family of tyrosine-derived polycarbonates, allowing for optimal design of materials for specific applications. PMID:23888354

  18. Stochastic theory of polarized light in nonlinear birefringent media: An application to optical rotation

    NASA Astrophysics Data System (ADS)

    Tsuchida, Satoshi; Kuratsuji, Hiroshi

    2018-05-01

    A stochastic theory is developed for light transmitted through optical media exhibiting linear and nonlinear birefringence. The starting point is the two-component nonlinear Schrödinger equation (NLSE). On the basis of the ansatz of a "soliton" solution for the NLSE, the evolution equation for the Stokes parameters is derived, which turns out to be a Langevin equation once the randomness and dissipation inherent in the birefringent media are taken into account. The Langevin equation is converted to a Fokker-Planck (FP) equation for the probability distribution by employing the technique of functional integration, on the assumption of Gaussian white noise for the random fluctuation. The specific application considered is optical rotation, which is described by the ellipticity (third component of the Stokes parameters) alone: (i) an asymptotic analysis is given for the functional integral, which leads to the transition rate on the Poincaré sphere; (ii) the FP equation is analyzed in the strong coupling approximation, by which the diffusive behavior is obtained for the linear and nonlinear birefringence. These would provide a basis for the statistical analysis of polarization phenomena in nonlinear birefringent media.

  19. Petroleomic Analysis of Bio-Oils from the Fast Pyrolysis of Biomass: Laser Desorption Ionization-Linear Ion Trap-Orbitrap Mass Spectrometry Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Erica A.; Lee, Young Jin

    2010-08-23

    Fast pyrolysis of biomass produces bio-oils that can be upgraded into biofuels. Despite similar physical properties to petroleum, the chemical properties of bio-oils are quite different and their chemical compositions, particularly those of non-volatile compounds, are not well-known. Here, we report the first attempt at analyzing bio-oils using high-resolution mass spectrometry (MS), which employed laser desorption ionization-linear ion trap-Orbitrap MS. Despite a few limitations, we could determine chemical compositions for over 100 molecular compounds in a bio-oil sample produced from the pyrolysis of a loblolly pine tree. These compounds contain 3-6 oxygens and 9-17 double-bond equivalents (DBEs). Among those, O4 compounds with a DBE of 9-13 were most abundant. Unlike petroleum oils, the lack of nearby molecules within a ±2 Da mass window for major components enabled clear isolation of precursor ions for subsequent MS/MS structural investigations. Petroleomic analysis and a comparison to low-mass components in hydrolytic lignin suggest that they are dimers and trimers of depolymerized lignin.

  20. Dynamical density functional theory analysis of the laning instability in sheared soft matter.

    PubMed

    Scacchi, A; Archer, A J; Brader, J M

    2017-12-01

    Using dynamical density functional theory (DDFT) methods we investigate the laning instability of a sheared colloidal suspension. The nonequilibrium ordering at the laning transition is driven by nonaffine particle motion arising from interparticle interactions. Starting from a DDFT which incorporates the nonaffine motion, we perform a linear stability analysis that enables identification of the regions of parameter space where lanes form. We illustrate our general approach by applying it to a simple one-component fluid of soft penetrable particles.

  1. A transverse Kelvin-Helmholtz instability in a magnetized plasma

    NASA Technical Reports Server (NTRS)

    Kintner, P.; Dangelo, N.

    1977-01-01

    An analysis is conducted of the transverse Kelvin-Helmholtz instability in a magnetized plasma for unstable flute modes. The analysis makes use of a two-fluid model. Details regarding the instability calculation are discussed, taking into account the ion continuity and momentum equations, the solution of a zero-order and a first-order component, and the properties of the solution. It is expected that the linear calculation conducted will apply to situations in which the plasma has experienced no more than a few growth periods.

  2. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    PubMed Central

    Wagatsuma, Hiroaki

    2017-01-01

    EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixing together in EEGs of interest. The artifact removal has been substantially dealt with by existing decomposition methods known as PCA and ICA based on the orthogonality of signal vectors or statistical independence of signal components. We focused on the signal morphology and proposed a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain based on Morphological Component Analysis (MCA), which provides a way of reconstruction that guarantees accuracy in reconstruction by using multiple bases in accordance with the concept of “dictionary.” MCA was applied to decompose the real EEG signal and clarified the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with iEEGs recorded from the brain intracranially, those signals were successfully decomposed into original types by a linear expansion of waveforms, such as redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our result demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC to represent the baseline envelope, multifrequency wave-forms, and spiking activities individually as representative types of EEG morphologies. PMID:28194221

  3. Interactive graphical system for small-angle scattering analysis of polydisperse systems

    NASA Astrophysics Data System (ADS)

    Konarev, P. V.; Volkov, V. V.; Svergun, D. I.

    2016-09-01

    A program suite for one-dimensional small-angle scattering analysis of polydisperse systems and multiple data sets is presented. The main program, POLYSAS, has a menu-driven graphical user interface calling computational modules from the ATSAS package to perform data treatment and analysis. The graphical menu interface allows one to process multiple (time-, concentration- or temperature-dependent) data sets and interactively change the parameters for the data modelling using sliders. The graphical representation of the data is done via the Winteracter-based program SASPLOT. The package is designed for the analysis of polydisperse systems and mixtures, and permits one to obtain size distributions and evaluate the volume fractions of the components using linear and non-linear fitting algorithms as well as model-independent singular value decomposition. The use of the POLYSAS package is illustrated by recent examples of its application to studying concentration-dependent oligomeric states of proteins and the time kinetics of polymer micelles for anticancer drug delivery.

  4. Ultra-Low-Dropout Linear Regulator

    NASA Technical Reports Server (NTRS)

    Thornton, Trevor; Lepkowski, William; Wilk, Seth

    2011-01-01

    A radiation-tolerant, ultra-low-dropout linear regulator can operate between -150 and 150 C. Prototype components were demonstrated to be performing well after a total ionizing dose of 1 Mrad (Si). Unlike existing components, the linear regulator developed during this activity is unconditionally stable over all operating regimes without the need for an external compensation capacitor. The absence of an external capacitor reduces overall system mass/volume, increases reliability, and lowers cost. Linear regulators generate a precisely controlled voltage for electronic circuits regardless of fluctuations in the load current that the circuit draws from the regulator.

  5. Independent EEG Sources Are Dipolar

    PubMed Central

    Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott

    2012-01-01

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308

  6. Simultaneous determination of nine kinds of dominating bile acids in various snake bile by ultrahigh-performance liquid chromatography with triple quadrupole linear iontrap mass spectrometry.

    PubMed

    Zhang, Jie; Fan, Yeqin; Gong, Yajun; Chen, Xiaoyong; Wan, Luosheng; Zhou, Chenggao; Zhou, Jiewen; Ma, Shuangcheng; Wei, Feng; Chen, Jiachun; Nie, Jing

    2017-11-15

    Snake bile is one of the most expensive traditional Chinese medicines (TCMs). However, due to the complicated constituents of snake bile and the poor ultraviolet absorbance of some trace bile acids (BAs), effective analysis methods for snake bile acids were still unavailable, making it difficult to solve adulteration problems. In the present study, ultrahigh-performance liquid chromatography with triple quadrupole linear ion trap mass spectrometry (UHPLC-QqQ-MS/MS) was applied to conduct a quantitative analysis of snake BAs. The mass spectrometer was operated in the negative ion mode, and a multiple-reaction monitoring (MRM) program was used to determine the contents of BAs in snake bile. In all, 61 snake bile samples from 17 commonly used species of three families (Elapidae, Colubridae and Viperidae), along with five batches of commercial snake bile from four companies, were collected and analyzed. Nine components, Tauro-3α,12α-dihydroxy-7-oxo-5β-cholenoic acid (T1), Tauro-3α,7α,12α,23R-tetrahydroxy-5β-cholenoic acid (T2), taurocholic acid (TCA), glycocholic acid (GCA), taurochenodeoxycholic acid (TCDCA), taurodeoxycholic acid (TDCA), cholic acid (CA), Tauro-3α,7α-dihydroxy-12-oxo-5β-cholenoic acid (T3), and Tauro-3α,7α,9α,16α-tetrahydroxy-5β-cholenoic acid (T4), were simultaneously and rapidly determined for the first time. Among these BAs, T1 and T2, self-prepared with purity above 90%, were quantitatively determined for the first time, and the latter two (T3 and T4) were tentatively determined by the quantitative analysis of multi-components by single marker (QAMS) method, which roughly estimates components without a reference standard. The developed method was validated with acceptable linearity (r² ≥ 0.995), precision (RSD < 6.5%) and recovery (RSD < 7.5%). The contents of BAs differed significantly among species; T1 was one of the principal bile acids in some common snake biles and was also characteristic of Viperidae and Elapidae, while T2 was the dominant component in Enhydris chinensis. This quantitative study of BAs in snake bile is a remarkable improvement for clarifying the bile acid compositions and evaluating the quality of snake bile. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique that reconstructs face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection, imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments producing high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  8. Noise Reduction of Ocean-Bottom Pressure Data Toward Real-Time Tsunami Forecasting

    NASA Astrophysics Data System (ADS)

    Tsushima, H.; Hino, R.

    2008-12-01

    We discuss a method of noise reduction for ocean-bottom pressure data to be fed into the near-field tsunami forecasting scheme proposed by Tsushima et al. [2008a]. In their scheme, the pressure data are processed in real time as follows: (1) removing ocean tide components by subtracting the sea-level variation computed from a theoretical tide model, (2) applying a low-pass digital filter to remove high-frequency fluctuations due to seismic waves, and (3) removing the DC offset and linear-trend component to determine a baseline of relative sea level. However, this simple method is not always successful in extracting tsunami waveforms from the data when the observed amplitude is ~1 cm. For disaster mitigation, accurate forecasting of small tsunamis is as important as that of large ones. Since small tsunami events occur frequently, successful forecasting of those events is critical to maintaining public confidence in tsunami warnings. As a test case, we applied the data processing described above to bottom pressure records containing a tsunami with amplitude of less than 1 cm generated by the 2003 off-Fukushima earthquake in the Japan Trench subduction zone. The observed pressure variation due to the ocean tide is well explained by the tide signals calculated from the NAO99Jb model [Matsumoto et al., 2000]; however, the tide components estimated by BAYTAP-G [Tamura et al., 1991] from the pressure data are more appropriate for predicting and removing the ocean tide signals. After removing the tide variations, there remain pressure fluctuations with frequencies ranging from about 0.1 to 1 mHz and amplitudes around ~10 cm. These fluctuations distort the estimation of the zero level and linear trend that define the relative sea-level variation treated as the tsunami waveform in the subsequent analysis. Since the linear trend is estimated from the data prior to the origin time of the earthquake, an artificial linear trend is produced in the processed waveform. This artificial trend degrades the accuracy of the tsunami forecasting, although the forecast is expected to be robust against short-period noise [Tsushima et al., 2008a]. Since the bottom pressure shows a gradual increase (or decrease) in the tsunami source region [Tsushima et al., 2008b], it is important to remove any linear trend not related to tsunami generation before the data are fed into the analysis. Therefore, reducing the noise in the sub-mHz band is critical for forecasting small tsunamis. Simply applying frequency filters to eliminate this noise is not a solution, because actual tsunami signals may also contain components in this band. We investigate whether statistical modeling of the noise is effective for reducing the sub-mHz noise.
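
    As a concrete illustration of step (3), the following minimal sketch fits a DC offset and linear trend to the pre-event window by least squares and subtracts the extrapolated baseline from the whole record. The sampling interval, window length, and synthetic signal are assumptions, not the authors' data or code.

    import numpy as np

    def remove_offset_and_trend(pressure, t, t_origin):
        """Fit offset + linear trend to samples before the earthquake origin time
        and remove the extrapolated baseline from the full pressure record."""
        pre = t < t_origin
        A = np.column_stack([np.ones(pre.sum()), t[pre]])      # design matrix [1, t]
        coef, *_ = np.linalg.lstsq(A, pressure[pre], rcond=None)
        baseline = coef[0] + coef[1] * t                        # extrapolate the fit
        return pressure - baseline

    # Synthetic example: 1-sample-per-second record with the tide already removed.
    t = np.arange(0.0, 7200.0, 1.0)                             # two hours of data [s]
    noise = 0.02 * np.sin(2 * np.pi * 0.0005 * t)               # sub-mHz fluctuation (the problem band)
    signal = np.where(t > 3600.0, 0.01 * (t - 3600.0) / 3600.0, 0.0)  # slow tsunami-like rise
    detrended = remove_offset_and_trend(signal + noise + 5.0, t, t_origin=3600.0)
    print(f"pre-event mean after detrending: {detrended[:3600].mean():.4f}")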

  9. Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images

    PubMed Central

    Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis similarly explains chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and can also be applied to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049

  10. Spatio-chromatic adaptation via higher-order canonical correlation analysis of natural images.

    PubMed

    Gutmann, Michael U; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis similarly explains chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and can also be applied to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.

  11. Gas chromatography-mass spectrometry of carbonyl compounds in cigarette mainstream smoke after derivatization with 2,4-dinitrophenylhydrazine.

    PubMed

    Dong, Ji-Zhou; Moldoveanu, Serban C

    2004-02-20

    An improved gas chromatography-mass spectrometry (GC-MS) method was described for the analysis of carbonyl compounds in cigarette mainstream smoke (CMS) after 2,4-dinitrophenylhydrazine (DNPH) derivatization. Besides formaldehyde, acetaldehyde, acetone, acrolein, propionaldehyde, methyl ethyl ketone, butyraldehyde, and crotonaldehyde that are routinely analyzed in cigarette smoke, this technique separates and allows the analysis of several C4, C5 and C6 isomeric carbonyl compounds. Differentiation could be made between the linear and branched carbon chain components. In cigarette smoke, the branched chain carbonyls are found at higher levels than the linear chain carbonyls. Also, several trace carbonyl compounds such as methoxyacetaldehyde were found for the first time in cigarette smoke. For the analysis, cigarette smoke was collected using DNPH-treated pads, which is a simpler procedure compared to conventional impinger collection. Thermal decomposition of DNPH-carbonyl compounds was minimized by the optimization of the GC conditions. The linear range of the method was significantly improved by using a standard mixture of DNPH-carbonyl compounds instead of individual compounds for calibration. The minimum detectable quantity for the carbonyls ranged from 1.4 to 5.6 μg/cigarette.

  12. Separation of Trend and Chaotic Components of Time Series and Estimation of Their Characteristics by Linear Splines

    NASA Astrophysics Data System (ADS)

    Kryanev, A. V.; Ivanov, V. V.; Romanova, A. O.; Sevastyanov, L. A.; Udumyan, D. K.

    2018-03-01

    This paper considers the problem of separating the trend and the chaotic component of chaotic time series in the absence of information on the characteristics of the chaotic component. Such a problem arises in nuclear physics, biomedicine, and many other applied fields. The scheme has two stages. At the first stage, smoothing linear splines with different values of the smoothing parameter are used to separate the "trend component." At the second stage, the method of least squares is used to find the unknown variance σ² of the noise component.
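
    A minimal sketch of the two-stage scheme follows, assuming synthetic data and illustrative smoothing values rather than the authors' implementation: a smoothing linear spline (k = 1) extracts candidate trends for several smoothing parameters, and the residuals give a least-squares estimate of the noise variance σ².

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 500)
    trend = 2.0 + 0.5 * t + np.sin(0.4 * t)                   # slowly varying "trend component"
    series = trend + rng.normal(scale=0.3, size=t.size)       # plus a chaotic/noise component

    for s in (10.0, 50.0, 200.0):                             # stage 1: different smoothing parameters
        spline = UnivariateSpline(t, series, k=1, s=s)        # k=1 -> piecewise-linear smoothing spline
        residuals = series - spline(t)
        sigma2 = np.mean(residuals ** 2)                      # stage 2: least-squares variance estimate
        print(f"s={s:6.1f}  estimated sigma^2 = {sigma2:.4f}")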

  13. Novel methods of time-resolved fluorescence data analysis for in-vivo tissue characterization: application to atherosclerosis.

    PubMed

    Jo, J A; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Dorafshar, A; Reil, T; Baker, D; Freischlag, J; Marcu, L

    2004-01-01

    This study investigates the ability of new analytical methods of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data to characterize tissue in-vivo, such as the composition of atherosclerotic vulnerable plaques. A total of 73 TR-LIFS measurements were taken in-vivo from the aorta of 8 rabbits, and subsequently analyzed using the Laguerre deconvolution technique. The investigated spots were classified as normal aorta, thin or thick lesions, and lesions rich in either collagen or macrophages/foam-cells. Different linear and nonlinear classification algorithms (linear discriminant analysis, stepwise linear discriminant analysis, principal component analysis, and feedforward neural networks) were developed using spectral and TR features (ratios of intensity values and Laguerre expansion coefficients, respectively). Normal intima and thin lesions were discriminated from thick lesions (sensitivity >90%, specificity 100%) using only spectral features. However, both spectral and time-resolved features were necessary to discriminate thick lesions rich in collagen from thick lesions rich in foam cells (sensitivity >85%, specificity >93%), and thin lesions rich in foam cells from normal aorta and thin lesions rich in collagen (sensitivity >85%, specificity >94%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for in-vivo tissue characterization.

  14. Quantitative determination of multi markers in five varieties of Withania somnifera using ultra-high performance liquid chromatography with hybrid triple quadrupole linear ion trap mass spectrometer combined with multivariate analysis: Application to pharmaceutical dosage forms.

    PubMed

    Chandra, Preeti; Kannujia, Rekha; Saxena, Ankita; Srivastava, Mukesh; Bahadur, Lal; Pal, Mahesh; Singh, Bhim Pratap; Kumar Ojha, Sanjeev; Kumar, Brijesh

    2016-09-10

    An ultra-high performance liquid chromatography electrospray ionization tandem mass spectrometry method has been developed and validated for simultaneous quantification of six major bioactive compounds in five varieties of Withania somnifera in various plant parts (leaf, stem and root). The analysis was accomplished on a Waters ACQUITY UPLC BEH C18 column with linear gradient elution of water/formic acid (0.1%) and acetonitrile at a flow rate of 0.3 mL min⁻¹. The proposed method was validated with acceptable linearity (r², 0.9989-0.9998), precision (RSD, 0.16-2.01%), stability (RSD, 1.04-1.62%) and recovery (RSD ≤ 2.45%), under optimum conditions. The method was also successfully applied for the simultaneous determination of six marker compounds in twenty-six marketed formulations. Hierarchical cluster analysis and principal component analysis were applied to discriminate these twenty-six batches based on characteristics of the bioactive compounds. The results indicated that this method is advanced, rapid, sensitive and suitable for revealing the quality of Withania somnifera, and is also capable of performing quality evaluation of polyherbal formulations having similar markers/raw herbs. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Mean electromotive force generated by asymmetric fluid flow near the surface of earth's outer core

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Archana

    1992-10-01

    The φ component of the mean electromotive force (EMF) generated by asymmetric flow of fluid just beneath the core-mantle boundary (CMB) is obtained using a geomagnetic field model. This analysis is based on the supposition that the axisymmetric part of the fluid flow beneath the CMB is tangentially geostrophic and toroidal. For all the epochs studied, the computed φ component is stronger in the Southern Hemisphere than in the Northern Hemisphere. Assuming a linear relationship between the EMF and the azimuthally averaged magnetic field (AAMF), the only nonzero off-diagonal components of the pseudotensor relating the EMF to the AAMF are estimated as functions of colatitude, and the physical implications of the results are discussed.

  16. On hydrodynamic phase field models for binary fluid mixtures

    NASA Astrophysics Data System (ADS)

    Yang, Xiaogang; Gong, Yuezheng; Li, Jun; Zhao, Jia; Wang, Qi

    2018-05-01

    Two classes of thermodynamically consistent hydrodynamic phase field models have been developed for binary fluid mixtures of incompressible viscous fluids of possibly different densities and viscosities. One is quasi-incompressible, while the other is incompressible. For the same binary fluid mixture of two incompressible viscous fluid components, which one is more appropriate? To answer this question, we conduct a comparative study in this paper. First, we review their derivation, conservation and energy dissipation properties and show that the quasi-incompressible model conserves both mass and linear momentum, while the incompressible one does not. We then show in a linear stability analysis that the quasi-incompressible model is sensitive to the density deviation of the fluid components, while the incompressible model is not. Second, we conduct a numerical investigation on coarsening or coalescent dynamics of protuberances using the two models. We find that they can predict quite different transient dynamics depending on the initial conditions and the density difference, although they predict essentially the same quasi-steady results in some cases. This study thus casts doubt on the applicability of the incompressible model for describing the dynamics of binary mixtures of two incompressible viscous fluids, especially when the two fluid components have a large density deviation.

  17. Simulink-Based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV)

    NASA Technical Reports Server (NTRS)

    Christhilf, David M.; Bacon, Barton J.

    2006-01-01

    The Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) is a Simulink-based approach to providing an engineering quality desktop simulation capability for finding trim solutions, extracting linear models for vehicle analysis and control law development, and generating open-loop and closed-loop time history responses for control system evaluation. It represents a useful level of maturity rather than a finished product. The layout is hierarchical and supports concurrent component development and validation, with support from the Concurrent Versions System (CVS) software management tool. Real Time Workshop (RTW) is used to generate pre-compiled code for substantial component modules, and templates permit switching seamlessly between original Simulink and code compiled for various platforms. Two previous limitations are addressed. Turnaround time for incorporating tabular model components was improved through auto-generation of the required Simulink diagrams based on data received in XML format. The layout was modified to exploit a Simulink "compile once, evaluate multiple times" capability for zero elapsed time for use in trimming and linearizing. Trim is achieved through a Graphical User Interface (GUI) with a narrow, script-definable interface to the vehicle model, which facilitates incorporating new models.

  18. Two-component dark-bright solitons in three-dimensional atomic Bose-Einstein condensates.

    PubMed

    Wang, Wenlong; Kevrekidis, P G

    2017-03-01

    In the present work, we revisit two-component Bose-Einstein condensates in their fully three-dimensional (3D) form. Motivated by earlier studies of dark-bright solitons in the 1D case, we explore the stability of these structures in their fully 3D form in two variants. In one, the dark soliton is planar and traps a planar bright (disk) soliton. In the other case, a dark spherical shell soliton creates an effective potential in which a bright spherical shell of atoms is trapped in the second component. We identify these solutions as numerically exact states (up to a prescribed accuracy) and perform a Bogolyubov-de Gennes linearization analysis that illustrates that both structures can be dynamically stable in suitable intervals of sufficiently low chemical potentials. We corroborate this finding theoretically by analyzing the stability via degenerate perturbation theory near the linear limit of the system. When the solitary waves are found to be unstable, we explore their dynamical evolution via direct numerical simulations which, in turn, reveal wave forms that are more robust. Finally, using the SO(2) symmetry of the model, we produce multi-dark-bright planar or shell solitons involved in pairwise oscillatory motion.

  19. Q-mode versus R-mode principal component analysis for linear discriminant analysis (LDA)

    NASA Astrophysics Data System (ADS)

    Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz

    2017-05-01

    Many studies apply Principal Component Analysis (PCA) as a preliminary visualization method, a variable construction method, or both. The focus of PCA can be on the samples (R-mode PCA) or on the variables (Q-mode PCA). Traditionally, R-mode PCA has been the usual approach to reduce high-dimensional data before the application of Linear Discriminant Analysis (LDA) to solve classification problems. The output from PCA is composed of two new matrices, known as the loadings and scores matrices. Each matrix can then be used to produce a plot: the loadings plot aids identification of important variables, whereas the scores plot presents the spatial distribution of samples on new axes, also known as Principal Components (PCs). Fundamentally, the scores matrix provides the input variables for building the classification model. A recent paper used Q-mode PCA, but the focus of the analysis was not on the variables but instead on the samples. As a result, the authors exchanged the roles of the loadings and scores plots: clustering of samples was studied using the loadings plot, whereas the scores plot was used to identify important manifest variables. Therefore, the aim of this study is to statistically validate the proposed practice. The evaluation is based on the external error of LDA models as a function of the number of PCs. On top of that, bootstrapping was also conducted to evaluate the external error of each of the LDA models. Results show that LDA models built on PCs from R-mode PCA give logical performance with unbiased external error, whereas those produced with Q-mode PCA show the opposite. We therefore conclude that PCs produced from Q-mode PCA are not statistically stable and should not be applied to problems of classifying samples, but rather variables. We hope this paper will provide some insight into this disputed issue.
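
    The conventional R-mode pipeline being validated can be illustrated with a short sketch: PCA scores computed on a training set feed an LDA classifier, and the external error is read off an independent test set for several numbers of PCs. The synthetic data, split, and PC counts below are assumptions, not the study's material.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                               n_classes=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for n_pcs in (2, 5, 10, 20):                         # external error vs number of PCs
        pca = PCA(n_components=n_pcs).fit(X_tr)          # variables reduced, samples scored
        lda = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)
        err = 1.0 - lda.score(pca.transform(X_te), y_te)
        print(f"{n_pcs:2d} PCs -> external error {err:.3f}")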

  20. A component-centered meta-analysis of family-based prevention programs for adolescent substance use.

    PubMed

    Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien

    2016-04-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger-than-average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Higher-Order Theory: Structural/MicroAnalysis Code (HOTSMAC) Developed

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.

    2002-01-01

    The full utilization of advanced materials (be they composite or functionally graded materials) in lightweight aerospace components requires the availability of accurate analysis, design, and life-prediction tools that enable the assessment of component and material performance and reliability. Recently, a new commercially available software product called HOTSMAC (Higher-Order Theory--Structural/MicroAnalysis Code) was jointly developed by Collier Research Corporation, Engineered Materials Concepts LLC, and the NASA Glenn Research Center under funding provided by Glenn's Commercial Technology Office. The analytical framework for HOTSMAC is based on almost a decade of research into the coupled micromacrostructural analysis of heterogeneous materials. Consequently, HOTSMAC offers a comprehensive approach for analyzing/designing the response of components with various microstructural details, including certain advantages not always available in standard displacement-based finite element analysis techniques. The capabilities of HOTSMAC include combined thermal and mechanical analysis, time-independent and time-dependent material behavior, and internal boundary cells (e.g., those that can be used to represent internal cooling passages, see the preceding figure) to name a few. In HOTSMAC problems, materials can be randomly distributed and/or functionally graded (as shown in the figure, wherein the inclusions are distributed linearly), or broken down by strata, such as in the case of thermal barrier coatings or composite laminates.

  2. Product competitiveness analysis for e-commerce platform of special agricultural products

    NASA Astrophysics Data System (ADS)

    Wan, Fucheng; Ma, Ning; Yang, Dongwei; Xiong, Zhangyuan

    2017-09-01

    On the basis of an analysis of the factors that influence product competitiveness on an e-commerce platform for special agricultural products and of the characteristics of methods for analyzing that competitiveness, price, sales volume, postage-included service, store reputation, popularity, etc. were selected in this paper as the dimensions for analyzing the competitiveness of the agricultural products, and principal component factor analysis was taken as the competitiveness analysis method. Specifically, a web crawler was used to capture information on various special agricultural products from the e-commerce platform chi.taobao.com. The captured raw data were then preprocessed, and a MySQL database was used to establish an information library for the special agricultural products. Principal component factor analysis was then applied to establish the analysis model for the competitiveness of the special agricultural products, with SPSS used in the analysis to obtain a competitiveness evaluation factor system (support degree factor, price factor, service factor and evaluation factor) for the special agricultural products. Finally, the linear regression method was used to establish a competitiveness index equation for estimating the competitiveness of the special agricultural products.

  3. Bearing-Load Modeling and Analysis Study for Mechanically Connected Structures

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2006-01-01

    Bearing-load response for a pin-loaded hole is studied within the context of two-dimensional finite element analyses. Pin-loaded-hole configurations are representative of mechanically connected structures, such as a stiffener fastened to a rib of an isogrid panel, that are idealized as part of a larger structural component. Within this context, the larger structural component may be idealized as a two-dimensional shell finite element model to identify load paths and high stress regions. Finite element modeling and analysis aspects of a pin-loaded hole are considered in the present paper including the use of linear and nonlinear springs to simulate the pin-bearing contact condition. Simulating pin-connected structures within a two-dimensional finite element analysis model using nonlinear spring or gap elements provides an effective way for accurate prediction of the local effective stress state and peak forces.

  4. Multivariate statistical analysis of stream-sediment geochemistry in the Grazer Paläozoikum, Austria

    USGS Publications Warehouse

    Weber, L.; Davis, J.C.

    1990-01-01

    The Austrian reconnaissance study of stream-sediment composition — more than 30,000 clay-fraction samples collected over an area of 40,000 km² — is summarized in an atlas of regional maps that show the distributions of 35 elements. These maps, rich in information, reveal complicated patterns of element abundance that are difficult to compare on more than a small number of maps at one time. In such a study, multivariate procedures such as simultaneous R-Q mode components analysis may be helpful. They can compress a large number of variables into a much smaller number of independent linear combinations. These composite variables may be mapped and relationships sought between them and geological properties. As an example, R-Q mode components analysis is applied here to the Grazer Paläozoikum, a tectonic unit northeast of the city of Graz, which is composed of diverse lithologies and contains many mineral deposits.

  5. Typification of cider brandy on the basis of cider used in its manufacture.

    PubMed

    Rodríguez Madrera, Roberto; Mangas Alonso, Juan J

    2005-04-20

    A study of the typification of cider brandies on the basis of the origin of the raw material used in their manufacture was conducted using chemometric techniques (principal component analysis, linear discriminant analysis, and Bayesian analysis) together with their composition in volatile compounds, as analyzed by gas chromatography with flame ionization detection for the major volatiles and with mass spectrometric detection for the minor ones. Significant principal components computed by a double cross-validation procedure allowed the structure of the database to be visualized as a function of the raw material, that is, cider made from fresh apple juice versus cider made from apple juice concentrate. Feasible and robust discriminant rules were computed and validated by a cross-validation procedure that allowed the authors to classify fresh and concentrate cider brandies, obtaining classification hits of >92%. The most discriminating variables for typifying cider brandies according to their raw material were 1-butanol and ethyl hexanoate.

  6. Self-sustained vibrations in volcanic areas extracted by Independent Component Analysis: a review and new results

    NASA Astrophysics Data System (ADS)

    de Lauro, E.; de Martino, S.; Falanga, M.; Palo, M.

    2011-12-01

    We investigate the physical processes associated with volcanic tremor and explosions. A volcano is a complex system where a fluid source interacts with the solid edifice, thereby generating seismic waves in a regime of low turbulence. Although the complex behavior escapes a simple universal description, the phases of activity generate stable (self-sustained) oscillations that can be described as a non-linear dynamical system of low dimensionality. Thus, the system must be investigated with non-linear methods able to individuate, decompose, and extract the main characteristics of the phenomenon. Independent Component Analysis (ICA), an entropy-based technique, is a good candidate for this purpose. Here, we review the results of ICA applied to seismic signals acquired in some volcanic areas. We emphasize analogies and differences among the self-oscillations individuated in three cases: Stromboli (Italy), Erebus (Antarctica) and Volcán de Colima (Mexico). The waveforms of the extracted independent components are specific to each volcano, whereas the similarity can be ascribed to a very general common source mechanism involving the interaction between gas/magma flow and solid structures (the volcanic edifice). Indeed, choking phenomena or inhomogeneities in the volcanic cavity can play the same role in generating self-oscillations as the languid and the reed do in musical instruments. Understanding these background oscillations is relevant not only for explaining the volcanic source process and forecasting future activity, but also sheds light on the physics of complex systems developing low turbulence.

  7. A composite measure to explore visual disability in primary progressive multiple sclerosis.

    PubMed

    Poretto, Valentina; Petracca, Maria; Saiote, Catarina; Mormina, Enricomaria; Howard, Jonathan; Miller, Aaron; Lublin, Fred D; Inglese, Matilde

    2017-01-01

    Optical coherence tomography (OCT) and magnetic resonance imaging (MRI) can provide complementary information on visual system damage in multiple sclerosis (MS). The objective of this paper is to determine whether a composite OCT/MRI score, reflecting cumulative damage along the entire visual pathway, can predict visual deficits in primary progressive multiple sclerosis (PPMS). Twenty-five PPMS patients and 20 age-matched controls underwent neuro-ophthalmologic evaluation, spectral-domain OCT, and 3T brain MRI. Differences between groups were assessed by a univariate general linear model, and principal component analysis (PCA) grouped instrumental variables into main components. Linear regression analysis was used to assess the relationship between low-contrast visual acuity (LCVA), OCT/MRI-derived metrics and PCA-derived composite scores. PCA identified four main components explaining 80.69% of the data variance. Considering each variable independently, LCVA 1.25% was significantly predicted by ganglion cell-inner plexiform layer (GCIPL) thickness, thalamic volume and optic radiation (OR) lesion volume (adjusted R² = 0.328, p = 0.00004; adjusted R² = 0.187, p = 0.002; and adjusted R² = 0.180, p = 0.002, respectively). The PCA composite score of global visual pathway damage independently predicted both LCVA 1.25% (adjusted R² = 0.361, p = 0.00001) and LCVA 2.50% (adjusted R² = 0.323, p = 0.00003). A multiparametric score represents a more comprehensive and effective tool for explaining visual disability in PPMS than any single instrumental metric.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Lu; Albright, Austin P; Rahimpour, Alireza

    Wide-area measurement systems (WAMSs) are used in smart grid systems to enable efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede effective and efficient analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm, whereas higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture characteristics such as trends and inter-area oscillations while reducing the data storage requirements.
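
    The grouping step can be illustrated downstream of the decomposition: given intrinsic mode functions (IMFs) from some MEMD implementation (not reproduced here), each IMF's mean instantaneous frequency is estimated via the Hilbert transform and mean-shift clustering groups the IMFs by frequency. The stand-in IMFs, reporting rate, and clustering bandwidth below are assumptions, not the framework's actual code.

    import numpy as np
    from scipy.signal import hilbert
    from sklearn.cluster import MeanShift

    fs = 30.0                                           # assumed PMU reporting rate [Hz]
    t = np.arange(0, 60, 1 / fs)
    # Stand-in "IMFs": a slow trend, an inter-area-like oscillation, and noise.
    imfs = np.vstack([
        0.02 * t,
        0.05 * np.sin(2 * np.pi * 0.4 * t),
        0.01 * np.random.default_rng(2).standard_normal(t.size),
    ])

    def mean_inst_freq(imf, fs):
        """Mean instantaneous frequency [Hz] of one IMF via the analytic signal."""
        phase = np.unwrap(np.angle(hilbert(imf)))
        return float(np.mean(np.diff(phase)) * fs / (2 * np.pi))

    freqs = np.array([[mean_inst_freq(m, fs)] for m in imfs])
    labels = MeanShift(bandwidth=1.0).fit_predict(freqs)   # assumed 1 Hz bandwidth
    print(dict(enumerate(labels)))                          # low-frequency IMFs share a group label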

  9. Independent component analysis applied to long bunch beams in the Los Alamos Proton Storage Ring

    NASA Astrophysics Data System (ADS)

    Kolski, Jeffrey S.; Macek, Robert J.; McCrady, Rodney C.; Pang, Xiaoying

    2012-11-01

    Independent component analysis (ICA) is a powerful blind source separation (BSS) method. Compared to the typical BSS method, principal component analysis, ICA is more robust to noise, coupling, and nonlinearity. The conventional ICA application to turn-by-turn position data from multiple beam position monitors (BPMs) yields information about cross-BPM correlations. With this scheme, multi-BPM ICA has been used to measure the transverse betatron phase and amplitude functions, dispersion function, linear coupling, sextupole strength, and nonlinear beam dynamics. We apply ICA in a new way to slices along the bunch, revealing correlations of particle motion within the beam bunch. We digitize beam signals of the long bunch at the Los Alamos Proton Storage Ring with a single device (BPM or fast current monitor) for an entire injection-extraction cycle. ICA of the digitized beam signals yields source signals, which we identify as describing varying betatron motion along the bunch, locations of transverse resonances along the bunch, measurement noise, characteristic frequencies of the digitizing oscilloscopes, and longitudinal beam structure.

  10. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
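
    Under stated assumptions (synthetic one-dimensional output fields, an RBF kernel, and illustrative hyper-parameters), the non-linear variant can be sketched as kernel PCA for dimension reduction plus radial-basis-function interpolation from design parameters to the reduced coordinates. This is an illustration of the idea, not the thesis code.

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(3)
    params = rng.uniform(-1, 1, size=(40, 2))                    # design parameters
    grid = np.linspace(0, 1, 200)
    fields = np.array([np.sin(4 * p[0] * grid) + p[1] * grid**2  # 200-dim "simulation" outputs
                       for p in params])

    kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.5,
                     fit_inverse_transform=True).fit(fields)
    scores = kpca.transform(fields)                              # reduced-order coordinates
    surrogate = RBFInterpolator(params, scores)                  # params -> reduced coordinates

    new_params = np.array([[0.2, -0.3]])
    prediction = kpca.inverse_transform(surrogate(new_params))   # back to the full field
    print(prediction.shape)                                      # (1, 200)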

  11. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and indicating their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (a genetic algorithm (GA)). To present the comparison in a sensible way, the methods are used for the stability indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components) in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor, 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results highlight the problem of nonlinearity and how models like SVR and ANN can handle it. The methods demonstrate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, while using cheap and easy-to-handle instruments like the UV spectrophotometer.
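
    A hedged sketch of the linear-SVR calibration idea follows: synthetic overlapping absorbance bands stand in for the UV spectra, a designed set of mixtures forms the training data, and a linear support vector regressor maps spectra to the concentration of one analyte. All band positions and hyper-parameters are assumptions, not values from the study.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    wavelengths = np.linspace(200, 400, 120)

    def band(center, width):                        # Gaussian absorbance band
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Training set: 25 mixtures of two overlapping components plus noise.
    conc = rng.uniform(0.1, 1.0, size=(25, 2))
    spectra = (conc[:, :1] * band(260, 15) + conc[:, 1:] * band(275, 20)
               + 0.02 * rng.standard_normal((25, wavelengths.size)))

    model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=10.0, epsilon=0.01))
    model.fit(spectra, conc[:, 0])                  # calibrate for the first analyte

    test_spectrum = 0.6 * band(260, 15) + 0.4 * band(275, 20)
    print(model.predict(test_spectrum[None, :]))    # should be close to 0.6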

  12. A pilot evaluation of a computer-based psychometric test battery designed to detect impairment in patients with cirrhosis.

    PubMed

    Cook, Nicola A; Kim, Jin Un; Pasha, Yasmin; Crossey, Mary Me; Schembri, Adrian J; Harel, Brian T; Kimhofer, Torben; Taylor-Robinson, Simon D

    2017-01-01

    Psychometric testing is used to identify patients with cirrhosis who have developed hepatic encephalopathy (HE). Most batteries consist of a series of paper-and-pencil tests, which are cumbersome for most clinicians. A modern, easy-to-use, computer-based battery would be a helpful clinical tool, given that in its minimal form, HE has an impact on both patients' quality of life and the ability to drive and operate machinery (with societal consequences). We compared the Cogstate™ computer battery testing with the Psychometric Hepatic Encephalopathy Score (PHES) tests, with a view to simplifying the diagnosis. This was a prospective study of 27 patients with histologically proven cirrhosis. An analysis of psychometric testing was performed using accuracy of task performance and speed of completion as primary variables to create a correlation matrix. A stepwise linear regression analysis was performed with backward elimination, using analysis of variance. Strong correlations were found between the Cogstate international shopping list and international shopping list delayed recall tasks and the PHES digit symbol test. The Shopping List Tasks were the only tasks that consistently had P values of <0.05 in the linear regression analysis. Subtests of the Cogstate battery correlated very strongly with the digit symbol component of PHES in discriminating severity of HE. These findings indicate that combining components of the current PHES battery with the international shopping list tasks of Cogstate would be discriminant and have the potential to be used easily in clinical practice.

  13. Short-term PV/T module temperature prediction based on PCA-RBF neural network

    NASA Astrophysics Data System (ADS)

    Li, Jiyong; Zhao, Zhendong; Li, Yisheng; Xiao, Jing; Tang, Yunfeng

    2018-02-01

    To address the non-linearity and large thermal inertia of temperature control in PV/T systems, short-term temperature prediction of the PV/T module is proposed, so that the system controller can act ahead of time on the basis of the forecast and thereby improve control performance. Based on an analysis of the correlations between PV/T module temperature, meteorological factors, and temperatures at adjacent time steps, the principal component analysis (PCA) method is used to pre-process the original input sample data. Combined with an RBF neural network, the simulation results show that PCA pre-processing yields higher prediction accuracy and stronger generalization than an RBF neural network without principal component extraction.

  14. Modulated Hebb-Oja learning rule--a method for principal subspace analysis.

    PubMed

    Jankovic, Marko V; Ogawa, Hidemitsu

    2006-03-01

    This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace. The principal component subspace is the case that will be analyzed. Compared to some other well-known methods for yielding the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from the biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to modify an individual efficacy. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, can be seen as good features of the proposed method.

  15. Artificial neural networks and multiple linear regression model using principal components to estimate rainfall over South America

    NASA Astrophysics Data System (ADS)

    Soares dos Santos, T.; Mendes, D.; Rodrigues Torres, R.

    2016-01-01

    Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANNs) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon; northeastern Brazil; and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used general circulation model (GCM) experiments for the 20th century (RCP historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANNs significantly outperform the MLR downscaling of monthly precipitation variability.

  16. Artificial neural networks and multiple linear regression model using principal components to estimate rainfall over South America

    NASA Astrophysics Data System (ADS)

    dos Santos, T. S.; Mendes, D.; Torres, R. R.

    2015-08-01

    Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANN) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon, Northeastern Brazil and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used GCM experiments for the 20th century (RCP Historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANN significantly outperforms the MLR downscaling of monthly precipitation variability.

  17. Component-Level Tuning of Kinematic Features from Composite Therapist Impressions of Movement Quality

    PubMed Central

    Venkataraman, Vinay; Turaga, Pavan; Baran, Michael; Lehrer, Nicole; Du, Tingfang; Cheng, Long; Rikakis, Thanassis; Wolf, Steven L.

    2016-01-01

    In this paper, we propose a general framework for tuning component-level kinematic features using therapists' overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system. We model wrist movement as a linear combination of non-linear kinematic features and propose an approach to learn feature thresholds and weights using high-level labels of overall movement quality provided by a therapist. The kinematic features are chosen such that they correlate with clinical assessment scores of wrist-movement quality. Further, the proposed features are designed to be reliably extracted from an inexpensive and portable motion capture system using a single reflective marker on the wrist. Using a dataset collected from ten stroke survivors, we demonstrate that the framework can be reliably used for movement quality assessment in HAMRR systems. The system is currently being deployed for large-scale evaluations, and will represent an increasingly important application area of motion capture and activity analysis. PMID:25438331

  18. Analysis of the Laser Calibration System for the CMS HCAL at CERN's Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Lebolo, Luis

    2005-11-01

    The European Organization for Nuclear Research's (CERN) Large Hadron Collider uses the Compact Muon Solenoid (CMS) detector to measure collision products from proton-proton interactions. CMS uses a hadron calorimeter (HCAL) to measure the energy and position of quarks and gluons by reconstructing their hadronic decay products. An essential component of the detector is the calibration system, which was evaluated in terms of its misalignment, linearity, and resolution. In order to analyze the data, the authors created scripts in ROOT 5.02/00 and C++. The authors also used Mathematica 5.1 to perform complex mathematics and AutoCAD 2006 to produce optical ray traces. The misalignment of the optical components was found to be satisfactory; the Hybrid Photodiodes (HPDs) were confirmed to be linear; the constant, noise and stochastic contributions to its resolution were analyzed; and the quantum efficiency of most HPDs was determined to be approximately 40%. With a better understanding of the laser calibration system, one can further understand and improve the HCAL.

  19. Polycyclic aromatic hydrocarbons in ambient air, surface soil and wheat grain near a large steel-smelting manufacturer in northern China.

    PubMed

    Liu, Weijian; Wang, Yilong; Chen, Yuanchen; Tao, Shu; Liu, Wenxin

    2017-07-01

    The total concentrations and component profiles of polycyclic aromatic hydrocarbons (PAHs) in ambient air, surface soil and wheat grain collected from wheat fields near a large steel-smelting manufacturer in Northern China were determined. Based on the specific isomeric ratios of paired species in ambient air, principal component analysis and multivariate linear regression, the main emission source of local PAHs was identified as a mixture of industrial and domestic coal combustion, biomass burning and traffic exhaust. The total organic carbon (TOC) fraction was considerably correlated with the total and individual PAH concentrations in surface soil. The total concentrations of PAHs in wheat grain were relatively low, with dominant low-molecular-weight constituents, and the compositional profile was more similar to that in ambient air than in topsoil. Combined with the more significant results from the partial correlation and linear regression models, this suggests that the contribution from air PAHs to grain PAHs may be greater than that from soil PAHs. Copyright © 2016. Published by Elsevier B.V.
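
    The PCA-plus-regression source-identification step can be illustrated schematically: factor scores from a standardized concentration matrix are regressed against total PAH concentration, and the normalized coefficients give rough relative factor contributions. The simulated concentrations and factor count below are assumptions, not the study's data or exact procedure.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)
    samples = rng.lognormal(mean=0.0, sigma=0.5, size=(60, 16))   # 60 samples, 16 PAH species
    z = StandardScaler().fit_transform(samples)

    scores = PCA(n_components=3).fit_transform(z)                 # candidate source factors
    total_pah = samples.sum(axis=1)
    reg = LinearRegression().fit(scores, total_pah)
    contrib = np.abs(reg.coef_) / np.abs(reg.coef_).sum()         # rough relative contributions
    print({f"factor {i + 1}": round(c, 2) for i, c in enumerate(contrib)})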

  20. Polarization Ratio Determination with Two Identical Linearly Polarized Antennas

    DTIC Science & Technology

    2017-01-17

    Fourier transform analysis of 21 measurements with one of the antennas rotating about its axis ... a circular polarization ratio is derived which can be determined directly from a discrete Fourier transform (DFT) of (5). However, leakage between closely spaced DFT bins requires improving the ... Fourier transform and a mechanical antenna rotation to separate the principal and opposite circular polarization components followed by a basis ...

  1. Risk prediction for myocardial infarction via generalized functional regression models.

    PubMed

    Ieva, Francesca; Paganoni, Anna M

    2016-08-01

    In this paper, we propose a generalized functional linear regression model for a binary outcome indicating the presence/absence of a cardiac disease with multivariate functional data among the relevant predictors. In particular, the motivating aim is the analysis of electrocardiographic traces of patients whose pre-hospital electrocardiogram (ECG) has been sent to the 118 Dispatch Center of Milan (the Italian toll-free number for emergencies) by life support personnel of the basic rescue units. The statistical analysis starts with a preprocessing of the ECGs treated as multivariate functional data. The signals are reconstructed from noisy observations. The biological variability is then removed by a nonlinear registration procedure based on landmarks. Then, in order to perform a data-driven dimensional reduction, a multivariate functional principal component analysis is carried out on the variance-covariance matrix of the reconstructed and registered ECGs and their first derivatives. We use the scores of the principal component decomposition as covariates in a generalized linear model to predict the presence of the disease in a new patient. Hence, a new semi-automatic diagnostic procedure is proposed to estimate the risk of infarction (in the case of interest, the probability of being affected by Left Bundle Branch Block). The performance of this classification method is evaluated and compared with other methods proposed in the literature. Finally, the robustness of the procedure is checked via leave-j-out techniques. © The Author(s) 2013.
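
    A minimal sketch of the final modelling step, assuming simulated curves in place of the reconstructed and registered ECGs: principal-component scores of the curves serve as covariates in a logistic regression (a binomial generalized linear model), and predictive performance is checked by cross-validation. The data, component count, and accuracy metric are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    t = np.linspace(0, 1, 300)
    labels = rng.integers(0, 2, size=120)                          # disease present / absent
    # Simulated "registered ECG leads": a label-linked shape change plus noise.
    curves = np.array([np.sin(2 * np.pi * (3 + y) * t) + 0.3 * rng.standard_normal(t.size)
                       for y in labels])

    scores = PCA(n_components=5).fit_transform(curves)             # stand-in for functional PCA scores
    glm = LogisticRegression(max_iter=1000)
    print(cross_val_score(glm, scores, labels, cv=5).mean())       # cross-validated accuracy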

  2. Simultaneous Determination of Multiple Classes of Hydrophilic and Lipophilic Components in Shuang-Huang-Lian Oral Liquid Formulations by UPLC-Triple Quadrupole Linear Ion Trap Mass Spectrometry.

    PubMed

    Liang, Jun; Sun, Hui-Min; Wang, Tian-Long

    2017-11-24

    The Shuang-Huang-Lian (SHL) oral liquid is a combined herbal prescription used in the treatment of acute upper respiratory tract infection, acute bronchitis and pneumonia. Multiple constituents are considered to be responsible for the therapeutic effects of SHL. However, the quantitation of multiple components from multiple classes is still unsatisfactory because of the high complexity of the constituents in SHL. In this study, an accurate, rapid, and specific UPLC-MS/MS method was established for simultaneous quantification of 18 compounds from multiple classes in SHL oral liquid formulations. Chromatographic separation was performed on an HSS T3 (1.8 μm, 2.1 mm × 100 mm) column, using a gradient mobile phase system of 0.1% formic acid in acetonitrile and 0.1% formic acid in water at a flow rate of 0.2 mL·min⁻¹; the run time was 23 min. The MS was operated in negative electrospray ionization (ESI⁻) mode for analysis of the 18 compounds using multiple reaction monitoring (MRM). The UPLC-ESI⁻-MRM-MS/MS method showed good linearity (R² > 0.999), repeatability (RSD < 3%), precision (RSD < 3%) and recovery (84.03-101.62%). The validated method was successfully used to determine multiple classes of hydrophilic and lipophilic components in the SHL oral liquids. Finally, principal component analysis (PCA) was used to classify and differentiate SHL oral liquid samples from different manufacturers in China. The proposed UPLC-ESI⁻-MRM-MS/MS method coupled with PCA is shown to be a simple and reliable approach for quality evaluation of SHL oral liquids.

  3. High sensitivity radiochromic film dosimetry using an optical common-mode rejection and a reflective-mode flatbed color scanner.

    PubMed

    Ohuchi, Hiroko

    2007-11-01

    A novel method has been developed that can greatly improve the dosimetric sensitivity limit of a radiochromic film (RCF) through the use of a pair of color-component outputs, e.g., red and green, from an RGB color scanner. RCFs are known to have microscopic and macroscopic nonuniformities, which arise from thickness variations in the film's active radiochromic layer and coating. These variations in the response lower the optical signal-to-noise ratio, resulting in lower film sensitivity. To mitigate the effects of the nonuniform RCF response, an optical common-mode rejection (CMR) scheme was developed. The CMR compensates for the nonuniform response by creating a ratio of the two signals in which the factors common to both numerator and denominator cancel out. The CMR scheme was implemented as the mathematical operation of creating a ratio of two components, the red and green outputs from the scanner. The two component lights occupy neighboring wavebands about 100 nm apart and suffer a common fate, with the exception of wavelength-dependent events, having passed together along common attenuation paths. Two types of dose-response curves as a function of delivered dose, ranging from 3.7 mGy to 8.1 Gy for 100 kV x-ray beams, were obtained with the optical CMR scheme and with the conventional analysis method using the red component, respectively. In the range of 3.7 mGy to 81 mGy, the optical densities obtained with the optical CMR showed good consistency among eight measured samples and agreed with a linear fit within 1 standard deviation of each measured optical density, while those obtained with the conventional analysis exhibited a large discrepancy among the eight samples and did not show consistency with a linear fit.
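
    The common-mode-rejection idea reduces to a simple ratio, which the toy computation below illustrates under assumed numbers: a multiplicative thickness variation enters both channels and cancels in the ratio, while the wavelength-dependent dose response survives. This is a schematic illustration, not film data or the author's procedure.

    import numpy as np

    rng = np.random.default_rng(7)
    thickness = 1.0 + 0.05 * rng.standard_normal(8)    # ~5% coating non-uniformity (common mode)
    dose_term = 0.30                                    # wavelength-dependent (dose) attenuation, red channel
    leak_term = 0.02                                    # small residual attenuation in the green channel

    red = thickness * np.exp(-dose_term)                # transmitted red signal
    green = thickness * np.exp(-leak_term)              # transmitted green signal
    od_red = -np.log10(red)                             # conventional single-channel optical density
    od_cmr = -np.log10(red / green)                     # CMR: the common thickness factor cancels

    print(f"sample-to-sample spread, red only:  {od_red.std():.4f}")
    print(f"sample-to-sample spread, CMR ratio: {od_cmr.std():.4f}")   # essentially zero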

  4. The mechanism by which nonlinearity sustains turbulence in plane Couette flow

    NASA Astrophysics Data System (ADS)

    Nikolaidis, M.-A.; Farrell, B. F.; Ioannou, P. J.

    2018-04-01

    Turbulence in wall-bounded shear flow results from a synergistic interaction between linear non-normality and nonlinearity in which non-normal growth of a subset of perturbations configured to transfer energy from the externally forced component of the turbulent state to the perturbation component maintains the perturbation energy, while the subset of energy-transferring perturbations is replenished by nonlinearity. Although it is accepted that both linear non-normality mediated energy transfer from the forced component of the mean flow and nonlinear interactions among perturbations are required to maintain the turbulent state, the detailed physical mechanism by which these processes interact in maintaining turbulence has not been determined. In this work a statistical state dynamics based analysis is performed on turbulent Couette flow at R = 600 and a comparison to DNS is used to demonstrate that the perturbation component in Couette flow turbulence is replenished by a non-normality mediated parametric growth process in which the fluctuating streamwise mean flow has been adjusted to marginal Lyapunov stability. It is further shown that the alternative mechanism in which the subspace of non-normally growing perturbations is maintained directly by perturbation-perturbation nonlinearity does not contribute to maintaining the turbulent state. This work identifies parametric interaction between the fluctuating streamwise mean flow and the streamwise varying perturbations to be the mechanism of the nonlinear interaction maintaining the perturbation component of the turbulent state, and identifies the associated Lyapunov vectors with positive energetics as the structures of the perturbation subspace supporting the turbulence.

  5. Optical linear algebra processors: noise and error-source modeling.

    PubMed

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  6. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  7. Spectral discrimination of serum from liver cancer and liver cirrhosis using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Yang, Tianyue; Li, Xiaozhou; Yu, Ting; Sun, Ruomin; Li, Siqi

    2011-07-01

    In this paper, Raman spectra of human serum were measured, and the spectra were analyzed by the multivariate statistical method of principal component analysis (PCA). Linear discriminant analysis (LDA) was then used as the diagnostic algorithm to differentiate the loading scores of the different diseases. An artificial neural network (ANN) was used for cross-validation. The diagnostic sensitivity and specificity of PCA-LDA are 88% and 79%, while those of PCA-ANN are 89% and 95%. These results indicate that such analysis methods are useful tools for analyzing serum spectra to diagnose disease.
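    A minimal sketch of the PCA-LDA workflow described above, using scikit-learn on synthetic stand-in data; the array contents, component count, and labels are assumptions, not values from the study.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))      # 60 spectra x 500 Raman shifts (synthetic stand-in)
    y = rng.integers(0, 2, size=60)     # 0 = cirrhosis, 1 = cancer (placeholder labels)

    # reduce each spectrum to a few PCA loading scores, then classify the scores with LDA
    pca_lda = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
    acc = cross_val_score(pca_lda, X, y, cv=5)
    print("cross-validated accuracy:", acc.mean())
    ```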

  8. Using color histograms and SPA-LDA to classify bacteria.

    PubMed

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data and statistically evaluated using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA), and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacterial strains were cultivated in a nutritive blood agar base layer for 24 h following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of the nutrient solutions under the same conditions. The best classification result was obtained using RGB and SPA-LDA, which reached 94% and 100% classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids bacterial identification based on phenotypic identification of the causative organism using Gram staining, culture, and biochemical tests. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and lower-cost alternative for bacterial identification.
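    A minimal sketch of building per-channel RGB color-histogram features from a webcam image, the kind of input used by the classifiers above; the bin count and the synthetic stand-in image are assumptions.

    ```python
    import numpy as np

    def rgb_histogram_features(image, bins=32):
        """Concatenate per-channel histograms of an RGB image (H x W x 3 uint8 array)
        into a single feature vector, normalized to unit sum per channel."""
        feats = []
        for ch in range(3):                      # R, G, B channels
            h, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
            feats.append(h / h.sum())
        return np.concatenate(feats)

    # example with a synthetic stand-in image
    img = np.random.default_rng(1).integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
    x = rgb_histogram_features(img)
    print(x.shape)   # (96,) with 32 bins per channel
    ```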

  9. Determination and fingerprint analysis of steroidal saponins in roots of Liriope muscari (Decne.) L. H. Bailey by ultra high performance liquid chromatography coupled with ion trap time-of-flight mass spectrometry.

    PubMed

    Li, Yong-Wei; Qi, Jin; Wen-Zhang; Zhou, Shui-Ping; Yan-Wu; Yu, Bo-Yang

    2014-07-01

    Liriope muscari (Decne.) L. H. Bailey is a well-known traditional Chinese medicine used for treating cough and insomnia. There are few reports on the quality evaluation of this herb partly because the major steroid saponins are not readily identified by UV detectors and are not easily isolated due to the existence of many similar isomers. In this study, a qualitative and quantitative method was developed to analyze the major components in L. muscari (Decne.) L. H. Bailey roots. Sixteen components were deduced and identified primarily by the information obtained from ultra high performance liquid chromatography with ion-trap time-of-flight mass spectrometry. The method demonstrated the desired specificity, linearity, stability, precision, and accuracy for simultaneous determination of 15 constituents (13 steroidal glycosides, 25(R)-ruscogenin, and pentylbenzoate) in 26 samples from different origins. The fingerprint was established, and the evaluation was achieved using similarity analysis and principal component analysis of 15 fingerprint peaks from 26 samples by ultra high performance liquid chromatography. The results from similarity analysis were consistent with those of principal component analysis. All results suggest that the established method could be applied effectively to the determination of multi-ingredients and fingerprint analysis of steroid saponins for quality assessment and control of L. muscari (Decne.) L. H. Bailey. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Using McStas for modelling complex optics, using simple building bricks

    NASA Astrophysics Data System (ADS)

    Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim

    2011-04-01

    The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow is intrinsically linear: the neutron interacts with the beamline components in sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all these parts, e.g. a guide component including all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested the ability to allow “components inside components”, or meta-components, which combine the functionality of several simple components to achieve more complex behaviour, e.g. four single mirror plates together defining a guide. We show here that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.

  11. Nonadiabatic laser-induced alignment of molecules: Reconstructing ⟨cos²θ⟩ directly from ⟨cos²θ2D⟩ by Fourier analysis.

    PubMed

    Søndergaard, Anders Aspegren; Shepperson, Benjamin; Stapelfeldt, Henrik

    2017-07-07

    We present an efficient, noise-robust method based on Fourier analysis for reconstructing the three-dimensional measure of the alignment degree, ⟨cos²θ⟩, directly from its two-dimensional counterpart, ⟨cos²θ2D⟩. The method applies to nonadiabatic alignment of linear molecules induced by a linearly polarized, nonresonant laser pulse. Our theoretical analysis shows that the Fourier transform of the time-dependent ⟨cos²θ2D⟩ trace over one molecular rotational period contains additional frequency components compared to the Fourier transform of ⟨cos²θ⟩. These additional frequency components can be identified and removed from the Fourier spectrum of ⟨cos²θ2D⟩. By rescaling of the remaining frequency components, the Fourier spectrum of ⟨cos²θ⟩ is obtained and, finally, ⟨cos²θ⟩ is reconstructed through inverse Fourier transformation. The method allows the reconstruction of the ⟨cos²θ⟩ trace from a measured ⟨cos²θ2D⟩ trace, which is the typical observable of many experiments, and thereby provides direct comparison to calculated ⟨cos²θ⟩ traces, which is the commonly used alignment metric in theoretical descriptions. We illustrate our method by applying it to the measurement of nonadiabatic alignment of I₂ molecules. In addition, we present an efficient algorithm for calculating the matrix elements of cos²θ2D and any other observable in the symmetric top basis. These matrix elements are required in the rescaling step, and they allow for highly efficient numerical calculation of ⟨cos²θ2D⟩ and ⟨cos²θ⟩ in general.

  12. Pre-compound emission in low-energy heavy-ion interactions

    NASA Astrophysics Data System (ADS)

    Sharma, Manoj Kumar; Shuaib, Mohd.; Sharma, Vijay R.; Yadav, Abhishek; Singh, Pushpendra P.; Singh, Devendra P.; Unnati; Singh, B. P.; Prasad, R.

    2017-11-01

    Recent experimental studies have shown the presence of a pre-compound emission component in heavy-ion reactions at low projectile energies ranging from 4 to 7 MeV/nucleon. In earlier measurements the strength of the pre-compound component was estimated from the difference in forward-backward distributions of emitted particles. The present measurement is part of an ongoing program on the reaction dynamics of heavy-ion interactions at low energies, aimed at investigating the effect of momentum transfer in compound, pre-compound, complete, and incomplete fusion processes. In the present work, measurement of the recoil range distributions of heavy residues, on the basis of momentum transfer, has been used to decipher the compound and pre-compound emission components in the fusion of a 16O projectile with 159Tb and 169Tm targets. The analysis of the recoil range distribution measurements shows that two distinct linear momentum transfer components, corresponding to pre-compound and compound nucleus processes, are involved. In order to obtain the mean input angular momentum associated with the compound and pre-compound emission processes, an online measurement of the spin distributions of the residues was performed. The analysis of the spin distributions indicates that the mean input angular momentum associated with pre-compound products is relatively lower than that associated with the compound nucleus process. The pre-compound components obtained from the present analysis are consistent with those obtained from the analysis of excitation functions.

  13. Validation of Shared and Specific Independent Component Analysis (SSICA) for Between-Group Comparisons in fMRI

    PubMed Central

    Maneshi, Mona; Vahdat, Shahabeddin; Gotman, Jean; Grova, Christophe

    2016-01-01

    Independent component analysis (ICA) has been widely used to study functional magnetic resonance imaging (fMRI) connectivity. However, the application of ICA in multi-group designs is not straightforward. We have recently developed a new method named “shared and specific independent component analysis” (SSICA) to perform between-group comparisons in the ICA framework. SSICA is sensitive to extract those components which represent a significant difference in functional connectivity between groups or conditions, i.e., components that could be considered “specific” for a group or condition. Here, we investigated the performance of SSICA on realistic simulations, and task fMRI data and compared the results with one of the state-of-the-art group ICA approaches to infer between-group differences. We examined SSICA robustness with respect to the number of allowable extracted specific components and between-group orthogonality assumptions. Furthermore, we proposed a modified formulation of the back-reconstruction method to generate group-level t-statistics maps based on SSICA results. We also evaluated the consistency and specificity of the extracted specific components by SSICA. The results on realistic simulated and real fMRI data showed that SSICA outperforms the regular group ICA approach in terms of reconstruction and classification performance. We demonstrated that SSICA is a powerful data-driven approach to detect patterns of differences in functional connectivity across groups/conditions, particularly in model-free designs such as resting-state fMRI. Our findings in task fMRI show that SSICA confirms results of the general linear model (GLM) analysis and when combined with clustering analysis, it complements GLM findings by providing additional information regarding the reliability and specificity of networks. PMID:27729843

  14. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed at combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of the data-fidelity term and the non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles, respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.
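    A small illustration of the bias issue mentioned above, assuming scalar threshold rules: soft thresholding (the ℓ1 proximal operator) shrinks every large coefficient, while a firm threshold associated with a parameterized non-convex penalty leaves large values nearly unbiased. The threshold values are arbitrary and this is only a sketch of the general idea, not the thesis's algorithm.

    ```python
    import numpy as np

    def soft_threshold(x, lam):
        """Proximal operator of lam*|x| (l1 regularization): shrinks every nonzero value."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def firm_threshold(x, lam, mu):
        """Firm threshold (mu > lam), associated with a parameterized non-convex penalty:
        values with |x| >= mu are returned unchanged, removing the l1 shrinkage bias."""
        return np.where(np.abs(x) <= lam, 0.0,
                        np.where(np.abs(x) >= mu,
                                 x,
                                 np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam)))

    x = np.array([-6.0, -1.5, 0.2, 2.0, 8.0])
    print(soft_threshold(x, 1.0))        # large entries are shrunk by 1.0
    print(firm_threshold(x, 1.0, 3.0))   # entries with |x| >= 3 pass through unchanged
    ```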

  15. Attenuation of cryocooler induced vibration using multimodal tuned dynamic absorbers

    NASA Astrophysics Data System (ADS)

    Veprik, Alexander; Babitsky, Vladimir; Tuito, Avi

    2017-05-01

    Modern infrared imagers often rely on split Stirling linear cryocoolers comprising a compressor and an expander, the relative position of which is governed by the optical design and packaging constraints. The force couples generated by the imbalanced reciprocation of moving components inside both the compressor and the expander result in cryocooler-induced vibration comprising angular and translational tonal components, which manifests itself as line-of-sight jitter and dynamic defocusing. Since a linear cryocooler is usually driven at a fixed and precisely adjustable frequency, a tuned dynamic absorber is a well-suited tool for vibration control. It is traditionally made in the form of a lightweight, single-degree-of-freedom, undamped mechanical resonator, the frequency of which is essentially matched with the driving frequency, or vice versa. Unfortunately, the performance of such a traditional approach is limited in terms of simultaneously attenuating the translational and angular components of cooler-induced vibration. The authors enhance the traditional concept and consider a multimodal tuned dynamic absorber made in the form of a weakly damped mechanical resonator, in which the frequencies of the useful dynamic modes are essentially matched with the driving frequency. Dynamic analysis and experimental testing show that the dynamic reactions (forces and moments) produced by such a device may simultaneously attenuate both the translational and angular components of cryocooler-induced vibration. The authors consider different embodiments and their suitability for different packaging concepts. The outcomes of the theoretical predictions are supported by full-scale experimentation.
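    A minimal sketch of the classical tuned-absorber principle behind the approach above: for an undamped primary mass with an attached absorber, the primary-mass response vanishes when the absorber natural frequency equals the drive frequency. All parameter values are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def primary_amplitude(omega, m1, k1, m2, k2, F0=1.0):
        """Steady-state amplitude of the primary mass of an undamped 2-DOF system
        (primary m1, k1 with absorber m2, k2) under harmonic forcing F0*sin(omega*t)."""
        det = (k1 + k2 - m1 * omega**2) * (k2 - m2 * omega**2) - k2**2
        return F0 * (k2 - m2 * omega**2) / det

    m1, k1 = 1.0, 4.0e4          # primary structure (arbitrary units)
    m2 = 0.05                    # lightweight absorber mass
    f_drive = 50.0               # cryocooler drive frequency, Hz (illustrative)
    omega = 2 * np.pi * f_drive
    k2 = m2 * omega**2           # tune the absorber natural frequency to the drive frequency

    print(abs(primary_amplitude(omega, m1, k1, m2, k2)))        # ~0 at the drive tone
    print(abs(primary_amplitude(0.9 * omega, m1, k1, m2, k2)))  # nonzero off-tune
    ```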

  16. Short communication: Principal components and factor analytic models for test-day milk yield in Brazilian Holstein cattle.

    PubMed

    Bignardi, A B; El Faro, L; Rosa, G J M; Cardoso, V L; Machado, P F; Albuquerque, L G

    2012-04-01

    A total of 46,089 individual monthly test-day (TD) milk yields (10 test-days), from 7,331 complete first lactations of Holstein cattle, were analyzed. A standard multivariate analysis (MV), reduced rank analyses fitting the first 2, 3, and 4 genetic principal components (PC2, PC3, PC4), and analyses fitting a factor analytic structure with 2, 3, and 4 factors (FAS2, FAS3, FAS4) were carried out. The models included the random animal genetic effect and fixed effects of the contemporary groups (herd-year-month of test-day), age of cow (linear and quadratic effects), and days in milk (linear effect). The residual covariance matrix was assumed to have full rank. Moreover, 2 random regression models were applied. Variance components were estimated by the restricted maximum likelihood method. The heritability estimates ranged from 0.11 to 0.24. The genetic correlation estimates between TDs obtained with the PC2 model were higher than those obtained with the MV model, especially for adjacent test-days at the end of lactation, where they were close to unity. The results indicate that, for the data considered in this study, only 2 principal components are required to summarize the bulk of genetic variation among the 10 traits. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  17. Ion Elevators and Escalators in Multilevel Structures for Lossless Ion Manipulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Yehia M.; Hamid, Ahmed M.; Cox, Jonathan T.

    2017-01-19

    We describe two approaches based upon ion ‘elevator’ and ‘escalator’ components that allow moving ions to different levels in structures for lossless ion manipulations (SLIM). Guided by ion motion simulations, we designed elevator and escalator components providing essentially lossless transmission in multi-level designs, based upon ion current measurements. The ion elevator design allowed ions to efficiently bridge a 4 mm gap between levels. The component was integrated in a SLIM and coupled to a QTOF mass spectrometer using an ion funnel interface to evaluate the m/z range transmitted as compared to transmission within a level (e.g., in a linear section). Singly charged ions of m/z 600-2700 produced similar mass spectra for both elevator and straight (linear motion) components. In the ion escalator design, traveling waves (TW) were utilized to transport ions efficiently between two SLIM levels. Ion current measurements and ion mobility (IM) spectrometry analysis illustrated that ions can be transported between TW-SLIM levels with no significant loss of either ions or IM resolution. These developments provide a path for the development of multilevel designs providing e.g. much longer IM path lengths, more compact designs, and the implementation of much more complex SLIM devices in which e.g. different levels may operate at different temperatures or with different gases.

  18. HPLC-PDA Combined with Chemometrics for Quantitation of Active Components and Quality Assessment of Raw and Processed Fruits of Xanthium strumarium L.

    PubMed

    Jiang, Hai; Yang, Liu; Xing, Xudong; Yan, Meiling; Guo, Xinyue; Yang, Bingyou; Wang, Qiuhong; Kuang, Haixue

    2018-01-25

    As a valuable herbal medicine, the fruits of Xanthium strumarium L. (Xanthii Fructus) have been widely used in raw and processed forms to achieve different therapeutic effects in practice. In this study, a comprehensive strategy was proposed for evaluating the active components in 30 batches of raw and processed Xanthii Fructus (RXF and PXF) samples, based on high-performance liquid chromatography coupled with photodiode array detection (HPLC-PDA). Twelve common peaks were detected and eight compounds of caffeoylquinic acids were simultaneously quantified in RXF and PXF. All the analytes were detected with satisfactory linearity (R² > 0.9991) over wide concentration ranges. Simultaneously, the chemically latent information was revealed by hierarchical cluster analysis (HCA) and principal component analysis (PCA). The results suggest that there were significant differences between RXF and PXF from different regions in terms of the content of eight caffeoylquinic acids. Potential chemical markers for XF were found during processing by chemometrics.

  19. Parametric Analysis to Study the Influence of Aerogel-Based Renders' Components on Thermal and Mechanical Performance.

    PubMed

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-05-04

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study's objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect.
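    A minimal sketch of the kind of multiple linear regression used in this type of study, fitting thermal conductivity against mix-component dosages with scikit-learn; the component list, coefficients, and data are synthetic placeholders, not the 85 laboratory mixes.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    # columns: cement, fly ash, aerial lime, aerogel, expanded perlite (synthetic dosages)
    X = rng.uniform(0, 1, size=(85, 5))
    # synthetic thermal conductivity with an assumed dependence plus noise
    y = 0.05 + 0.02 * X[:, 0] - 0.03 * X[:, 2] - 0.04 * X[:, 3] + 0.005 * rng.normal(size=85)

    model = LinearRegression().fit(X, y)
    print("coefficients:", model.coef_)   # sign/magnitude indicate each component's influence
    print("R^2:", model.score(X, y))
    ```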

  20. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation, with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; and (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
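    A minimal sketch of minimizing an L1 criterion with linear programming, the general device used above; here a least-absolute-deviations line fit via scipy's linprog on synthetic data, not the amplitude-distribution decomposition itself.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 40)
    y = 2.0 + 3.0 * x + rng.laplace(scale=0.1, size=x.size)   # synthetic heavy-tailed data

    n = x.size
    # variables z = [a, b, t_1..t_n]; minimize sum(t_i) subject to |y_i - (a + b*x_i)| <= t_i
    c = np.concatenate(([0.0, 0.0], np.ones(n)))
    A_ub = np.zeros((2 * n, n + 2))
    A_ub[:n, 0], A_ub[:n, 1] = 1.0, x          #  a + b*x_i - t_i <= y_i
    A_ub[:n, 2:] = -np.eye(n)
    A_ub[n:, 0], A_ub[n:, 1] = -1.0, -x        # -a - b*x_i - t_i <= -y_i
    A_ub[n:, 2:] = -np.eye(n)
    b_ub = np.concatenate((y, -y))
    bounds = [(None, None), (None, None)] + [(0, None)] * n

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print("L1-fit intercept, slope:", res.x[0], res.x[1])
    ```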

  1. Automated benthic counting of living and non-living components in Ngedarrak Reef, Palau via subsurface underwater video.

    PubMed

    Marcos, Ma Shiela Angeli; David, Laura; Peñaflor, Eileen; Ticzon, Victor; Soriano, Maricor

    2008-10-01

    We introduce an automated benthic counting system in application for rapid reef assessment that utilizes computer vision on subsurface underwater reef video. Video acquisition was executed by lowering a submersible bullet-type camera from a motor boat while moving across the reef area. A GPS and echo sounder were linked to the video recorder to record bathymetry and location points. Analysis of living and non-living components was implemented through image color and texture feature extraction from the reef video frames and classification via Linear Discriminant Analysis. Compared to common rapid reef assessment protocols, our system can perform fine scale data acquisition and processing in one day. Reef video was acquired in Ngedarrak Reef, Koror, Republic of Palau. Overall success performance ranges from 60% to 77% for depths of 1 to 3 m. The development of an automated rapid reef classification system is most promising for reef studies that need fast and frequent data acquisition of percent cover of living and nonliving components.

  2. Parametric Analysis to Study the Influence of Aerogel-Based Renders’ Components on Thermal and Mechanical Performance

    PubMed Central

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-01-01

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study’s objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect. PMID:28773460

  3. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these results, the combination of naturalistic movie stimuli and classification analysis in fMRI experiments may prove to be a sensitive tool for the assessment of changes in natural cognitive processes under experimental manipulation. PMID:27065832

  4. Supervised chemical pattern recognition in almond ( Prunus dulcis ) Portuguese PDO cultivars: PCA- and LDA-based triennial study.

    PubMed

    Barreira, João C M; Casal, Susana; Ferreira, Isabel C F R; Peres, António M; Pereira, José Alberto; Oliveira, M Beatriz P P

    2012-09-26

    Almonds harvested in three years in Trás-os-Montes (Portugal) were characterized to find differences among Protected Designation of Origin (PDO) Amêndoa Douro and commercial non-PDO cultivars. Nutritional parameters, fiber (neutral and acid detergent fibers, acid detergent lignin, and cellulose), fatty acids, triacylglycerols (TAG), and tocopherols were evaluated. Fat was the major component, followed by carbohydrates, protein, and moisture. Fatty acids were mostly detected as monounsaturated and polyunsaturated forms, with relevance of oleic and linoleic acids. Accordingly, 1,2,3-trioleoylglycerol and 1,2-dioleoyl-3-linoleoylglycerol were the major TAG. α-Tocopherol was the leading tocopherol. To verify statistical differences among PDO and non-PDO cultivars independent of the harvest year, data were analyzed through an analysis of variance, a principal component analysis, and a linear discriminant analysis (LDA). These differences identified classification parameters, providing an important tool for authenticity purposes. The best results were achieved with TAG analysis coupled with LDA, which proved its effectiveness to discriminate almond cultivars.

  5. Neutron star dynamics under time-dependent external torques

    NASA Astrophysics Data System (ADS)

    Gügercinoǧlu, Erbil; Alpar, M. Ali

    2017-11-01

    The two-component model describes neutron star dynamics incorporating the response of the superfluid interior. Conventional solutions and applications involve constant external torques, as appropriate for radio pulsars on dynamical time-scales. We present the general solution of two-component dynamics under arbitrary time-dependent external torques, with internal torques that are linear in the rotation rates, or with the extremely non-linear internal torques due to vortex creep. The two-component model incorporating the response of linear or non-linear internal torques can now be applied not only to radio pulsars but also to magnetars and to neutron stars in binary systems, with strong observed variability and noise in the spin-down or spin-up rates. Our results allow the extraction of the time-dependent external torques from the observed spin-down (or spin-up) time series, Ω̇(t). Applications are discussed.
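    A minimal sketch of a two-component (crust plus superfluid) model with an internal torque linear in the lag between the two rotation rates, integrated under an arbitrary time-dependent external torque; all parameter values and the torque profile are illustrative assumptions, not the paper's solution.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    I_c, I_s = 1.0, 0.1        # crust and superfluid moments of inertia (arbitrary units)
    tau = 50.0                 # linear coupling (relaxation) time

    def N_ext(t):
        # illustrative time-dependent external torque: steady spin-down plus a step at t = 200
        return -1.0e-3 * (1.0 + 0.5 * (t > 200.0))

    def rhs(t, y):
        omega_c, omega_s = y
        coupling = (I_s / tau) * (omega_s - omega_c)      # internal torque, linear in the lag
        return [(N_ext(t) + coupling) / I_c, -coupling / I_s]

    sol = solve_ivp(rhs, (0.0, 500.0), [10.0, 10.002], max_step=1.0)
    print("final crust and superfluid rotation rates:", sol.y[0, -1], sol.y[1, -1])
    ```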

  6. Comprehensive Analysis of Large Sets of Age-Related Physiological Indicators Reveals Rapid Aging around the Age of 55 Years.

    PubMed

    Lixie, Erin; Edgeworth, Jameson; Shamir, Lior

    2015-01-01

    While many studies show a correlation between chronological age and physiological indicators, the nature of this correlation is not fully understood. The aim of this study was to perform a comprehensive analysis of the correlation between chronological age and age-related physiological indicators. Physiological aging scores were deduced using principal component analysis from a large dataset of 1,227 variables measured in a cohort of 4,796 human subjects, and the correlation between the physiological aging scores and chronological age was assessed. Physiological age does not progress linearly or exponentially with chronological age: a more rapid physiological change is observed around the age of 55 years, followed by a mild decline until around the age of 70 years. These findings provide evidence that the progression of physiological age is not linear with that of chronological age, and that periods of mild change in physiological age are separated by periods of more rapid aging. © 2015 S. Karger AG, Basel.

  7. Frame sequences analysis technique of linear objects movement

    NASA Astrophysics Data System (ADS)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. In doing so, quantitative analysis of the movement of the objects being studied becomes an important component of the research. This work discusses the analysis of the motion of linear objects on the two-dimensional plane. The complexity of this problem increases when a frame contains numerous objects whose images may overlap. This study uses a sequence containing 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. The task was to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro, with subsequent approximation of the obtained data using the Hill equation.

  8. Effect of noise in principal component analysis with an application to ozone pollution

    NASA Astrophysics Data System (ADS)

    Tsakiri, Katerina G.

    This thesis analyzes the effect of independent noise on the principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can be essentially different from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results on the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field for the decomposition of components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor driving ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed, and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global term component, which describes the long-term trend and the seasonal variations, and a synoptic scale component, which describes the short-term variations. Using Canonical Correlation Analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the global and synoptic scale components of ozone. The global term components are modeled by a linear regression model, while the synoptic scale components are modeled by a vector autoregressive model and the Kalman filter. The coefficient of determination, R², for the prediction of the synoptic scale ozone component was found to be highest when we consider the synoptic scale components of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
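    A minimal simulation of the central question above, assuming independent Gaussian noise added to correlated data: when the covariance eigenvalues are well separated relative to the noise level, the leading principal direction is barely perturbed. The covariance matrix and noise levels are illustrative values only.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 5000
    cov = np.array([[4.0, 1.5], [1.5, 1.0]])   # well-separated covariance eigenvalues
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)

    def leading_direction(data):
        """First principal direction (eigenvector of the sample covariance)."""
        w, v = np.linalg.eigh(np.cov(data, rowvar=False))
        return v[:, np.argmax(w)]

    for sigma in (0.1, 1.0, 3.0):              # increasing independent noise levels
        X_noisy = X + rng.normal(scale=sigma, size=X.shape)
        cosang = abs(leading_direction(X) @ leading_direction(X_noisy))
        print(f"noise sigma={sigma}: |cos(angle)| between leading PCs = {cosang:.3f}")
    ```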

  9. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks on the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method is explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.

  10. Analysis and control of the METC fluid bed gasifier. Quarterly progress report, January--March 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-03-01

    This document summarizes work performed for the period 10/1/94 to 3/31/95. In this work, three components will form the basis for the design of a control scheme for the Fluidized Bed Gasifier (FBG) at METC: (1) a control systems analysis based on simple linear models derived from process data, (2) a review of the literature on fluid bed gasifier operation and control, and (3) an understanding of present FBG operation and real-world considerations. Below we summarize the work accomplished to date in each of these areas.

  11. 1988 IEEE Aerospace Applications Conference, Park City, UT, Feb. 7-12, 1988, Digest

    NASA Astrophysics Data System (ADS)

    The conference presents papers on microwave applications, data and signal processing applications, related aerospace applications, and advanced microelectronic products for the aerospace industry. Topics include a high-performance antenna measurement system, microwave power beaming from earth to space, the digital enhancement of microwave component performance, and a GaAs vector processor based on parallel RISC microprocessors. Consideration is also given to unique techniques for reliable SBNR architectures, a linear analysis subsystem for CSSL-IV, and a structured singular value approach to missile autopilot analysis.

  12. Direct Sampling and Analysis from Solid Phase Extraction Cards using an Automated Liquid Extraction Surface Analysis Nanoelectrospray Mass Spectrometry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walworth, Matthew J; ElNaggar, Mariam S; Stankovich, Joseph J

    Direct liquid extraction based surface sampling, a technique previously demonstrated with continuous flow and autonomous pipette liquid microjunction surface sampling probes, has recently been implemented as the Liquid Extraction Surface Analysis (LESA) mode on the commercially available Advion NanoMate chip-based infusion nanoelectrospray ionization system. In the present paper, the LESA mode was applied to the analysis of 96-well format custom solid phase extraction (SPE) cards, with each well consisting of either a 1 or 2 mm diameter monolithic hydrophobic stationary phase. These substrate wells were conditioned, loaded with either single or multi-component aqueous mixtures, and read out using the LESA mode of a TriVersa NanoMate or a NanoMate 100 coupled to an ABI/Sciex 4000QTRAP™ hybrid triple quadrupole/linear ion trap mass spectrometer and a Thermo LTQ XL linear ion trap mass spectrometer. Extraction conditions, including extraction/nanoESI solvent composition, volume, and dwell times, were optimized in the analysis of targeted compounds. Limits of detection and quantitation as well as analysis reproducibility figures of merit were measured. Calibration data were obtained for propranolol using a deuterated internal standard, demonstrating linearity and reproducibility. A 10x increase in signal and cleanup of micromolar angiotensin II from a concentrated salt solution were demonstrated. Additionally, a multicomponent herbicide mixture at ppb concentration levels was analyzed using MS3 spectra for compound identification in the presence of isobaric interferences.

  13. An advanced panel method for analysis of arbitrary configurations in unsteady subsonic flow

    NASA Technical Reports Server (NTRS)

    Dusto, A. R.; Epton, M. A.

    1980-01-01

    An advanced method is presented for solving the linear integral equations for subsonic unsteady flow in three dimensions. The method is applicable to flows about arbitrary, nonplanar boundary surfaces undergoing small amplitude harmonic oscillations about their steady mean locations. The problem is formulated with a wake model wherein unsteady vorticity can be convected by the steady mean component of flow. The geometric location of the unsteady source and doublet distributions can be located on the actual surfaces of thick bodies in their steady mean locations. The method is an outgrowth of a recently developed steady flow panel method and employs the linear source and quadratic doublet splines of that method.

  14. Differentiation of Organically and Conventionally Grown Tomatoes by Chemometric Analysis of Combined Data from Proton Nuclear Magnetic Resonance and Mid-infrared Spectroscopy and Stable Isotope Analysis.

    PubMed

    Hohmann, Monika; Monakhova, Yulia; Erich, Sarah; Christoph, Norbert; Wachter, Helmut; Holzgrabe, Ulrike

    2015-11-04

    Because the basic suitability of proton nuclear magnetic resonance spectroscopy (¹H NMR) to differentiate organic versus conventional tomatoes was recently proven, the approach to optimize ¹H NMR classification models (comprising overall 205 authentic tomato samples) by including additional data of isotope ratio mass spectrometry (IRMS; δ¹³C, δ¹⁵N, and δ¹⁸O) and mid-infrared (MIR) spectroscopy was assessed. Both individual and combined analytical methods (¹H NMR + MIR, ¹H NMR + IRMS, MIR + IRMS, and ¹H NMR + MIR + IRMS) were examined using principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), linear discriminant analysis (LDA), and common components and specific weight analysis (ComDim). With regard to classification abilities, fused data of ¹H NMR + MIR + IRMS yielded better validation results (ranging between 95.0 and 100.0%) than individual methods (¹H NMR, 91.3-100%; MIR, 75.6-91.7%), suggesting that the combined examination of analytical profiles enhances authentication of organically produced tomatoes.

  15. Using foreground/background analysis to determine leaf and canopy chemistry

    NASA Technical Reports Server (NTRS)

    Pinzon, J. E.; Ustin, S. L.; Hart, Q. J.; Jacquemoud, S.; Smith, M. O.

    1995-01-01

    Spectral Mixture Analysis (SMA) has become a well-established procedure for analyzing imaging spectrometry data; however, the technique is relatively insensitive to minor sources of spectral variation (e.g., discriminating stressed from unstressed vegetation and variations in canopy chemistry). Other statistical approaches have been tried, e.g., stepwise multiple linear regression (SMLR) analysis to predict canopy chemistry. Grossman et al. reported that SMLR is sensitive to measurement error and that the predictions of minor chemical components are not independent of patterns observed in more dominant spectral components like water. Further, they observed that the relationships were strongly dependent on the mode of expressing reflectance (R, -log R) and whether chemistry was expressed on a weight (g/g) or area (g/sq m) basis. Thus, alternative multivariate techniques need to be examined. Smith et al. reported a revised SMA, termed Foreground/Background Analysis (FBA), that permits directing the analysis along any axis of variance by identifying vectors through the n-dimensional spectral volume that are orthonormal to each other. Here, we report an application of the FBA technique for the detection of canopy chemistry using a modified form of the analysis.

  16. [Content determination of twelve major components in Tibetan medicine Zuozhu Daxi by UPLC].

    PubMed

    Qu, Yan; Li, Jin-hua; Zhang, Chen; Li, Chun-xue; Dong, Hong-jiao; Wang, Chang-sheng; Zeng, Rui; Chen, Xiao-hu

    2015-05-01

    A quantitative analytical method based on ultra-high performance liquid chromatography (UPLC) was developed for simultaneously determining twelve components in the Tibetan medicine Zuozhu Daxi. SIMCA 12.0 software was used for principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) of the twelve components in 10 batches from four pharmaceutical factories. An Acquity UPLC BEH C15 column (2.1 mm x 100 mm, 1.7 µm) was adopted at a column temperature of 35 °C and eluted with acetonitrile (A)-0.05% phosphoric acid solution (B) as the mobile phase at a flow rate of 0.3 mL·min⁻¹. The injection volume was 1 µL. The detection wavelengths were set at 210 nm for alantolactone, isoalantolactone, and oleanolic acid; 260 nm for strychnine and brucine; 288 nm for protopine; 306 nm for protopine, resveratrol, and piperine; and 370 nm for quercetin and isorhamnetin. The results showed good separation among the index components, with a good linear relationship (R² = 0.9996) within the selected concentration range. The average sample recovery rates ranged between 99.44% and 101.8%, with RSDs between 0.37% and 1.7%, indicating that the method is rapid and accurate, with good repeatability and stability. The PCA and PLS-DA analysis of the sample determination results revealed a great difference among samples from different pharmaceutical factories. The twelve components included in this study contributed significantly to the quantitative determination of the intrinsic quality of Zuozhu Daxi. The UPLC method established for the quantitative determination of the twelve components can provide a scientific basis for the comprehensive quality evaluation of Zuozhu Daxi.

  17. Decomposition of fluctuating initial conditions and flow harmonics

    NASA Astrophysics Data System (ADS)

    Qian, Wei-Liang; Mota, Philipe; Andrade, Rone; Gardim, Fernando; Grassi, Frédérique; Hama, Yogiro; Kodama, Takeshi

    2014-01-01

    Collective flow observed in heavy-ion collisions is largely attributed to initial geometrical fluctuations, and it is the hydrodynamic evolution of the system that transforms those initial spatial irregularities into final-state momentum anisotropies. Cumulant analysis provides a mathematical tool to decompose those initial fluctuations in terms of radial and azimuthal components. It is usually thought that a specified order of azimuthal cumulant, for the most part, linearly produces flow harmonics of the same order. In this work, by considering the most central collisions (0%-5%), we carry out a systematic study of the connection between cumulants and flow harmonics using a hydrodynamic code called NeXSPheRIO. We conduct three types of calculation, explicitly decomposing the initial conditions into components corresponding to a given eccentricity and studying the resulting flow through hydrodynamic evolution. It is found that for initial conditions deviating significantly from Gaussian, such as those from NeXuS, the linearity between eccentricities and flow harmonics partially breaks down. Combined with the effect of coupling between cumulants of different orders, this causes the production of extra flow harmonics of higher orders. We argue that these results can be seen as a natural consequence of the non-linear nature of hydrodynamics, and they can be understood intuitively in terms of the peripheral-tube model.
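    A minimal sketch of the azimuthal decomposition referred to above, computing the standard participant eccentricities ε_n (the lowest-order azimuthal characterization that, to first approximation, drives the flow harmonics v_n) from a sampled transverse density; the Gaussian-blob initial condition is an illustrative stand-in, not a NeXuS event.

    ```python
    import numpy as np

    def eccentricity(x, y, w, n):
        """Participant eccentricity epsilon_n of a weighted set of transverse points,
        epsilon_n = |<r^n exp(i n phi)>| / <r^n>, measured from the center of mass."""
        xc, yc = np.average(x, weights=w), np.average(y, weights=w)
        dx, dy = x - xc, y - yc
        r = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)
        num = np.average(r**n * np.exp(1j * n * phi), weights=w)
        den = np.average(r**n, weights=w)
        return abs(num) / den

    # illustrative lumpy initial condition: three Gaussian blobs of deposited energy
    rng = np.random.default_rng(5)
    centers = np.array([[0.0, 0.0], [2.0, 1.0], [-1.5, 1.5]])
    pts = np.concatenate([c + rng.normal(scale=0.8, size=(300, 2)) for c in centers])
    w = np.ones(len(pts))

    for n in (2, 3, 4):
        print(f"epsilon_{n} = {eccentricity(pts[:, 0], pts[:, 1], w, n):.3f}")
    ```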

  18. Experimental variability and data pre-processing as factors affecting the discrimination power of some chemometric approaches (PCA, CA and a new algorithm based on linear regression) applied to (+/-)ESI/MS and RPLC/UV data: Application on green tea extracts.

    PubMed

    Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A

    2016-08-01

    The influence of the experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis -PCA- and Cluster Analysis -CA-), as well as of a new algorithm based on linear regression, was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed phase liquid chromatography/UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extractions in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were directly applied to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assignment of any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by the linear regression analysis applied to pairs of very large experimental data series successfully retain information resulting from high-frequency instrumental acquisition rates, better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in the Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between compared data. The instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods. Mass spectra produced through ionization from the liquid state under atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origin provided an excellent data basis for multivariate analysis methods, equivalent to data resulting from chromatographic separations. The alternative evaluation of very large data series based on linear regression analysis produced information equivalent to the results obtained through application of PCA and CA. Copyright © 2016 Elsevier B.V. All rights reserved.
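    A minimal sketch of the pairwise linear-regression comparison described above: regressing one acquired profile against another yields a slope, intercept, and correlation coefficient whose joint variation characterizes (dis)similarity. The simulated chromatogram-like traces are placeholders, not the green tea data.

    ```python
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(6)
    t = np.linspace(0, 10, 2000)                       # high-frequency acquisition axis
    base = np.exp(-(t - 3) ** 2) + 0.6 * np.exp(-(t - 7) ** 2 / 0.5)

    profile_a = base + 0.01 * rng.normal(size=t.size)                    # replicate of same sample
    profile_b = 0.8 * base + 0.01 * rng.normal(size=t.size)              # same profile, lower amount
    profile_c = np.exp(-(t - 5) ** 2) + 0.01 * rng.normal(size=t.size)   # different sample

    for name, p in (("replicate", profile_a), ("diluted", profile_b), ("different", profile_c)):
        res = linregress(base, p)
        print(f"{name}: slope={res.slope:.2f} intercept={res.intercept:.3f} r={res.rvalue:.3f}")
    ```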

  19. Parkes full polarization spectra of OH masers - II. Galactic longitudes 240° to 350°

    NASA Astrophysics Data System (ADS)

    Caswell, J. L.; Green, J. A.; Phillips, C. J.

    2014-04-01

    Full polarization measurements of 1665 and 1667 MHz OH masers at 261 sites of massive star formation have been made with the Parkes radio telescope. Here, we present the resulting spectra for 157 southern sources, complementing our previously published 104 northerly sources. For most sites, these are the first measurements of linear polarization, with good spectral resolution and complete velocity coverage. Our spectra exhibit the well-known predominance of highly circularly polarized features, interpreted as σ components of Zeeman patterns. Focusing on the generally weaker and rarer linear polarization, we found three examples of likely full Zeeman triplets (a linearly polarized π component, straddled in velocity by σ components), adding to the solitary example previously reported. We also identify 40 examples of likely isolated π components, contradicting past beliefs that π components might be extremely rare. These were recognized at 20 sites where a feature with high linear polarization on one transition is accompanied on the other transition by a matching feature, at the same velocity and also with significant linear polarization. Large velocity ranges are rare, but we find eight exceeding 25 km s⁻¹, some of them indicating high-velocity blue-shifted outflows. Variability was investigated on time-scales of one year and over several decades. More than 20 sites (of 200) show high variability (intensity changes by factors of 4 or more) in some prominent features. Highly stable sites are extremely rare.

  20. Self-criticism interacts with the affective component of pain to predict depressive symptoms in female patients.

    PubMed

    Lerman, S F; Shahar, G; Rudich, Z

    2012-01-01

    This longitudinal study examined the role of the trait of self-criticism as a moderator of the relationship between the affective and sensory components of pain and depression. One hundred and sixty-three chronic pain patients treated at a specialty pain clinic completed self-report questionnaires at two time points assessing the affective and sensory components of pain, depression, and self-criticism. Hierarchical linear regression analysis revealed a significant 3-way interaction between self-criticism, affective pain, and gender, whereby women with high affective pain and high self-criticism demonstrated elevated levels of depression. Our findings are the first to show, within a broad, comprehensive model, that self-criticism is activated by the affective, but not the sensory, component of pain in leading to depressive symptoms, and they highlight the need to assess patients' personality as part of an effective treatment plan. © 2011 European Federation of International Association for the Study of Pain Chapters.
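    A minimal sketch of testing a three-way interaction (moderation) in a linear regression, the kind of analysis reported above, using statsmodels on synthetic data; the variable names, coding, and effect sizes are placeholders, not the clinical dataset.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 163
    df = pd.DataFrame({
        "affective_pain": rng.normal(size=n),
        "self_criticism": rng.normal(size=n),
        "female": rng.integers(0, 2, size=n),
    })
    # synthetic outcome containing a 3-way interaction, for illustration only
    df["depression"] = (0.3 * df.affective_pain + 0.2 * df.self_criticism
                        + 0.4 * df.affective_pain * df.self_criticism * df.female
                        + rng.normal(scale=0.5, size=n))

    # the '*' operator expands to all main effects and interactions up to the 3-way term
    model = smf.ols("depression ~ affective_pain * self_criticism * female", data=df).fit()
    print(model.summary().tables[1])
    ```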

  1. First impressions: gait cues drive reliable trait judgements.

    PubMed

    Thoresen, John C; Vuong, Quoc C; Atkinson, Anthony P

    2012-09-01

    Personality trait attribution can underpin important social decisions and yet requires little effort; even a brief exposure to a photograph can generate lasting impressions. Body movement is a channel readily available to observers and allows judgements to be made when facial and body appearances are less visible; e.g., from great distances. Across three studies, we assessed the reliability of trait judgements of point-light walkers and identified motion-related visual cues driving observers' judgements. The findings confirm that observers make reliable, albeit inaccurate, trait judgements, and these were linked to a small number of motion components derived from a Principal Component Analysis of the motion data. Parametric manipulation of the motion components linearly affected trait ratings, providing strong evidence that the visual cues captured by these components drive observers' trait judgements. Subsequent analyses suggest that reliability of trait ratings was driven by impressions of emotion, attractiveness and masculinity. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; et al.

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
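
    As an illustration of the PCA step described in this record, the sketch below (hypothetical data and function names, not the authors' pipeline) decomposes a matrix of baseline-subtracted pulse records into orthogonal components of largest variance and projects each pulse onto them; in the paper such projections are further combined into an energy estimate, which is not reproduced here.

```python
import numpy as np

def pca_pulse_components(pulses, n_components=4):
    """Split pulse records into orthogonal components of largest variance (PCA via SVD).

    pulses : (n_pulses, n_samples) array of baseline-subtracted TES pulse records.
    Returns the mean pulse, the component waveforms, the per-pulse scores and the
    fraction of variance captured by each component.
    """
    mean_pulse = pulses.mean(axis=0)
    centered = pulses - mean_pulse
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]        # orthogonal pulse-shape components
    scores = centered @ components.T      # coefficients per pulse (height, timing, shape, ...)
    explained = (s[:n_components] ** 2) / np.sum(s ** 2)
    return mean_pulse, components, scores, explained

# Toy example: exponential pulses whose amplitude varies from record to record.
t = np.linspace(0.0, 1.0, 512)
rng = np.random.default_rng(0)
amps = 1.0 + 0.05 * rng.standard_normal(200)
pulses = amps[:, None] * np.exp(-t / 0.2) + 0.01 * rng.standard_normal((200, 512))
_, comps, scores, frac = pca_pulse_components(pulses)
print(frac)   # the first component should capture most of the (pulse-height) variance
```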

  3. Linearizing feedforward/feedback attitude control

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Bach, Ralph E.

    1991-01-01

    An approach to attitude control theory is introduced in which a linear form is postulated for the closed-loop rotation error dynamics, and the exact control law required to realize it is then derived. The nonminimal (four-component) quaternion form is used to represent attitude because it is globally nonsingular, but the minimal (three-component) form is used for the attitude error because it has no nonlinear constraints to prevent the rotational error dynamics from being linearized; the definition of the attitude error is based on quaternion algebra. This approach produces an attitude control law that linearizes the closed-loop rotational error dynamics exactly, without any attitude singularities, even if the control errors become large.
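
    For orientation, the structure described above can be written schematically as follows; the symbols (commanded quaternion q_c, error quaternion q_e, gain matrices K_p and K_d) are generic illustrations rather than the paper's exact notation.

```latex
% Attitude error formed by quaternion algebra (nonminimal attitude, minimal error part)
q_e = q_c^{-1} \otimes q, \qquad q_e = \bigl(\eta_e,\ \boldsymbol{\epsilon}_e\bigr)

% Postulated linear closed-loop dynamics for the three-component error; the exact
% control law is then solved for so that these dynamics hold without linearization error
\ddot{\boldsymbol{\epsilon}}_e + K_d\,\dot{\boldsymbol{\epsilon}}_e + K_p\,\boldsymbol{\epsilon}_e = \mathbf{0}
```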

  4. Soft actuators and soft actuating devices

    DOEpatents

    Yang, Dian; Whitesides, George M.

    2017-10-17

    A soft buckling linear actuator is described, including: a plurality of substantially parallel bucklable, elastic structural components each having its longest dimension along a first axis; and a plurality of secondary structural components each disposed between and bridging two adjacent bucklable, elastic structural components; wherein every two adjacent bucklable, elastic structural components and the secondary structural components in-between define a layer comprising a plurality of cells each capable of being connected with a fluid inflation or deflation source; the secondary structural components from two adjacent layers are not aligned along a second axis perpendicular to the first axis; and the secondary structural components are configured not to buckle, the bucklable, elastic structural components are configured to buckle along the second axis to generate a linear force, upon the inflation or deflation of the cells. Methods of actuation using the same are also described.

  5. TU-H-BRA-02: The Physics of Magnetic Field Isolation in a Novel Compact Linear Accelerator Based MRI-Guided Radiation Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D; Mutic, S; Shvartsman, S

    Purpose: To develop a method for isolating the MRI magnetic field from field-sensitive linear accelerator components at distances close to isocenter. Methods: An MRI-guided radiation therapy system has been designed that integrates a linear accelerator with simultaneous MR imaging. In order to accomplish this, the magnetron, port circulator, radiofrequency waveguide, gun driver, and linear accelerator needed to be placed in locations with low magnetic fields. The system was also required to be compact, so moving these components far from the main magnetic field and isocenter was not an option. The magnetic-field-sensitive components (exclusive of the waveguide) were placed in coaxial steel sleeves that were electrically and mechanically isolated and whose thickness and placement were optimized using E&M modeling software. Six sets of sleeves were placed 60° apart, 85 cm from isocenter. The Faraday effect occurs when the direction of propagation is parallel to the magnetic RF field component, rotating the RF polarization and subsequently diminishing RF power. The Faraday effect was avoided by orienting the waveguides such that the magnetic RF field component was parallel to the static magnetic field. Results: The magnetic field within the shields was measured to be less than 40 Gauss, significantly below the amount needed for the magnetron and port circulator. Additional mu-metal was employed to reduce the magnetic field at the linear accelerator to less than 1 Gauss. The orientation of the RF waveguides allowed RF transport with minimal loss and reflection. Conclusion: One of the major challenges in designing a compact linear accelerator based MRI-guided radiation therapy system, that of creating low magnetic field environments for the magnetic-field-sensitive components, has been solved. The measured magnetic fields are sufficiently small to enable system integration. This work was supported by ViewRay, Inc.

  6. Analysis of radiation risk from alpha particle component of solar particle events

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Townsend, L. W.; Wilson, J. W.; Golightly, M. J.; Weyland, M.

    1994-01-01

    The solar particle events (SPE) will contain a primary alpha particle component, representing a possible increase in the potential risk to astronauts during an SPE over the often studied proton component. We discuss the physical interactions of alpha particles important in describing the transport of these particles through spacecraft and body shielding. Models of light ion reactions are presented and their effects on energy and linear energy transfer (LET) spectra in shielding discussed. We present predictions of particle spectra, dose, and dose equivalent in organs of interest for SPE spectra typical of those occurring in recent solar cycles. The large events of solar cycle 19 are found to have substantial increase in biological risk from alpha particles, including a large increase in secondary neutron production from alpha particle breakup.

  7. Hydrodynamic limit of the Yukawa one-component plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salin, Gwenaël

    This paper presents a detailed mathematical analysis of the dynamical correlation of density fluctuations of the Yukawa one-component plasma in the framework of linearized hydrodynamics. In particular, expressions for the hydrodynamic modes which hold both for the plasma and the neutral fluid are derived. This work constitutes an extension of the computation of the dynamical structure factor in the hydrodynamic limit done by Vieillefosse and Hansen [Phys. Rev. A 12, 1106 (1975)]. As a typical result of the Yukawa plasma, a coupling appears between thermal and mechanical effects in the damping of the sound modes, which does not exist in the classical one-component plasma. Theoretical and numerical results obtained by means of equilibrium molecular-dynamic simulations in the microcanonical ensemble are compared and discussed.

  8. Three-dimensional analysis of magnetometer array data

    NASA Technical Reports Server (NTRS)

    Richmond, A. D.; Baumjohann, W.

    1984-01-01

    A technique is developed for mapping magnetic variation fields in three dimensions using data from an array of magnetometers, based on the theory of optimal linear estimation. The technique is applied to data from the Scandinavian Magnetometer Array. Estimates of the spatial power spectra for the internal and external magnetic variations are derived, which in turn provide estimates of the spatial autocorrelation functions of the three magnetic variation components. Statistical errors involved in mapping the external and internal fields are quantified and displayed over the mapping region. Examples of field mapping and of separation into external and internal components are presented. A comparison between the three-dimensional field separation and a two-dimensional separation from a single chain of stations shows that significant differences can arise in the inferred internal component.

  9. Quasi-Linear Vacancy Dynamics Modeling and Circuit Analysis of the Bipolar Memristor

    PubMed Central

    Abraham, Isaac

    2014-01-01

    The quasi-linear transport equation is investigated for modeling the bipolar memory resistor. The solution accommodates vacancy and circuit level perspectives on memristance. For the first time in literature the component resistors that constitute the contemporary dual variable resistor circuit model are quantified using vacancy parameters and derived from a governing partial differential equation. The model describes known memristor dynamics even as it generates new insight about vacancy migration, bottlenecks to switching speed and elucidates subtle relationships between switching resistance range and device parameters. The model is shown to comply with Chua's generalized equations for the memristor. Independent experimental results are used throughout, to validate the insights obtained from the model. The paper concludes by implementing a memristor-capacitor filter and compares its performance to a reference resistor-capacitor filter to demonstrate that the model is usable for practical circuit analysis. PMID:25390634

  10. Quasi-linear vacancy dynamics modeling and circuit analysis of the bipolar memristor.

    PubMed

    Abraham, Isaac

    2014-01-01

    The quasi-linear transport equation is investigated for modeling the bipolar memory resistor. The solution accommodates vacancy and circuit level perspectives on memristance. For the first time in literature the component resistors that constitute the contemporary dual variable resistor circuit model are quantified using vacancy parameters and derived from a governing partial differential equation. The model describes known memristor dynamics even as it generates new insight about vacancy migration, bottlenecks to switching speed and elucidates subtle relationships between switching resistance range and device parameters. The model is shown to comply with Chua's generalized equations for the memristor. Independent experimental results are used throughout, to validate the insights obtained from the model. The paper concludes by implementing a memristor-capacitor filter and compares its performance to a reference resistor-capacitor filter to demonstrate that the model is usable for practical circuit analysis.

  11. UFVA, A Combined Linear and Nonlinear Factor Analysis Program Package for Chemical Data Evaluation.

    DTIC Science & Technology

    1980-11-01

    ...that one cluster consists of the monoterpenes and Isoprene; the second is of the sesquiterpenes. Compound 8 (Caryophyllene) should therefore belong to... two clusters very clearly (Fig. 6). The very similar fragmentation pattern of Isoprene and the monoterpenes is reflected by their close... 13 of another set of 13 terpene components. These are Isoprene, four monoterpenes (Myrcene, Menthol, Camphene, Umbellulone), four sesquiterpenes...

  12. Characterization of the lateral distribution of fluorescent lipid in binary-constituent lipid monolayers by principal component analysis.

    PubMed

    Sugár, István P; Zhai, Xiuhong; Boldyrev, Ivan A; Molotkovsky, Julian G; Brockman, Howard L; Brown, Rhoderick E

    2010-01-01

    Lipid lateral organization in binary-constituent monolayers consisting of fluorescent and nonfluorescent lipids has been investigated by acquiring multiple emission spectra during measurement of each force-area isotherm. The emission spectra reflect BODIPY-labeled lipid surface concentration and lateral mixing with different nonfluorescent lipid species. Using principal component analysis (PCA) each spectrum could be approximated as the linear combination of only two principal vectors. One point on a plane could be associated with each spectrum, where the coordinates of the point are the coefficients of the linear combination. Points belonging to the same lipid constituents and experimental conditions form a curve on the plane, where each point belongs to a different mole fraction. The location and shape of the curve reflects the lateral organization of the fluorescent lipid mixed with a specific nonfluorescent lipid. The method provides massive data compression that preserves and emphasizes key information pertaining to lipid distribution in different lipid monolayer phases. Collectively, the capacity of PCA for handling large spectral data sets, the nanoscale resolution afforded by the fluorescence signal, and the inherent versatility of monolayers for characterization of lipid lateral interactions enable significantly enhanced resolution of lipid lateral organizational changes induced by different lipid compositions.
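
    A minimal sketch of the two-vector approximation described above, under the assumption of a spectra matrix with one emission spectrum per row (illustrative names, not the authors' code): each spectrum is approximated as a linear combination of the first two principal vectors, and the pair of coefficients gives one point on the plane per spectrum.

```python
import numpy as np

def two_vector_coordinates(spectra):
    """Approximate each spectrum as c1*v1 + c2*v2 (plus the mean) and return (c1, c2) per spectrum.

    spectra : (n_spectra, n_wavelengths) array of emission spectra.
    """
    mean_spec = spectra.mean(axis=0)
    centered = spectra - mean_spec
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v1, v2 = vt[0], vt[1]                                      # the two principal vectors
    coords = np.column_stack([centered @ v1, centered @ v2])   # one (c1, c2) point per spectrum
    return v1, v2, coords
```

    Plotting the coordinate pairs for spectra that share lipid constituents and experimental conditions traces out the per-mole-fraction curves discussed in the abstract.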

  13. Acoustic-articulatory mapping in vowels by locally weighted regression

    PubMed Central

    McGowan, Richard S.; Berger, Michael A.

    2009-01-01

    A method for mapping between simultaneously measured articulatory and acoustic data is proposed. The method uses principal components analysis on the articulatory and acoustic variables, and mapping between the domains by locally weighted linear regression, or loess [Cleveland, W. S. (1979). J. Am. Stat. Assoc. 74, 829–836]. The latter method permits local variation in the slopes of the linear regression, assuming that the function being approximated is smooth. The methodology is applied to vowels of four speakers in the Wisconsin X-ray Microbeam Speech Production Database, with formant analysis. Results are examined in terms of (1) examples of forward (articulation-to-acoustics) mappings and inverse mappings, (2) distributions of local slopes and constants, (3) examples of correlations among slopes and constants, (4) root-mean-square error, and (5) sensitivity of formant frequencies to articulatory change. It is shown that the results are qualitatively correct and that loess performs better than global regression. The forward mappings show different root-mean-square error properties than the inverse mappings indicating that this method is better suited for the forward mappings than the inverse mappings, at least for the data chosen for the current study. Some preliminary results on sensitivity of the first two formant frequencies to the two most important articulatory principal components are presented. PMID:19813812
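
    The locally weighted linear (loess-style) fit can be sketched as below with tricube weights; this is a generic toy implementation under assumed variable names (articulatory PCA scores as predictors, a formant frequency as response), not the mapping code used in the study.

```python
import numpy as np

def loess_predict(X, y, x0, frac=0.3):
    """Locally weighted linear regression: fit a weighted linear model around the query point x0.

    X  : (n, d) predictor matrix (e.g. articulatory principal-component scores).
    y  : (n,) response (e.g. a formant frequency).
    x0 : (d,) query point.
    """
    d = np.linalg.norm(X - x0, axis=1)
    h = np.sort(d)[max(int(frac * len(d)) - 1, 1)]   # bandwidth: distance to the frac-quantile neighbour
    w = np.clip(1 - (d / h) ** 3, 0, None) ** 3      # tricube weights, zero outside the neighbourhood
    A = np.column_stack([np.ones(len(X)), X])        # local linear (affine) model
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return np.concatenate([[1.0], x0]) @ beta        # predicted response at x0
```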

  14. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.

  15. Wavelet regression model in forecasting crude oil price

    NASA Astrophysics Data System (ADS)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil price forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) with a multiple linear regression (MLR) model. The original time series was decomposed into sub-time series at different scales using wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), the Autoregressive Integrated Moving Average (ARIMA) model and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, the WMLR model performs better than the other forecasting techniques tested in this study.
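
    A minimal sketch of the WMLR idea under stated assumptions: PyWavelets (pywt) is assumed available for the DWT, each decomposition level is reconstructed as a sub-series of the original length, and the sub-series values at time t serve as regressors for the price at t+1. The lag structure, component selection and validation used in the study are not reproduced.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_subseries(series, wavelet="db4", level=3):
    """Split a series into sub-series (details and approximation) of the original length."""
    series = np.asarray(series, dtype=float)
    coeffs = pywt.wavedec(series, wavelet, level=level)
    subs = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subs.append(pywt.waverec(keep, wavelet)[: len(series)])
    return np.array(subs)                       # shape: (level + 1, n)

def wmlr_fit(series):
    """Ordinary least squares of x[t+1] on today's wavelet sub-series values."""
    series = np.asarray(series, dtype=float)
    subs = wavelet_subseries(series)
    X = np.column_stack([np.ones(len(series) - 1), subs[:, :-1].T])   # regressors at time t
    y = series[1:]                                                    # target at time t+1
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```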

  16. Solving a mixture of many random linear equations by tensor decomposition and alternating minimization.

    DOT National Transportation Integrated Search

    2016-09-01

    We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...

  17. Modulation by EEG features of BOLD responses to interictal epileptiform discharges

    PubMed Central

    LeVan, Pierre; Tyvaert, Louise; Gotman, Jean

    2013-01-01

    Introduction EEG-fMRI of interictal epileptiform discharges (IEDs) usually assumes a fixed hemodynamic response function (HRF). This study investigates HRF variability with respect to IED amplitude fluctuations using independent component analysis (ICA), with the goal of improving the specificity of EEG-fMRI analyses. Methods We selected EEG-fMRI data from 10 focal epilepsy patients with a good quality EEG. IED amplitudes were calculated in an average reference montage. The fMRI data were decomposed by ICA and a deconvolution method identified IED-related components by detecting time courses with a significant HRF time-locked to the IEDs (F-test, p<0.05). Individual HRF amplitudes were then calculated for each IED. Components with a significant HRF/IED amplitude correlation (Spearman test, p< 0.05) were compared to the presumed epileptogenic focus and to results of a general linear model (GLM) analysis. Results In 7 patients, at least one IED-related component was concordant with the focus, but many IED-related components were at distant locations. When considering only components with a significant HRF/IED amplitude correlation, distant components could be discarded, significantly increasing the relative proportion of activated voxels in the focus (p=0.02). In the 3 patients without concordant IED-related components, no HRF/IED amplitude correlations were detected inside the brain. Integrating IED-related amplitudes in the GLM significantly improved fMRI signal modeling in the epileptogenic focus in 4 patients (p< 0.05). Conclusion Activations in the epileptogenic focus appear to show significant correlations between HRF and IED amplitudes, unlike distant responses. These correlations could be integrated in the analysis to increase the specificity of EEG-fMRI studies in epilepsy. PMID:20026222

  18. Estimating the number of pure chemical components in a mixture by X-ray absorption spectroscopy.

    PubMed

    Manceau, Alain; Marcus, Matthew; Lenoir, Thomas

    2014-09-01

    Principal component analysis (PCA) is a multivariate data analysis approach commonly used in X-ray absorption spectroscopy to estimate the number of pure compounds in multicomponent mixtures. This approach seeks to describe a large number of multicomponent spectra as weighted sums of a smaller number of component spectra. These component spectra are in turn considered to be linear combinations of the spectra from the actual species present in the system from which the experimental spectra were taken. The dimension of the experimental dataset is given by the number of meaningful abstract components, as estimated by the cascade or variance of the eigenvalues (EVs), the factor indicator function (IND), or the F-test on reduced EVs. It is shown on synthetic and real spectral mixtures that the performance of the IND and F-test critically depends on the amount of noise in the data, and may result in considerable underestimation or overestimation of the number of components even for a signal-to-noise (s/n) ratio of the order of 80 (σ = 20) in a XANES dataset. For a given s/n ratio, the accuracy of the component recovery from a random mixture depends on the size of the dataset and number of components, which is not known in advance, and deteriorates for larger datasets because the analysis picks up more noise components. The scree plot of the EVs for the components yields one or two values close to the significant number of components, but the result can be ambiguous and its uncertainty is unknown. A new estimator, NSS-stat, which includes the experimental error to XANES data analysis, is introduced and tested. It is shown that NSS-stat produces superior results compared with the three traditional forms of PCA-based component-number estimation. A graphical user-friendly interface for the calculation of EVs, IND, F-test and NSS-stat from a XANES dataset has been developed under LabVIEW for Windows and is supplied in the supporting information. Its possible application to EXAFS data is discussed, and several XANES and EXAFS datasets are also included for download.
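
    As a pointer to the quantities discussed above, the sketch below computes the PCA eigenvalues of a data matrix together with Malinowski's real error RE(n) and indicator function IND(n); the definitions follow the standard textbook forms and are an assumption here, as is the hypothetical matrix D (rows = spectra, columns = energy points, with more rows than columns). The NSS-stat estimator introduced in the paper is not reproduced.

```python
import numpy as np

def pca_rank_indicators(D):
    """PCA eigenvalues plus Malinowski's RE(n) and IND(n) for a data matrix D (r x c, r >= c).

    The minimum of IND(n) is commonly read as the number of significant components;
    the scree of the eigenvalues gives the usual qualitative picture.
    """
    r, c = D.shape
    ev = np.linalg.svd(D, compute_uv=False) ** 2               # eigenvalues of D^T D
    n = np.arange(1, c)                                        # candidate component counts
    re = np.sqrt(np.array([ev[k:].sum() for k in n]) / (r * (c - n)))
    ind = re / (c - n) ** 2
    return ev, re, ind
```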

  19. Enhancing high-order harmonic generation by sculpting waveforms with chirp

    NASA Astrophysics Data System (ADS)

    Peng, Dian; Frolov, M. V.; Pi, Liang-Wen; Starace, Anthony F.

    2018-05-01

    We present a theoretical analysis showing how chirp can be used to sculpt two-color driving laser field waveforms in order to enhance high-order harmonic generation (HHG) and/or extend HHG cutoff energies. Specifically, we consider driving laser field waveforms composed of two ultrashort pulses having different carrier frequencies in each of which a linear chirp is introduced. Two pairs of carrier frequencies of the component pulses are considered: (ω, 2ω) and (ω, 3ω). Our results show how changing the signs of the chirps in each of the two component pulses leads to drastic changes in the HHG spectra. Our theoretical analysis is based on numerical solutions of the time-dependent Schrödinger equation and on a semiclassical analytical approach that affords a clear physical interpretation of how our optimized waveforms lead to enhanced HHG spectra.

  20. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    PubMed

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  1. Individual Component Map of Rotatory Strength (ICM-RS) and Rotatory Strength Density (RSD) plots as analysis tools of circular dichroism spectra of complex systems.

    PubMed

    Chang, Le; Baseggio, Oscar; Sementa, Luca; Cheng, Daojian; Fronzoni, Giovanna; Toffoli, Daniele; Aprà, Edoardo; Stener, Mauro; Fortunelli, Alessandro

    2018-06-13

    We introduce Individual Component Maps of Rotatory Strength (ICM-RS) and Rotatory Strength Density (RSD) plots as analysis tools of chiro-optical linear response spectra deriving from time-dependent density functional theory (TDDFT) simulations. ICM-RS and RSD allow one to visualize the origin of chiro-optical response in momentum or real space, including signed contributions and therefore highlighting cancellation terms that are ubiquitous in chirality phenomena, and should be especially useful in analyzing the spectra of complex systems. As test cases, we use ICM-RS and RSD to analyze circular dichroism spectra of selected (Ag-Au)30(SR)18 monolayer-protected metal nanoclusters, showing the potential of the proposed tools to derive insight and understanding, and eventually rational design, in chiro-optical studies of complex systems.

  2. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential for applications in the economics and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approach in the literature, which only uses centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  3. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, an optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed in order to minimize a bound on the average probability of a symbol vector error. Based on the average mutual information (AMI) maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver achieves significant performance advantages over the conventional spatial multiplexing MIMO system.
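
    The ideal SVD-based precoding referred to above reduces the channel to parallel eigen-channels; the toy sketch below (hypothetical 4x4 channel, no coding, interleaving or noise) only illustrates that step, not the proposed interleaver or selection criteria.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# SVD of the MIMO channel: H = U diag(s) Vh
U, s, Vh = np.linalg.svd(H)

x = (rng.integers(0, 2, nt) * 2 - 1).astype(complex)   # toy BPSK symbol vector
tx = Vh.conj().T @ x                                   # precode with V
rx = H @ tx                                            # propagate through the channel (noise omitted)
y = U.conj().T @ rx                                    # receive combining with U^H
print(np.allclose(y, s * x))                           # parallel eigen-channels: y_i = s_i * x_i
```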

  4. The Integration of Teacher's Pedagogical Content Knowledge Components in Teaching Linear Equation

    ERIC Educational Resources Information Center

    Yusof, Yusminah Mohd.; Effandi, Zakaria

    2015-01-01

    This qualitative research aimed to explore the integration of the components of pedagogical content knowledge (PCK) in teaching Linear Equation with one unknown. For the purpose of the study, a single local case study with multiple participants was used. The selection of the participants was made based on various criteria: having more than 5 years…

  5. A Modified Approach to Team-Based Learning in Linear Algebra Courses

    ERIC Educational Resources Information Center

    Nanes, Kalman M.

    2014-01-01

    This paper documents the author's adaptation of team-based learning (TBL), an active learning pedagogy developed by Larry Michaelsen and others, in the linear algebra classroom. The paper discusses the standard components of TBL and the necessary changes to those components for the needs of the course in question. There is also an empirically…

  6. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.

    2012-01-01

    The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive with classical LDA in terms of both classification accuracy and computational cost/time. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. From a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
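
    For concreteness, the baseline "classical LDA as feature projection + linear discriminant classifier" pipeline can be sketched with scikit-learn as below; the EMG data, channel/feature counts and motion classes are hypothetical placeholders, and the extended LDA variants compared in the paper are not implemented here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: rows are EMG feature vectors, labels are hand-motion classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 28))            # e.g. 4 channels x 7 features per window
y = rng.integers(0, 6, size=600)              # 6 motion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Classical LDA as a feature-projection step: at most (n_classes - 1) output dimensions.
proj = LinearDiscriminantAnalysis(n_components=5).fit(X_tr, y_tr)
Z_tr, Z_te = proj.transform(X_tr), proj.transform(X_te)

# Linear discriminant classifier on the projected features.
clf = LinearDiscriminantAnalysis().fit(Z_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(Z_te)))
```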

  7. Linear and non-linear Modified Gravity forecasts with future surveys

    NASA Astrophysics Data System (ADS)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.
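
    As background, Zero-phase Component Analysis amounts to the symmetric (zero-phase) whitening transform sketched below; this is the generic construction applied to a parameter covariance, not the authors' forecasting pipeline, and the variable names are illustrative.

```python
import numpy as np

def zca_whiten_transform(cov, eps=1e-12):
    """Zero-phase Component Analysis: symmetric whitening W with W @ cov @ W.T = I.

    cov : symmetric positive (semi)definite covariance of the binned parameters.
    """
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return W

# Applying W to the parameter vector yields decorrelated, unit-variance combinations of
# the original (binned) amplitudes; the rows of W define those combinations.
```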

  8. Analysis system for characterisation of simple, low-cost microfluidic components

    NASA Astrophysics Data System (ADS)

    Smith, Suzanne; Naidoo, Thegaran; Nxumalo, Zandile; Land, Kevin; Davies, Emlyn; Fourie, Louis; Marais, Philip; Roux, Pieter

    2014-06-01

    There is an inherent trade-off between cost and operational integrity of microfluidic components, especially when intended for use in point-of-care devices. We present an analysis system developed to characterise microfluidic components for performing blood cell counting, enabling the balance between function and cost to be established quantitatively. Microfluidic components for sample and reagent introduction, mixing and dispensing of fluids were investigated. A simple inlet port plugging mechanism is used to introduce and dispense a sample of blood, while a reagent is released into the microfluidic system through compression and bursting of a blister pack. Mixing and dispensing of the sample and reagent are facilitated via air actuation. For these microfluidic components to be implemented successfully, a number of aspects need to be characterised for development of an integrated point-of-care device design. The functional components were measured using a microfluidic component analysis system established in-house. Experiments were carried out to determine: 1. the force and speed requirements for sample inlet port plugging and blister pack compression and release using two linear actuators and load cells for plugging the inlet port, compressing the blister pack, and subsequently measuring the resulting forces exerted, 2. the accuracy and repeatability of total volumes of sample and reagent dispensed, and 3. the degree of mixing and dispensing uniformity of the sample and reagent for cell counting analysis. A programmable syringe pump was used for air actuation to facilitate mixing and dispensing of the sample and reagent. Two high speed cameras formed part of the analysis system and allowed for visualisation of the fluidic operations within the microfluidic device. Additional quantitative measures such as microscopy were also used to assess mixing and dilution accuracy, as well as uniformity of fluid dispensing - all of which are important requirements towards the successful implementation of a blood cell counting system.

  9. Unveiling the molecular bipolar outflow of the peculiar red supergiant VY Canis Majoris

    NASA Astrophysics Data System (ADS)

    Shinnaga, Hiroko; Claussen, Mark J.; Lim, Jeremy; Dinh-van-Trung; Tsuboi, Masato

    2003-04-01

    We carried out polarimetric spectral-line imaging of the molecular outflow of the peculiar red supergiant VY Canis Majoris in SiO J=1-0 line in the ground vibrational state, which contains highly linearly-polarized velocity components, using the Very Large Array. We succeeded in unveiling the highly linearly polarized bipolar outflow for the first time at subarcsecond spatial resolution. The results clearly show that the direction of linear polarization of the brightest maser components is parallel to the outflow axis. The results strongly suggest that the linear polarization of the SiO maser is closely related to the outflow phenomena of the star. Furthermore, the results indicate that the linear polarization observed in the optical and infrared also occur due to the outflow phenomena.

  10. Repair-dependent cell radiation survival and transformation: an integrated theory.

    PubMed

    Sutherland, John C

    2014-09-07

    The repair-dependent model of cell radiation survival is extended to include radiation-induced transformations. The probability of transformation is presumed to scale with the number of potentially lethal damages that are repaired in a surviving cell or the interactions of such damages. The theory predicts that at doses corresponding to high survival, the transformation frequency is the sum of simple polynomial functions of dose (linear, quadratic, etc.), essentially as described in widely used linear-quadratic expressions. At high doses, corresponding to low survival, the ratio of transformed to surviving cells asymptotically approaches an upper limit. The low-dose fundamental and high-dose plateau domains are separated by a downwardly concave transition region. Published transformation data for mammalian cells show the high-dose plateaus predicted by the repair-dependent model for both ultraviolet and ionizing radiation. For the neoplastic transformation experiments that were analyzed, the data can be fit with only the repair-dependent quadratic function. At low doses, the transformation frequency is strictly quadratic, but becomes sigmoidal over a wider range of doses. Inclusion of data from the transition region in a traditional linear-quadratic analysis of neoplastic transformation frequency data can exaggerate the magnitude of, or create the appearance of, a linear component. Quantitative analysis of survival and transformation data shows good agreement for ultraviolet radiation; the shapes of the transformation components can be predicted from survival data. For ionizing radiations, both neutrons and x-rays, survival data overestimate the transforming ability for low to moderate doses. The presumed cause of this difference is that, unlike UV photons, a single x-ray or neutron may generate more than one lethal damage in a cell, so the distribution of such damages in the population is not accurately described by Poisson statistics. However, the complete sigmoidal dose-response data for neoplastic transformations can be fit using the repair-dependent functions with all parameters determined only from transformation frequency data.
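
    For reference, the "widely used linear-quadratic expressions" mentioned above take the generic form below; the plateau limit is written schematically and the symbols are illustrative, not the paper's notation.

```latex
% Low-dose (high-survival) regime: transformation frequency as a sum of polynomial terms in dose D
T(D) \approx \alpha D + \beta D^{2}

% High-dose (low-survival) regime: the transformed-to-surviving ratio approaches an upper limit
\lim_{D \to \infty} \frac{N_{\mathrm{transformed}}(D)}{N_{\mathrm{surviving}}(D)} = T_{\max}
```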

  11. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    PubMed

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) across different window sizes. However, most real systems are nonlinear, and the linear PCA method cannot adequately handle this non-linearity. Thus, in this paper, we first apply nonlinear PCA to obtain accurate principal components of a data set and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weights), which can further improve fault detection performance by reducing the FAR via an exponentially weighted moving average (EWMA). The developed detection method, which is called EWMA-GLRT, provides improved properties, such as smaller missed detection rates, smaller FARs and a smaller average run length. The idea behind EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimate of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and utilized in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind the KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages brought forward by the proposed EWMA-GLRT fault detection chart with the KPCA model. Thus, it is used to enhance fault detection in the Cad System in the E. coli model through monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
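
    The exponential weighting of residuals referred to above is the standard EWMA recursion, written here generically (λ the smoothing constant, r_t the residual at time t); this is not the paper's full KPCA-based GLRT statistic.

```latex
z_{t} = \lambda\, r_{t} + (1-\lambda)\, z_{t-1}
      = \lambda \sum_{i=0}^{t-1} (1-\lambda)^{i}\, r_{t-i} + (1-\lambda)^{t} z_{0},
\qquad 0 < \lambda \le 1 .
```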

  12. Characteristics of seasonal variation and solar activity dependence of the geomagnetic solar quiet daily variation

    NASA Astrophysics Data System (ADS)

    Shinbori, A.; Koyama, Y.; Nose, M.; Hori, T.

    2017-12-01

    Characteristics of seasonal variation and solar activity dependence of the X- and Y-components of the geomagnetic solar quiet (Sq) daily variation at Memanbetsu in mid-latitudes and Guam near the equator have been investigated using long-term geomagnetic field data with 1-h time resolution from 1957 to 2016. We defined a quiet day as one for which the maximum value of the Kp index is less than 3, and used the monthly average of the adjusted daily F10.7 corresponding to geomagnetically quiet days. For identification of the monthly mean Sq variation in the X and Y components (Sq-X and Sq-Y), we first determined the baseline of the X and Y components from the average value from 22 to 2 h (LT: local time) for each quiet day. Next, we calculated a deviation from the baseline of the X- and Y-components of the geomagnetic field for each quiet day, and computed the monthly mean value of the deviation for each local time. As a result, Sq-X and Sq-Y show a clear seasonal variation and solar activity dependence. The amplitude of seasonal variation increases significantly during periods of high solar activity, and is proportional to the solar F10.7 index. The pattern of the seasonal variation is quite different between Sq-X and Sq-Y. The result of the correlation analysis between the solar F10.7 index and Sq-X and Sq-Y shows an almost linear relationship, but the slope and intercept of the linear fitted line vary as functions of local time and month. This implies that the sensitivity of Sq-X and Sq-Y to solar activity is different for different local times and seasons. The local time dependence of the offset value of Sq-Y at Guam and its seasonal variation suggest a magnetic field produced by inter-hemispheric field-aligned currents (FACs). From the sign of the offset value of Sq-Y, it is inferred that the inter-hemispheric FACs flow from the summer to winter hemispheres in the dawn and dusk sectors and from the winter to summer hemispheres in the pre-noon to afternoon sectors. From the slope of the linear fitted line, we observe a weak solar activity dependence of the inter-hemispheric FACs, which shows that the intensity of inter-hemispheric FACs has positive and negative correlations in the morning-noon and afternoon sectors, respectively.
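
    The baseline-and-deviation step described above can be sketched as follows, assuming an array of hourly means for a single quiet day indexed by local time (names and layout are illustrative, not the authors' processing code).

```python
import numpy as np

def sq_deviation(x_hourly):
    """Deviation of hourly geomagnetic values from the night-time (22-02 LT) baseline.

    x_hourly : (24,) array of hourly means for one geomagnetically quiet day,
               indexed by local time 0..23.
    """
    night_hours = [22, 23, 0, 1, 2]                  # 22 LT through 02 LT
    baseline = np.mean(x_hourly[night_hours])
    return x_hourly - baseline                       # Sq variation for this day

# Monthly mean Sq: average the daily deviations of all quiet days, hour by hour, e.g.
# quiet_days is an (n_days, 24) array ->
# monthly_sq = np.mean([sq_deviation(day) for day in quiet_days], axis=0)
```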

  13. Fast and sensitive high performance liquid chromatography analysis of cosmetic creams for hydroquinone, phenol and six preservatives.

    PubMed

    Gao, Wenhui; Legido-Quigley, Cristina

    2011-07-15

    A fast and sensitive HPLC method for analysis of cosmetic creams for hydroquinone, phenol and six preservatives has been developed. The influence of sample preparation conditions and the composition of the mobile phase and elution mode were investigated to optimize the separation of the eight studied components. Final conditions were extraction of the cosmetic creams with 60% methanol and 40% water (v/v). A C18 column (100 mm × 2.1 mm) was used as the separation column and the mobile phase consisted of methanol and 0.05 mol/L ammonium formate in water (pH=3.0) with gradient elution. The results showed that complete separation of the eight studied components was achieved within 10 min; the linear ranges were 1.0-200 μg/mL for phenol, 0.1-150 μg/mL for sorbic acid, 2.0-200 μg/mL for benzoic acid, and 0.5-200 μg/mL for hydroquinone, methyl paraben, ethyl paraben, propyl paraben and butyl paraben, and good linear correlation coefficients (≥0.9997) were obtained; the detection limit was in the range of 0.05-1.0 μg/mL, the average recovery was between 86.5% and 116.3%, and the relative standard deviation (RSD) was less than 5.0% (n=6). The method is easy, fast and sensitive, and it can be employed to analyze component residues in cosmetic creams, especially in a quality control setting. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Liquid chromatography tandem mass spectrometry determination of chemical markers and principal component analysis of Vitex agnus-castus L. fruits (Verbenaceae) and derived food supplements.

    PubMed

    Mari, Angela; Montoro, Paola; Pizza, Cosimo; Piacente, Sonia

    2012-11-01

    A validated analytical method for the quantitative determination of seven chemical markers occurring in a hydroalcoholic extract of Vitex agnus-castus fruits by liquid chromatography electrospray triple quadrupole tandem mass spectrometry (LC/ESI/(QqQ)MSMS) is reported. To carry out a comparative study, five commercial food supplements corresponding to hydroalcoholic extracts of V. agnus-castus fruits were analysed under the same chromatographic conditions as the crude extract. Principal component analysis (PCA), based only on the variation of the amount of the seven chemical markers, was applied in order to find similarities between the hydroalcoholic extract and the food supplements. A second PCA was carried out considering the whole spectroscopic data set derived from liquid chromatography electrospray linear ion trap mass spectrometry (LC/ESI/(LIT)MS) analysis. High similarity between the two PCA results was observed, showing that either approach could be selected for future applications in the comparative analysis of food supplements and in quality control procedures. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. A pilot evaluation of a computer-based psychometric test battery designed to detect impairment in patients with cirrhosis

    PubMed Central

    Cook, Nicola A; Kim, Jin Un; Pasha, Yasmin; Crossey, Mary ME; Schembri, Adrian J; Harel, Brian T; Kimhofer, Torben; Taylor-Robinson, Simon D

    2017-01-01

    Background Psychometric testing is used to identify patients with cirrhosis who have developed hepatic encephalopathy (HE). Most batteries consist of a series of paper-and-pencil tests, which are cumbersome for most clinicians. A modern, easy-to-use, computer-based battery would be a helpful clinical tool, given that in its minimal form, HE has an impact on both patients’ quality of life and the ability to drive and operate machinery (with societal consequences). Aim We compared the Cogstate™ computer battery testing with the Psychometric Hepatic Encephalopathy Score (PHES) tests, with a view to simplify the diagnosis. Methods This was a prospective study of 27 patients with histologically proven cirrhosis. An analysis of psychometric testing was performed using accuracy of task performance and speed of completion as primary variables to create a correlation matrix. A stepwise linear regression analysis was performed with backward elimination, using analysis of variance. Results Strong correlations were found between the international shopping list, international shopping list delayed recall of Cogstate and the PHES digit symbol test. The Shopping List Tasks were the only tasks that consistently had P values of <0.05 in the linear regression analysis. Conclusion Subtests of the Cogstate battery correlated very strongly with the digit symbol component of PHES in discriminating severity of HE. These findings would indicate that components of the current PHES battery with the international shopping list tasks of Cogstate would be discriminant and have the potential to be used easily in clinical practice. PMID:28919805

  16. ISAC: A tool for aeroservoelastic modeling and analysis

    NASA Technical Reports Server (NTRS)

    Adams, William M., Jr.; Hoadley, Sherwood Tiffany

    1993-01-01

    The capabilities of the Interaction of Structures, Aerodynamics, and Controls (ISAC) system of program modules is discussed. The major modeling, analysis, and data management components of ISAC are identified. Equations of motion are displayed for a Laplace-domain representation of the unsteady aerodynamic forces. Options for approximating a frequency-domain representation of unsteady aerodynamic forces with rational functions of the Laplace variable are shown. Linear time invariant state-space equations of motion that result are discussed. Model generation and analyses of stability and dynamic response characteristics are shown for an aeroelastic vehicle which illustrates some of the capabilities of ISAC as a modeling and analysis tool for aeroelastic applications.

  17. Blade loss transient dynamic analysis of turbomachinery

    NASA Technical Reports Server (NTRS)

    Stallone, M. J.; Gallardo, V.; Storace, A. F.; Bach, L. J.; Black, G.; Gaffney, E. F.

    1982-01-01

    This paper reports on work completed to develop an analytical method for predicting the transient non-linear response of a complete aircraft engine system due to the loss of a fan blade, and to validate the analysis by comparing the results against actual blade loss test data. The solution, which is based on the component element method, accounts for rotor-to-casing rubs, high damping and rapid deceleration rates associated with the blade loss event. A comparison of test results and predicted response shows good agreement except for an initial overshoot spike not observed in the test. The method is effective for analysis of large systems.

  18. New insights into soil temperature time series modeling: linear or nonlinear?

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram

    2018-03-01

    Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with the two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with individual and hybrid nonlinear models in predicting DST time series indicates the lowest Akaike Information Criterion (AIC) index value, which considers model simplicity and accuracy simultaneously at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
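
    For reference, the two linear pre-processing schemes that the proposed spectral/stochastic approach is compared against can be sketched as follows for a daily series (period 365; generic code under assumed variable names, not the authors' implementation).

```python
import numpy as np

def seasonal_difference(x, period=365):
    """Seasonal differencing: subtract the value one seasonal period earlier."""
    x = np.asarray(x, dtype=float)
    return x[period:] - x[:-period]

def seasonal_standardize(x, period=365):
    """Seasonal standardization: remove the day-of-year mean and scale by the day-of-year std."""
    x = np.asarray(x, dtype=float)
    doy = np.arange(len(x)) % period                  # assumes a multi-year daily series
    mu = np.array([x[doy == d].mean() for d in range(period)])
    sd = np.array([x[doy == d].std() for d in range(period)])
    sd[sd == 0] = 1.0                                 # guard against days observed only once
    return (x - mu[doy]) / sd[doy]
```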

  19. Retrieval of Aerosol Microphysical Properties Based on the Optimal Estimation Method: Information Content Analysis for Satellite Polarimetric Remote Sensing Measurements

    NASA Astrophysics Data System (ADS)

    Hou, W. Z.; Li, Z. Q.; Zheng, F. X.; Qie, L. L.

    2018-04-01

    This paper evaluates the information content for the retrieval of key aerosol microphysical and surface properties for multispectral single-viewing satellite polarimetric measurements centred at 410, 443, 555, 670, 865, 1610 and 2250 nm over bright land. To conduct the information content analysis, the synthetic data are simulated by the Unified Linearized Vector Radiative Transfer Model (UNLVTM) with intensity and polarization together over a bare soil surface for various scenarios. Following optimal estimation theory, a principal component analysis method is employed to reconstruct the multispectral surface reflectance from 410 nm to 2250 nm, and then integrated with a linear one-parametric BPDF model to represent the contribution of polarized surface reflectance, and thus further decouple the surface and atmosphere contributions in the TOA measurements. Focusing on two different aerosol models with the aerosol optical depth equal to 0.8 at 550 nm, the total DFS and the DFS component of each retrieved aerosol and surface parameter are analysed. The DFS results show that the key aerosol microphysical properties, such as the fine- and coarse-mode columnar volume concentration, the effective radius and the real part of the complex refractive index at 550 nm, could be well retrieved simultaneously with the surface parameters over the bare soil surface type. The findings of this study can provide guidance for inversion algorithm development over bright land surfaces by making full use of single-viewing satellite polarimetric measurements.

  20. Development of a simultaneous multiple solid-phase microextraction-single shot-gas chromatography/mass spectrometry method and application to aroma profile analysis of commercial coffee.

    PubMed

    Lee, Changgook; Lee, Younghoon; Lee, Jae-Gon; Buglass, Alan J

    2013-06-21

    A simultaneous multiple solid-phase microextraction-single shot-gas chromatography mass spectrometry (smSPME-ss-GC/MS) method has been developed for headspace analysis. Up to four fibers (50/30 μm DVB/CAR/PDMS) were used simultaneously for the extraction of aroma components from the headspace of a single sample chamber in order to increase sensitivity of aroma extraction. To avoid peak broadening and to maximize resolution, a simple cryofocusing technique was adopted during sequential thermal desorption of multiple SPME fibers prior to a 'single shot' chromatographic run. The method was developed and validated on a model flavor mixture, containing 81 known pure components. With the conditions of 10 min of incubation and 30 min of extraction at 50 °C, single, dual, triple and quadruple SPME extractions were compared. The increase in total peak area with increase in the number of fibers showed good linearity (R(2)=0.9917) and the mean precision was 12.0% (RSD) for the total peak sum, with quadruple simultaneous SPME extraction. Using a real sample such as commercial coffee granules, aroma profile analysis was conducted using single, dual, triple and quadruple SPME fibers. The increase in total peak intensity again showed good linearity with increase in the number of SPME fibers used (R(2)=0.9992) and the precision of quadruple SPME extraction was 9.9% (RSD) for the total peak sum. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Impact of Dental Disorders and its Influence on Self Esteem Levels among Adolescents.

    PubMed

    Kaur, Puneet; Singh, Simarpreet; Mathur, Anmol; Makkar, Diljot Kaur; Aggarwal, Vikram Pal; Batra, Manu; Sharma, Anshika; Goyal, Nikita

    2017-04-01

    Self esteem is a psychological concept; therefore, even common dental disorders like dental trauma, tooth loss and untreated carious lesions may affect self esteem and thus influence quality of life. This study aims to assess the impact of dental disorders on the self esteem levels of adolescents. The present cross-sectional study was conducted among adolescents aged 10 to 17 years. In order to obtain a representative sample, a multistage sampling technique was used and the sample was selected based on Probability Proportional to Enrolment size (PPE). Oral health assessment was carried out using a WHO type III examination and self esteem was estimated using the Rosenberg Self Esteem Scale (RSES) score. The descriptive and inferential analysis of the data was done using IBM SPSS software. Logistic and linear regression analyses were executed to test the individual association of different independent clinical variables with self esteem. A total sample of 1140 adolescents with a mean age of 14.95 ±2.08 years and a mean RSES score of 27.09 ±3.12 was considered. Stepwise multiple linear regression analysis was applied, and the best predictors of RSES in descending order were the Dental Health Component (DHC), the Aesthetic Component (AC), dental decay {(aesthetic zone), (masticatory zone)}, tooth loss {(aesthetic zone), (masticatory zone)} and anterior tooth fracture. It was found that various dental disorders such as malocclusion, anterior traumatic tooth injury, tooth loss and untreated decay have a profound impact on the aesthetics and psychosocial behaviour of adolescents, thus affecting their self esteem.

  2. Characterization of CDOM of river waters in China using fluorescence excitation-emission matrix and regional integration techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Song, Kaishan; Shang, Yingxin; Shao, Tiantian; Wen, Zhidan; Lv, Lili

    2017-08-01

    The spatial characteristics of fluorescent dissolved organic matter (FDOM) components in river waters in China were first examined by excitation-emission matrix spectra and fluorescence regional integration (FRI) with data collected during September to November between 2013 and 2015. One tyrosine-like (R1), one tryptophan-like (R2), one fulvic-like (R3), one microbial protein-like (R4), and one humic-like (R5) component were identified by the FRI method. Principal component analysis (PCA) was conducted to assess variations in the five FDOM components (FRi, i = 1, 2, 3, 4, and 5) and the humification index for all 194 river water samples. The average fluorescence intensities of the five fluorescent components and the total fluorescence intensity FSUM varied spatially among the seven major river basins (Songhua, Liao, Hai, Yellow and Huai, Yangtze, Pearl, and Inflow Rivers) in China. When all the river water samples were pooled together, the fulvic-like FR3 and the humic-like FR5 showed a strong positive linear relationship (R2 = 0.90, n = 194), indicating that the two allochthonous FDOM components R3 and R5 may originate from similar sources. There was a moderately strong positive correlation between the tryptophan-like FR2 and the microbial protein-like FR4 (R2 = 0.71, n = 194), suggesting that parts of the two autochthonous FDOM components R2 and R4 are likely from common sources. However, the total allochthonous substances FR(3+5) and the total autochthonous substances FR(1+2+4) exhibited a weak correlation (R2 = 0.40, n = 194). Significant positive linear relationships between FR3 (R2 = 0.69, n = 194), FR5 (R2 = 0.79, n = 194), and the chromophoric DOM (CDOM) absorption coefficient a(254) were observed, which demonstrates that the CDOM absorption was dominated by the allochthonous FDOM components R3 and R5.

  3. Screening and analysis of the multiple absorbed bioactive components and metabolites in rat plasma after oral administration of Jitai tablets by high-performance liquid chromatography/diode-array detection coupled with electrospray ionization tandem mass spectrometry.

    PubMed

    Wang, Shu-Ping; Liu, Lei; Wang, Ling-Ling; Jiang, Peng; Zhang, Ji-Quan; Zhang, Wei-Dong; Liu, Run-Hui

    2010-06-15

    Based on the serum pharmacochemistry technique and high-performance liquid chromatography/diode-array detection (HPLC/DAD) coupled with electrospray tandem mass spectrometry (HPLC/ESI-MS/MS), a method for screening and analysis of the multiple absorbed bioactive components and metabolites of Jitai tablets (JTT) in orally dosed rat plasma was developed. Plasma was treated by methanol precipitation prior to liquid chromatography, and the separation was carried out on a Symmetry C(18) column with a linear gradient (0.1% formic acid/water/acetonitrile). Mass spectra were acquired in both negative and positive ion modes. As a result, 26 bioactive components originating from JTT and 5 metabolites were tentatively identified in orally dosed rat plasma by comparing their retention times and MS spectra with those of authentic standards and literature data. It is concluded that an effective and reliable analytical method was set up for screening the bioactive components of Chinese herbal medicine, which provides a meaningful basis for further pharmacology and active mechanism research on JTT. Copyright (c) 2010 John Wiley & Sons, Ltd.

  4. Decomposing the Apoptosis Pathway Into Biologically Interpretable Principal Components

    PubMed Central

    Wang, Min; Kornblau, Steven M; Coombes, Kevin R

    2018-01-01

    Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automates a graphical Bayesian method. Using simulations, we compared the methods. Our newly automated procedure is competitive with the best methods when considering both accuracy and speed and is the most accurate when the number of objects is small compared with the number of attributes. We applied the method to a proteomics data set from patients with acute myeloid leukemia. Proteins in the apoptosis pathway could be explained using 6 PCs. By clustering the proteins in PC space, we were able to replace the PCs by 6 “biological components,” 3 of which could be immediately interpreted from the current literature. We expect this approach combining PCA with clustering to be widely applicable. PMID:29881252
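
    PCDimension itself is an R package; as a language-neutral illustration of the kind of rule it automates, the hedged Python sketch below applies one common criterion for the number of significant PCs, the broken-stick rule (this is not the graphical Bayesian method the abstract describes).

    ```python
    import numpy as np

    def broken_stick_components(X):
        """Return the number of leading PCs whose explained-variance fraction exceeds
        the broken-stick expectation (one simple, non-authoritative rule)."""
        Xc = X - X.mean(axis=0)
        eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2
        frac = eigvals / eigvals.sum()
        p = len(eigvals)
        expected = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p for k in range(1, p + 1)])
        # Count leading components that beat the broken-stick expectation.
        n = 0
        for observed, threshold in zip(frac, expected):
            if observed > threshold:
                n += 1
            else:
                break
        return n

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 10))          # hypothetical data matrix (objects x attributes)
    print(broken_stick_components(X))
    ```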

  5. Strain Transient Detection Techniques: A Comparison of Source Parameter Inversions of Signals Isolated through Principal Component Analysis (PCA), Non-Linear PCA, and Rotated PCA

    NASA Astrophysics Data System (ADS)

    Lipovsky, B.; Funning, G. J.

    2009-12-01

    We compare several techniques for the analysis of geodetic time series with the ultimate aim of characterizing the physical processes which are represented therein. We compare three methods for the analysis of these data: Principal Component Analysis (PCA), Non-Linear PCA (NLPCA), and Rotated PCA (RPCA). We evaluate each method by its ability to isolate signals which may be any combination of low amplitude (near noise level), temporally transient, unaccompanied by seismic emissions, and small scale with respect to the spatial domain. PCA is a powerful tool for extracting structure from large datasets which is traditionally realized through either the solution of an eigenvalue problem or through iterative methods. PCA is a transformation of the coordinate system of our data such that the new "principal" data axes retain maximal variance and minimal reconstruction error (Pearson, 1901; Hotelling, 1933). RPCA is achieved by an orthogonal transformation of the principal axes determined in PCA. In the analysis of meteorological data sets, RPCA has been seen to overcome domain shape dependencies, correct for sampling errors, and to determine principal axes which more closely represent physical processes (e.g., Richman, 1986). NLPCA generalizes PCA such that principal axes are replaced by principal curves (e.g., Hsieh 2004). We achieve NLPCA through an auto-associative feed-forward neural network (Scholz, 2005). We show the geophysical relevance of these techniques by application of each to a synthetic data set. Results are compared by inverting principal axes to determine deformation source parameters. Temporal variability in source parameters, estimated by each method, is also compared.

  6. Dynamic analysis of Space Shuttle/RMS configuration using continuum approach

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Taylor, Lawrence W., Jr.

    1994-01-01

    The initial assembly of Space Station Freedom involves the Space Shuttle, its Remote Manipulator System (RMS) and the evolving Space Station Freedom. The dynamics of this coupled system involves both the structural and the control system dynamics of each of these components. The modeling and analysis of such an assembly is made even more formidable by kinematic and joint nonlinearities. The current practice for modeling such flexible structures is to use finite element modeling, in which the mass and interior dynamics are ignored between the thousands of nodes for each major component. Only tens of the thousands of modes which are calculated are retained. The components are then connected by approximating the boundary conditions and inserting the control system dynamics. In this paper continuum models are used instead of finite element models because of the improved accuracy, reduced number of model parameters, the avoidance of model order reduction, and the ability to represent the structural and control system dynamics in the same system of equations. Dynamic analysis of linear versions of the model is performed and compared with finite element model results. Additionally, the transfer matrix approach to continuum modeling is presented.

  7. Prediction of Knee Joint Contact Forces From External Measures Using Principal Component Prediction and Reconstruction.

    PubMed

    Saliba, Christopher M; Clouthier, Allison L; Brandon, Scott C E; Rainbow, Michael J; Deluzio, Kevin J

    2018-05-29

    Abnormal loading of the knee joint contributes to the pathogenesis of knee osteoarthritis. Gait retraining is a non-invasive intervention that aims to reduce knee loads by providing audible, visual, or haptic feedback of gait parameters. The computational expense of joint contact force prediction has limited real-time feedback to surrogate measures of the contact force, such as the knee adduction moment. We developed a method to predict knee joint contact forces using motion analysis and a statistical regression model that can be implemented in near real-time. Gait waveform variables were deconstructed using principal component analysis, and a linear regression was used to predict the principal component scores of the contact force waveforms. Knee joint contact force waveforms were reconstructed using the predicted scores. We tested our method using a heterogeneous population of asymptomatic controls and subjects with knee osteoarthritis. The reconstructed contact force waveforms had mean (SD) RMS differences of 0.17 (0.05) bodyweight compared to the contact forces predicted by a musculoskeletal model. Our method successfully predicted subject-specific shape features of contact force waveforms and is a potentially powerful tool in biofeedback and clinical gait analysis.
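
    The following hedged Python sketch illustrates the general idea of this pipeline, predicting the PC scores of one set of waveforms from another and reconstructing the predicted waveforms; the array shapes and data are hypothetical, not the authors' gait or contact-force data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Hypothetical data: 40 subjects, three stacked gait-variable waveforms (inputs) and
    # knee contact-force waveforms (targets), each sampled at 101 points of the gait cycle.
    X_gait = rng.normal(size=(40, 101 * 3))
    Y_force = rng.normal(size=(40, 101))

    pca_x = PCA(n_components=5).fit(X_gait)
    pca_y = PCA(n_components=5).fit(Y_force)

    scores_x = pca_x.transform(X_gait)
    scores_y = pca_y.transform(Y_force)

    # Linear regression from input PC scores to contact-force PC scores.
    reg = LinearRegression().fit(scores_x, scores_y)

    # Predict scores for a new subject and reconstruct the force waveform from them.
    new_scores_y = reg.predict(pca_x.transform(X_gait[:1]))
    reconstructed_force = pca_y.inverse_transform(new_scores_y)
    print(reconstructed_force.shape)   # (1, 101)
    ```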

  8. Optimized principal component analysis on coronagraphic images of the Fomalhaut system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.

    We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
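
    A minimal sketch of the core idea of PCA-based PSF subtraction is given below: project each frame onto the leading principal components of the image stack and subtract that low-rank model; the stack here is synthetic and the code is not the actual NaCo reduction pipeline.

    ```python
    import numpy as np

    def pca_psf_subtract(frames, n_components):
        """frames: (n_frames, n_pixels) flattened image stack.
        Returns residuals after subtracting a PCA model of the stellar PSF."""
        mean_frame = frames.mean(axis=0)
        centered = frames - mean_frame
        # Principal components of the stack via SVD.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]                    # (n_components, n_pixels)
        coeffs = centered @ basis.T                  # projection of each frame onto the basis
        psf_model = coeffs @ basis + mean_frame      # low-rank PSF model per frame
        return frames - psf_model

    rng = np.random.default_rng(2)
    stack = rng.normal(size=(30, 64 * 64))           # hypothetical 30-frame cube, 64x64 pixels
    residuals = pca_psf_subtract(stack, n_components=5)
    print(residuals.shape)
    ```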

  9. Greenstone belts: Their components and structure

    NASA Technical Reports Server (NTRS)

    Vearncombe, J. R.; Barton, J. M., Jr.; Vanreenen, D. D.; Phillips, G. N.; Wilson, A. H.

    1986-01-01

    Greenstone successions are defined as the nongranitoid component of granitoid-greenstone terrains; they are linear to irregular in shape, and where linear they are termed belts. The chemical composition of greenstones is described. Also discussed are the continental environments of greenstone successions. The effects of contact with granitoids, geophysical properties, recumbent folds and late-formation structures upon greenstones are examined. Large stratigraphic thicknesses are explained.

  10. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of abnormal acoustic signals, exploiting the waveform similarity of the faulty components. The method uses the sample point at which the abnormal sound first appears as the starting position of each segment. In this study, the abnormal sound belonged to the shock fault type; thus, a starting-position search method based on gradient variance was adopted. A coefficient describing the degree of similarity between two equally sized signals is presented. By comparison against this similarity coefficient, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and the extracted component can be used to identify the fault type.
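
    The abstract does not give the exact form of the similarity coefficient; a common, illustrative choice for two equally sized segments is the normalized correlation coefficient, sketched below with synthetic signals.

    ```python
    import numpy as np

    def similarity(a, b):
        """Normalized correlation between two equally sized segments (illustrative choice,
        not necessarily the exact coefficient used in the paper)."""
        a = a - a.mean()
        b = b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(3)
    template = rng.normal(size=1024)                           # extracted fault component
    candidate = 0.8 * template + 0.2 * rng.normal(size=1024)   # a segment to be judged
    print(similarity(template, candidate))                     # close to 1 for similar waveforms
    ```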

  11. Dynamics of the standard deviations of three wind velocity components from the data of acoustic sounding

    NASA Astrophysics Data System (ADS)

    Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.

    2017-11-01

    The spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12, the standard deviations of the x- and y-components varied from 0.5 to 4 m/s, and of the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of the three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power-law dependence, with an exponent changing from 0.22 to 1.3 depending on the time of day, while σz depends linearly on the altitude. The approximation constants have been found and their errors estimated. The established physical regularities and the approximation constants describe the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer and can be recommended for application in ABL models.
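
    A short sketch of how such a power-law profile can be fitted by linear regression in log-log space is shown below; the altitude and standard-deviation values are invented, not the sodar measurements.

    ```python
    import numpy as np

    z = np.array([50., 100., 150., 200., 250., 300.])       # altitude, m (hypothetical)
    sigma = np.array([0.9, 1.3, 1.6, 1.9, 2.1, 2.3])         # std. dev. of a horizontal component, m/s

    # sigma = a * z**b  =>  log(sigma) = log(a) + b * log(z)
    b, log_a = np.polyfit(np.log(z), np.log(sigma), 1)
    print("exponent b =", b, " coefficient a =", np.exp(log_a))
    ```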

  12. Novel use of UV broad-band excitation and stretched exponential function in the analysis of fluorescent dissolved organic matter: study of interaction between protein and humic-like components

    NASA Astrophysics Data System (ADS)

    Panigrahi, Suraj Kumar; Mishra, Ashok Kumar

    2017-09-01

    A combination of broad-band UV radiation (UV A and UV B; 250-400 nm) and a stretched exponential function (StrEF) has been utilised in efforts towards convenient and sensitive detection of fluorescent dissolved organic matter (FDOM). This approach enables accessing the gross fluorescence spectral signature of both protein-like and humic-like components in a single measurement. Commercial FDOM components are excited with the broad-band UV excitation; the variation of the spectral profile as a function of varying component ratio is analysed. The underlying fluorescence dynamics and non-linear quenching of amino acid moieties are studied with the StrEF, exp(-V[Q]^β). The complex quenching pattern reflects the inner filter effect (IFE) as well as inter-component interactions. The inter-component interactions are essentially captured through the ‘sphere of action’ and ‘dark complex’ models. The broad-band UV excitation provides increased excitation energy, resulting in increased population density in the excited state and thereby enhanced sensitivity.
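
    The hedged sketch below fits the stretched-exponential quenching function exp(-V[Q]^β) to synthetic data with SciPy; the symbols V and β follow the abstract, while the data and starting values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exponential(q, V, beta):
        """Relative fluorescence F/F0 as a function of quencher concentration q."""
        return np.exp(-V * q**beta)

    q = np.linspace(0.05, 2.0, 20)                  # quencher concentration (arbitrary units)
    rng = np.random.default_rng(4)
    f_rel = stretched_exponential(q, V=1.5, beta=0.8) + 0.01 * rng.normal(size=q.size)

    popt, pcov = curve_fit(stretched_exponential, q, f_rel, p0=(1.0, 1.0))
    print("V =", popt[0], " beta =", popt[1])
    ```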

  13. The potential of non-invasive pre- and post-mortem carcass measurements to predict the contribution of carcass components to slaughter yield of guinea pigs.

    PubMed

    Barba, Lida; Sánchez-Macías, Davinia; Barba, Iván; Rodríguez, Nibaldo

    2018-06-01

    Guinea pig meat consumption is increasing exponentially worldwide. Evaluating the contribution of carcass components to carcass quality can potentially allow estimation of the value added to food of animal origin and make research in guinea pigs more practicable. The aim of this study was to propose a methodology for modelling the contribution of different carcass components to the overall carcass quality of guinea pigs by using non-invasive pre- and post-mortem carcass measurements. The selection of predictors was developed through correlation analysis and statistical significance, whereas the prediction models were based on Multiple Linear Regression. The prediction results showed higher accuracy when the carcass component contribution was expressed in grams than when it was expressed as a percentage of carcass quality components. The proposed prediction models can be useful for the guinea pig meat industry and research institutions by using non-invasive, time- and cost-efficient carcass component measuring techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Feed-forward control of gear mesh vibration using piezoelectric actuators

    NASA Technical Reports Server (NTRS)

    Montague, Gerald T.; Kascak, Albert F.; Palazzolo, Alan; Manchala, Daniel; Thomas, Erwin

    1994-01-01

    This paper presents a novel means for suppressing gear mesh-related vibrations. The key components in this approach are piezoelectric actuators and a high-frequency, analog feed-forward controller. Test results are presented and show up to a 70-percent reduction in gear mesh acceleration and vibration control up to 4500 Hz. The principle of the approach is explained by an analysis of a harmonically excited, general linear vibratory system.

  15. Analysis of Forest Foliage Using a Multivariate Mixture Model

    NASA Technical Reports Server (NTRS)

    Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.

    1997-01-01

    Data with wet chemical measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra, based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths which have been associated with nitrogen bonds.

  16. Three-dimensional earthquake analysis of roller-compacted concrete dams

    NASA Astrophysics Data System (ADS)

    Kartal, M. E.

    2012-07-01

    Ground motion effects on roller-compacted concrete (RCC) dams in earthquake zones should be taken into account for the most critical conditions. This study presents the three-dimensional earthquake response of an RCC dam considering geometrical non-linearity. Material and connection non-linearity are also taken into consideration in the time-history analyses. Bilinear and multilinear kinematic hardening material models are utilized in the materially non-linear analyses for the concrete and foundation rock, respectively. The contraction joints inside the dam blocks and the dam-foundation-reservoir interaction are modeled by contact elements. The hydrostatic and hydrodynamic pressures of the reservoir water are modeled with fluid finite elements based on the Lagrangian approach. The gravity and hydrostatic pressure effects are employed as initial conditions before the strong ground motion. In the earthquake analyses, viscous dampers are defined in the finite element model to represent infinite boundary conditions. According to the numerical solutions, horizontal displacements increase under hydrodynamic pressure; they also increase in the materially non-linear analyses of the dam. In addition, while the principal stress components increase under the hydrodynamic pressure effect of the reservoir water, they decrease in the materially non-linear time-history analyses.

  17. Associations between Caries among Children and Household Sugar Procurement, Exposure to Fluoridated Water and Socioeconomic Indicators in the Brazilian Capital Cities

    PubMed Central

    Gonçalves, Michele Martins; Leles, Cláudio Rodrigues; Freire, Maria do Carmo Matias

    2013-01-01

    The objective of this ecological study was to investigate the association between caries experience in 5- and 12-year-old Brazilian children in 2010 and household sugar procurement in 2003 and the effects of exposure to water fluoridation and socioeconomic indicators. Sample units were all 27 Brazilian capital cities. Data were obtained from the National Surveys of Oral Health; the National Household Food Budget Survey; and the United Nations Program for Development. Data analysis included correlation coefficients, exploratory factor analysis, and linear regression. There were significant negative associations between caries experience and procurement of confectionery, fluoridated water, HDI, and per capita income. Procurement of confectionery and soft drinks was positively associated with HDI and per capita income. Exploratory factor analysis grouped the independent variables by reducing highly correlated variables into two uncorrelated component variables that explained 86.1% of total variance. The first component included income, HDI, water fluoridation, and procurement of confectionery, while the second included free sugar and procurement of soft drinks. Multiple regression analysis showed that caries is associated with the first component. Caries experience was associated with better socioeconomic indicators of a city and exposure to fluoridated water, which may affect the impact of sugars on the disease. PMID:24307900

  18. When syntax meets action: Brain potential evidence of overlapping between language and motor sequencing.

    PubMed

    Casado, Pilar; Martín-Loeches, Manuel; León, Inmaculada; Hernández-Gutiérrez, David; Espuny, Javier; Muñoz, Francisco; Jiménez-Ortega, Laura; Fondevila, Sabela; de Vega, Manuel

    2018-03-01

    This study aims to extend the embodied cognition approach to syntactic processing. The hypothesis is that the brain resources used to plan and perform motor sequences are also involved in syntactic processing. To test this hypothesis, Event-Related brain Potentials (ERPs) were recorded while participants read sentences with embedded relative clauses, judging their acceptability (half of the sentences contained a subject-verb morphosyntactic disagreement). The sentences, previously divided into three segments, were self-administered segment-by-segment in two different sequential manners: linear or non-linear. Linear self-administration consisted of successively pressing three buttons with three consecutive fingers of the right hand, while non-linear self-administration implied the substitution of the finger in the middle position by the right foot. Our aim was to test whether syntactic processing could be affected by the manner in which the sentences were self-administered. The main results revealed that the ERP LAN component vanished whereas the P600 component increased in response to incorrect verbs, for non-linear relative to linear self-administration. The LAN and P600 components reflect early and late syntactic processing, respectively. Our results convey evidence that language syntactic processing and performing non-linguistic motor sequences may share resources in the human brain. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity due to different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component accommodates the global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the addition of information borrowed from neighboring years; this addition of parameters gave a significant advantage in posterior deviance, which subsequently benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance. For the cross-validation comparison of predictive accuracy, the linear time trend model was judged the best, as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data used for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated values. The linear model again performed the best in most scenarios, except one case using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of the linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction, which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture models generated more precise estimated crash counts across all four models, suggesting that the advantages associated with the mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Linear relations in microbial reaction systems: a general overview of their origin, form, and use.

    PubMed

    Noorman, H J; Heijnen, J J; Luyben, K Ch A M

    1991-09-01

    In microbial reaction systems, there are a number of linear relations among net conversion rates. These can be very useful in the analysis of experimental data. This article provides a general approach for the formation and application of the linear relations. Two types of system description, one considering the biomass as a black box and the other based on metabolic pathways, are encountered. These are defined in a linear vector and matrix algebra framework. A correct a priori description can be obtained by three useful tests: the independency, consistency, and observability tests. The linear relations provided by the two descriptions are different. The black box approach provides only conservation relations. They are derived from element, electrical charge, energy, and Gibbs energy balances. The metabolic approach provides, in addition to the conservation relations, metabolic and reaction relations. These result from component, energy, and Gibbs energy balances. Thus it is more attractive to use the metabolic description than the black box approach. A number of different types of linear relations given in the literature are reviewed. They are classified according to the different categories that result from the black box or the metabolic system description. Validation of hypotheses related to metabolic pathways can be supported by experimental validation of the linear metabolic relations. However, definite proof from biochemical evidence remains indispensable.
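
    As a minimal illustration of one class of conservation relations from the black-box description, the sketch below checks that a set of hypothetical net conversion rates satisfies the elemental balances E·r = 0; the composition matrix and rates are invented but internally consistent.

    ```python
    import numpy as np

    # Columns: glucose (CH2O), O2, biomass (CH1.8O0.5N0.2), CO2, H2O, NH3 (per C-mol / mol).
    # Rows: elemental composition in C, H, O, N.
    E = np.array([
        [1.0, 0.0, 1.0, 1.0, 0.0, 0.0],   # C
        [2.0, 0.0, 1.8, 0.0, 2.0, 3.0],   # H
        [1.0, 2.0, 0.5, 2.0, 1.0, 0.0],   # O
        [0.0, 0.0, 0.2, 0.0, 0.0, 1.0],   # N
    ])

    # Hypothetical measured net conversion rates (negative = consumption), same column order.
    r = np.array([-1.0, -0.4015, 0.57, 0.43, 0.658, -0.114])

    residual = E @ r   # close to zero when the rates satisfy the element conservation relations
    print(residual)
    ```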

  1. Internal rotor friction instability

    NASA Technical Reports Server (NTRS)

    Walton, J.; Artiles, A.; Lund, J.; Dill, J.; Zorzi, E.

    1990-01-01

    The analytical developments and experimental investigations performed in assessing the effect of internal friction on rotor system dynamic performance are documented. Analytical component models for axial splines, Curvic splines, and interference-fit joints commonly found in modern high-speed turbomachinery were developed. Rotor systems operating above a bending critical speed were shown to exhibit unstable subsynchronous vibrations at the first natural frequency. The effects of speed, bearing stiffness, joint stiffness, external damping, torque, and coefficient of friction were evaluated. Testing included evaluations of material coefficients of friction, determinations of the quantity and form of damping in component joints, and rotordynamic stability assessments. Under conditions similar to those in the SSME turbopumps, material interfaces experienced a coefficient of friction of approximately 0.2 for lubricated and 0.8 for unlubricated conditions. The damping observed in the component joints displayed nearly linear behavior with increasing amplitude. Thus, the measured damping, as a function of amplitude, is not represented by either linear or Coulomb friction damper models. Rotordynamic testing of an axial spline joint under 5000 in.-lb of static torque demonstrated the presence of an extremely severe instability when the rotor was operated above its first flexible natural frequency. The presence of this instability was predicted by nonlinear rotordynamic time-transient analysis using the nonlinear component model developed under this program. Corresponding rotordynamic testing of a shaft with an interference-fit joint demonstrated the presence of subsynchronous vibrations at the first natural frequency. While subsynchronous vibrations were observed, they were bounded and significantly lower in amplitude than the synchronous vibrations.

  2. The comparison of robust partial least squares regression with robust principal component regression on a real data set

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and an increase in the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
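
    For orientation, the hedged sketch below shows the two-step structure of classical (non-robust) PCR with scikit-learn on synthetic collinear data; the robust variants discussed in the abstract replace both steps with outlier-resistant estimators.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)
    n, p = 60, 8
    X = rng.normal(size=(n, p))
    X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)     # deliberately collinear predictors
    y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=n)

    # Classical PCR: project the predictors onto a few principal components,
    # then regress the response on the component scores.
    pcr = make_pipeline(PCA(n_components=3), LinearRegression())
    pcr.fit(X, y)
    print("R^2 on training data:", pcr.score(X, y))
    ```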

  3. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a support vector machine (SVM) was chosen as the classifier. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
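
    A hedged sketch of the general pipeline described above (PCA feature extraction, decision-tree-based feature selection, SVM classification) is given below using scikit-learn and synthetic data; it is not the authors' exact configuration for the BCI Competition II dataset.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    X = rng.normal(size=(200, 64))            # hypothetical EEG feature vectors
    y = rng.integers(0, 2, size=200)          # two classes

    # 1) Feature extraction with PCA.
    pca = PCA(n_components=20)
    X_pca = pca.fit_transform(X)

    # 2) Feature selection: keep the PCA features the decision tree finds most important.
    tree = DecisionTreeClassifier(random_state=0).fit(X_pca, y)
    top = np.argsort(tree.feature_importances_)[::-1][:5]
    X_sel = X_pca[:, top]

    # 3) Classification with an SVM on the selected features.
    clf = SVC(kernel="rbf").fit(X_sel, y)
    print("training accuracy:", clf.score(X_sel, y))
    ```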

  4. Estimation of neural energy in microelectrode signals

    NASA Astrophysics Data System (ADS)

    Gaumond, R. P.; Clement, R.; Silva, R.; Sander, D.

    2004-09-01

    We considered the problem of determining the neural contribution to the signal recorded by an intracortical electrode. We developed a linear least-squares approach to determine the energy fraction of a signal attributable to an arbitrary number of autocorrelation-defined signals buried in noise. Application of the method requires estimation of autocorrelation functions R_ap(τ) characterizing the action potential (AP) waveforms and R_n(τ) characterizing the background noise. This method was applied to the analysis of chronically implanted microelectrode signals from the motor cortex of rat. We found that neural (AP) energy consisted of a large-signal component which grows linearly with the number of threshold-detected neural events and a small-signal component unrelated to the count of threshold-detected AP signals. The addition of pseudorandom noise to electrode signals demonstrated the algorithm's effectiveness for a wide range of noise-to-signal energy ratios (0.08 to 39). We suggest, therefore, that the method could be of use in providing a measure of neural response in situations where clearly identified spike waveforms cannot be isolated, or in providing an additional 'background' measure of microelectrode neural activity to supplement the traditional AP spike count.
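
    A simplified sketch of the linear least-squares idea is given below, assuming the total autocorrelation is modelled as a weighted sum of the AP and noise autocorrelations; the autocorrelation shapes are invented and the estimator is an illustration of the general approach, not the paper's exact formulation.

    ```python
    import numpy as np

    def energy_fraction(r_total, r_ap, r_noise):
        """Least-squares weights for R_total(tau) ~ a*R_ap(tau) + b*R_n(tau),
        returning the fraction of signal energy attributed to the AP component."""
        A = np.column_stack([r_ap, r_noise])
        (a, b), *_ = np.linalg.lstsq(A, r_total, rcond=None)
        return a * r_ap[0] / r_total[0]      # energies are the zero-lag autocorrelation values

    lags = np.arange(0, 50)
    r_ap = np.exp(-lags / 5.0)               # hypothetical AP-waveform autocorrelation
    r_noise = np.exp(-lags / 1.0)            # hypothetical background-noise autocorrelation
    r_total = 0.3 * r_ap + 0.7 * r_noise     # synthetic recorded-signal autocorrelation
    print(energy_fraction(r_total, r_ap, r_noise))   # ~0.3
    ```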

  5. Power analysis to detect treatment effect in longitudinal studies with heterogeneous errors and incomplete data.

    PubMed

    Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián

    2016-08-01

    S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of statistically processed indexes. The main conclusion of the study is the accuracy of the proposed method to calculate sample size in the described situations with the stipulated power criteria.

  6. Linear measurements of the neurocranium are better indicators of population differences than those of the facial skeleton: comparative study of 1,961 skulls.

    PubMed

    Holló, Gábor; Szathmáry, László; Marcsik, Antónia; Barta, Zoltán

    2010-02-01

    The aim of this study is to identify potential differences between two cranial regions used to differentiate human populations. We compared the neurocranium and the facial skeleton using skulls from the Great Hungarian Plain. The skulls date to the 1st-11th centuries, a long span of time that encompasses seven archaeological periods. We analyzed six neurocranial and seven facial measurements. The reduction of the number of variables was carried out using principal components analysis. Linear mixed-effects models were fitted to the principal components of each archaeological period, and the models were then compared using multiple pairwise tests. The neurocranium showed significant differences in seven cases between nonsubsequent periods and in one case between two subsequent populations. For the facial skeleton, no significant results were found. Our results, which are also compared to previous craniofacial heritability estimates, suggest that the neurocranium is a more conservative region and that population differences can be detected better in the neurocranium than in the facial skeleton.

  7. Hierarchical Time-Lagged Independent Component Analysis: Computing Slow Modes and Reaction Coordinates for Large Molecular Systems.

    PubMed

    Pérez-Hernández, Guillermo; Noé, Frank

    2016-12-13

    Analysis of molecular dynamics, for example using Markov models, often requires the identification of order parameters that are good indicators of the rare events, i.e. good reaction coordinates. Recently, it has been shown that time-lagged independent component analysis (TICA) finds the linear combinations of input coordinates that optimally represent the slow kinetic modes and may serve to define reaction coordinates between the metastable states of a molecular system. A limitation of the method is that both computing time and memory requirements scale with the square of the number of input features. For large protein systems, this hampers the use of extensive feature sets such as the distances between all pairs of residues or even heavy atoms. Here we derive a hierarchical TICA (hTICA) method that approximates the full TICA solution by a hierarchical, divide-and-conquer calculation. By using hTICA on distances between heavy atoms we identify previously unknown relaxation processes in the bovine pancreatic trypsin inhibitor.
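
    A minimal, hedged numpy sketch of plain (non-hierarchical) TICA is shown below, solving the generalized eigenvalue problem of the time-lagged and instantaneous covariance matrices; production implementations add regularization and more careful estimators.

    ```python
    import numpy as np

    def tica(X, lag, n_components):
        """Minimal TICA sketch: slowest linear modes from time-lagged covariances.
        X: (n_frames, n_features) time series. Illustrative only."""
        Xc = X - X.mean(axis=0)
        A, B = Xc[:-lag], Xc[lag:]
        C0 = (A.T @ A + B.T @ B) / (2 * len(A))          # instantaneous covariance
        Ct = (A.T @ B + B.T @ A) / (2 * len(A))          # symmetrized time-lagged covariance
        # Solve the generalized eigenvalue problem Ct v = lambda C0 v via whitening.
        evals0, evecs0 = np.linalg.eigh(C0)
        keep = evals0 > 1e-10
        W = evecs0[:, keep] / np.sqrt(evals0[keep])      # whitening transform
        M = W.T @ Ct @ W
        evals, evecs = np.linalg.eigh(M)
        order = np.argsort(evals)[::-1][:n_components]   # largest eigenvalues = slowest modes
        components = W @ evecs[:, order]
        return Xc @ components, evals[order]

    rng = np.random.default_rng(7)
    X = np.cumsum(rng.normal(size=(5000, 6)), axis=0)    # hypothetical slow, correlated features
    Y, eigvals = tica(X, lag=10, n_components=2)
    print(Y.shape, eigvals)
    ```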

  8. Self organising maps for visualising and modelling

    PubMed Central

    2012-01-01

    The paper describes the motivation of SOMs (Self Organising Maps) and how they have become more accessible owing to the wide availability of modern, more powerful, cost-effective computers. Their advantages compared to Principal Components Analysis and Partial Least Squares are discussed: they can be applied to non-linear data, are less dependent on least-squares solutions and normality of errors, and are less influenced by outliers. In addition, there is a wide variety of intuitive methods for visualisation that allow full use of the map space. Modern problems in analytical chemistry, including applications to cultural heritage, environmental, metabolomic and biological studies, result in complex datasets. Methods for visualising maps are described, including best matching units, hit histograms, unified distance matrices and component planes. Supervised SOMs for classification, including multifactor data and variable selection, are discussed, as is their use in Quality Control. The paper is illustrated using four case studies, namely the near-infrared spectroscopy of food, the thermal analysis of polymers, metabolomic analysis of saliva using NMR, and on-line HPLC for pharmaceutical process monitoring. PMID:22594434

  9. Handy elementary algebraic properties of the geometry of entanglement

    NASA Astrophysics Data System (ADS)

    Blair, Howard A.; Alsing, Paul M.

    2013-05-01

    The space of separable states of a quantum system is a hyperbolic surface, which we call the separation surface, within the exponentially high-dimensional linear space containing the quantum states of an n-component multipartite quantum system. A vector in the linear space is representable as an n-dimensional hypermatrix with respect to bases of the component linear spaces. A vector will be on the separation surface iff every determinant of every 2-dimensional, 2-by-2 submatrix of the hypermatrix vanishes. This highly rigid constraint can be tested in time asymptotically proportional to d, where d is the dimension of the state space of the system, due to the extreme interdependence of the 2-by-2 submatrices. The constraint on 2-by-2 determinants entails an elementary closed-form formula for a parametric characterization of the entire separation surface with d-1 parameters in the characterization. The state of a factor of a partially separable state can be calculated in time asymptotically proportional to the dimension of the state space of the component. If all components of the system have approximately the same dimension, the time complexity of calculating a component state as a function of the parameters is asymptotically proportional to the time required to sort the basis. Metric-based entanglement measures of pure states are characterized in terms of the separation hypersurface.
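
    For the simplest bipartite case the hypermatrix is an ordinary matrix, and the vanishing of all 2-by-2 minors is equivalent to the state being a product state; the sketch below checks this directly (a brute-force illustration, not the linear-time test described in the abstract).

    ```python
    import numpy as np
    from itertools import combinations

    def all_2x2_minors_vanish(C, tol=1e-12):
        """For a bipartite pure state with coefficient matrix C, the state is a
        product state iff every 2x2 minor of C vanishes (i.e. C has rank one)."""
        rows, cols = C.shape
        for (i, j) in combinations(range(rows), 2):
            for (k, l) in combinations(range(cols), 2):
                if abs(C[i, k] * C[j, l] - C[i, l] * C[j, k]) > tol:
                    return False
        return True

    # Product state |+>|0> versus the entangled Bell state (|00> + |11>)/sqrt(2).
    product = np.outer([1, 1], [1, 0]) / np.sqrt(2)
    bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)
    print(all_2x2_minors_vanish(product), all_2x2_minors_vanish(bell))   # True, False
    ```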

  10. Revealing the Structural Complexity of Component Interactions of Topic-Specific PCK when Planning to Teach

    NASA Astrophysics Data System (ADS)

    Mavhunga, Elizabeth

    2018-04-01

    Teaching pedagogical content knowledge (PCK) at a topic-specific level requires clarity on the content-specific nature of the components employed, as well as the specific features that bring about the desirable depth in teacher explanations. Such understanding is often hazy; yet, it influences the nature of teacher tasks and learning opportunities afforded to pre-service teachers in a teaching program. The purpose of this study was twofold: firstly, to illuminate the emerging complexity when content-specific components of PCK interact when planning to teach a chemistry topic; and secondly, to identify the kinds of teacher tasks that promote the emergence of such complexity. Data collected were content representations (CoRes) in chemical equilibrium accompanied by expanded lesson outlines from 15 pre-service teachers in their final year of study towards a first degree in teaching (B Ed). The analysis involved extraction of episodes that exhibited component interaction by using a qualitative in-depth analysis method. The results revealed the structure in which the components of PCK in a topic interact among each other to be linear, interwoven, or a combination of the two. The interwoven interactions contained multiple components that connected explanations on different aspects of a concept, all working in a complementary manner. The most sophisticated component interactions emerged from teacher tasks on descriptions of a lesson sequence and a summary of a lesson. Recommendations in this study highlight core practices for making pedagogical transformation of topic content knowledge more accessible.

  11. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    DOEpatents

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  12. Analysis on the multi-dimensional spectrum of the thrust force for the linear motor feed drive system in machine tools

    NASA Astrophysics Data System (ADS)

    Yang, Xiaojun; Lu, Dun; Ma, Chengfang; Zhang, Jun; Zhao, Wanhua

    2017-01-01

    The motor thrust force has numerous harmonic components due to the non-linearity of the drive circuit and of the motor itself in the linear motor feed drive system. Moreover, during motion these thrust force harmonics may vary with position, velocity, acceleration and load, which affects the displacement fluctuation of the feed drive system. Therefore, in this paper, on the basis of the thrust force spectrum obtained from the Maxwell equations and the electromagnetic energy method, the multi-dimensional variation of each thrust harmonic is analyzed under different motion parameters. Then a model of the servo system is established with a focus on dynamic precision. The influence of the variation of the thrust force spectrum on the displacement fluctuation is discussed. Finally, experiments are carried out to verify the theoretical analysis. It is found that the thrust harmonics show multi-dimensional spectral characteristics under different motion parameters and loads, which should be considered when choosing the motion parameters and optimizing the servo control parameters in high-speed, high-precision machine tools equipped with linear motor feed drive systems.

  13. Dose-effect relationships, epidemiological analysis and the derivation of low dose risk.

    PubMed

    Leenhouts, H P; Chadwick, K H

    2011-03-01

    This paper expands on our recent comments in a letter to this journal about the analysis of epidemiological studies and the determination of low dose RBE of low LET radiation (Chadwick and Leenhouts 2009 J. Radiol. Prot. 29 445-7). Using the assumption that radiation induced cancer arises from a somatic mutation (Chadwick and Leenhouts 2011 J. Radiol. Prot. 31 41-8) a model equation is derived to describe cancer induction as a function of dose. The model is described briefly, evidence is provided in support of it, and it is applied to a set of experimental animal data. The results are compared with a linear fit to the data as has often been done in epidemiological studies. The article presents arguments to support several related messages which are relevant to epidemiological analysis, the derivation of low dose risk and the weighting factor of sparsely ionising radiations. The messages are: (a) cancer incidence following acute exposure should, in principle, be fitted to a linear-quadratic curve with cell killing using all the data available; (b) the acute data are dominated by the quadratic component of dose; (c) the linear fit of any acute data will essentially be dependent on the quadratic component and will be unrelated to the effectiveness of the radiation at low doses; consequently, (d) the method used by ICRP to derive low dose risk from the atomic bomb survivor data means that it is unrelated to the effectiveness of the hard gamma radiation at low radiation doses; (e) the low dose risk value should, therefore, not be used as if it were representative for hard gamma rays to argue for an increased weighting factor for tritium and soft x-rays even though there are mechanistic reasons to expect this; (f) epidemiological studies of chronically exposed populations supported by appropriate cellular radiobiological studies have the best chance of revealing different RBE values for different sparsely ionising radiations.

  14. Mathematical Modeling of Intestinal Iron Absorption Using Genetic Programming

    PubMed Central

    Colins, Andrea; Gerdtzen, Ziomara P.; Nuñez, Marco T.; Salgado, J. Cristian

    2017-01-01

    Iron is a trace metal, key for the development of living organisms. Its absorption process is complex and highly regulated at the transcriptional, translational and systemic levels. Recently, the internalization of the DMT1 transporter has been proposed as an additional regulatory mechanism at the intestinal level, associated with the mucosal block phenomenon. The short-term effect of iron exposure on apical uptake and initial absorption rates was studied in Caco-2 cells at different apical iron concentrations, using both an experimental approach and a mathematical modeling framework. This is the first report of short-term studies for this system. A non-linear behavior in the apical uptake dynamics was observed, which does not follow the classic saturation dynamics of traditional biochemical models. We propose a method for developing mathematical models for complex systems, based on a genetic programming algorithm. The algorithm is aimed at obtaining models with a high predictive capacity, and considers an additional parameter fitting stage and an additional Jackknife stage for estimating the generalization error. We developed a model for the iron uptake system with a higher predictive capacity than classic biochemical models. This was observed both with the apical uptake dataset used for generating the model and with an independent initial rates dataset used to test the predictive capacity of the model. The model obtained is a function of time and the initial apical iron concentration, with a linear component that captures the global tendency of the system, and a non-linear component that can be associated with the movement of DMT1 transporters. The model presented in this paper allows the detailed analysis and interpretation of experimental data, and the identification of key relevant components of this complex biological process. This general method holds great potential for application to the elucidation of biological mechanisms and their key components in other complex systems. PMID:28072870

  15. Automated mapping of impervious surfaces in urban and suburban areas: Linear spectral unmixing of high spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yang, Jian; He, Yuhong

    2017-02-01

    Quantifying impervious surfaces in urban and suburban areas is a key step toward a sustainable urban planning and management strategy. With the availability of fine-scale remote sensing imagery, automated mapping of impervious surfaces has attracted growing attention. However, the vast majority of existing studies have selected pixel-based and object-based methods for impervious surface mapping, with few adopting sub-pixel analysis of high spatial resolution imagery. This research makes use of a vegetation-bright impervious-dark impervious linear spectral mixture model to characterize urban and suburban surface components. A WorldView-3 image acquired on May 9th, 2015 is analyzed for its potential in automated unmixing of meaningful surface materials for two urban subsets and one suburban subset in Toronto, ON, Canada. Given the wide distribution of shadows in urban areas, the linear spectral unmixing is implemented in non-shadowed and shadowed areas separately for the two urban subsets. The results indicate that the accuracy of impervious surface mapping in suburban areas reaches up to 86.99%, much higher than the accuracies in urban areas (80.03% and 79.67%). Despite its merits in mapping accuracy and automation, the application of our proposed vegetation-bright impervious-dark impervious model to map impervious surfaces is limited by the absence of a soil component. To further extend the operational transferability of the proposed method, especially to areas where substantial bare soil is present during urbanization or reclamation, it is still necessary to mask out bare soils by automated classification prior to the implementation of linear spectral unmixing.
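
    A hedged sketch of sum-to-one constrained linear unmixing with three endmembers is shown below; the endmember spectra and the mixed pixel are invented, and a full implementation would also enforce non-negativity of the fractions.

    ```python
    import numpy as np

    # Hypothetical endmember spectra (rows = bands; columns = vegetation,
    # bright impervious, dark impervious); reflectance values are invented.
    E = np.array([
        [0.05, 0.30, 0.08],
        [0.08, 0.35, 0.09],
        [0.45, 0.40, 0.10],
        [0.30, 0.45, 0.12],
    ])

    def unmix(pixel, endmembers, weight=100.0):
        """Sum-to-one constrained linear unmixing via an augmented least-squares system."""
        A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
        b = np.append(pixel, weight)
        fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
        return fractions

    pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel
    fractions = unmix(pixel, E)
    print(fractions)                                         # ~[0.6, 0.3, 0.1]
    print("impervious fraction:", fractions[1] + fractions[2])
    ```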

  16. Analysis of gait patterns pre- and post- Single Event Multilevel Surgery in children with Cerebral Palsy by means of Offset-Wise Movement Analysis Profile and Linear Fit Method.

    PubMed

    Ancillao, Andrea; van der Krogt, Marjolein M; Buizer, Annemieke I; Witbreuk, Melinda M; Cappa, Paolo; Harlaar, Jaap

    2017-10-01

    Gait analysis is used for the assessment of walking ability in children with cerebral palsy (CP), to inform clinical decision making and to quantify changes after treatment. To simplify gait analysis interpretation and to quantify deviations from normality, quantitative synthetic descriptors have been developed over the years, such as the Movement Analysis Profile (MAP) and the Linear Fit Method (LFM), but their interpretation is not always straightforward. The aims of this work were to: (i) study gait changes, by means of synthetic descriptors, in children with CP who underwent Single Event Multilevel Surgery; (ii) compare the MAP and the LFM on these patients; (iii) design a new index that may overcome the limitations of the previous methods, i.e. the lack of information about the direction of deviation or its source. Gait analysis exams of 10 children with CP, pre- and post-surgery, were collected and the MAP and LFM were computed. A new index, OC-MAP, was designed as a modified version of the MAP that separates out changes in offset. The MAP documented an improvement in the gait pattern after surgery. The largest effect was observed for the knee flexion/extension angle. However, a worsening was observed as an increase in anterior pelvic tilt. An important source of gait deviation was recognized in the offset between the observed tracks and the reference. OC-MAP allowed the assessment of the offset component versus the shape component of deviation. The LFM provided results similar to the OC-MAP offset analysis but could not be considered reliable due to intrinsic limitations. As offsets in gait features played an important role in gait deviation, OC-MAP synthetic analysis is proposed as a novel approach to a meaningful parameterisation of global deviations in the gait patterns of subjects with CP and of gait changes after treatment. Copyright © 2017 Elsevier B.V. All rights reserved.
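
    In the spirit of OC-MAP, the sketch below separates the offset component of a gait-curve deviation from the residual shape component; the decomposition and the example curves are illustrative and not the published OC-MAP formula.

    ```python
    import numpy as np

    def deviation_components(observed, reference):
        """Split the RMS deviation of a gait waveform into an offset part and a
        residual shape part (illustrative decomposition)."""
        diff = observed - reference
        offset = diff.mean()                                # constant shift between the curves
        shape_rms = np.sqrt(np.mean((diff - offset) ** 2))  # deviation left after removing the offset
        total_rms = np.sqrt(np.mean(diff ** 2))
        return total_rms, offset, shape_rms

    t = np.linspace(0, 1, 101)
    reference = 10 * np.sin(2 * np.pi * t)                  # hypothetical normative knee-angle curve
    observed = reference + 8.0 + 2 * np.sin(4 * np.pi * t)  # offset plus a shape difference
    print(deviation_components(observed, reference))
    ```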

  17. Understanding Coupling of Global and Diffuse Solar Radiation with Climatic Variability

    NASA Astrophysics Data System (ADS)

    Hamdan, Lubna

    Global solar radiation data are very important for a wide variety of applications and scientific studies. However, these data are not readily available because of the cost of measuring equipment and the tedious maintenance and calibration requirements. A wide variety of models has been introduced by researchers to estimate and/or predict global solar radiation and its components (direct and diffuse radiation) using other readily obtainable atmospheric parameters. The goal of this research is to understand the coupling of global and diffuse solar radiation with climatic variability, by investigating the relationships between these radiations and atmospheric parameters. For this purpose, we applied multilinear regression analysis to the data of the National Solar Radiation Database 1991-2010 Update. The analysis showed that the main atmospheric parameters that affect the amount of global radiation received on the earth's surface are cloud cover and relative humidity. Global radiation correlates negatively with both variables. Linear models are excellent approximations for the relationship between atmospheric parameters and global radiation. A linear model with the predictors total cloud cover, relative humidity, and extraterrestrial radiation is able to explain around 98% of the variability in global radiation. For diffuse radiation, the analysis showed that the main atmospheric parameters that affect the amount received on the earth's surface are cloud cover and aerosol optical depth. Diffuse radiation correlates positively with both variables. Linear models are very good approximations for the relationship between atmospheric parameters and diffuse radiation. A linear model with the predictors total cloud cover, aerosol optical depth, and extraterrestrial radiation is able to explain around 91% of the variability in diffuse radiation. Prediction analysis showed that the fitted linear models were able to predict diffuse radiation with test adjusted R2 values of 0.93, using data on total cloud cover, aerosol optical depth, relative humidity and extraterrestrial radiation. However, for prediction purposes, using nonlinear terms or nonlinear models might enhance the prediction of diffuse radiation.
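    A minimal sketch of the kind of multiple linear regression described above, with global radiation regressed on total cloud cover, relative humidity and extraterrestrial radiation. The data are random placeholders, not the National Solar Radiation Database records analyzed in the study.

```python
# Hedged sketch: OLS regression of global radiation on cloud cover, relative
# humidity and extraterrestrial radiation. Synthetic data stand in for NSRDB records.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
cloud = rng.uniform(0, 1, n)            # total cloud cover fraction
rh = rng.uniform(10, 100, n)            # relative humidity (%)
extra = rng.uniform(200, 450, n)        # extraterrestrial radiation (W/m^2)
ghi = 0.9 * extra - 150 * cloud - 0.8 * rh + rng.normal(0, 10, n)  # toy global radiation

X = sm.add_constant(np.column_stack([cloud, rh, extra]))
fit = sm.OLS(ghi, X).fit()
print(fit.rsquared, fit.params)         # R^2 and fitted coefficients
```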

  18. Cosmological Density and Power Spectrum from Peculiar Velocities: Nonlinear Corrections and Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Silberman, L.; Dekel, A.; Eldar, A.; Zehavi, I.

    2001-08-01

    We allow for nonlinear effects in the likelihood analysis of galaxy peculiar velocities and obtain ~35% lower values for the cosmological density parameter Ωm and for the amplitude of mass density fluctuations σ8 Ωm^0.6. This result is obtained under the assumption that the power spectrum in the linear regime is of the flat ΛCDM model (h=0.65, n=1, COBE normalized) with only Ωm as a free parameter. Since the likelihood is driven by the nonlinear regime, we "break" the power spectrum at k_b ~ 0.2 (h^-1 Mpc)^-1 and fit a power law at k > k_b. This allows for independent matching of the nonlinear behavior and an unbiased fit in the linear regime. The analysis assumes Gaussian fluctuations and errors and a linear relation between velocity and density. Tests using mock catalogs that properly simulate nonlinear effects demonstrate that this procedure results in a reduced bias and a better fit. We find for the Mark III and SFI data Ωm=0.32+/-0.06 and 0.37+/-0.09, respectively, with σ8 Ωm^0.6=0.49+/-0.06 and 0.63+/-0.08, in agreement with constraints from other data. The quoted 90% errors include distance errors and cosmic variance, for fixed values of the other parameters. The improvement in the likelihood due to the nonlinear correction is very significant for Mark III and moderately significant for SFI. When allowing deviations from ΛCDM, we find an indication for a wiggle in the power spectrum: an excess near k ~ 0.05 (h^-1 Mpc)^-1 and a deficiency at k ~ 0.1 (h^-1 Mpc)^-1, or a "cold flow." This may be related to the wiggle seen in the power spectrum from redshift surveys and the second peak in the cosmic microwave background (CMB) anisotropy. A χ2 test applied to modes of a principal component analysis (PCA) shows that the nonlinear procedure improves the goodness of fit and reduces a spatial gradient that was of concern in the purely linear analysis. The PCA allows us to address spatial features of the data and to evaluate and fine-tune the theoretical and error models. It demonstrates in particular that the models used are appropriate for the cosmological parameter estimation performed. We address the potential for optimal data compression using PCA.

  19. A new approach in space-time analysis of multivariate hydrological data: Application to Brazil's Nordeste region rainfall

    NASA Astrophysics Data System (ADS)

    Sicard, Emeline; Sabatier, Robert; Niel, Hélène; Cadier, Eric

    2002-12-01

    The objective of this paper is to implement an original method for spatial and multivariate data, combining a method of three-way array analysis (STATIS) with geostatistical tools. The variables of interest are the monthly amounts of rainfall in the Nordeste region of Brazil, recorded from 1937 to 1975. The principle of the technique is the calculation of a linear combination of the initial variables, containing a large part of the initial variability and taking into account the spatial dependencies. It is a promising method that is able to analyze triple variability: spatial, seasonal, and interannual. In our case, the first component obtained discriminates a group of rain gauges, corresponding approximately to the Agreste, from all the others. The monthly variables of July and August strongly influence this separation. Furthermore, an annual study brings out the stability of the spatial structure of components calculated for each year.

  20. Morphological instability of a thermophoretically growing deposit

    NASA Technical Reports Server (NTRS)

    Castillo, Jose L.; Garcia-Ybarra, Pedro L.; Rosner, Daniel E.

    1992-01-01

    The stability of the planar interface of a structureless solid growing from a depositing component dilute in a carrier fluid is studied when the main solute transport mechanism is thermal (Soret) diffusion. A linear stability analysis, carried out in the limit of low growth Peclet number, leads to a dispersion relation which shows that the planar front is unstable either when the thermal diffusion factor of the condensing component is positive and the latent heat release is small or when the thermal diffusion factor is negative and the solid grows over a thermally-insulating substrate. Furthermore, the influence of interfacial energy effects and constitutional supersaturation in the vicinity of the moving interface is analyzed in the limit of very small Schmidt numbers (small solute Fickian diffusion). The analysis is relevant to physical vapor deposition of very massive species on cold surfaces, as in recent experiments of organic solid film growth under microgravity conditions.

  1. Hydrodynamic Stability of Multicomponent Droplet Gasification in Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Aharon, I.; Shaw, B. D.

    1995-01-01

    This investigation addresses the problem of hydrodynamic stability of a two-component droplet undergoing spherically-symmetrical gasification. The droplet components are assumed to have characteristic liquid species diffusion times that are large relative to characteristic droplet surface regression times. The problem is formulated as a linear stability analysis, with a goal of predicting when spherically-symmetric droplet gasification can be expected to be hydrodynamically unstable from surface-tension gradients acting along the surface of a droplet which result from perturbations. It is found that for the conditions assumed in this paper (quasisteady gas phase, no initial droplet temperature gradients, diffusion-dominated gasification), surface tension gradients do not play a role in the stability characteristics. In addition, all perturbations are predicted to decay, such that droplets are hydrodynamically stable. Conditions are identified, however, that deserve more analysis as they may lead to hydrodynamic instabilities driven by capillary effects.

  2. Covariances and spectra of the kinematics and dynamics of nonlinear waves

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.

    1985-01-01

    Using Stokes waves as a model of nonlinear waves and treating the linear component as a narrow-band Gaussian process, the covariances and spectra of the velocity and acceleration components and of pressure at points in the vicinity of the still water level were derived, taking into account the effects of free surface fluctuations. The results are compared with those obtained earlier using linear Gaussian waves.

  3. Analysis of single-degree-of-freedom piezoelectric energy harvester with stopper by incremental harmonic balance method

    NASA Astrophysics Data System (ADS)

    Zhao, Dan; Wang, Xiaoman; Cheng, Yuan; Liu, Shaogang; Wu, Yanhong; Chai, Liqin; Liu, Yang; Cheng, Qianju

    2018-05-01

    A piecewise-linear structure can effectively broaden the working frequency band of a piezoelectric energy harvester, and further research on such structures can help bring energy-harvesting devices closer to the practical requirements of powering microelectronic components. In this paper, the incremental harmonic balance (IHB) method is introduced to address the otherwise complicated and difficult analysis of such harvesters. After the nonlinear dynamic equation of the single-degree-of-freedom piecewise-linear energy harvester is obtained by mathematical modeling and solved with the IHB method, the theoretical amplitude-frequency curve of the open-circuit voltage is achieved. Under 0.2 g harmonic excitation, a piecewise-linear energy harvester is experimentally tested by unidirectional frequency-increasing scanning. The results demonstrate that the theoretical and experimental amplitudes have the same trend; the widths of the working band with high voltage output are 4.9 Hz and 4.7 Hz, respectively, a relative error of 4.08%, and the peak open-circuit voltages are 21.53 V and 18.25 V, respectively, a relative error of 15.23%. Since the theoretical values are consistent with the experimental results, the theoretical model and the incremental harmonic balance method used in this paper are suitable for solving single-degree-of-freedom piecewise-linear piezoelectric energy harvesters and can be applied to further parameter-optimized design.

  4. Validation of drift and diffusion coefficients from experimental data

    NASA Astrophysics Data System (ADS)

    Riera, R.; Anteneodo, C.

    2010-04-01

    Many fluctuation phenomena, in physics and other fields, can be modeled by Fokker-Planck or stochastic differential equations whose coefficients, associated with drift and diffusion components, may be estimated directly from the observed time series. Their correct characterization is crucial for determining the system quantifiers. However, due to the finite sampling rates of real data, the empirical estimates may significantly differ from their true functional forms. In the literature, low-order corrections, or even no corrections, have been applied to the finite-time estimates. A frequent outcome consists of linear drift and quadratic diffusion coefficients. For this case, exact corrections have been recently found, from Itô-Taylor expansions. Nevertheless, model validation constitutes a necessary step before determining and applying the appropriate corrections. Here, we exploit the consequences of the exact theoretical results obtained for the linear-quadratic model. In particular, we discuss whether the observed finite-time estimates are actually a manifestation of that model. The relevance of this analysis is demonstrated by its application to two contrasting real data examples in which finite-time linear drift and quadratic diffusion coefficients are observed. In one case the linear-quadratic model is readily rejected while in the other, although the model constitutes a very good approximation, low-order corrections are inappropriate. These examples give warning signs about the proper interpretation of finite-time analysis even in more general diffusion processes.
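    The sketch below shows a finite-time, binwise estimate of drift and diffusion coefficients from a simulated time series, a Kramers-Moyal-type conditional-moment estimator, assuming a process with linear drift and quadratic diffusion. The exact finite-sampling corrections discussed above are not applied, and all parameters are illustrative.

```python
# Hedged sketch: finite-time drift/diffusion estimates from conditional moments of
# increments, binned in x. The simulated process and parameters are placeholders.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 100_000
x = np.empty(n); x[0] = 0.0
for i in range(n - 1):                        # Euler-Maruyama: linear drift, quadratic diffusion
    drift = -1.0 * x[i]
    diff2 = 0.5 + 0.2 * x[i] ** 2             # D2(x)
    x[i + 1] = x[i] + drift * dt + np.sqrt(2 * diff2 * dt) * rng.normal()

dx = np.diff(x)
bins = np.linspace(-2, 2, 21)
idx = np.digitize(x[:-1], bins)
for b in range(5, 16):                        # central bins only
    sel = idx == b
    if sel.sum() > 100:
        d1 = dx[sel].mean() / dt              # finite-time drift estimate
        d2 = (dx[sel] ** 2).mean() / (2 * dt) # finite-time diffusion estimate
        print(f"x~{bins[b - 1]:+.2f}: D1~{d1:+.3f}, D2~{d2:.3f}")
```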

  5. Cortical pitch response components show differential sensitivity to native and nonnative pitch contours

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Suresh, Chandan H.

    2015-01-01

    The aim of this study is to evaluate how nonspeech pitch contours of varying shape influence the latency and amplitude of cortical pitch-specific response (CPR) components differentially as a function of language experience. Stimuli included a time-varying, high-rising Mandarin Tone 2 (T2), a linear rising ramp (Linear), and a steady-state contour (Flat). Both the latency and magnitude of CPR components were differentially modulated by (i) the overall trajectory of pitch contours (time-varying vs. steady-state), (ii) their pitch acceleration rates (changing vs. constant), and (iii) their linguistic status (lexical vs. non-lexical). T2 elicited larger amplitude than Linear in both language groups, but the size of the effect was larger in Chinese than in English listeners. The magnitude of CPR components elicited by T2 was larger for Chinese than English listeners at the right temporal electrode site. Using the CPR, we provide evidence in support of experience-dependent modulation of dynamic pitch contours at an early stage of sensory processing. PMID:25306506

  6. Nonautonomous linear system of the terrestrial carbon cycle

    NASA Astrophysics Data System (ADS)

    Luo, Y.

    2012-12-01

    The carbon cycle has been studied by means of observations through various networks, field and laboratory experiments, and simulation models. Much less has been done on theoretical analysis to understand fundamental properties of the carbon cycle and thereby guide observational, experimental, and modeling research. This presentation explores the theoretical properties of the terrestrial carbon cycle and how those properties can be used to make observational, experimental, and modeling research more effective. Thousands of published data sets from litter decomposition and soil incubation studies almost all indicate that decay processes of litter and soil organic carbon can be well described by first-order differential equations with one or more pools. Carbon pool dynamics in plants and soil after disturbances (e.g., wildfire, clear-cutting of forests, and plowing of soil for cropping) and during natural recovery or ecosystem restoration also exhibit characteristics of first-order linear systems. Thus, numerous lines of empirical evidence indicate that the terrestrial carbon cycle can be adequately described as a nonautonomous linear system. The linearity reflects the nature of the carbon cycle: carbon, once fixed by photosynthesis, is linearly transferred among pools within an ecosystem. The linear carbon transfer, however, is modified by nonlinear functions of external forcing variables. In addition, photosynthetic carbon influx is also nonlinearly influenced by external variables. This nonautonomous linear system can be mathematically expressed by a first-order linear ordinary matrix equation. We have recently used this theoretical property of the terrestrial carbon cycle to develop a semi-analytic solution of spinup. The new method has been applied to five global land models, including NCAR's CLM and the CABLE model, and can computationally accelerate spinup by two orders of magnitude. We also use this theoretical property to develop an analytic framework to decompose the modeled carbon cycle into a few traceable components so as to facilitate model intercomparison, benchmark analysis, and data assimilation in global land models.
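    A minimal sketch of the nonautonomous linear matrix form described above, dX/dt = B·u(t) + ξ(t)·A·C·X, with a semi-analytic steady state obtained by a single linear solve under constant forcing. The pool structure, transfer coefficients and turnover rates below are hypothetical, not taken from CLM or CABLE.

```python
# Sketch of a first-order linear carbon-cycle system dX/dt = B*u + xi*A*C*X and its
# semi-analytic steady state X_ss = -(xi*A*C)^{-1} B*u. All numbers are illustrative.
import numpy as np

A = np.array([[-1.0,  0.0,  0.0],     # transfer matrix among 3 pools
              [ 0.6, -1.0,  0.0],     # fraction of leaf turnover entering litter
              [ 0.0,  0.4, -1.0]])    # fraction of litter turnover entering soil
C = np.diag([1 / 1.0, 1 / 5.0, 1 / 50.0])   # turnover rates (1/yr): leaf, litter, soil
B = np.array([1.0, 0.0, 0.0])               # allocation of photosynthetic input
u = 2.0                                     # carbon influx (kgC m^-2 yr^-1)
xi = 0.8                                    # environmental scalar (temperature/moisture)

X_ss = -np.linalg.solve(xi * A @ C, B * u)  # steady-state pool sizes (kgC m^-2)
print(X_ss)
```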

  7. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes

    PubMed Central

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-01

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes. PMID:26751451
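    As a hedged illustration of the calibration step, the sketch below recovers a 6×6 calibration matrix for a six-component force/moment sensor by least squares from known applied loads and raw channel outputs, then reports a simple coupling-error figure. All numbers are synthetic placeholders, not the EE design or calibration data of the paper.

```python
# Illustrative least-squares calibration of a six-component force/moment sensor.
# Synthetic "true" coupling and noise stand in for real calibration measurements.
import numpy as np

rng = np.random.default_rng(2)
true_C = np.eye(6) + 0.02 * rng.normal(size=(6, 6))        # small cross-coupling
loads = rng.uniform(-100, 100, size=(200, 6))              # applied [Fx, Fy, Fz, Mx, My, Mz]
signals = loads @ np.linalg.inv(true_C).T + rng.normal(0, 0.05, (200, 6))  # raw channels

# Fit calibration matrix so that loads ~ signals @ C_hat.T
M, *_ = np.linalg.lstsq(signals, loads, rcond=None)
C_hat = M.T
coupling = np.abs(C_hat - np.diag(np.diag(C_hat))).max() / np.abs(np.diag(C_hat)).max()
print(f"max relative coupling error ~ {coupling:.3f}")
```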

  8. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes.

    PubMed

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-07

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes.

  9. Separation mechanism of nortriptyline and amytriptyline in RPLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gritti, Fabrice; Guiochon, Georges A

    2005-08-01

    The single and the competitive equilibrium isotherms of nortriptyline and amytriptyline were acquired by frontal analysis (FA) on the C18-bonded Discovery column, using a 28/72 (v/v) mixture of acetonitrile and water buffered with phosphate (20 mM, pH 2.70). The adsorption energy distributions (AED) of each compound were calculated from the raw adsorption data. Both the fitting of the adsorption data using multi-linear regression analysis and the AEDs are consistent with a trimodal isotherm model. The single-component isotherm data fit well to the tri-Langmuir isotherm model. The extension to a competitive two-component tri-Langmuir isotherm model based on the best parameters of the single-component isotherms does not account well for the breakthrough curves or for the overloaded band profiles measured for mixtures of nortriptyline and amytriptyline. However, it was possible to derive adjusted parameters of a competitive tri-Langmuir model based on the fitting of the adsorption data obtained for these mixtures. A very good agreement was then found between the calculated and the experimental overloaded band profiles of all the mixtures injected.
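    A minimal sketch of fitting single-component adsorption data to a tri-Langmuir isotherm, q(c) = Σ_i qs_i·b_i·c/(1 + b_i·c), as used in the analysis above. The data points, noise level and initial guesses are synthetic placeholders rather than the frontal-analysis measurements.

```python
# Hedged sketch: nonlinear least-squares fit of a tri-Langmuir isotherm to
# synthetic single-component adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def tri_langmuir(c, qs1, b1, qs2, b2, qs3, b3):
    return (qs1 * b1 * c / (1 + b1 * c)
            + qs2 * b2 * c / (1 + b2 * c)
            + qs3 * b3 * c / (1 + b3 * c))

c = np.linspace(0.01, 20, 40)                            # mobile-phase concentration (g/L)
q_true = tri_langmuir(c, 60, 0.02, 20, 0.3, 5, 3.0)      # illustrative "true" parameters
q_obs = q_true + np.random.default_rng(3).normal(0, 0.2, c.size)

p0 = [50, 0.01, 15, 0.1, 3, 1.0]                         # rough initial guesses
popt, _ = curve_fit(tri_langmuir, c, q_obs, p0=p0, maxfev=20000)
print(popt)                                              # fitted (qs_i, b_i) parameters
```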

  10. Characterizing fluorescent dissolved organic matter in a membrane bioreactor via excitation-emission matrix combined with parallel factor analysis.

    PubMed

    Maqbool, Tahir; Quang, Viet Ly; Cho, Jinwoo; Hur, Jin

    2016-06-01

    In this study, we successfully tracked the dynamic changes in different constituents of bound extracellular polymeric substances (bEPS), soluble microbial products (SMP), and permeate during the operation of bench-scale membrane bioreactors (MBRs) via fluorescence excitation-emission matrix (EEM) spectroscopy combined with parallel factor analysis (PARAFAC). Three fluorescent groups were identified, including two protein-like components (tryptophan-like C1 and tyrosine-like C2) and one microbial humic-like component (C3). In bEPS, protein-like components were consistently more dominant than C3 during the MBR operation, while their relative abundance in SMP depended on aeration intensities. C1 of bEPS exhibited a linear correlation (R(2)=0.738; p<0.01) with bEPS amounts in sludge, and C2 was closely related to the stability of sludge. The protein-like components were more greatly responsible for membrane fouling. Our study suggests that EEM-PARAFAC can be a promising monitoring tool to provide further insight into process evaluation and membrane fouling during MBR operation. Copyright © 2016 Elsevier Ltd. All rights reserved.
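    The sketch below outlines a three-component non-negative PARAFAC decomposition of an EEM-like data cube (samples × excitation × emission), assuming the third-party tensorly package and its non_negative_parafac routine are available. The synthetic tensor stands in for real EEM measurements; the component spectra are placeholders.

```python
# Hedged sketch: non-negative PARAFAC of a synthetic samples x excitation x emission
# tensor into three components, analogous to EEM-PARAFAC modeling.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(4)
scores = rng.uniform(0, 1, (30, 3))             # sample loadings for C1, C2, C3
excite = rng.uniform(0, 1, (25, 3))             # excitation spectra
emit = rng.uniform(0, 1, (40, 3))               # emission spectra
eem = np.einsum('ik,jk,lk->ijl', scores, excite, emit)   # rank-3 synthetic EEM cube

weights, factors = non_negative_parafac(tl.tensor(eem), rank=3, n_iter_max=200)
print([f.shape for f in factors])               # [(30, 3), (25, 3), (40, 3)]
```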

  11. Health related quality of life among Iraqi immigrants settled in Malaysia.

    PubMed

    Daher, Aqil M; Ibrahim, Hisham S; Daher, Thaaer M; Anbori, Ali K

    2011-05-30

    Migrants everywhere face several demands for health, and maintaining good health and quality of life can be challenging. Iraqis are the second largest refugee group that has sought refuge in recent years, yet little is known about their health related quality of life (HRQOL). The study aims at assessing the HRQOL among Iraqis living in Malaysia. A self-administered Arabic version of the Sf-36 questionnaire was distributed among 300 Iraqi migrants in Malaysia. The questionnaire taps eight concepts of physical and mental health to assess the HRQOL. Univariate analysis was performed for group analysis (t test, ANOVA) and Multiple Linear Regression was used to control for confounding effects. Two hundred and fifty three participants ranging in age from 18 to 67 years (Mean = 33.6) returned the completed questionnaire. The majority were male (60.1%) and more than half of the respondents (59.5%) were married. Less than half (45.4%) and about a quarter (25.9%) reported a bachelor degree and secondary school education, respectively, and the remaining 28.7% had either a master or a PhD degree. Univariate analysis showed that the HRQOL scores among male immigrants were higher than those of females in the physical function (80.0 vs. 73.5), general health (72.5 vs. 60.7) and bodily pain (87.9 vs. 72.5) subscales. The youngest age group had significantly higher physical function (79.32) and lower mental health scores (57.62). The mean score of the physical component summary was higher than the mental component summary mean score (70.22 vs. 63.34). Stepwise multiple linear regression revealed that gender was significantly associated with the physical component summary (β = - 6.06, p = 0.007) and marital status was associated with the mental component summary (β = 7.08, p = 0.003). From the data it appears that Iraqi immigrants living in Malaysia have HRQOL scores that might be considered to indicate a relatively moderate HRQOL. The HRQOL is significantly affected by gender and marital status. Further studies are needed to explore determinants of HRQOL consequent to immigration. The findings could be worthy of further exploration.

  12. Health related quality of life among Iraqi immigrants settled in Malaysia

    PubMed Central

    2011-01-01

    Background Migrants everywhere face several demands for health, and maintaining good health and quality of life can be challenging. Iraqis are the second largest refugee group that has sought refuge in recent years, yet little is known about their health related quality of life (HRQOL). The study aims at assessing the HRQOL among Iraqis living in Malaysia. Methods A self-administered Arabic version of the Sf-36 questionnaire was distributed among 300 Iraqi migrants in Malaysia. The questionnaire taps eight concepts of physical and mental health to assess the HRQOL. Univariate analysis was performed for group analysis (t test, ANOVA) and Multiple Linear Regression was used to control for confounding effects. Results Two hundred and fifty three participants ranging in age from 18 to 67 years (Mean = 33.6) returned the completed questionnaire. The majority were male (60.1%) and more than half of the respondents (59.5%) were married. Less than half (45.4%) and about a quarter (25.9%) reported a bachelor degree and secondary school education, respectively, and the remaining 28.7% had either a master or a PhD degree. Univariate analysis showed that the HRQOL scores among male immigrants were higher than those of females in the physical function (80.0 vs. 73.5), general health (72.5 vs. 60.7) and bodily pain (87.9 vs. 72.5) subscales. The youngest age group had significantly higher physical function (79.32) and lower mental health scores (57.62). The mean score of the physical component summary was higher than the mental component summary mean score (70.22 vs. 63.34). Stepwise multiple linear regression revealed that gender was significantly associated with the physical component summary (β = - 6.06, p = 0.007) and marital status was associated with the mental component summary (β = 7.08, p = 0.003). Conclusions From the data it appears that Iraqi immigrants living in Malaysia have HRQOL scores that might be considered to indicate a relatively moderate HRQOL. The HRQOL is significantly affected by gender and marital status. Further studies are needed to explore determinants of HRQOL consequent to immigration. The findings could be worthy of further exploration. PMID:21624118

  13. Loneliness Literacy Scale: Development and Evaluation of an Early Indicator for Loneliness Prevention.

    PubMed

    Honigh-de Vlaming, Rianne; Haveman-Nies, Annemien; Bos-Oude Groeniger, Inge; Hooft van Huysduynen, Eveline J C; de Groot, Lisette C P G M; Van't Veer, Pieter

    2014-01-01

    To develop and evaluate the Loneliness Literacy Scale for the assessment of short-term outcomes of a loneliness prevention programme among Dutch elderly persons. Scale development was based on evidence from the literature and on the experiences of local stakeholders and representatives of the target group. The scale was pre-tested among 303 elderly persons aged 65 years and over. Principal component analysis and internal consistency analysis were used to affirm the scale structure, reduce the number of items and assess the reliability of the constructs. Linear regression analysis was conducted to evaluate the association between the literacy constructs and loneliness. The four constructs "motivation", "self-efficacy", "perceived social support" and "subjective norm" derived from principal component analysis captured 56% of the original variance. Cronbach's coefficient α was above 0.7 for each construct. The constructs "self-efficacy" and "perceived social support" were positively, and "subjective norm" negatively, associated with loneliness. To our knowledge this is the first study to develop a short-term indicator for loneliness prevention. The indicator addresses the need to evaluate public health interventions closer to the intervention activities.
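    A minimal sketch of the two analysis steps named above: principal component analysis of the item responses and Cronbach's alpha for internal consistency. The item data are random placeholders, and the helper function cronbach_alpha is an illustrative implementation, not part of any specific library.

```python
# Hedged sketch: PCA of questionnaire items plus Cronbach's alpha on synthetic data.
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items array."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(303, 1))                    # one underlying construct
items = latent + 0.6 * rng.normal(size=(303, 6))      # 6 items loading on it

pca = PCA(n_components=2).fit(items)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```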

  14. Diffuse Optical Tomography for Brain Imaging: Continuous Wave Instrumentation and Linear Analysis Methods

    NASA Astrophysics Data System (ADS)

    Giacometti, Paolo; Diamond, Solomon G.

    Diffuse optical tomography (DOT) is a functional brain imaging technique that measures cerebral blood oxygenation and blood volume changes. This technique is particularly useful in human neuroimaging measurements because of the coupling between neural and hemodynamic activity in the brain. DOT is a multichannel imaging extension of near-infrared spectroscopy (NIRS). NIRS uses laser sources and light detectors on the scalp to obtain noninvasive hemodynamic measurements from spectroscopic analysis of the remitted light. This review explains how NIRS data analysis is performed using a combination of the modified Beer-Lambert law (MBLL) and the diffusion approximation to the radiative transport equation (RTE). Laser diodes, photodiode detectors, and optical terminals that contact the scalp are the main components in most NIRS systems. Placing multiple sources and detectors over the surface of the scalp allows for tomographic reconstructions that extend the individual measurements of NIRS into DOT. Mathematically arranging the DOT measurements into a linear system of equations that can be inverted provides a way to obtain tomographic reconstructions of hemodynamics in the brain.
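    As a hedged illustration of the MBLL step described above, the sketch below converts changes in optical density at two wavelengths into changes in oxy- and deoxy-hemoglobin concentration by solving a 2×2 linear system. The extinction coefficients, source-detector distance and differential pathlength factor are illustrative values, not calibrated constants.

```python
# Hedged sketch of the modified Beer-Lambert law:
# delta_OD(lambda) = (eps_HbO * dHbO + eps_HbR * dHbR) * d * DPF
# solved for [dHbO, dHbR] at two wavelengths. All constants are illustrative.
import numpy as np

# rows: wavelengths (e.g. ~760 nm, ~850 nm); columns: [HbO, HbR] extinction (1/(mM*cm))
eps = np.array([[1.4, 3.8],
                [2.5, 1.8]])
d = 3.0        # source-detector separation (cm)
dpf = 6.0      # differential pathlength factor (dimensionless)

delta_od = np.array([0.012, 0.018])                    # measured delta-OD per wavelength
delta_hb = np.linalg.solve(eps * d * dpf, delta_od)    # [dHbO, dHbR] in mM
print(f"dHbO = {delta_hb[0] * 1e3:.3f} uM, dHbR = {delta_hb[1] * 1e3:.3f} uM")
```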

  15. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model.

    PubMed

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-08-16

    A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test of the statistical analysis was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index has been extended and applied to a multiphase macro-mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to describe the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is very difficult to recognize.
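    The sketch below illustrates one way the "is the illumination uniform?" question can be posed with a linear model and an F-test, in the spirit of the approach above: regress pixel luminance on image coordinates and test the overall model, where a significant F suggests a directional light field. The synthetic gradient image is a placeholder for the viewing-window frames of the experiment.

```python
# Hedged sketch: F-test of a linear model of pixel luminance vs. image coordinates
# as a uniformity check. The tilted synthetic image stands in for real frames.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
h, w = 60, 80
yy, xx = np.mgrid[0:h, 0:w]
lum = 120 + 0.3 * xx + 0.1 * yy + rng.normal(0, 2, (h, w))   # tilted illumination

X = sm.add_constant(np.column_stack([xx.ravel(), yy.ravel()]))
fit = sm.OLS(lum.ravel(), X).fit()
print(f"F = {fit.fvalue:.1f}, p = {fit.f_pvalue:.3g}")        # uniform light -> small F
```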

  16. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model

    PubMed Central

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-01-01

    A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test of the statistical analysis was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index has been extended and applied to a multiphase macro-mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to describe the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is very difficult to recognize. PMID:27527065

  17. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects.

    PubMed

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Object categories in images containing visual objects can be successfully recognized from single-trial electroencephalography (EEG) measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects using combined features from multiple ERP components, and showed that the combination of ERP components improved four-category classification accuracy by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and could direct us to select effective EEG features for classifying visual objects.
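    A minimal sketch of the classification step described above: Fisher linear discriminant analysis applied to ERP-component features for four object categories, with cross-validated accuracy. The features and labels are random placeholders standing in for single-trial component amplitudes.

```python
# Hedged sketch: four-class Fisher LDA on synthetic "ERP component" features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_features = 400, 32                  # e.g. channels x ERP-component amplitudes
labels = rng.integers(0, 4, n_trials)           # faces / buildings / cats / cars
X = rng.normal(size=(n_trials, n_features))
X[np.arange(n_trials), labels] += 1.5           # inject a weak class-dependent signal

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, labels, cv=5).mean()
print(f"4-class cross-validated accuracy ~ {acc:.2f}")
```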

  18. Quality evaluation of Houttuynia cordata Thunb. by high performance liquid chromatography with photodiode-array detection (HPLC-DAD).

    PubMed

    Yang, Zhan-nan; Sun, Yi-ming; Luo, Shi-qiong; Chen, Jin-wu; Yu, Zheng-wen; Sun, Min

    2014-03-01

    A new, validated method, developed for the simultaneous determination of 16 phenolics (chlorogenic acid, scopoletin, vitexin, rutin, afzelin, isoquercitrin, narirutin, kaempferitrin, quercitrin, quercetin, kaempferol, chrysosplenol D, vitexicarpin, 5-hydroxy-3,3',4',7-tetramethoxy flavonoids, 5-hydroxy-3,4',6,7-tetramethoxy flavonoids and kaempferol-3,7,4'-trimethyl ether) in Houttuynia cordata Thunb., was successfully applied to 35 batches of samples collected from different regions or at different times, and their total antioxidant activities (TAAs) were investigated. The aim was to develop a quality control method to simultaneously determine the major active components in H. cordata. The HPLC-DAD method was performed using a reverse-phase C18 column with a gradient elution system (acetonitrile-methanol-water) and simultaneous detection at 345 nm. Linear behavior of the method was observed for all analytes, with linear regression relationships (r(2)>0.999) over the concentration ranges investigated. The recoveries of the 16 phenolics ranged from 98.93% to 101.26%. The samples analyzed were differentiated and classified based on the contents of the 16 characteristic compounds and the TAA using hierarchical clustering analysis (HCA) and principal component analysis (PCA). Samples with similar chemical profiles and TAAs fell into the same group. There was some evidence that the active compounds, although they varied significantly, may possess uniform antioxidant activities and have potentially synergistic effects.

  19. Serum Folate Shows an Inverse Association with Blood Pressure in a Cohort of Chinese Women of Childbearing Age: A Cross-Sectional Study

    PubMed Central

    Shen, Minxue; Tan, Hongzhuan; Zhou, Shujin; Retnakaran, Ravi; Smith, Graeme N.; Davidge, Sandra T.; Trasler, Jacquetta; Walker, Mark C.; Wen, Shi Wu

    2016-01-01

    Background It has been reported that higher folate intake from food and supplementation is associated with decreased blood pressure (BP). The association between serum folate concentration and BP has been examined in few studies. We aim to examine the association between serum folate and BP levels in a cohort of young Chinese women. Methods We used the baseline data from a pre-conception cohort of women of childbearing age in Liuyang, China, for this study. Demographic data were collected by structured interview. Serum folate concentration was measured by immunoassay, and homocysteine, blood glucose, triglyceride and total cholesterol were measured through standardized clinical procedures. Multiple linear regression and principal component regression model were applied in the analysis. Results A total of 1,532 healthy normotensive non-pregnant women were included in the final analysis. The mean concentration of serum folate was 7.5 ± 5.4 nmol/L and 55% of the women presented with folate deficiency (< 6.8 nmol/L). Multiple linear regression and principal component regression showed that serum folate levels were inversely associated with systolic and diastolic BP, after adjusting for demographic, anthropometric, and biochemical factors. Conclusions Serum folate is inversely associated with BP in non-pregnant women of childbearing age with high prevalence of folate deficiency. PMID:27182603
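    The sketch below outlines a principal component regression of the kind used above: correlated predictors are reduced to a few components and blood pressure is regressed on the component scores. The predictor set, coefficients and sample are synthetic placeholders, not the cohort data.

```python
# Hedged sketch: principal component regression (standardize -> PCA -> OLS) of
# systolic blood pressure on a few correlated predictors. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n = 1532
folate = rng.gamma(2.0, 3.75, n)                    # serum folate (nmol/L), placeholder
bmi = rng.normal(22, 3, n)
age = rng.uniform(18, 45, n)
glucose = rng.normal(5, 0.6, n)
X = np.column_stack([folate, bmi, age, glucose])
sbp = 115 - 0.4 * folate + 0.8 * bmi + 0.1 * age + rng.normal(0, 8, n)

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, sbp)
print("R^2 =", round(pcr.score(X, sbp), 3))
```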

  20. Effects of pumice mining on soil quality

    NASA Astrophysics Data System (ADS)

    Cruz-Ruíz, A.; Cruz-Ruíz, E.; Vaca, R.; Del Aguila, P.; Lugo, J.

    2016-01-01

    Mexico is the world's fourth most important maize producer; hence, there is a need to maintain soil quality for sustainable production in the upcoming years. Pumice mining is a superficial operation that modifies large areas in central Mexico. The main aim was to assess the present state of agricultural soils differing in elapsed time since pumice mining (0-15 years) in a representative area of the Calimaya region in the State of Mexico. The study sites in 0, 1, 4, 10, and 15 year old reclaimed soils were compared with an adjacent undisturbed site. Our results indicate that gravimetric moisture content, water holding capacity, bulk density, available phosphorus, total nitrogen, soil organic carbon, microbial biomass carbon, and phosphatase and urease activity were greatly impacted by disturbance. A general trend of recovery towards the undisturbed condition with reclamation age was found after disturbance, the recovery of soil total N being faster than that of soil organic C. The soil quality indicators were selected using principal component analysis (PCA), correlations and multiple linear regressions. The first three components together explain 76.4% of the total variability. The results revealed that the most appropriate indicators for diagnosing the quality of the soils were urease, available phosphorus and bulk density, and to a lesser extent total nitrogen. According to the linear score analysis and the additive index, the soils showed recovery starting 4 years after pumice extraction.
