Sample records for component analysis reduced

  1. NASGRO (registered trademark): Fracture Mechanics and Fatigue Crack Growth Analysis Software

    NASA Technical Reports Server (NTRS)

    Forman, Royce; Shivakumar, V.; Mettu, Sambi; Beek, Joachim; Williams, Leonard; Yeh, Feng; McClung, Craig; Cardinal, Joe

    2004-01-01

    This viewgraph presentation describes NASGRO, which is a fracture mechanics and fatigue crack growth analysis software package that is used to reduce risk of fracture in Space Shuttles. The contents include: 1) Consequences of Fracture; 2) NASA Fracture Control Requirements; 3) NASGRO Reduces Risk; 4) NASGRO Use Inside NASA; 5) NASGRO Components: Crack Growth Module; 6) NASGRO Components: Material Property Module; 7) Typical NASGRO analysis: Crack growth or component life calculation; and 8) NASGRO Sample Application: Orbiter feedline flowliner crack analysis.

  2. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of the genetic correlations, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood is described. It is shown that reduced rank estimation can substantially reduce the computational requirements of multivariate analyses. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
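
    A quick arithmetic check of the parameter counts quoted above; this is an illustrative sketch (the function names and the choice of m = 3 for the eight-trait example are not from the paper):

    ```python
    def full_rank_params(k: int) -> int:
        # Free parameters in an unstructured k x k (co)variance matrix: k(k + 1)/2
        return k * (k + 1) // 2

    def reduced_rank_params(k: int, m: int) -> int:
        # Parameters when only the m leading genetic principal components are fitted:
        # m(2k - m + 1)/2
        return m * (2 * k - m + 1) // 2

    if __name__ == "__main__":
        # For k = 8 traits, assuming m = 3 components were retained:
        print(full_rank_params(8))        # 36
        print(reduced_rank_params(8, 3))  # 21
    ```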

  3. Novel Framework for Reduced Order Modeling of Aero-engine Components

    NASA Astrophysics Data System (ADS)

    Safi, Ali

    The present study focuses on the popular dynamic reduction methods used in the design of complex assemblies (millions of degrees of freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls-Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining the accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such cases, sub-structuring and dynamic reduction techniques prove to be efficient tools for reducing design cycle time. Components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework for the modeling and meshing of any complex structure, in this case an aero-engine casing. The study highlights the effect of meshing techniques on run time. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model against which the results of the reduced model are compared. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of the Craig-Bampton (C.B.) method and the limitations of static condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model; once the components are reduced, however, the assembly run is significantly faster. Hence the decision to use Component Mode Synthesis (CMS) should be taken judiciously, considering the number of iterations that may be required during the design cycle.
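
    For readers unfamiliar with the Craig-Bampton reduction named above, the following is a minimal NumPy/SciPy sketch of the transformation (boundary DOFs retained exactly, interior DOFs represented by constraint modes plus a few fixed-interface modes); the function and variable names are illustrative, not taken from the thesis:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def craig_bampton(K, M, boundary, n_modes):
        """Reduce (K, M) to boundary DOFs plus n_modes fixed-interface modes."""
        n = K.shape[0]
        boundary = np.asarray(boundary)
        interior = np.setdiff1d(np.arange(n), boundary)
        Kii = K[np.ix_(interior, interior)]
        Kib = K[np.ix_(interior, boundary)]
        Mii = M[np.ix_(interior, interior)]
        # Constraint (static) modes: interior response to unit boundary displacements
        Psi = -np.linalg.solve(Kii, Kib)
        # Fixed-interface normal modes of the interior partition (lowest n_modes kept)
        _, Phi = eigh(Kii, Mii)
        Phi = Phi[:, :n_modes]
        # Assemble the Craig-Bampton transformation matrix T
        nb = len(boundary)
        T = np.zeros((n, nb + n_modes))
        T[boundary, :nb] = np.eye(nb)
        T[np.ix_(interior, np.arange(nb))] = Psi
        T[np.ix_(interior, np.arange(nb, nb + n_modes))] = Phi
        # Reduced superelement matrices used in the assembly-level analysis
        return T.T @ K @ T, T.T @ M @ T
    ```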

  4. Application of Principal Component Analysis (PCA) to Reduce Multicollinearity Exchange Rate Currency of Some Countries in Asia Period 2004-2014

    ERIC Educational Resources Information Center

    Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad

    2017-01-01

    This study aims to apply the principal component analysis model to reduce multicollinearity among the currency exchange rates of eight countries in Asia against the US Dollar, including the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia), and Dollar (Singapore). It looks at yield…
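
    The basic idea of using PCA to sidestep multicollinearity (principal component regression) can be sketched as follows; this is a generic illustration on toy data with made-up variable names, not the authors' code or dataset:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    # X: eight highly correlated predictor series (toy data), y: a response of interest
    rng = np.random.default_rng(0)
    base = rng.normal(size=(200, 1))
    X = base + 0.05 * rng.normal(size=(200, 8))
    y = base[:, 0] + 0.1 * rng.normal(size=200)

    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=2)                 # uncorrelated components replace the raw series
    scores = pca.fit_transform(Xs)
    model = LinearRegression().fit(scores, y) # regression on components avoids collinearity
    print(pca.explained_variance_ratio_, model.coef_)
    ```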

  5. Inventory of File nam.t00z.awp21100.tm00.grib2

    Science.gov Websites

    [Search snippet from a GRIB2 inventory table; legible fields include Pressure Reduced to MSL [Pa], surface Wind Speed (Gust) [m/s], and HGT (geopotential height) and VGRD (V-Component of Wind [m/s]) analysis records at 100, 150, and 200 mb.]

  6. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  7. Finite element analysis of helicopter structures

    NASA Technical Reports Server (NTRS)

    Rich, M. J.

    1978-01-01

    Application of the finite element analysis is now being expanded to three dimensional analysis of mechanical components. Examples are presented for airframe, mechanical components, and composite structure calculations. Data are detailed on the increase of model size, computer usage, and the effect on reducing stress analysis costs. Future applications for use of finite element analysis for helicopter structures are projected.

  8. Wind Turbine Control Design to Reduce Capital Costs: 7 January 2009 - 31 August 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darrow, P. J.

    2010-01-01

    This report first discusses and identifies which wind turbine components can benefit from advanced control algorithms and also presents results from a preliminary loads case analysis using a baseline controller. Next, it describes the design, implementation, and simulation-based testing of an advanced controller to reduce loads on those components. The case-by-case loads analysis and advanced controller design will help guide future control research.

  9. Principal Component Relaxation Mode Analysis of an All-Atom Molecular Dynamics Simulation of Human Lysozyme

    NASA Astrophysics Data System (ADS)

    Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi

    2013-02-01

    A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.

  10. Model reduction by weighted Component Cost Analysis

    NASA Technical Reports Server (NTRS)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.

  11. A reduction in ag/residential signature conflict using principal components analysis of LANDSAT temporal data

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Borden, F. Y.

    1977-01-01

    Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.

  12. Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.

    PubMed

    Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia

    2016-08-01

    Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based, environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper and pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Instruments showed satisfactory Cronbach alpha values for all components. HRP components were reduced from five to four, which explained 70% of variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, environmental tobacco smoke (ETS) items did not form a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of variance, with similar patterns of items as in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. Findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc.

  13. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach that combines wavelet decomposition with principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial features, and it resembles factor analysis in the sense that it extracts the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial data in the space and time (frequency) domains. The experimental results indicate that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
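
    A minimal sketch of the wavelet-then-PCA pipeline described above, assuming the PyWavelets (pywt) and scikit-learn packages; the wavelet family, sub-band choice, and number of components are illustrative choices, not the authors':

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wavelet_pca_features(images, n_components=20, wavelet="haar"):
        """images: iterable of equally sized 2-D grayscale face arrays."""
        feats = []
        for img in images:
            # Keep only the low-frequency approximation sub-band to shrink the data
            cA, (cH, cV, cD) = pywt.dwt2(np.asarray(img, dtype=float), wavelet)
            feats.append(cA.ravel())
        feats = np.vstack(feats)
        pca = PCA(n_components=n_components)
        return pca.fit_transform(feats), pca   # reduced "eigenface"-style features
    ```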

  14. Reduction hybrid artifacts of EMG-EOG in electroencephalography evoked by prefrontal transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Wan, Xiaohong; Zeng, Ke; Ni, Yinmei; Qiu, Lirong; Li, Xiaoli

    2016-12-01

    Objective. When prefrontal transcranial magnetic stimulation (p-TMS) is performed, it may evoke hybrid artifacts mixing muscle activity and blink activity in EEG recordings. Reducing this kind of hybrid artifact challenges traditional preprocessing methods. We aim to explore a method for removing p-TMS-evoked hybrid artifacts. Approach. We propose a novel method, used as a post-processing step after independent component analysis (ICA), to reduce the p-TMS-evoked hybrid artifact. Ensemble empirical mode decomposition (EEMD) was used to decompose the signal into multiple components, and the artifact components were then separated and reduced by a blind source separation (BSS) method. Three standard BSS methods, ICA, independent vector analysis, and canonical correlation analysis (CCA), were tested. Main results. Synthetic results showed that EEMD-CCA outperformed the others as an ICA post-processing step in hybrid artifact reduction. Its superiority was clearer when the signal-to-noise ratio (SNR) was lower. In application to a real experiment, the SNR could be significantly increased and the p-TMS evoked potential could be recovered from the hybrid-artifact-contaminated signal. Our proposed method can effectively reduce p-TMS-evoked hybrid artifacts. Significance. Our proposed method may facilitate future prefrontal TMS-EEG research.

  15. Combined Acquisition/Processing For Data Reduction

    NASA Astrophysics Data System (ADS)

    Kruger, Robert A.

    1982-01-01

    Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval and processing. The acquisition component requires the greatest data handling rates. By coupling the acquisition with some online hardwired processing, data rates and capacities for short-term storage can be reduced. Furthermore, long-term storage requirements can be reduced further by appropriate processing and editing of image data contained in short-term memory. The net result could be reduced performance requirements for mass storage, processing and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.

  16. Use of Geochemistry Data Collected by the Mars Exploration Rover Spirit in Gusev Crater to Teach Geomorphic Zonation through Principal Components Analysis

    ERIC Educational Resources Information Center

    Rodrigue, Christine M.

    2011-01-01

    This paper presents a laboratory exercise used to teach principal components analysis (PCA) as a means of surface zonation. The lab was built around abundance data for 16 oxides and elements collected by the Mars Exploration Rover Spirit in Gusev Crater between Sol 14 and Sol 470. Students used PCA to reduce 15 of these into 3 components, which,…

  17. Augmented wedge-shaped glenoid component for the correction of glenoid retroversion: a finite element analysis.

    PubMed

    Hermida, Juan C; Flores-Hernandez, Cesar; Hoenecke, Heinz R; D'Lima, Darryl D

    2014-03-01

    This study undertook a computational analysis of a wedged glenoid component for correction of retroverted glenoid arthritic deformity to determine whether a wedge-shaped glenoid component design with a built-in correction for version reduces excessive stresses in the implant, cement, and glenoid bone. Recommendations for correcting retroversion deformity are asymmetric reaming of the anterior glenoid, bone grafting of the posterior glenoid, or a glenoid component with posterior augmentation. Eccentric reaming has the disadvantages of removing normal bone, reducing structural support for the glenoid component, and increasing the risk of bone perforation by the fixation pegs. Bone grafting to correct retroverted deformity does not consistently generate successful results. Finite element models of 2 scapulae, representing a normal and an arthritic retroverted glenoid, were implanted with a standard glenoid component (in retroversion or neutral alignment) or a wedged component. Glenohumeral forces representing in vivo loading were applied, and stresses and strains were computed in the bone, cement, and glenoid component. The retroverted glenoid components generated the highest compressive stresses and decreased cyclic fatigue life predictions for trabecular bone. Correction of retroversion by the wedged glenoid component significantly decreased stresses and predicted greater bone fatigue life. The cement volume estimated to survive 10 million cycles was the lowest for the retroverted components and the highest for neutrally implanted glenoid components and for wedged components. A wedged glenoid implant is a viable option to correct severe arthritic retroversion, reducing the need for eccentric reaming and the risk of implant failure. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  18. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

    Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture of a chosen end-member model distribution to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian and skewed Gaussian distributions have been used as the end-member model distribution, with fitting performed on the gradient curve of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though smoothing or filtering can be applied to reduce the noise before differentiation, their effect in biasing the component analysis has only been vaguely addressed. In this study, we investigated a new model function that can be applied directly to remanent magnetization curves and therefore avoids the differentiation. The new model function provides more flexible shapes than the lognormal distribution, which is a merit when modeling the coercivity distribution of complex magnetic components. We applied the unmixing method both to model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components is over two. It is therefore recommended to verify the reliability of component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. Given the same number of components, the new model distribution can provide closer fits than the lognormal distribution, evidenced by reduced residuals. Moreover, the new unmixing protocol is automated so that users are freed from the labor of providing initial guesses for the parameters, which also helps reduce the subjectivity of the component analysis.
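
    The abstract does not specify the new model function, so the sketch below only illustrates the general unmixing idea: fitting a mixture of lognormal-shaped end members (Gaussians on a log-field axis) to a measured coercivity gradient curve with SciPy. All names, the two-component choice, and the end-member form are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lognormal_component(logB, amp, mu, sigma):
        # One end-member coercivity distribution on a log10(field) axis
        return amp * np.exp(-0.5 * ((logB - mu) / sigma) ** 2)

    def two_component_model(logB, a1, m1, s1, a2, m2, s2):
        return (lognormal_component(logB, a1, m1, s1)
                + lognormal_component(logB, a2, m2, s2))

    def unmix(logB, gradient_curve, p0):
        """Fit two end members to a measured remanence gradient curve."""
        params, cov = curve_fit(two_component_model, logB, gradient_curve, p0=p0)
        return params, cov
    ```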

  19. Lexical Sophistication as a Multidimensional Phenomenon: Relations to Second Language Lexical Proficiency, Development, and Writing Quality

    ERIC Educational Resources Information Center

    Kim, Minkyung; Crossley, Scott A.; Kyle, Kristopher

    2018-01-01

    This study conceptualizes lexical sophistication as a multidimensional phenomenon by reducing numerous lexical features of lexical sophistication into 12 aggregated components (i.e., dimensions) via a principal component analysis approach. These components were then used to predict second language (L2) writing proficiency levels, holistic lexical…

  20. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    NASA Astrophysics Data System (ADS)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah; Riveli, Nowo

    2017-03-01

    The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit. In this case, the failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component were constructed, giving a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced, which was done by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After customization, the failure probability was reduced to 4.22 × 10⁻⁶.
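
    As a generic illustration of how a fault tree combines basic-event probabilities (not the specific tree or numbers used in the paper), independent events propagate through AND/OR gates as follows:

    ```python
    from functools import reduce

    def and_gate(probs):
        # All inputs must fail: product of probabilities (independent events assumed)
        return reduce(lambda a, b: a * b, probs, 1.0)

    def or_gate(probs):
        # At least one input fails: complement of the product of complements
        return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

    # Hypothetical example: a top event fed by an OR of two minimal cut sets
    top = or_gate([and_gate([1e-3, 2e-2]), and_gate([5e-4, 3e-3, 0.1])])
    print(f"{top:.3e}")
    ```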

  1. Ceramic Matrix Composites for Rotorcraft Engines

    NASA Technical Reports Server (NTRS)

    Halbig, Michael C.

    2011-01-01

    Ceramic matrix composite (CMC) components are being developed for turbine engine applications. Compared to metallic components, CMC components offer the benefits of higher temperature capability and lower cooling requirements, which correlate to improved efficiency and reduced emissions. This presentation discusses a technology development effort for overcoming challenges in fabricating a CMC vane for the high pressure turbine. The areas of technology development include small component fabrication, ceramic joining and integration, material and component testing and characterization, and design and analysis of concept components.

  2. Experimental Researches on the Durability Indicators and the Physiological Comfort of Fabrics using the Principal Component Analysis (PCA) Method

    NASA Astrophysics Data System (ADS)

    Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.

    2017-06-01

    The work examined the classification of combed wool fabrics destined for the manufacture of outer garments in terms of the values of durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method of multivariate/multi-dimensional data analysis that aims to reduce, in a controlled way, the number of variables (columns) of the data matrix as far as possible, ideally to two or three. Therefore, based on the information about each group/assortment of fabrics, it is desired that, instead of nine inter-correlated variables, only two or three new variables, called components, are retained. The goal of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.

  3. Principal components analysis in clinical studies.

    PubMed

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into the regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.

  4. Analysis of Minor Component Segregation in Ternary Powder Mixtures

    NASA Astrophysics Data System (ADS)

    Asachi, Maryam; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew

    2017-06-01

    In many powder handling operations, inhomogeneity in powder mixtures caused by segregation can have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component of a highly active substance can have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky while not adversely affecting the flowability of the samples. The extent of segregation is evaluated using image processing of the photographic records taken from the front face of the heap after the pouring process. The optimum average sieve cut size of the components for which segregation could be reduced is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component, promoting an ordered mixture.

  5. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315

  6. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    NASA Astrophysics Data System (ADS)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset, and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
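
    scikit-learn has no PPCA estimator with native missing-value handling, so the sketch below only approximates the described pipeline (dimension reduction followed by an RBF-kernel SVM) with mean imputation plus ordinary PCA; it is a generic illustration with assumed parameter choices, not the authors' implementation:

    ```python
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    def heart_disease_pipeline(n_components=8):
        return make_pipeline(
            SimpleImputer(strategy="mean"),  # stand-in for PPCA's missing-value handling
            StandardScaler(),
            PCA(n_components=n_components), # component count would come from parallel analysis
            SVC(kernel="rbf"),              # two-class output: heart patient vs normal subject
        )

    # e.g. evaluate with sklearn.model_selection.cross_val_score(heart_disease_pipeline(), X, y)
    ```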

  7. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution

    PubMed Central

    Feng, Xiao-Liang; He, Yun-biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei

    2013-01-01

    Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction with the water distillation method, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. 68 of 87 separated constituents in the total ion chromatogram of the volatile components were identified and quantified, accounting for about 87.03% of the total content. The common peaks in the leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 components coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This was the first application of the orthogonal projection method to compare different plant parts of Agrimonia eupatoria, and it reduced the burden of qualitative analysis as well as its subjectivity. The obtained results prove the combined approach powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria. PMID:24286016

  8. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution.

    PubMed

    Feng, Xiao-Liang; He, Yun-Biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei

    2013-01-01

    Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction with the water distillation method, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. 68 of 87 separated constituents in the total ion chromatogram of the volatile components were identified and quantified, accounting for about 87.03% of the total content. The common peaks in the leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 components coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This was the first application of the orthogonal projection method to compare different plant parts of Agrimonia eupatoria, and it reduced the burden of qualitative analysis as well as its subjectivity. The obtained results prove the combined approach powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria.

  9. K-Fold Crossvalidation in Canonical Analysis.

    ERIC Educational Resources Information Center

    Liang, Kun-Hsia; And Others

    1995-01-01

    A computer-assisted, K-fold cross-validation technique is discussed in the framework of canonical correlation analysis of randomly generated data sets. Analysis results suggest that this technique can effectively reduce the contamination of canonical variates and canonical correlations by sample-specific variance components. (Author/SLD)

  10. Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Hou, Zhangshuan; Meng, Da

    2016-07-17

    In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas, and they are applied to cross-correlated load time series as well as to their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.

  11. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of distributed heterogeneous data sets can address the limited scalability of centralized data analysis. In order to reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs the eigenvalue computation using Householder tridiagonalization and QR factorization, and calculates the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
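
    A minimal sketch of the eigenvalue step named above (Householder tridiagonalization followed by QR iteration) for a symmetric matrix, using NumPy/SciPy; this is a textbook single-machine illustration, not the distributed cloud algorithm itself:

    ```python
    import numpy as np
    from scipy.linalg import hessenberg

    def symmetric_eigenvalues(A, iters=200):
        """Eigenvalues of a symmetric matrix via tridiagonalization + unshifted QR."""
        A = np.asarray(A, dtype=float)
        T = hessenberg(A)          # Householder reduction; tridiagonal when A is symmetric
        for _ in range(iters):     # plain QR iteration (no shifts, for clarity only)
            Q, R = np.linalg.qr(T)
            T = R @ Q
        return np.sort(np.diag(T))

    cov = np.cov(np.random.default_rng(1).normal(size=(5, 100)))
    print(symmetric_eigenvalues(cov))
    print(np.sort(np.linalg.eigvalsh(cov)))   # reference values for comparison
    ```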

  12. Multilayer neural networks for reduced-rank approximation.

    PubMed

    Diamantaras, K I; Kung, S Y

    1994-01-01

    This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem) the authors are able to find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation, therefore they cannot be applied to this case.
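
    A classical closed-form counterpart of the reduced-rank linear approximation problem discussed above can be sketched with an SVD, assuming identity error weighting and a columns-are-samples convention; this is not the network algorithm itself, and the names are illustrative:

    ```python
    import numpy as np

    def reduced_rank_regression(X, Y, rank):
        """Least-squares map W (Y ~ W X, columns are samples) constrained to a given rank."""
        # Full-rank least-squares solution via the pseudoinverse (no invertibility assumption)
        W_ols = Y @ np.linalg.pinv(X)
        # Project the fitted outputs onto their leading left singular subspace
        U, _, _ = np.linalg.svd(W_ols @ X, full_matrices=False)
        P = U[:, :rank] @ U[:, :rank].T
        return P @ W_ols
    ```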

  13. Strength Analysis and Reliability Evaluation for Speed Reducers

    NASA Astrophysics Data System (ADS)

    Tsai, Yuo-Tern; Hsu, Yung-Yuan

    2017-09-01

    This paper studies the structural stresses of differential drive (DD) and harmonic drive (HD) reducers for design improvement. The design principles of the two reducers are reported for functional comparison. Models of the critical components of the reducers are constructed for motion simulation and stress analysis. The DD is based on the differential displacement of a decelerated gear ring, while the HD is based on a flexible spline. The finite element method (FEM) is used to analyze the structural stresses, including the dynamic properties of the reducers. The stresses and kinematic properties of the two reducers are compared to observe the characteristics of the designs. The analysis results are applied to identify the allowable loads of the reducers in use. The reliabilities of the reducers under different loads are further calculated according to the variation of stress. The results are useful for engineering analysis and reliability evaluation when designing a speed reducer with high ratios.

  14. Identifying Effective Components of Child Maltreatment Interventions: A Meta-analysis.

    PubMed

    van der Put, Claudia E; Assink, Mark; Gubbels, Jeanne; Boekhout van Solinge, Noëlle F

    2018-06-01

    There is a lack of knowledge about specific components that make interventions effective in preventing or reducing child maltreatment. The aim of the present meta-analysis was to increase this knowledge by summarizing findings on effects of interventions for child maltreatment and by examining potential moderators of this effect, such as intervention components and study characteristics. Identifying effective components is essential for developing or improving child maltreatment interventions. A literature search yielded 121 independent studies (N = 39,044) examining the effects of interventions for preventing or reducing child maltreatment. From these studies, 352 effect sizes were extracted. The overall effect size was significant and small in magnitude for both preventive interventions (d = 0.26, p < .001) and curative interventions (d = 0.36, p < .001). Cognitive behavioral therapy, home visitation, parent training, family-based/multisystemic, substance abuse, and combined interventions were effective in preventing and/or reducing child maltreatment. For preventive interventions, larger effect sizes were found for short-term interventions (0-6 months), interventions focusing on increasing self-confidence of parents, and interventions delivered by professionals only. Further, effect sizes of preventive interventions increased as follow-up duration increased, which may indicate a sleeper effect of preventive interventions. For curative interventions, larger effect sizes were found for interventions focusing on improving parenting skills and interventions providing social and/or emotional support. Interventions can be effective in preventing or reducing child maltreatment. Theoretical and practical implications are discussed.

  15. Performance Analysis of Hybrid Electric Vehicle over Different Driving Cycles

    NASA Astrophysics Data System (ADS)

    Panday, Aishwarya; Bansal, Hari Om

    2017-02-01

    This article aims to find the nature and response of a hybrid vehicle over various standard driving cycles. Road profile parameters play an important role in determining fuel efficiency. The typical parameters of a road profile can be reduced to a smaller, useful set using principal component analysis and independent component analysis. The resultant data set obtained after size reduction may give a more appropriate and important parameter cluster. With the reduced parameter set, fuel economies over various driving cycles are ranked using the TOPSIS and VIKOR multi-criteria decision making methods. The ranking trend is then compared with the fuel economies achieved after driving the vehicle over the respective roads. The control strategy responsible for the power split is optimized using a genetic algorithm. A 1RC battery model and a modified SOC estimation method are considered for the simulation, and improved results compared with the default are obtained.
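
    A compact sketch of the TOPSIS ranking step mentioned above (VIKOR follows a similar pattern); the criteria weights and benefit/cost flags are inputs the analyst chooses, and all names here are illustrative:

    ```python
    import numpy as np

    def topsis_rank(decision_matrix, weights, benefit):
        """Rank alternatives (rows) over criteria (columns). benefit[j] is True if larger is better."""
        X = np.asarray(decision_matrix, dtype=float)
        V = X / np.linalg.norm(X, axis=0) * np.asarray(weights)   # normalise, then weight
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # best value per criterion
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    # worst value per criterion
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        return np.argsort(-closeness)                             # indices, best first

    # e.g. topsis_rank(fuel_economy_table, weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
    ```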

  16. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, a chromatography analysis method is utilized to construct the evaluation index model of the low-voltage distribution network. Based on principal component analysis and the logarithmic distribution characteristic of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm can decorrelate the data and reduce the dimensions of the evaluation model, and the comprehensive score has a better degree of dispersion. Because the comprehensive scores of the courts are concentrated, a clustering method is adopted to analyse them, and a stratified evaluation of the courts is thereby realized. An example is given to verify the objectivity and scientific validity of the evaluation method.
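
    The "logarithmic centralization" step described above can be sketched as a log transform before the usual centring and SVD; this assumes strictly positive index data and is only an illustration of the idea, not the paper's algorithm:

    ```python
    import numpy as np

    def log_centred_pca(X, n_components):
        """PCA of strictly positive index data after a log transform and centring."""
        Z = np.log(np.asarray(X, dtype=float))   # tame the log-normally distributed indices
        Z -= Z.mean(axis=0)
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        scores = U[:, :n_components] * s[:n_components]   # inputs to the comprehensive score
        explained = (s ** 2) / (s ** 2).sum()
        return scores, explained[:n_components]
    ```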

  17. [Analysis and identification of emulsifying viscosity reducer by FTIR and 1H NMR].

    PubMed

    Zhu, Hong; Guan, Run-ling; Shen, Jing-mei

    2007-01-01

    Separation and purification of a viscosity reducer for crude oil were performed by distillation and dissolution-precipitation. The functional groups of its main component were identified by FTIR. It was deduced that the main component of the crude oil viscosity reducer is a terpolymer of ethyl acrylate, methyl methacrylate and acrylic acid. The structure of the component was also ascertained and quantitatively analysed by 1H NMR and MS. The molar ratio of the three monomers is 37.1 : 25.8 : 37.1, and the mass ratio is 41.1 : 28.8 : 29.8. The structure of the methanol-soluble part was identified by FTIR. The results showed that the nonionic surfactant is poly(ethylene oxide) with a molecular mass range of 800-1600, and the anionic surfactant is an alkylbenzene sulfonate. The residue consists of accessory ingredients and water.

  18. Analysis of minerals containing dissolved traces of the fluid phase components water and carbon dioxide

    NASA Technical Reports Server (NTRS)

    Freund, Friedemann

    1991-01-01

    Substantial progress has been made towards a better understanding of the dissolution of common gas/fluid phase components, notably H2O and CO2, in minerals. It has been shown that the dissolution mechanisms are significantly more complex than currently believed. By judiciously combining various solid state analytical techniques, convincing evidence was obtained that traces of dissolved gas/fluid phase components undergo, at least in part, a redox conversion by which they split into reduced H2 and reduced C on one hand and oxidized oxygen, O(-), on the other. Analyses for H2 and C, as well as for any organic molecules which may form during the process of co-segregation, are still impeded by the omnipresent danger of extraneous contamination. However, the presence of O(-), an unusual oxidized form of oxygen, has been proven beyond a reasonable doubt. The presence of O(-) testifies to the fact that a redox reaction must have taken place in the solid state involving the dissolved traces of gas/fluid phase components. Detailed information on the techniques used and the results obtained is given.

  19. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
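
    A sketch of the error-bounded PCA compression idea at the cluster head; this is a generic illustration only, and the paper's clustering, adaptive head selection, and exact error criterion are not reproduced here:

    ```python
    import numpy as np

    def pca_compress(X, max_mse):
        """Keep the fewest principal components whose mean squared reconstruction
        error over the data block X (rows = readings) stays below max_mse."""
        mu = X.mean(axis=0)
        Xc = X - mu
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        k = len(s)
        for i in range(1, len(s) + 1):
            if (s[i:] ** 2).sum() / X.size <= max_mse:   # discarded energy per element
                k = i
                break
        scores = Xc @ Vt[:k].T
        return scores, Vt[:k], mu      # transmit these instead of the raw readings

    def pca_decompress(scores, components, mu):
        return scores @ components + mu
    ```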

  20. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  1. Sequential Proton Loss Electron Transfer in Deactivation of Iron(IV) Binding Protein by Tyrosine Based Food Components.

    PubMed

    Tang, Ning; Skibsted, Leif H

    2017-08-02

    The iron(IV) binding protein ferrylmyoglobin, MbFe(IV)═O, was found to be reduced by tyrosine based food components in aqueous solution through a sequential proton loss electron transfer reaction mechanism without binding to the protein, as confirmed by isothermal titration calorimetry. Dopamine and epinephrine are the most efficient food components reducing ferrylmyoglobin to oxymyoglobin, MbFe(II)O2, and metmyoglobin, MbFe(III), as revealed by multivariate curve resolution alternating least-squares, with second order rate constants of 33.6 ± 2.3 L/mol/s (ΔH⧧ of 19 ± 5 kJ/mol, ΔS⧧ of -136 ± 18 J/mol/K) and 228.9 ± 13.3 L/mol/s (ΔH⧧ of 110 ± 7 kJ/mol, ΔS⧧ of 131 ± 25 J/mol/K), respectively, at pH 7.4 and 25 °C. The other tyrosine based food components were found to reduce ferrylmyoglobin to metmyoglobin with similar reduction rates at pH 7.4 and 25 °C. These reduction reactions were enhanced by protonation of ferrylmyoglobin and facilitated proton transfer under acidic conditions. Enthalpy-entropy compensation effects were observed for the activation parameters (ΔH⧧ and ΔS⧧), indicating a common reaction mechanism. Moreover, principal component analysis combined with a heat map was performed to understand the relationship between density functional theory calculated molecular descriptors and the kinetic data, which was further modeled by partial least squares for quantitative structure-activity relationship analysis. In addition, a protein containing three tyrosine residues, lysozyme, was also found to be able to reduce ferrylmyoglobin, with a second order rate constant of 66 ± 28 L/mol/s as determined by a competitive kinetic method.

  2. Weight minimization of structural components for launch in space shuttle

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Gendy, Atef S.; Hopkins, Dale A.; Berke, Laszlo

    1994-01-01

    Minimizing the weight of structural components of the space station launched into orbit in a space shuttle can save cost, reduce the number of space shuttle missions, and facilitate on-orbit fabrication. Traditional manual design of such components, although feasible, cannot represent a minimum weight condition. At NASA Lewis Research Center, a design capability called CometBoards (Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures) has been developed especially for the design optimization of such flight components. Two components of the space station - a spacer structure and a support system - illustrate the capability of CometBoards. These components are designed for loads and behavior constraints that arise from a variety of flight accelerations and maneuvers. The optimization process using CometBoards reduced the weights of the components by one third from those obtained with traditional manual design. This paper presents a brief overview of the design code CometBoards and a description of the space station components, their design environments, behavior limitations, and attributes of their optimum designs.

  3. The Future is Now: Reducing Psychological Distance to Increase Public Engagement with Climate Change.

    PubMed

    Jones, Charlotte; Hine, Donald W; Marks, Anthony D G

    2017-02-01

    Many people perceive climate change as psychologically distant: a set of uncertain events that might occur far in the future, impacting distant places and affecting people dissimilar to themselves. In this study, we employed construal level theory to investigate whether a climate change communication intervention could increase public engagement by reducing the psychological distance of climate change. Australian residents (N = 333) were randomly assigned to one of two treatment conditions: one framed to increase psychological distance to climate change (distal frame), and the other framed to reduce psychological distance (proximal frame). Participants then completed measures of psychological distance of climate change impacts, climate change concern, and intentions to engage in mitigation behavior. Principal components analysis indicated that psychological distance to climate change was best conceptualized as a multidimensional construct consisting of four components: geographic, temporal, social, and uncertainty. Path analysis revealed the effect of the treatment frame on climate change concern and intentions was fully mediated by psychological distance dimensions related to uncertainty and social distance. Our results suggest that climate communications framed to reduce psychological distance represent a promising strategy for increasing public engagement with climate change. © 2016 Society for Risk Analysis.

  4. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
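
    A minimal NumPy sketch of the two-directional two-dimensional PCA step applied to a stack of multi-scale matrices; the stationary-wavelet construction of those matrices is omitted, and all names and the generic (2D)^2-PCA form are illustrative rather than the authors' exact SW2D2PCA implementation:

    ```python
    import numpy as np

    def two_directional_2dpca(matrices, r_rows, r_cols):
        """Project each sample matrix from both sides: Vr.T @ M @ Vc."""
        A = np.stack([np.asarray(m, dtype=float) for m in matrices])   # shape (n, h, w)
        Ac = A - A.mean(axis=0)
        Gc = np.einsum('nij,nik->jk', Ac, Ac) / len(A)   # column-direction covariance (w x w)
        Gr = np.einsum('nji,nki->jk', Ac, Ac) / len(A)   # row-direction covariance (h x h)
        _, Vc = np.linalg.eigh(Gc)
        _, Vr = np.linalg.eigh(Gr)
        Vc = Vc[:, ::-1][:, :r_cols]                     # leading eigenvectors
        Vr = Vr[:, ::-1][:, :r_rows]
        return np.stack([Vr.T @ M @ Vc for M in A])      # reduced (r_rows x r_cols) features
    ```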

  5. The Incredible Shrinking Institution: A Five-Component Downsizing Model.

    ERIC Educational Resources Information Center

    Dawson, Bradley L.

    1991-01-01

    Most colleges and universities need to reduce expenditures and downsize. Such a project is difficult and emotionally charged. Many immediate remedies can be applied to reduce and better control expenditures, but a more thorough analysis of the institution's organizational structure, operating procedures, automated systems, strategic planning, and…

  6. Relaxation Treatment for Insomnia: A Component Analysis.

    ERIC Educational Resources Information Center

    Woolfolk, Robert L.; McNulty, Terrence F.

    1983-01-01

    Compared four relaxation treatments for sleep onset insomnia with a waiting-list control. Treatments varied in presence or absence of muscular tension-release instructions and in foci of attention. Results showed all treatment conditions reduced latency of sleep onset and fatigue; visual focusing best reduced the number of nocturnal awakenings.…

  7. A Cost-Utility Model of Care for Peristomal Skin Complications

    PubMed Central

    Inglese, Gary; Manson, Andrea; Townshend, Arden

    2016-01-01

    PURPOSE: The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. DESIGN: Cost-utility analysis. METHODS: We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with the use of ostomy components (PSC events avoided; quality-adjusted life days gained) offset the costs associated with their use. RESULTS: Our base case analysis of 1000 hypothetical individuals over 1 year assumes that using ostomy components following a first PSC reduces recurrent events versus PSC management without components. In this analysis, component acquisition costs were largely offset by lower resource use for ostomy supplies (barriers; pouches) and lower clinical utilization to manage PSCs. The overall annual average resource use for individuals using components was about 6.3% ($139) higher versus individuals not using components. Each PSC event avoided yielded, on average, 8 additional quality-adjusted life days over 1 year. CONCLUSIONS: In our analysis, (1) acquisition costs for ostomy components were offset in whole or in part by the use of fewer ostomy supplies to manage PSCs and (2) use of ostomy components to prevent PSCs produced better outcomes (fewer repeat PSC events; more health-related quality-adjusted life days) over 1 year compared to not using components. PMID:26633166
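
    A toy numerical sketch of this kind of one-year decision model; every probability, cost and utility weight below is a hypothetical placeholder rather than a study input:

      # One-year cost-utility comparison for a hypothetical cohort.
      # Every number here is a made-up placeholder for illustration only.
      cohort = 1000

      strategies = {
          "with_components":    {"annual_cost": 2350.0, "p_recurrent_psc": 0.25},
          "without_components": {"annual_cost": 2211.0, "p_recurrent_psc": 0.45},
      }
      cost_per_psc_episode = 320.0     # clinician visits, extra supplies (hypothetical)
      qald_lost_per_psc = 8.0          # quality-adjusted life days lost per event (hypothetical)

      results = {}
      for name, s in strategies.items():
          events = cohort * s["p_recurrent_psc"]
          total_cost = cohort * s["annual_cost"] + events * cost_per_psc_episode
          qald_lost = events * qald_lost_per_psc
          results[name] = (events, total_cost, qald_lost)
          print(f"{name}: {events:.0f} PSC events, "
                f"${total_cost:,.0f} total cost, {qald_lost:.0f} QALDs lost")

      # incremental cost per PSC event avoided (a simple cost-utility style ratio)
      d_cost = results["with_components"][1] - results["without_components"][1]
      d_events = results["without_components"][0] - results["with_components"][0]
      print(f"incremental cost per PSC event avoided: ${d_cost / d_events:,.0f}")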

  8. Household Food Waste: Multivariate Regression and Principal Components Analyses of Awareness and Attitudes among U.S. Consumers

    PubMed Central

    2016-01-01

    We estimate models of consumer food waste awareness and attitudes using responses from a national survey of U.S. residents. Our models are interpreted through the lens of several theories that describe how pro-social behaviors relate to awareness, attitudes and opinions. Our analysis of patterns among respondents’ food waste attitudes yields a model with three principal components: one that represents perceived practical benefits households may lose if food waste were reduced, one that represents the guilt associated with food waste, and one that represents whether households feel they could be doing more to reduce food waste. We find our respondents express significant agreement that some perceived practical benefits are ascribed to throwing away uneaten food, e.g., nearly 70% of respondents agree that throwing away food after the package date has passed reduces the odds of foodborne illness, while nearly 60% agree that some food waste is necessary to ensure meals taste fresh. We identify that these attitudinal responses significantly load onto a single principal component that may represent a key attitudinal construct useful for policy guidance. Further, multivariate regression analysis reveals a significant positive association between the strength of this component and household income, suggesting that higher income households most strongly agree with statements that link throwing away uneaten food to perceived private benefits. PMID:27441687

  9. Advanced Self-Calibrating, Self-Repairing Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)

    2002-01-01

    An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented using a "spare parts/tool box" system. The available number of spare components depends primarily on each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption before being electronically replaced with spare components.
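
    One common way to size such a spare-component pool from MTBF figures is a Poisson sparing calculation; a short sketch with illustrative numbers (the MTBF, mission duration, installed count and confidence level are assumptions, not values from the patent):

      from scipy.stats import poisson

      def spares_needed(mtbf_hours, mission_hours, n_installed, confidence=0.95):
          """Smallest spare count s such that P(failures <= s) >= confidence,
          assuming independent exponential failures at rate n_installed / MTBF."""
          expected_failures = n_installed * mission_hours / mtbf_hours
          s = 0
          while poisson.cdf(s, expected_failures) < confidence:
              s += 1
          return s

      # illustrative numbers only: 8 installed channels, 50,000 h MTBF, 3-year mission
      print(spares_needed(mtbf_hours=50_000, mission_hours=26_280, n_installed=8))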

  10. Drug target identification using network analysis: Taking active components in Sini decoction as an example

    NASA Astrophysics Data System (ADS)

    Chen, Si; Jiang, Hailong; Cao, Yan; Wang, Yun; Hu, Ziheng; Zhu, Zhenyu; Chai, Yifeng

    2016-04-01

    Identifying the molecular targets responsible for the beneficial effects of active small-molecule compounds, and doing so for several compounds simultaneously, is an important and currently unmet challenge. In this study, we first proposed a network analysis that integrates data from network pharmacology and metabolomics to simultaneously identify the targets of active components in sini decoction (SND) against heart failure. To begin with, 48 potential active components of SND against heart failure were predicted by serum pharmacochemistry, text mining and similarity matching. Then, we employed network pharmacology, including text mining and molecular docking, to identify the potential targets of these components. The key enriched processes, pathways and related diseases of these target proteins were analyzed with the STRING database. Finally, network analysis was conducted to identify the most probable targets of components in SND. Among the 25 targets predicted by network analysis, tumor necrosis factor α (TNF-α) was the first to be experimentally validated at the molecular and cellular levels. Results indicated that hypaconitine, mesaconitine, higenamine and quercetin in SND can directly bind to TNF-α, reduce the TNF-α-mediated cytotoxicity on L929 cells and exert anti-apoptotic effects on myocardial cells. We envisage that network analysis will also be useful in target identification for other bioactive compounds.

  11. Drug target identification using network analysis: Taking active components in Sini decoction as an example

    PubMed Central

    Chen, Si; Jiang, Hailong; Cao, Yan; Wang, Yun; Hu, Ziheng; Zhu, Zhenyu; Chai, Yifeng

    2016-01-01

    Identifying the molecular targets responsible for the beneficial effects of active small-molecule compounds, and doing so for several compounds simultaneously, is an important and currently unmet challenge. In this study, we first proposed a network analysis that integrates data from network pharmacology and metabolomics to simultaneously identify the targets of active components in sini decoction (SND) against heart failure. To begin with, 48 potential active components of SND against heart failure were predicted by serum pharmacochemistry, text mining and similarity matching. Then, we employed network pharmacology, including text mining and molecular docking, to identify the potential targets of these components. The key enriched processes, pathways and related diseases of these target proteins were analyzed with the STRING database. Finally, network analysis was conducted to identify the most probable targets of components in SND. Among the 25 targets predicted by network analysis, tumor necrosis factor α (TNF-α) was the first to be experimentally validated at the molecular and cellular levels. Results indicated that hypaconitine, mesaconitine, higenamine and quercetin in SND can directly bind to TNF-α, reduce the TNF-α-mediated cytotoxicity on L929 cells and exert anti-apoptotic effects on myocardial cells. We envisage that network analysis will also be useful in target identification for other bioactive compounds. PMID:27095146

  12. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  13. Characterization of an ntrX mutant of Neisseria gonorrhoeae reveals a response regulator that controls expression of respiratory enzymes in oxidase-positive proteobacteria.

    PubMed

    Atack, John M; Srikhanta, Yogitha N; Djoko, Karrera Y; Welch, Jessica P; Hasri, Norain H M; Steichen, Christopher T; Vanden Hoven, Rachel N; Grimmond, Sean M; Othman, Dk Seti Maimonah Pg; Kappler, Ulrike; Apicella, Michael A; Jennings, Michael P; Edwards, Jennifer L; McEwan, Alastair G

    2013-06-01

    NtrYX is a sensor-histidine kinase/response regulator two-component system that has had limited characterization in a small number of Alphaproteobacteria. Phylogenetic analysis of the response regulator NtrX showed that this two-component system is extensively distributed across the bacterial domain, and it is present in a variety of Betaproteobacteria, including the human pathogen Neisseria gonorrhoeae. Microarray analysis revealed that the expression of several components of the respiratory chain was reduced in an N. gonorrhoeae ntrX mutant compared to that in the isogenic wild-type (WT) strain 1291. These included the cytochrome c oxidase subunit (ccoP), nitrite reductase (aniA), and nitric oxide reductase (norB). Enzyme activity assays showed decreased cytochrome oxidase and nitrite reductase activities in the ntrX mutant, consistent with microarray data. N. gonorrhoeae ntrX mutants had reduced capacity to survive inside primary cervical cells compared to the wild type, and although they retained the ability to form a biofilm, they exhibited reduced survival within the biofilm compared to wild-type cells, as indicated by LIVE/DEAD staining. Analyses of an ntrX mutant in a representative alphaproteobacterium, Rhodobacter capsulatus, showed that cytochrome oxidase activity was also reduced compared to that in the wild-type strain SB1003. Taken together, these data provide evidence that the NtrYX two-component system may be a key regulator in the expression of respiratory enzymes and, in particular, cytochrome c oxidase, across a wide range of proteobacteria, including a variety of bacterial pathogens.

  14. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    PubMed

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In filtering ERP recordings with an OF, the ERP's topography should not be changed by the filter, and the output should still be describable by the linear transformation model. Moreover, an OF designed for a specific ERP source or component may remove noise, as well as reduce the overlap of sources and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone, and that the OF removed some non-targeted sources and brought the underdetermined model of EEG recordings closer to a determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
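
    A minimal sketch of the pipeline on synthetic data, using a zero-phase Butterworth band-pass as a stand-in for the paper's optimal filter and scikit-learn's FastICA for the ICA step; all signal parameters are assumptions:

      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)
      fs, n_trials, n_samples, n_channels = 250, 60, 250, 8
      t = np.arange(n_samples) / fs

      # synthetic "ERP" source plus broadband noise sources, mixed into channels
      erp = np.exp(-((t - 0.4) ** 2) / 0.01) * np.sin(2 * np.pi * 5 * t)
      sources = np.vstack([erp, rng.standard_normal((3, n_samples))])
      mixing = rng.standard_normal((n_channels, sources.shape[0]))
      eeg = np.stack([mixing @ (sources + 0.3 * rng.standard_normal(sources.shape))
                      for _ in range(n_trials)])           # (trials, channels, samples)

      # filter stand-in: zero-phase band-pass around the ERP's dominant band
      b, a = butter(4, [1.0, 12.0], btype="bandpass", fs=fs)
      eeg_filt = filtfilt(b, a, eeg, axis=-1)

      # ICA on the trial-averaged, filtered recordings (channels x samples)
      avg = eeg_filt.mean(axis=0)
      ica = FastICA(n_components=4, random_state=0)
      components = ica.fit_transform(avg.T).T              # (n_components, samples)

      # the component most correlated with the ERP template is the candidate ERP source
      corr = [abs(np.corrcoef(c, erp)[0, 1]) for c in components]
      print("best matching component:", int(np.argmax(corr)), "corr =", round(max(corr), 2))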

  15. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  16. Custom-designed orthopedic implants evaluated using finite element analysis of patient-specific computed tomography data: femoral-component case study

    PubMed Central

    Harrysson, Ola LA; Hosni, Yasser A; Nayfeh, Jamal F

    2007-01-01

    Background Conventional knee and hip implant systems have been in use for many years with good success. However, the custom design of implant components based on patient-specific anatomy has been attempted to overcome existing shortcomings of current designs. The longevity of cementless implant components is highly dependent on the initial fit between the bone surface and the implant. The bone-implant interface design has historically been limited by the surgical tools and cutting guides available; and the cost of fabricating custom-designed implant components has been prohibitive. Methods This paper describes an approach where the custom design is based on a Computed Tomography scan of the patient's joint. The proposed design will customize both the articulating surface and the bone-implant interface to address the most common problems found with conventional knee-implant components. Finite Element Analysis is used to evaluate and compare the proposed design of a custom femoral component with a conventional design. Results The proposed design shows a more even stress distribution on the bone-implant interface surface, which will reduce the uneven bone remodeling that can lead to premature loosening. Conclusion The proposed custom femoral component design has the following advantages compared with a conventional femoral component. (i) Since the articulating surface closely mimics the shape of the distal femur, there is no need for resurfacing of the patella or gait change. (ii) Owing to the resulting stress distribution, bone remodeling is even and the risk of premature loosening might be reduced. (iii) Because the bone-implant interface can accommodate anatomical abnormalities at the distal femur, the need for surgical interventions and fitting of filler components is reduced. (iv) Given that the bone-implant interface is customized, about 40% less bone must be removed. The primary disadvantages are the time and cost required for the design and the possible need for a surgical robot to perform the bone resection. Some of these disadvantages may be eliminated by the use of rapid prototyping technologies, especially the use of Electron Beam Melting technology for quick and economical fabrication of custom implant components. PMID:17854508

  17. Reducing the financial impact of pathogen inactivation technology for platelet components: our experience.

    PubMed

    Girona-Llobera, Enrique; Jimenez-Marco, Teresa; Galmes-Trueba, Ana; Muncunill, Josep; Serret, Carmen; Serra, Neus; Sedeño, Matilde

    2014-01-01

    Pathogen inactivation (PI) technology for blood components enhances blood safety by inactivating viruses, bacteria, parasites, and white blood cells. Additionally, PI for platelet (PLT) components has the potential to extend PLT storage time from 5 to 7 days. A retrospective analysis was conducted into the percentage of outdated PLT components during the 3 years before and after the adoption of PLT PI technology in our institution. The PLT transfusion dose for both pre-PI and post-PI periods was similar. A retrospective analysis to study clinical safety and component utilization was also performed in the Balearic Islands University Hospital. As a result of PI implementation in our institution, the PLT production cost increased by 85.5%. However, due to the extension of PLT storage time, the percentage of outdated PLT units substantially decreased (-83.9%) and, consequently, the cost associated with outdated units (-69.8%). This decrease represented a 13.7% reduction of the initial cost increase which, together with the saving in blood transportation (0.1%), led to a saving of 13.8% over the initial cost. Therefore, the initial 85.5% increase in the cost of PLT production was markedly reduced to 71.7%. The mean number of PLT concentrates per patient was similar during both periods. The extension of PLT storage time can substantially contribute to reducing the financial impact of PI by decreasing the percentage of outdated PLTs while improving blood safety. Since the adoption of PI, there have been no documented cases of PLT transfusion-related sepsis in our region. © 2013 American Association of Blood Banks.

  18. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates a multivariate pixel value with each pixel location, scaled and quantized into a gray-level vector, and the bivariate correlation measures the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage, reducing its dimensionality and revealing intercomponent dependencies when some off-diagonal covariance elements are not small; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
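
    A small sketch of the principal component transformation of a multiimage on a synthetic band stack; the eigendecomposition of the inter-band covariance supplies both the decorrelated component images and an estimate of intrinsic dimensionality (band weights and noise level are arbitrary):

      import numpy as np

      rng = np.random.default_rng(2)
      bands, rows, cols = 4, 64, 64

      # synthetic multiimage: four correlated bands built from two underlying scenes
      scene = rng.standard_normal((2, rows, cols))
      multi = np.stack([1.0 * scene[0] + 0.2 * scene[1],
                        0.9 * scene[0] - 0.1 * scene[1],
                        0.5 * scene[0] + 1.0 * scene[1],
                        0.4 * scene[0] + 0.9 * scene[1]])
      multi += 0.05 * rng.standard_normal(multi.shape)

      X = multi.reshape(bands, -1)                 # each pixel is a band-vector
      mean = X.mean(axis=1, keepdims=True)
      cov = np.cov(X - mean)                       # inter-band covariance matrix
      evals, evecs = np.linalg.eigh(cov)
      evals, evecs = evals[::-1], evecs[:, ::-1]   # descending order

      pc_images = (evecs.T @ (X - mean)).reshape(bands, rows, cols)
      explained = evals / evals.sum()
      print("variance explained per component:", np.round(explained, 3))
      # components with negligible variance indicate the intrinsic dimensionality (<= 2 here)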

  19. Clustering of metabolic and cardiovascular risk factors in the polycystic ovary syndrome: a principal component analysis.

    PubMed

    Stuckey, Bronwyn G A; Opie, Nicole; Cussons, Andrea J; Watts, Gerald F; Burke, Valerie

    2014-08-01

    Polycystic ovary syndrome (PCOS) is a prevalent condition with heterogeneity of clinical features and cardiovascular risk factors that implies multiple aetiological factors and possible outcomes. To reduce a set of correlated variables to a smaller number of uncorrelated and interpretable factors that may delineate subgroups within PCOS or suggest pathogenetic mechanisms. We used principal component analysis (PCA) to examine the endocrine and cardiometabolic variables associated with PCOS defined by the National Institutes of Health (NIH) criteria. Data were retrieved from the database of a single clinical endocrinologist. We included women with PCOS (N = 378) who were not taking the oral contraceptive pill or other sex hormones, lipid lowering medication, metformin or other medication that could influence the variables of interest. PCA was performed retaining those factors with eigenvalues of at least 1.0. Varimax rotation was used to produce interpretable factors. We identified three principal components. In component 1, the dominant variables were homeostatic model assessment (HOMA) index, body mass index (BMI), high density lipoprotein (HDL) cholesterol and sex hormone binding globulin (SHBG); in component 2, systolic blood pressure, low density lipoprotein (LDL) cholesterol and triglycerides; in component 3, total testosterone and LH/FSH ratio. These components explained 37%, 13% and 11% of the variance in the PCOS cohort respectively. Multiple correlated variables from patients with PCOS can be reduced to three uncorrelated components characterised by insulin resistance, dyslipidaemia/hypertension or hyperandrogenaemia. Clustering of risk factors is consistent with different pathogenetic pathways within PCOS and/or differing cardiometabolic outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
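
    A sketch of the analysis recipe described (standardize the variables, retain components with eigenvalues of at least 1.0, varimax-rotate the loadings), applied to synthetic placeholder data; the varimax routine is a standard textbook implementation, not the authors' code:

      import numpy as np

      def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-6):
          """Orthogonal varimax rotation of a loading matrix (standard algorithm)."""
          p, k = loadings.shape
          R = np.eye(k)
          d = 0.0
          for _ in range(max_iter):
              L = loadings @ R
              u, s, vt = np.linalg.svd(
                  loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0))))
              R = u @ vt
              d_new = s.sum()
              if d_new < d * (1 + tol):
                  break
              d = d_new
          return loadings @ R

      rng = np.random.default_rng(3)
      X = rng.standard_normal((378, 9))            # placeholder for the clinical variables
      Z = (X - X.mean(0)) / X.std(0)               # standardize

      evals, evecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
      order = np.argsort(evals)[::-1]
      evals, evecs = evals[order], evecs[:, order]

      keep = evals >= 1.0                          # Kaiser criterion, as in the abstract
      loadings = evecs[:, keep] * np.sqrt(evals[keep])
      rotated = varimax(loadings)
      print("components retained:", int(keep.sum()))
      print("variance explained:", np.round(evals[keep] / len(evals), 2))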

  20. The Numerical Propulsion System Simulation: A Multidisciplinary Design System for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    1999-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of the design process and to provide the designer with critical information about the components early in the design process. This paper describes the development of the Numerical Propulsion System Simulation (NPSS), a multidisciplinary system of analysis tools that is focused on extending the simulation capability from components to the full system. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  1. Classification of fMRI resting-state maps using machine learning techniques: A comparative study

    NASA Astrophysics Data System (ADS)

    Gallos, Ioannis; Siettos, Constantinos

    2017-11-01

    We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and diffusion maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial independent component analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and diffusion maps to find an embedded low-dimensional space. Finally, support vector machine (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
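
    A condensed sketch of the embedding-plus-classifier comparison, assuming the ICA cross-correlation features have already been computed (replaced here by synthetic vectors); diffusion maps are omitted because they are not available in scikit-learn, and for brevity the embeddings are fit on all subjects rather than inside each cross-validation fold:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.manifold import Isomap
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      n_subj, n_feat = 60, 190     # e.g. upper triangle of a 20x20 ICA cross-correlation matrix
      y = np.repeat([0, 1], n_subj // 2)
      X = rng.standard_normal((n_subj, n_feat)) + 0.4 * y[:, None]   # synthetic group effect

      embeddings = {
          "PCA": PCA(n_components=5).fit_transform(X),
          "ISOMAP": Isomap(n_neighbors=10, n_components=5).fit_transform(X),
      }
      classifiers = {"SVM": SVC(kernel="linear"), "k-NN": KNeighborsClassifier(n_neighbors=5)}

      for e_name, Z in embeddings.items():
          for c_name, clf in classifiers.items():
              acc = cross_val_score(clf, Z, y, cv=5).mean()
              print(f"{e_name} + {c_name}: accuracy = {acc:.2f}")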

  2. Application of principal component analysis to distinguish patients with schizophrenia from healthy controls based on fractional anisotropy measurements.

    PubMed

    Caprihan, A; Pearlson, G D; Calhoun, V D

    2008-08-15

    Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
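
    A sketch of the selection idea behind DPCA as described here, on synthetic data: each PCA eigenvector is scored by its contribution to the squared Mahalanobis distance between the two group means, and the most discriminative ones are retained (component counts and group shift are illustrative):

      import numpy as np

      rng = np.random.default_rng(5)
      n_per_group, n_feat = 40, 30
      X1 = rng.standard_normal((n_per_group, n_feat))
      X2 = rng.standard_normal((n_per_group, n_feat)) + 0.3   # shifted group ("patients")
      X = np.vstack([X1, X2])

      # eigendecomposition of the overall covariance (ordinary PCA basis)
      Xc = X - X.mean(axis=0)
      evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
      evals, evecs = evals[::-1], evecs[:, ::-1]

      # score each eigenvector by its contribution to the squared Mahalanobis
      # distance between the two group means
      dm = X1.mean(axis=0) - X2.mean(axis=0)
      contrib = (dm @ evecs) ** 2 / evals

      m = 5                                        # retained components (illustrative)
      pca_idx = np.arange(m)                       # standard PCA: largest eigenvalues
      dpca_idx = np.argsort(contrib)[::-1][:m]     # DPCA-style: most discriminative

      print("Mahalanobis^2, PCA subspace :", round(float(contrib[pca_idx].sum()), 2))
      print("Mahalanobis^2, DPCA subspace:", round(float(contrib[dpca_idx].sum()), 2))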

  3. Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task

    NASA Technical Reports Server (NTRS)

    Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.

    1978-01-01

    Color enhancement techniques were applied to LACIE LANDSAT segments to determine whether such enhancement can assist crop identification analysis. The procedure involved increasing the color range by removing correlation between components. First, a principal component transformation was performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower order components to reduce color speckle in the enhanced products. The use of single-acquisition and multiple-acquisition statistics to control the enhancement was compared, and the effects of normalization were investigated. Evaluation is left to LACIE personnel.
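
    A compact sketch of the enhancement chain described (forward principal component transformation, equalization of component variances, inverse transformation) on a synthetic three-band image; the speckle-reducing filtering of lower-order components is omitted for brevity:

      import numpy as np

      rng = np.random.default_rng(6)
      rows, cols = 128, 128
      base = rng.standard_normal((rows, cols))
      img = np.stack([base + 0.1 * rng.standard_normal((rows, cols))
                      for _ in range(3)])              # three highly correlated bands

      X = img.reshape(3, -1)
      mean = X.mean(axis=1, keepdims=True)
      cov = np.cov(X - mean)
      evals, evecs = np.linalg.eigh(cov)

      pcs = evecs.T @ (X - mean)                       # forward principal component transform
      pcs /= pcs.std(axis=1, keepdims=True)            # contrast enhancement: equal variances
      stretched = (evecs @ pcs) + mean                 # inverse transform restores band space

      enhanced = stretched.reshape(3, rows, cols)
      print("band correlations before:", np.round(np.corrcoef(X)[0, 1:], 2))
      print("band correlations after :", np.round(np.corrcoef(stretched)[0, 1:], 2))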

  4. Recognition of units in coarse, unconsolidated braided-stream deposits from geophysical log data with principal components analysis

    USGS Publications Warehouse

    Morin, R.H.

    1997-01-01

    Returns from drilling in unconsolidated cobble and sand aquifers commonly do not identify lithologic changes that may be meaningful for hydrogeologic investigations. Vertical resolution of saturated, Quaternary, coarse braided-stream deposits is significantly improved by interpreting natural gamma (G), epithermal neutron (N), and electromagnetically induced resistivity (IR) logs obtained from wells at the Capital Station site in Boise, Idaho. Interpretation of these geophysical logs is simplified because these sediments are derived largely from high-gamma-producing source rocks (granitics of the Boise River drainage), contain few clays, and have undergone little diagenesis. Analysis of G, N, and IR data from these deposits with principal components analysis provides an objective means to determine if units can be recognized within the braided-stream deposits. In particular, performing principal components analysis on G, N, and IR data from eight wells at Capital Station (1) allows the variable system dimensionality to be reduced from three to two by selecting the two eigenvectors with the greatest variance as axes for principal component scatterplots, (2) generates principal components with interpretable physical meanings, (3) distinguishes sand from cobble-dominated units, and (4) provides a means to distinguish between cobble-dominated units.

  5. Rotordynamic Characteristics of the HPOTP (High Pressure Oxygen Turbopump) of the SSME (Space Shuttle Main Engine)

    NASA Technical Reports Server (NTRS)

    Childs, D. W.

    1984-01-01

    Rotational stability of turbopump components in the space shuttle main engine was studied via analysis of component and structural dynamic models. Subsynchronous vibration caused unacceptable migration of the rotor/housing unit with unequal load sharing of the synchronous bearings that resulted in the failure of the High Pressure Oxygen Turbopump. Linear analysis shows that a shrouded inducer eliminates the second critical speed and the stability problem, a stiffened rotor improves the rotordynamic characteristics of the turbopump, and installing damper boost/impeller seals reduces bearing loads. Nonlinear analysis shows that by increasing the "dead band" clearances, a marked reduction in peak bearing loads occurs.

  6. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  7. Estimation of Psychophysical Thresholds Based on Neural Network Analysis of DPOAE Input/Output Functions

    NASA Astrophysics Data System (ADS)

    Naghibolhosseini, Maryam; Long, Glenis

    2011-11-01

    The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss raises the level of sound that is just audible to the person, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.

  8. A systematic review and meta-analysis of workplace intervention strategies to reduce sedentary time in white-collar workers.

    PubMed

    Chu, A H Y; Ng, S H X; Tan, C S; Win, A M; Koh, D; Müller-Riemenschneider, F

    2016-05-01

    Prolonged sedentary behaviour has been associated with various detrimental health risks. Workplace sitting is particularly important, given that it occupies the majority of total daily sedentary behaviour among desk-based employees. The aim of this systematic review and meta-analysis was to examine the effectiveness of workplace interventions overall, and according to different intervention strategies (educational/behavioural, environmental and multi-component interventions), for reducing sitting among white-collar working adults. Articles published through December 2015 were identified in five online databases and by manual searches. Twenty-six controlled intervention studies published between 2003 and 2015, covering 4568 working adults, were included. All 26 studies were presented qualitatively, and 21 studies with a control group without any intervention were included in the meta-analysis. The pooled intervention effect showed a significant workplace sitting reduction of -39.6 min/8-h workday (95% confidence interval [CI]: -51.7, -27.5), favouring the intervention group. Multi-component interventions reported the greatest workplace sitting reduction (-88.8 min/8-h workday; 95% CI: -132.7, -44.9), followed by environmental (-72.8 min/8-h workday; 95% CI: -104.9, -40.6) and educational/behavioural strategies (-15.5 min/8-h workday; 95% CI: -22.9, -8.2). Our study found consistent evidence of intervention effectiveness in reducing workplace sitting, particularly for multi-component and environmental strategies. Methodologically rigorous studies using standardized and objectively determined outcomes are warranted. © 2016 World Obesity.
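
    A small sketch of the pooling arithmetic behind such a meta-analysis, using a DerSimonian-Laird random-effects model; the per-study effects and standard errors below are hypothetical, not the review's data:

      import numpy as np

      # hypothetical per-study sitting reductions (min/8-h workday) and standard errors
      effects = np.array([-25.0, -60.0, -90.0, -15.0, -45.0])
      se = np.array([10.0, 18.0, 30.0, 8.0, 15.0])

      w = 1.0 / se**2                               # fixed-effect weights
      fixed = np.sum(w * effects) / np.sum(w)

      Q = np.sum(w * (effects - fixed) ** 2)        # heterogeneity statistic
      C = np.sum(w) - np.sum(w**2) / np.sum(w)
      tau2 = max(0.0, (Q - (len(effects) - 1)) / C) # between-study variance

      w_star = 1.0 / (se**2 + tau2)                 # random-effects weights
      pooled = np.sum(w_star * effects) / np.sum(w_star)
      se_pooled = np.sqrt(1.0 / np.sum(w_star))
      ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
      print(f"pooled effect {pooled:.1f} min/8-h workday, 95% CI ({ci[0]:.1f}, {ci[1]:.1f})")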

  9. Stability analysis of an autocatalytic protein model

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2016-05-01

    A self-regulatory genetic circuit, where a protein acts as a positive regulator of its own production, is known to be the simplest biological network with a positive feedback loop. Although at least three components—DNA, RNA, and the protein—are required to form such a circuit, stability analysis of the fixed points of this self-regulatory circuit has been performed only after reducing the system to a two-component system, either by assuming a fast equilibration of the DNA component or by removing the RNA component. Here, stability of the fixed points of the three-component positive feedback loop is analyzed by obtaining eigenvalues of the full three-dimensional Hessian matrix. In addition to rigorously identifying the stable fixed points and saddle points, detailed information about the system can be obtained, such as the existence of complex eigenvalues near a fixed point.
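
    A numerical sketch of this kind of three-component stability analysis. The DNA-mRNA-protein feedback model and rate constants below are hypothetical illustrations, not the paper's system; fixed points are located from several initial guesses and classified by the eigenvalues of the 3x3 Jacobian evaluated there:

      import numpy as np
      from scipy.optimize import fsolve

      # hypothetical rates: d = fraction of active DNA, m = mRNA, p = protein
      kon, koff, a0, a1, dm, b, dp = 0.01, 1.0, 0.2, 20.0, 1.0, 1.0, 1.0

      def rhs(x):
          d, m, p = x
          return [kon * p**2 * (1 - d) - koff * d,   # protein activates its own gene
                  a0 + a1 * d - dm * m,
                  b * m - dp * p]

      def jacobian(x):
          d, m, p = x
          return np.array([[-kon * p**2 - koff, 0.0, 2 * kon * p * (1 - d)],
                           [a1, -dm, 0.0],
                           [0.0, b, -dp]])

      fixed_points = []
      for p0 in (0.2, 8.0, 13.0):                    # coarse initial guesses
          sol = fsolve(rhs, [0.5, p0, p0])
          if np.allclose(rhs(sol), 0, atol=1e-8) and not any(
                  np.allclose(sol, fp, atol=1e-4) for fp in fixed_points):
              fixed_points.append(sol)

      for fp in fixed_points:
          eig = np.linalg.eigvals(jacobian(fp))
          kind = "stable" if np.all(eig.real < 0) else "saddle/unstable"
          print(np.round(fp, 3), "eigenvalues:", np.round(eig, 3), "->", kind)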

  10. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  11. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    PubMed

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
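
    A minimal sketch of the forecasting stage, assuming the (2D)2PCA features have already been flattened into daily feature vectors; the RBF network is fit here with k-means centres and a ridge-regularized linear readout, one common recipe rather than necessarily the authors' training scheme:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(7)
      n_days, n_feat = 300, 24                     # flattened (2D)2PCA features per trading day
      X = rng.standard_normal((n_days, n_feat))
      y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(n_days)   # synthetic next-day target

      def rbf_features(X, centers, width):
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-d2 / (2.0 * width**2))

      split = 250                                  # chronological train/test split
      km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X[:split])
      width = np.mean(np.linalg.norm(X[:split, None] - km.cluster_centers_[None], axis=2))

      readout = Ridge(alpha=1e-2).fit(rbf_features(X[:split], km.cluster_centers_, width), y[:split])
      pred = readout.predict(rbf_features(X[split:], km.cluster_centers_, width))
      print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 3))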

  12. A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483

  13. Adhesive strength of total knee endoprostheses to bone cement - analysis of metallic and ceramic femoral components under worst-case conditions.

    PubMed

    Bergschmidt, Philipp; Dammer, Rebecca; Zietz, Carmen; Finze, Susanne; Mittelmeier, Wolfram; Bader, Rainer

    2016-06-01

    Evaluation of the adhesive strength of femoral components to the bone cement is a relevant parameter for predicting implant safety. In the present experimental study, three types of cemented femoral components (metallic, ceramic and silica/silane-layered ceramic) of the bicondylar Multigen Plus knee system, implanted on composite femora, were analysed. A pull-off test of the femoral components was performed after different loading and cementing conditions (four groups, with n=3 metallic, ceramic and silica/silane-layered ceramic components in each group). Pull-off forces were comparable for the metallic and the silica/silane-layered ceramic femoral components (mean 4769 N and 4298 N) under the standard test condition, whereas uncoated ceramic femoral components showed reduced pull-off forces (mean 2322 N). Loading under worst-case conditions led to decreased adhesive strength through loosening of the interface between implant and bone cement for the uncoated metallic and ceramic femoral components, respectively. Silica/silane-coated ceramic components remained stably fixed even under worst-case conditions. Loading at high flexion angles can induce interfacial tensile stress, which could promote early implant loosening. In conclusion, a silica/silane coating layer on the femoral component increased its adhesive strength to bone cement. Thicker cement mantles (>2 mm) reduce the adhesive strength of the femoral component and can increase the risk of cement break-off.

  14. Nontidal Loading Applied in VLBI Geodetic Analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.

    2015-12-01

    We investigate the application of nontidal atmosphere pressure, hydrology, and ocean loading series in the analysis of VLBI data. The annual amplitude of the VLBI scale variation is reduced to less than 0.1 ppb, a result of the annual components of the vertical loading series. VLBI site vertical scatter and baseline length scatter are reduced when these loading models are applied. We operate nontidal loading services for hydrology loading (GLDAS model), atmospheric pressure loading (NCEP), and nontidal ocean loading (JPL ECCO model). As an alternative validation, we compare these loading series with corresponding series generated by other analysis centers.

  15. Real time gamma-ray signature identifier

    DOEpatents

    Rowland, Mark [Alamo, CA]; Gosnell, Tom B [Moraga, CA]; Ham, Cheryl [Livermore, CA]; Perkins, Dwight [Livermore, CA]; Wong, James [Dublin, CA]

    2012-05-15

    A real time gamma-ray signature/source identification method and system using principal components analysis (PCA) for transforming and substantially reducing one or more comprehensive spectral libraries of nuclear materials types and configurations into a corresponding concise representation/signature(s) representing and indexing each individual predetermined spectrum in principal component (PC) space, wherein an unknown gamma-ray signature may be compared against the representative signature to find a match or at least characterize the unknown signature from among all the entries in the library with a single regression or simple projection into the PC space, so as to substantially reduce processing time and computing resources and enable real-time characterization and/or identification.
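
    A small sketch of the matching step on a synthetic spectral library: PCA compresses each library spectrum into a concise PC-space signature, and an unknown spectrum is characterized with a single projection followed by a nearest-neighbour comparison (library size, channel count and component count are illustrative):

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(8)
      n_lib, n_channels = 200, 1024                # library entries x spectral channels
      library = rng.gamma(shape=2.0, scale=1.0, size=(n_lib, n_channels))
      labels = [f"source_{i:03d}" for i in range(n_lib)]    # hypothetical source identifiers

      pca = PCA(n_components=20).fit(library)      # concise PC-space signatures
      signatures = pca.transform(library)

      # an "unknown" measurement: a noisy copy of one library entry
      unknown = library[57] + 0.5 * rng.standard_normal(n_channels)
      unknown_sig = pca.transform(unknown[None, :])

      dist = np.linalg.norm(signatures - unknown_sig, axis=1)
      print("best match:", labels[int(np.argmin(dist))], "distance:", round(float(dist.min()), 2))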

  16. ClustVis: a web tool for visualizing clustering of multivariate data using Principal Component Analysis and heatmap

    PubMed Central

    Metsalu, Tauno; Vilo, Jaak

    2015-01-01

    The Principal Component Analysis (PCA) is a widely used method of reducing the dimensionality of high-dimensional data, often followed by visualizing two of the components on a scatterplot. Although widely used, the method lacks an easy-to-use web interface that scientists with little programming experience could use to make plots of their own data. The same applies to creating heatmaps: it is possible to add conditional formatting for Excel cells to show colored heatmaps, but for more advanced features such as clustering and experimental annotations, more sophisticated analysis tools have to be used. We present a web tool called ClustVis that aims to have an intuitive user interface. Users can upload data from a simple delimited text file that can be created in a spreadsheet program. It is possible to modify the data processing methods and the final appearance of the PCA and heatmap plots by using drop-down menus, text boxes, sliders, etc. Appropriate defaults are given to reduce the time needed by the user to specify input parameters. As an output, users can download the PCA plot and heatmap in one of the preferred file formats. This web server is freely available at http://biit.cs.ut.ee/clustvis/. PMID:25969447

  17. Fetal ECG extraction using independent component analysis by Jade approach

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdomen and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational costs. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.
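
    A compact sketch of the separation step on synthetic maternal/fetal mixtures; scikit-learn's FastICA stands in for JADE, which is not part of the standard scientific Python stack, and the filter cutoff and heart rates are illustrative assumptions:

      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.decomposition import FastICA

      fs, dur = 500, 10
      t = np.arange(0, dur, 1 / fs)
      rng = np.random.default_rng(9)

      def ecg_like(heart_rate_hz, t):
          """Crude spike train standing in for an ECG waveform."""
          phase = (t * heart_rate_hz) % 1.0
          return np.exp(-((phase - 0.5) ** 2) / 0.001)

      maternal = ecg_like(1.2, t)                  # ~72 bpm
      fetal = 0.3 * ecg_like(2.3, t)               # ~138 bpm, much weaker
      noise = 0.05 * rng.standard_normal((2, t.size))

      # abdominal and chest leads as different mixtures of the two sources
      abdomen = 1.0 * maternal + 1.0 * fetal + noise[0]
      chest = 1.0 * maternal + 0.05 * fetal + noise[1]
      X = np.vstack([abdomen, chest])

      b, a = butter(2, 1.0, btype="highpass", fs=fs)   # remove baseline wander
      X = filtfilt(b, a, X, axis=1)

      sources = FastICA(n_components=2, random_state=0).fit_transform(X.T).T
      for k, s in enumerate(sources):
          corr = abs(np.corrcoef(s, fetal)[0, 1])
          print(f"component {k}: |corr with fetal template| = {corr:.2f}")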

  18. A practically unconditionally gradient stable scheme for the N-component Cahn-Hilliard system

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Geun; Choi, Jeong-Whan; Kim, Junseok

    2012-02-01

    We present a practically unconditionally gradient stable conservative nonlinear numerical scheme for the N-component Cahn-Hilliard system modeling the phase separation of an N-component mixture. The scheme is based on a nonlinear splitting method and is solved by an efficient and accurate nonlinear multigrid method. The scheme allows us to convert the N-component Cahn-Hilliard system into a system of N-1 binary Cahn-Hilliard equations and significantly reduces the required computer memory and CPU time. We observe that our numerical solutions are consistent with the linear stability analysis results. We also demonstrate the efficiency of the proposed scheme with various numerical experiments.

  19. A feasibility study on age-related factors of wrist pulse using principal component analysis.

    PubMed

    Jang-Han Bae; Young Ju Jeon; Sanghun Lee; Jaeuk U Kim

    2016-08-01

    Various analysis methods for examining wrist pulse characteristics are needed for accurate pulse diagnosis. In this feasibility study, principal component analysis (PCA) was performed to observe age-related factors of the wrist pulse from various analysis parameters. Forty subjects in their 20s and 40s participated, and their wrist pulse signal and respiration signal were acquired with a pulse tonometric device. After pre-processing of the signals, twenty analysis parameters that have been regarded as values reflecting pulse characteristics were calculated and PCA was performed. As a result, we could reduce the complex parameters to a lower dimension, and age-related factors of the wrist pulse were observed in new analysis parameters derived by combining the originals through PCA. These results demonstrate that PCA can be a useful tool for analyzing wrist pulse signals.

  20. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    NASA Astrophysics Data System (ADS)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution is becoming more and more frequent, which directly affects human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) has become the predominant technique; however, in some special cases, PLSR analysis produces considerable errors. In order to solve this problem, the traditional principal component regression (PCR) analysis method was improved in this paper by using the principle of PLSR. The experimental results show that for some special experimental data sets, the improved PCR analysis method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted using the principle of PLSR. Secondly, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is analyzed with both PLSR and the improved PCR and the two results are compared: the improved PCR and PLSR are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in ultraviolet spectral analysis of water, but for data near the detection limit, the improved PCR gives better results than PLSR.
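
    A brief sketch of the baseline comparison on synthetic UV-like spectra, using a PCA-plus-linear-regression pipeline for PCR and scikit-learn's PLSRegression for PLSR; the improved PCR described above would replace the plain PCA step with components selected using the PLSR principle:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(10)
      n_samples, n_wavelengths = 120, 200
      concentration = rng.uniform(0.0, 2.0, n_samples)         # water-quality parameter
      peak = np.exp(-((np.arange(n_wavelengths) - 80) ** 2) / 200.0)
      spectra = concentration[:, None] * peak + 0.05 * rng.standard_normal((n_samples, n_wavelengths))

      X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentration, random_state=0)

      pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X_tr, y_tr)
      pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

      def rmse(model, X, y):
          return float(np.sqrt(np.mean((np.ravel(model.predict(X)) - y) ** 2)))

      print("PCR  RMSE:", round(rmse(pcr, X_te, y_te), 4))
      print("PLSR RMSE:", round(rmse(pls, X_te, y_te), 4))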

  1. Rapid analysis of fertilizers by the direct-reading thermometric method.

    PubMed

    Sajó, I; Sipos, B

    1972-05-01

    The authors have developed rapid methods for the determination of the main components of fertilizers, namely phosphate, potassium and nitrogen fixed in various forms. In the absence of magnesium ions phosphate is precipitated with magnesia mixture; in the presence of magnesium ions ammonium phosphomolybdate is precipitated and the excess of molybdate is reacted with hydrogen peroxide. Potassium is determined by precipitation with silico-fluoride. For nitrogen fixed as ammonium salts the ammonium ions are condensed in a basic solution with formalin to hexamethylenetetramine; for nitrogen fixed as carbamide the latter is decomposed with sodium nitrite; for nitrogen fixed as nitrate the latter is reduced with titanium(III). In each case the temperature change of the test solution is measured. Practically all essential components of fertilizers may be determined by direct-reading thermometry; with this method and special apparatus the time of analysis is reduced to at most about 15 min for any determination.

  2. Real-space analysis of radiation-induced specific changes with independent component analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borek, Dominika; Bromberg, Raquel; Hattne, Johan

    A method of analysis is presented that allows for the separation of specific radiation-induced changes into distinct components in real space. The method relies on independent component analysis (ICA) and can be effectively applied to electron density maps and other types of maps, provided that they can be represented as sets of numbers on a grid. Here, for glucose isomerase crystals, ICA was used in a proof-of-concept analysis to separate temperature-dependent and temperature-independent components of specific radiation-induced changes for data sets acquired from multiple crystals across multiple temperatures. ICA identified two components, with the temperature-independent component being responsible for the majority of specific radiation-induced changes at temperatures below 130 K. The patterns of specific temperature-independent radiation-induced changes suggest a contribution from the tunnelling of electron holes as a possible explanation. In the second case, where a group of 22 data sets was collected on a single thaumatin crystal, ICA was used in another type of analysis to separate specific radiation-induced effects happening on different exposure-level scales. Here, ICA identified two components of specific radiation-induced changes that likely result from radiation-induced chemical reactions progressing with different rates at different locations in the structure. In addition, ICA unexpectedly identified the radiation-damage state corresponding to reduced disulfide bridges rather than the zero-dose extrapolated state as the highest contrast structure. The application of ICA to the analysis of specific radiation-induced changes in real space and the data pre-processing for ICA that relies on singular value decomposition, which was used previously in data space to validate a two-component physical model of X-ray radiation-induced changes, are discussed in detail. This work lays a foundation for a better understanding of protein-specific radiation chemistries and provides a framework for analysing effects of specific radiation damage in crystallographic and cryo-EM experiments.

  3. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

    Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data are then used in the proposed approach. Because typical measurement contaminants are always included in any test, the measured data are further processed to remove contaminants and are then used in the proposed approach. The final case, using improved data with the reduced order, test verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. The use of the technique, with its strengths and weaknesses, is discussed.
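
    As a complement to the description above, a minimal numerical sketch of the Craig-Bampton reduction that the test-verified component model feeds into, applied to a small spring-mass chain with assumed masses and stiffnesses; the exact analytical model improvement step is not shown:

      import numpy as np
      from scipy.linalg import eigh

      # small spring-mass chain standing in for a component FE model (assumed values)
      n, m, k = 10, 1.0, 1.0e4
      M = np.eye(n) * m
      K = np.zeros((n, n))
      K[0, 0] = k                                   # ground spring at the first DOF
      for e in range(n - 1):                        # springs between neighbouring masses
          K[np.ix_([e, e + 1], [e, e + 1])] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

      boundary = [n - 1]                            # interface DOF kept physical
      interior = [d for d in range(n) if d not in boundary]
      Kii, Kib = K[np.ix_(interior, interior)], K[np.ix_(interior, boundary)]
      Mii = M[np.ix_(interior, interior)]

      n_modes = 3                                   # retained fixed-interface modes
      _, Phi = eigh(Kii, Mii)
      Phi = Phi[:, :n_modes]
      Psi = -np.linalg.solve(Kii, Kib)              # static constraint modes

      # Craig-Bampton transformation: physical DOFs = T @ [modal coords; boundary DOFs]
      T = np.zeros((n, n_modes + len(boundary)))
      T[np.ix_(interior, list(range(n_modes)))] = Phi
      T[np.ix_(interior, list(range(n_modes, T.shape[1])))] = Psi
      T[np.ix_(boundary, list(range(n_modes, T.shape[1])))] = np.eye(len(boundary))

      M_r, K_r = T.T @ M @ T, T.T @ K @ T           # reduced component matrices

      w_full = np.sqrt(eigh(K, M, eigvals_only=True)[:4]) / (2 * np.pi)
      w_red = np.sqrt(eigh(K_r, M_r, eigvals_only=True)[:4]) / (2 * np.pi)
      print("full model frequencies (Hz):   ", np.round(w_full, 2))
      print("reduced model frequencies (Hz):", np.round(w_red, 2))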

  4. How multi segmental patterns deviate in spastic diplegia from typical developed.

    PubMed

    Zago, Matteo; Sforza, Chiarella; Bona, Alessia; Cimolin, Veronica; Costici, Pier Francesco; Condoluci, Claudia; Galli, Manuela

    2017-10-01

    The relationship between gait features and coordination in children with Cerebral Palsy has not yet been sufficiently analyzed. Principal Component Analysis can help in understanding motion patterns by decomposing movement into its fundamental components (Principal Movements). This study aims at quantitatively characterizing the functional connections between multi-joint gait patterns in Cerebral Palsy. 65 children with spastic diplegia aged 10.6 (SD 3.7) years participated in standardized gait analysis trials; 31 typically developing adolescents aged 13.6 (4.4) years were also tested. To determine whether posture affects gait patterns, patients were split into a Crouch and a knee Hyperextension group according to the knee flexion angle at standing. 3D coordinates of the hips, knees, ankles, metatarsal joints, pelvis and shoulders were submitted to Principal Component Analysis. Four Principal Movements accounted for 99% of global variance; components 1-3 explained the major sagittal patterns, components 4-5 referred to movements in the frontal plane and component 6 to additional movement refinements. Dimensionality was higher in patients than in controls (p<0.01), and the Crouch group significantly differed from controls in the application of components 1 and 4-6 (p<0.05), while the knee Hyperextension group differed in components 1-2 and 5 (p<0.05). Compensatory strategies of children with Cerebral Palsy (interactions between main and secondary movement patterns) were objectively determined. Principal Movements can reduce the effort in interpreting gait reports, providing an immediate and quantitative picture of the connections between movement components. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Automating Risk Analysis of Software Design Models

    PubMed Central

    Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688

  6. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  7. A methodology for commonality analysis, with applications to selected space station systems

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    1989-01-01

    The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.

  8. New insights into the folding of a β-sheet miniprotein in a reduced space of collective hydrogen bond variables: application to a hydrodynamic analysis of the folding flow.

    PubMed

    Kalgin, Igor V; Caflisch, Amedeo; Chekmarev, Sergei F; Karplus, Martin

    2013-05-23

    A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted principal component analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed, and it is complemented by a hydrodynamic analysis. The latter, making use of "passive tracers" in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this "hydrogen atom" of protein folding.

  9. A diffusion-matched principal component analysis (DM-PCA) based two-channel denoising procedure for high-resolution diffusion-weighted MRI

    PubMed Central

    Chang, Hing-Chiu; Bilgin, Ali; Bernstein, Adam; Trouard, Theodore P.

    2018-01-01

    Over the past several years, significant efforts have been made to improve the spatial resolution of diffusion-weighted imaging (DWI), aiming at better detecting subtle lesions and more reliably resolving white-matter fiber tracts. A major concern with high-resolution DWI is the limited signal-to-noise ratio (SNR), which may significantly offset the advantages of high spatial resolution. Although the SNR of DWI data can be improved by denoising in post-processing, existing denoising procedures may potentially reduce the anatomic resolvability of high-resolution imaging data. Additionally, non-Gaussian noise induced signal bias in low-SNR DWI data may not always be corrected with existing denoising approaches. Here we report an improved denoising procedure, termed diffusion-matched principal component analysis (DM-PCA), which comprises 1) identifying a group of (not necessarily neighboring) voxels that demonstrate very similar magnitude signal variation patterns along the diffusion dimension, 2) correcting low-frequency phase variations in complex-valued DWI data, 3) performing PCA along the diffusion dimension for real- and imaginary-components (in two separate channels) of phase-corrected DWI voxels with matched diffusion properties, 4) suppressing the noisy PCA components in real- and imaginary-components, separately, of phase-corrected DWI data, and 5) combining real- and imaginary-components of denoised DWI data. Our data show that the new two-channel (i.e., for real- and imaginary-components) DM-PCA denoising procedure performs reliably without noticeably compromising anatomic resolvability. Non-Gaussian noise induced signal bias could also be reduced with the new denoising method. The DM-PCA based denoising procedure should prove highly valuable for high-resolution DWI studies in research and clinical uses. PMID:29694400
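
    As a rough illustration of steps 3-5 above, the sketch below applies PCA separately to the real and imaginary channels of a complex DWI block and keeps only the leading components; the earlier diffusion-matched voxel grouping and phase-correction steps are omitted, and the array shapes and n_keep value are assumptions rather than settings from the paper.

      import numpy as np

      def two_channel_pca_denoise(dwi, n_keep=5):
          """dwi: complex array (voxels, diffusion volumes), already phase-corrected."""
          def pca_suppress(data):
              mean = data.mean(axis=0)
              U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
              s[n_keep:] = 0.0                  # suppress the noisy PCA components
              return (U * s) @ Vt + mean
          # real and imaginary parts are denoised in separate channels, then recombined
          return pca_suppress(dwi.real) + 1j * pca_suppress(dwi.imag)

      rng = np.random.default_rng(0)
      dwi = rng.standard_normal((1000, 60)) + 1j * rng.standard_normal((1000, 60))
      denoised = two_channel_pca_denoise(dwi, n_keep=5)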

  10. Domain adaptation via transfer component analysis.

    PubMed

    Pan, Sinno Jialin; Tsang, Ivor W; Kwok, James T; Yang, Qiang

    2011-02-01

    Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using the maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into the learning of transfer components. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally leads to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
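
    A minimal unsupervised sketch of the TCA construction described above (maximum mean discrepancy minimized in a kernel-induced subspace): a linear kernel is used here, and the regularization value mu and the toy data are illustrative assumptions rather than the paper's settings.

      import numpy as np

      def tca(Xs, Xt, n_components=2, mu=1.0):
          ns, nt = len(Xs), len(Xt)
          n = ns + nt
          X = np.vstack([Xs, Xt])
          K = X @ X.T                                    # linear kernel matrix

          # MMD matrix L: distance between source and target means in feature space
          e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
          L = e @ e.T
          H = np.eye(n) - np.ones((n, n)) / n            # centering matrix

          # leading eigenvectors of (KLK + mu*I)^-1 KHK give the transfer components
          A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
          eigvals, eigvecs = np.linalg.eig(A)
          W = eigvecs[:, np.argsort(-eigvals.real)[:n_components]].real

          Z = K @ W                                      # all samples in the shared subspace
          return Z[:ns], Z[ns:]

      rng = np.random.default_rng(0)
      Xs = rng.standard_normal((40, 5))                  # toy source-domain samples
      Xt = rng.standard_normal((30, 5)) + 1.0            # shifted target-domain samples
      Zs, Zt = tca(Xs, Xt)

    A classifier trained on Zs with the source labels can then be applied directly to Zt, which is the practical payoff of projecting both domains onto the learned transfer components.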

  11. Temporal binning of time-correlated single photon counting data improves exponential decay fits and imaging speed

    PubMed Central

    Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.

    2016-01-01

    Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
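
    To make the binning idea concrete, the sketch below simulates a low-count 256-bin biexponential decay, rebins it to 42 bins, and fits a two-component model; the 10 ns window, the lifetimes, and the use of a plain least-squares fit without instrument-response deconvolution are simplifying assumptions, not the SPCImage or SLIM Curve procedures themselves.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)
      t_full = np.linspace(0.0, 10.0, 256)                  # ns, assumed acquisition window
      true_decay = 800 * np.exp(-t_full / 0.4) + 300 * np.exp(-t_full / 2.5)
      decay = rng.poisson(true_decay)                       # simulated low-photon-count decay

      def rebin(counts, n_out=42):
          n_in = (len(counts) // n_out) * n_out             # trim so bins divide evenly
          return counts[:n_in].reshape(n_out, -1).sum(axis=1)

      def biexp(t, a1, tau1, a2, tau2):
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      binned = rebin(decay)
      t_binned = rebin(t_full) / (len(t_full) // len(binned))   # mean time of each coarse bin
      p0 = (binned.max(), 0.5, binned.max() / 3, 3.0)           # rough initial guesses
      (a1, tau1, a2, tau2), _ = curve_fit(biexp, t_binned, binned, p0=p0, bounds=(0, np.inf))
      print(f"fitted lifetimes: {tau1:.2f} ns and {tau2:.2f} ns")

    Fixing tau1 and tau2 to known values, as the abstract describes, would amount to fitting only the two amplitudes, which is why it reduces the lifetime-component error so sharply.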

  12. Effects of phenytoin on N100 augmenting/reducing and the late positive complex of the event-related potential: a topographic analysis.

    PubMed

    Pritchard, W S; Barratt, E S; Faulk, D M; Brandt, M E; Bryant, S G

    1986-01-01

    The effects of 100 mg of phenytoin on the topographic distribution of augmenting/reducing (amplitude response/nonresponse to increases in stimulus intensity) of the visual N100 component of the event-related potential (ERP) were examined. In normal subjects, visual N100 augmenting is associated with impulsivity and attentional distraction. Effects of phenytoin on the topographic distributions of the P300 and slow-wave cognitive ERP components were also examined. Subjects counted the total number of light flashes presented at two highly discriminable but equiprobable intensities. Results indicated that phenytoin had a significant reducing effect on the intensity response of N100 at the vertex and anterior temporal electrode sites, and approached significance at the frontal pole. That is, at these loci N100 showed less of an increase in amplitude (or, in some subjects, more of a decrease) in going from baseline to drug than in going from baseline to placebo. Results also indicated that phenytoin significantly enhanced the amplitude of the frontal, negative portion of slow wave, but not the posterior, positive portion or the P300 component. These findings are consistent with behavioral evidence that phenytoin reduces impulsivity and improves concentration.

  13. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  14. 'Building Block Approach' for Structural Analysis of Thermoplastic Composite Components for Automotive Applications

    NASA Astrophysics Data System (ADS)

    Carello, M.; Amirth, N.; Airale, A. G.; Monti, M.; Romeo, A.

    2017-12-01

    Advanced thermoplastic prepreg composite materials stand out for their ability to allow complex designs with high specific strength and stiffness. This makes them an excellent choice for lightweight automotive components to reduce mass and increase fuel efficiency, while maintaining the functionality and mechanical characteristics of traditional thermosetting prepregs, with a production cycle time and recyclability suited to mass-production manufacturing. Currently, the aerospace and automotive sectors struggle to carry out accurate Finite Element (FE) component analyses and in some cases are unable to validate the obtained results. In this study, structural Finite Element Analysis (FEA) has been performed on a thermoplastic fiber-reinforced component designed and manufactured through an integrated injection molding process, which consists of thermoforming the prepreg laminate and overmolding the other parts. This process is usually referred to as hybrid molding, and it allows zones subjected to additional stresses to be reinforced with thermoformed thermoplastic prepreg as required and overmolded with a short-fiber thermoplastic resin in a single process. This paper aims to establish an accurate predictive model on a rational basis and an innovative methodology for the structural analysis of thermoplastic composite components by comparison with experimental test results.

  15. Analysis of the state of the art of precast concrete bridge substructure systems.

    DOT National Transportation Integrated Search

    2013-10-01

    Precasting of bridge substructure components holds potential for accelerating the construction of bridges, reducing impacts to the traveling public on routes adjacent to construction sites, improving bridge durability and hence service life, and r...

  16. Which type of sedentary behaviour intervention is more effective at reducing body mass index in children? A meta-analytic review.

    PubMed

    Liao, Y; Liao, J; Durand, C P; Dunton, G F

    2014-03-01

    Sedentary behaviour is emerging as an independent risk factor for paediatric obesity. Some evidence suggests that limiting sedentary behaviour alone could be effective in reducing body mass index (BMI) in children. However, whether adding physical activity and diet-focused components to sedentary behaviour reduction interventions could lead to an additive effect is unclear. This meta-analysis aims to assess the overall effect size of sedentary behaviour interventions on BMI reduction and to compare whether interventions that have multiple components (sedentary behaviour, physical activity and diet) have a higher mean effect size than interventions with a single (sedentary behaviour) component. Included studies (n = 25) were randomized controlled trials of children (<18 years) with intervention components aimed at reducing sedentary behaviour and with BMI measured at pre- and post-intervention. Effect size was calculated as the mean difference in BMI change between children in an intervention group and a control group. Results indicated that sedentary behaviour interventions had a significant effect on BMI reduction. The pooled effect sizes of multi-component interventions (g = -0.060∼-0.089) did not differ from that of the single-component interventions (g = -0.154), and neither of them had a significant effect size on its own. Future paediatric obesity interventions may consider focusing on developing strategies to decrease multiple screen-related sedentary behaviours. © 2013 The Authors. obesity reviews © 2013 International Association for the Study of Obesity.

  17. A Second Look at Dwyer's Studies by Means of Meta-Analysis: The Effects of Pictorial Realism on Text Comprehension and Vocabulary.

    ERIC Educational Resources Information Center

    Reinwein, Joachim; Huberdeau, Lucie

    A meta-analysis examined a series of studies by F.M. Dwyer on the effect of illustrations on text comprehension. Principal component analysis was used to reduce the four posttests used by Dwyer to more fundamental factors of learning, followed by analyses of variance. All nine studies (involving secondary-school and college students) in which…

  18. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part I

    NASA Technical Reports Server (NTRS)

    Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.

    2013-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Mechanical placement collaboration reduced potential electromagnetic interference (EMI). Through application of newly selected electrical components and thermal analysis data, a total electronic chassis redesign was accomplished. Use of an innovative forced convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern to make efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.

  19. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    PubMed

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
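
    The sketch below illustrates the stability-ranking idea in its simplest form: FastICA is run several times with different seeds and each component of a reference run is scored by its best absolute correlation with components from the other runs. It is only a schematic of the MSTD procedure described above (a full implementation would also scan over the number of components); scikit-learn's FastICA and the synthetic matrix are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA

      def component_stability(X, n_components, n_runs=10):
          runs = []
          for seed in range(n_runs):
              ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
              S = ica.fit_transform(X)                   # rows: observations, cols: ICs
              runs.append(S / np.linalg.norm(S, axis=0))
          ref = runs[0]
          stability = np.zeros(n_components)
          for S in runs[1:]:
              corr = np.abs(ref.T @ S)                   # |cosine similarity| between runs
              stability += corr.max(axis=1)              # best match for each reference IC
          return stability / (len(runs) - 1)             # close to 1.0 = highly reproducible

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 40))                 # stand-in for an expression matrix
      print(np.sort(component_stability(X, n_components=5))[::-1])

    Ranking components by such a score and looking for a sharp drop in the stability profile is the spirit of the MSTD selection described above.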

  20. Measuring farm sustainability using data envelope analysis with principal components: the case of Wisconsin cranberry.

    PubMed

    Dong, Fengxia; Mitchell, Paul D; Colquhoun, Jed

    2015-01-01

    Measuring farm sustainability performance is a crucial component for improving agricultural sustainability. While extensive assessments and indicators exist that reflect the different facets of agricultural sustainability, because of the relatively large number of measures and interactions among them, a composite indicator that integrates and aggregates over all variables is particularly useful. This paper describes and empirically evaluates a method for constructing a composite sustainability indicator that individually scores and ranks farm sustainability performance. The method first uses non-negative polychoric principal component analysis to reduce the number of variables, to remove correlation among variables and to transform categorical variables to continuous variables. Next the method applies common-weight data envelope analysis to these principal components to individually score each farm. The method solves weights endogenously and allows identifying important practices in sustainability evaluation. An empirical application to Wisconsin cranberry farms finds heterogeneity in sustainability practice adoption, implying that some farms could adopt relevant practices to improve the overall sustainability performance of the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Multivariate Analysis of Solar Spectral Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Pilewskie, P.; Rabbette, M.

    2001-01-01

    Principal component analysis is used to characterize approximately 7000 downwelling solar irradiance spectra retrieved at the Southern Great Plains site during an Atmospheric Radiation Measurement (ARM) shortwave intensive operating period. This analysis technique has proven to be very effective in reducing a large set of variables into a much smaller set of independent variables while retaining the information content. It is used to determine the minimum number of parameters necessary to characterize atmospheric spectral irradiance or the dimensionality of atmospheric variability. It was found that well over 99% of the spectral information was contained in the first six mutually orthogonal linear combinations of the observed variables (flux at various wavelengths). Rotation of the principal components was effective in separating various components by their independent physical influences. The majority of the variability in the downwelling solar irradiance (380-1000 nm) was explained by the following fundamental atmospheric parameters (in order of their importance): cloud scattering, water vapor absorption, molecular scattering, and ozone absorption. In contrast to what has been proposed as a resolution to a clear-sky absorption anomaly, no unexpected gaseous absorption signature was found in any of the significant components.

  2. Landslides Identification Using Airborne Laser Scanning Data Derived Topographic Terrain Attributes and Support Vector Machine Classification

    NASA Astrophysics Data System (ADS)

    Pawłuszek, Kamila; Borkowski, Andrzej

    2016-06-01

    Since the availability of high-resolution Airborne Laser Scanning (ALS) data, substantial progress has been made in geomorphological research, especially in landslide analysis. First and second order derivatives of the Digital Terrain Model (DTM) have become a popular and powerful tool in landslide inventory mapping. Nevertheless, automatic landslide mapping based on sophisticated classifiers including Support Vector Machines (SVM), Artificial Neural Networks or Random Forests is often computationally time consuming. The objective of this research is to explore in depth the topographic information provided by ALS data and to overcome this computational limitation. For this reason, an extended set of topographic features was derived and Principal Component Analysis (PCA) was used to reduce redundant information. The proposed approach was tested on a susceptible area affected by more than 50 landslides, located at Rożnów Lake in the Carpathian Mountains, Poland. The initial seven PCA components, carrying 90% of the total variability of the original topographic attributes, were used for SVM classification. Comparing the results with the landslide inventory map, the average user's accuracy (UA), producer's accuracy (PA), and overall accuracy (OA) were calculated for the two models. For the PCA-feature-reduced model, UA, PA, and OA were found to be 72%, 76%, and 72%, respectively. Similarly, UA, PA, and OA for the non-reduced original topographic model were 74%, 77%, and 74%, respectively. Using the initial seven PCA components instead of the twenty original topographic attributes does not significantly change the identification accuracy but reduces the computational time.
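
    A compact way to reproduce this kind of workflow is a scikit-learn pipeline that standardizes the terrain attributes, keeps enough principal components for 90% of the variance, and feeds them to an SVM; the synthetic data and the SVM hyper-parameters below are placeholders, not the settings used in the study.

      from sklearn.datasets import make_classification
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # stand-in for ALS-derived topographic attributes with landslide labels
      X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                                 random_state=0)

      model = make_pipeline(
          StandardScaler(),           # attributes have mixed units and ranges
          PCA(n_components=0.90),     # keep PCs explaining 90% of the variance
          SVC(kernel="rbf", C=10.0, gamma="scale"),
      )
      print(cross_val_score(model, X, y, cv=5).mean())   # overall accuracy estimate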

  3. The relationship between fuel lubricity and diesel injection system wear

    NASA Astrophysics Data System (ADS)

    Lacy, Paul I.

    1992-01-01

    Use of low-lubricity fuel may have contributed to increased failure rates associated with critical fuel injection equipment during the 1991 Operation Desert Storm. However, accurate quantitative analysis of failed components from the field is almost impossible due to the unique service history of each pump. This report details the results of pump stand tests with fuels of equal viscosity, but widely different lubricity. Baseline tests were also performed using reference no. 2 diesel fuel. Use of poor lubricity fuel under these controlled conditions was found to greatly reduce both pump durability and engine performance. However, both improved metallurgy and fuel lubricity additives significantly reduced wear. Good correlation was obtained between standard bench tests and lightly loaded pump components. However, high contact loads on isolated components produced a more severe wear mechanism that is not well reflected by the Ball-on-Cylinder Lubricity Evaluator.

  4. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    The algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern formation is based on the decomposition of an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and the optimal estimates of the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.

  5. Extension of similarity test procedures to cooled engine components with insulating ceramic coatings

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.

    1980-01-01

    Material thermal conductivity was analyzed for its effect on the thermal performance of air cooled gas turbine components, both with and without a ceramic thermal-barrier material, tested at reduced temperatures and pressures. The analysis shows that neglecting the material thermal conductivity can contribute significant errors when metal-wall-temperature test data taken on a turbine vane are extrapolated to engine conditions. This error in metal temperature for an uncoated vane is of opposite sign from that for a ceramic-coated vane. A correction technique is developed for both ceramic-coated and uncoated components.

  6. Levelized cost-benefit analysis of proposed diagnostics for the Ammunition Transfer Arm of the US Army's Future Armored Resupply Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, V.K.; Young, J.M.

    1995-07-01

    The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, or arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.
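
    The structure of such a cost-benefit comparison can be sketched as an expected annual loss with and without diagnostics. The component names below come from the abstract, but every failure rate, cost, and effectiveness figure is a made-up placeholder, not a value from the study.

      # failures per year and consequence cost per event (placeholder values)
      failure_rates = {"brakes": 0.08, "actuators": 0.05,
                       "bearings": 0.12, "conveyor": 0.06}
      consequence_cost = {"brakes": 250_000, "actuators": 120_000,
                          "bearings": 60_000, "conveyor": 90_000}
      diagnostics_effectiveness = 0.6      # assumed fraction of failures avoided
      diagnostics_annual_cost = 15_000     # assumed yearly cost of the monitoring systems

      baseline = sum(failure_rates[c] * consequence_cost[c] for c in failure_rates)
      with_diag = diagnostics_annual_cost + sum(
          failure_rates[c] * (1 - diagnostics_effectiveness) * consequence_cost[c]
          for c in failure_rates)
      print(f"expected annual loss without diagnostics: ${baseline:,.0f}")
      print(f"expected annual loss with diagnostics:    ${with_diag:,.0f}")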

  7. Derivation of Boundary Manikins: A Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Young, Karen; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar

    2008-01-01

    When designing any human-system interface, it is critical to provide realistic anthropometry to properly represent how a person fits within a given space. This study aimed to identify a minimum number of boundary manikins, or representative models of subjects' anthropometry from a target population, which would realistically represent the population. The boundary manikin anthropometry was derived using Principal Component Analysis (PCA). PCA is a statistical approach to reduce a multi-dimensional dataset using eigenvectors and eigenvalues. The measurements used in the PCA were identified as those measurements critical for suit and cockpit design. The PCA yielded a total of 26 manikins per gender, as well as their anthropometry from the target population. Reduction techniques were implemented to reduce this number further, with a final result of 20 female and 22 male subjects. The anthropometry of the boundary manikins was then used to create 3D digital models (to be discussed in subsequent papers) intended for use by designers to test components of their space suit design, to verify that the requirements specified in the Human Systems Integration Requirements (HSIR) document are met. The end goal is to allow designers to generate suits that accommodate the diverse anthropometry of the user population.

  8. Independent component analysis based channel equalization for 6 × 6 MIMO-OFDM transmission over few-mode fiber.

    PubMed

    He, Zhixue; Li, Xiang; Luo, Ming; Hu, Rong; Li, Cai; Qiu, Ying; Fu, Songnian; Yang, Qi; Yu, Shaohua

    2016-05-02

    We propose and experimentally demonstrate two independent component analysis (ICA) based channel equalizers (CEs) for 6 × 6 MIMO-OFDM transmission over few-mode fiber. Compared with the conventional channel equalizer based on training symbols (TSs-CE), the proposed two ICA-based channel equalizers (ICA-CE-I and ICA-CE-II) can achieve comparable performances, while requiring much less training symbols. Consequently, the overheads for channel equalization can be substantially reduced from 13.7% to 0.4% and 2.6%, respectively. Meanwhile, we also experimentally investigate the convergence speed of the proposed ICA-based CEs.

  9. Strategies for reducing large fMRI data sets for independent component analysis.

    PubMed

    Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R

    2006-06-01

    In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
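
    The CRLS network itself is not part of standard libraries, but the same practical idea (extracting a fixed number of principal components from data chunks without building the full covariance matrix) can be illustrated with scikit-learn's IncrementalPCA; this is an analogous substitute, not the CRLS-PCA method, and the dimensions below are arbitrary stand-ins for fMRI data.

      import numpy as np
      from sklearn.decomposition import IncrementalPCA

      n_voxels = 20_000                                # illustrative spatial dimension
      ipca = IncrementalPCA(n_components=10)

      rng = np.random.default_rng(0)
      for _ in range(8):                               # e.g. one chunk per data file
          chunk = rng.standard_normal((25, n_voxels))  # stand-in for a block of scans
          ipca.partial_fit(chunk)                      # updates the PCs incrementally

      new_scans = rng.standard_normal((50, n_voxels))
      reduced = ipca.transform(new_scans)              # (50, 10) reduced representation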

  10. Improving the accuracy of Density Functional Theory (DFT) calculation for homolysis bond dissociation energies of Y-NO bond: generalized regression neural network based on grey relational analysis and principal component analysis.

    PubMed

    Li, Hong Zhi; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2011-01-01

    We propose a generalized regression neural network (GRNN) approach based on grey relational analysis (GRA) and principal component analysis (PCA) (GP-GRNN) to improve the accuracy of density functional theory (DFT) calculations for homolysis bond dissociation energies (BDE) of the Y-NO bond. As a demonstration, this combined quantum chemistry calculation with the GP-GRNN approach has been applied to evaluate the homolysis BDE of 92 Y-NO organic molecules. The results show that the full-descriptor GRNN without GRA and PCA (F-GRNN) and with GRA (G-GRNN) approaches reduce the root-mean-square (RMS) of the calculated homolysis BDE of the 92 organic molecules from 5.31 to 0.49 and 0.39 kcal mol(-1) for the B3LYP/6-31G(d) calculation. The newly developed GP-GRNN approach further reduces the RMS to 0.31 kcal mol(-1). Thus, the GP-GRNN correction on top of B3LYP/6-31G(d) can improve the accuracy of calculating homolysis BDE in quantum chemistry and can predict homolysis BDEs that cannot be obtained experimentally.

  11. Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine

    NASA Astrophysics Data System (ADS)

    Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo

    2017-07-01

    Load sharing behavior is very important for power-split gearing systems; the star gearing reducer, as a new and special type of transmission system, can be used in many industrial fields. However, there is little literature on the key multiple-split load sharing issue in the main gearbox used in the new geared turbofan engine. Here, the load sharing behavior among the star gears of a star gearing reducer for a geared turbofan engine is analyzed. A comprehensive meshing error analysis is conducted on the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer. The floating meshing error resulting from meshing clearance variation caused by the simultaneous floating of the sun gear and annular gear is taken into account. A refined mathematical model for calculating the load sharing coefficient is established, accounting for the different meshing stiffness and supporting stiffness of the components. Curves of the load sharing coefficient under interactions, single actions, and single variations of the various component errors are obtained, and the sensitivity of the load sharing coefficient to the different errors is determined. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This work provides theoretical evidence for optimal parameter design and proper tolerance distribution in the development and manufacturing process, so as to achieve optimal effects in economy and technology.

  12. Principal component analysis acceleration of rovibrational coarse-grain models for internal energy excitation and dissociation

    NASA Astrophysics Data System (ADS)

    Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry

    2018-04-01

    The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped in distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied on the training data provided by the coarse grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we will demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(1Σg+)-N(4Su) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(1Σg+)-N(4Su) system in shock relaxation simulations.

  13. Arthropod Surveillance Programs: Basic Components, Strategies, and Analysis

    DTIC Science & Technology

    2012-01-01

    industries through hide damage, reductions in animal weight gains, or reduced production of animal products such as milk or eggs (Reviewed by Lehane... chiopterus (Meigen 1830) was abundant on sheep in southern England, although relatively uncommon in nearby light traps. Furthermore, attraction to or... Cross correlation maps: a tool for visualizing and modeling time lagged associations. Vector Borne Zoonotic Dis. 5: 267-275. Duehl, A. J., L. W

  14. Effect of biological and coagulation pre-treatments to control organic and biofouling potential components of ultrafiltration membrane in the treatment of lake water.

    PubMed

    Pramanik, Biplob Kumar; Kajol, Annaduzzaman; Suja, Fatihah; Md Zain, Shahrom

    2017-03-01

    Biological aerated filtration (BAF), sand filtration (SF), and alum and Moringa oleifera coagulation were investigated as pre-treatments for reducing the organic and biofouling potential components fed to an ultrafiltration (UF) membrane in the treatment of lake water. The carbohydrate content was mainly responsible for reversible fouling of the UF membrane, more so than the protein or dissolved organic carbon (DOC) content. All pre-treatments effectively reduced these contents and improved UF filterability. Both BAF and SF led to markedly greater improvement in flux than the coagulation processes, and alum gave greater flux than M. oleifera. This was attributed to the effective removal and/or breakdown of high molecular weight (MW) organics by the biofilters. BAF led to greater improvement in flux than SF, due to greater breakdown of high MW organics, which was also confirmed by attenuated total reflection-Fourier transform infrared spectroscopy analysis. The coagulation processes were ineffective in removing biofouling potential components, whereas both biofilters were very effective, as shown by the reduction of low MW organics, biodegradable dissolved organic carbon and assimilable organic carbon contents. This study demonstrated the potential of biological pre-treatments for reducing organic and biofouling potential components and thus improving flux in UF treatment of lake water.

  15. POD/MAC-Based Modal Basis Selection for a Reduced Order Nonlinear Response Analysis

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2007-01-01

    A feasibility study was conducted to explore the applicability of a POD/MAC basis selection technique to a nonlinear structural response analysis. For the case studied, the application of the POD/MAC technique resulted in a substantial improvement of the reduced order simulation when compared to a classic approach utilizing only the low frequency modes present in the excitation bandwidth. Further studies aim to expand the application of the presented technique to more complex structures, including non-planar and two-dimensional configurations. For non-planar structures, the separation of different displacement components may not be necessary or desirable.
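
    The basis-selection idea can be sketched as follows: POD modes are extracted from response snapshots by SVD, and candidate basis vectors are scored against them with the Modal Assurance Criterion (MAC). The random arrays stand in for the response history and candidate modal basis; they are not data from the study.

      import numpy as np

      def pod_modes(snapshots, n_modes):
          """snapshots: (n_dof, n_snapshots) response history; returns POD basis."""
          centered = snapshots - snapshots.mean(axis=1, keepdims=True)
          U, _, _ = np.linalg.svd(centered, full_matrices=False)
          return U[:, :n_modes]

      def mac(phi, psi):
          """MAC matrix between two mode sets (columns are mode shapes)."""
          num = np.abs(phi.T @ psi) ** 2
          den = np.outer(np.sum(phi * phi, axis=0), np.sum(psi * psi, axis=0))
          return num / den

      rng = np.random.default_rng(1)
      snapshots = rng.standard_normal((120, 400))       # stand-in for nonlinear response data
      candidate_basis = rng.standard_normal((120, 30))  # e.g. linear normal modes

      pod = pod_modes(snapshots, n_modes=8)
      scores = mac(pod, candidate_basis).max(axis=0)    # best MAC for each candidate mode
      selected = np.argsort(-scores)[:8]                # modes best aligned with the POD basis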

  16. Transportation Safety Data and Analysis : volume 3 framework for highway safety mitigation and workforce development.

    DOT National Transportation Integrated Search

    2011-05-01

    Safety has always been an important component in the planning, design, and operation of highways. In an effort to reduce crashes occurring on highway facilities, the Safe, Accountable, Flexible, and Efficient Transportation Equity Act - A Legacy ...

  17. Transportation safety data and analysis : Volume 3, Framework for highway safety mitigation and workforce development.

    DOT National Transportation Integrated Search

    2011-05-01

    Safety has always been an important component in the planning, design, and operation of highways. In an effort to reduce crashes occurring on highway facilities, the Safe, Accountable, Flexible, and Efficient Transportation Equity Act - A Legacy ...

  18. Frontal auditory evoked potentials and augmenting-reducing.

    PubMed

    Bruneau, N; Roux, S; Garreau, B; Lelord, G

    1985-09-01

    Auditory evoked potentials (AEPs) to tones (750 Hz--200 msec) ranging from 50 to 80 dB SPL were studied at Cz and Fz leads in 29 normal adults (15 males) ranging in age from 20 to 22. Peak-to-trough amplitudes were measured for the P1-N1 and the N1-P2 wave forms as well as baseline (500 msec prestimulus)-to-peak amplitudes for each component, i.e., P1, N1 and P2. Amplitudes were examined as a function of intensity and electrode location. Cz-Fz amplitude differences increased with increasing stimulus intensity, the differentiating peak being the N1 component. An overall reducing phenomenon was found at Fz in the 70-80 dB range whereas an augmenting effect was observed at Cz for these intensities. The augmenting/reducing groups defined by analysis of individual amplitude-intensity patterns were different whether we considered Fz or Cz results: Fz reducers were more numerous than Cz reducers. These results on prominent reducing at the frontal level were examined in relation to the data concerning the modulatory function of the frontal cortex on auditory EPs. Implications were drawn for the role of the frontal cortex in cortical augmenting-reducing.

  19. Analysis of components of variance in multiple-reader studies of computer-aided diagnosis with different tasks

    NASA Astrophysics Data System (ADS)

    Beiden, Sergey V.; Wagner, Robert F.; Campbell, Gregory; Metz, Charles E.; Chan, Heang-Ping; Nishikawa, Robert M.; Schnall, Mitchell D.; Jiang, Yulei

    2001-06-01

    In recent years, the multiple-reader, multiple-case (MRMC) study paradigm has become widespread for receiver operating characteristic (ROC) assessment of systems for diagnostic imaging and computer-aided diagnosis. We review how MRMC data can be analyzed in terms of the multiple components of the variance (case, reader, interactions) observed in those studies. Such information is useful for the design of pivotal studies from results of a pilot study and also for studying the effects of reader training. Recently, several of the present authors have demonstrated methods to generalize the analysis of multiple variance components to the case where unaided readers of diagnostic images are compared with readers who receive the benefit of a computer assist (CAD). For this case it is necessary to model the possibility that several of the components of variance might be reduced when readers incorporate the computer assist, compared to the unaided reading condition. We review results of this kind of analysis on three previously published MRMC studies, two of which were applications of CAD to diagnostic mammography and one was an application of CAD to screening mammography. The results for the three cases are seen to differ, depending on the reader population sampled and the task of interest. Thus, it is not possible to generalize a particular analysis of variance components beyond the tasks and populations actually investigated.

  20. Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.

    PubMed

    Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko

    2017-12-01

    Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.

  1. Common relationships among proximate composition components in fishes

    USGS Publications Warehouse

    Hartman, K.J.; Margraf, F.J.

    2008-01-01

    Relationships between the various body proximate components and dry matter content were examined for five species of fishes, representing anadromous, marine and freshwater species: chum salmon Oncorhynchus keta, Chinook salmon Oncorhynchus tshawytscha, brook trout Salvelinus fontinalis, bluefish Pomatomus saltatrix and striped bass Morone saxatilis. The dry matter content or per cent dry mass of these fishes can be used to reliably predict the per cent composition of the other components. Therefore, with validation it is possible to estimate fat, protein and ash content of fishes from per cent dry mass information, reducing the need for costly and time-consuming laboratory proximate analysis. This approach coupled with new methods of non-lethal estimation of per cent dry mass, such as from bioelectrical impedance analysis, can provide non-destructive measurements of proximate composition of fishes. © 2008 The Authors.

  2. Breast Cancer Detection with Reduced Feature Set.

    PubMed

    Mert, Ahmet; Kılıç, Niyazi; Bilgili, Erdem; Akan, Aydin

    2015-01-01

    This paper explores the feature reduction properties of independent component analysis (ICA) for a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing a single independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The classification using the IC is also compared with the original feature set on different validation (5/10-fold cross-validation) and partitioning (20%-40%) methods. The classifiers are evaluated on how effectively they categorize tumors as benign or malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values, including area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system while reducing computational complexity.
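
    Since the WDBC dataset ships with scikit-learn, the single-IC idea can be sketched directly; the scaler, the choice of k, and 10-fold cross-validation here are illustrative defaults rather than the paper's exact protocol.

      from sklearn.datasets import load_breast_cancer
      from sklearn.decomposition import FastICA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      X, y = load_breast_cancer(return_X_y=True)       # WDBC: 569 samples, 30 features

      one_ic_knn = make_pipeline(
          StandardScaler(),
          FastICA(n_components=1, random_state=0),     # reduce to a single IC
          KNeighborsClassifier(n_neighbors=5),
      )
      print(cross_val_score(one_ic_knn, X, y, cv=10).mean())   # 10-fold CV accuracy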

  3. Global loss of a nuclear lamina component, lamin A/C, and LINC complex components SUN1, SUN2, and nesprin-2 in breast cancer.

    PubMed

    Matsumoto, Ayaka; Hieda, Miki; Yokoyama, Yuhki; Nishioka, Yu; Yoshidome, Katsuhide; Tsujimoto, Masahiko; Matsuura, Nariaki

    2015-10-01

    Cancer cells exhibit a variety of features indicative of atypical nuclei. However, the molecular mechanisms underlying these phenomena remain to be elucidated. The linker of nucleoskeleton and cytoskeleton (LINC) complex, a nuclear envelope protein complex consisting mainly of the SUN and nesprin proteins, connects nuclear lamina and cytoskeletal filaments and helps to regulate the size and shape of the nucleus. Using immunohistology, we found that a nuclear lamina component, lamin A/C and all of the investigated LINC complex components, SUN1, SUN2, and nesprin-2, were downregulated in human breast cancer tissues. In the majority of cases, we observed lower expression levels of these analytes in samples' cancerous regions as compared to their cancer-associated noncancerous regions (in cancerous regions, percentage of tissue samples exhibiting low protein expression: lamin A/C, 85% [n = 73]; SUN1, 88% [n = 43]; SUN2, 74% [n = 43]; and nesprin-2, 79% [n = 53]). Statistical analysis showed that the frequencies of recurrence and HER2 expression were negatively correlated with lamin A/C expression (P < 0.05), and intrinsic subtype and ki-67 level were associated with nesprin-2 expression (P < 0.05). In addition, combinatorial analysis using the above four parameters showed that all patients exhibited reduced expression of at least one of four components despite the tumor's pathological classification. Furthermore, several cultured breast cancer cell lines expressed less SUN1, SUN2, nesprin-2 mRNA, and lamin A/C compared to noncancerous mammary gland cells. Together, these results suggest that the strongly reduced expression of LINC complex and nuclear lamina components may play fundamental pathological functions in breast cancer progression. © 2015 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  4. Impact of reduced-radiation dual-energy protocols using 320-detector row computed tomography for analyzing urinary calculus components: initial in vitro evaluation.

    PubMed

    Cai, Xiangran; Zhou, Qingchun; Yu, Juan; Xian, Zhaohui; Feng, Youzhen; Yang, Wencai; Mo, Xukai

    2014-10-01

    To evaluate the impact of reduced-radiation dual-energy (DE) protocols using 320-detector row computed tomography on the differentiation of urinary calculus components. A total of 58 urinary calculi were placed into the same phantom and underwent DE scanning with 320-detector row computed tomography. Each calculus was scanned 4 times with the DE protocols using 135 kV and 80 kV tube voltage and different tube current combinations, including 100 mA and 570 mA (group A), 50 mA and 290 mA (group B), 30 mA and 170 mA (group C), and 10 mA and 60 mA (group D). The acquisition data of all 4 groups were then analyzed by stone DE analysis software, and the results were compared with x-ray diffraction analysis. Noise, contrast-to-noise ratio, and radiation dose were compared. Calculi were correctly identified in 56 of 58 stones (96.6%) using group A and B protocols. However, only 35 stones (60.3%) and 16 stones (27.6%) were correctly diagnosed using group C and D protocols, respectively. Mean noise increased significantly and mean contrast-to-noise ratio decreased significantly from groups A to D (P <.05). In addition, the effective dose decreased markedly from groups A to D at 3.78, 1.81, 1.07, and 0.37 mSv, respectively. Decreasing the DE tube currents from 100 mA and 570 mA to 50 mA and 290 mA resulted in 96.6% accuracy for urinary calculus component analysis while reducing patient radiation exposure to 1.81 mSv. Further reduction of tube currents may compromise diagnostic accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Performance-based maintenance of gas turbines for reliable control of degraded power systems

    NASA Astrophysics Data System (ADS)

    Mo, Huadong; Sansavini, Giovanni; Xie, Min

    2018-03-01

    Maintenance actions are necessary for ensuring proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model of control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start time and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is able to design preventive maintenance actions for a gas power plant, ensuring the required load frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs as compared to CBM and pre-scheduled maintenance.
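    To make the inspection-and-threshold logic concrete, the following minimal Python sketch simulates Wiener-process degradation checked at periodic inspections, with preventive or corrective maintenance triggered by thresholds. All numerical values (drift, volatility, thresholds, costs, horizon) are invented for illustration and are not taken from the paper; the performance index here is a single scalar rather than the controller-output measure the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative only, not from the paper)
drift, vol = 0.04, 0.10                     # Wiener-process drift and volatility per step
pm_threshold, fail_threshold = 1.5, 2.5     # preventive / failure thresholds on performance loss
inspect_every = 25                          # inspection interval (time steps)
c_inspect, c_pm, c_cm = 1.0, 20.0, 100.0    # inspection / preventive / corrective costs
horizon, n_runs = 1000, 2000

def simulate_one_run():
    """Simulate one maintenance history and return its total cost."""
    x, cost, t = 0.0, 0.0, 0
    start_delay = rng.integers(0, 100)       # random degradation start time
    while t < horizon:
        t += 1
        if t > start_delay:                  # degradation accumulates as a Wiener process
            x += drift + vol * rng.standard_normal()
        if t % inspect_every == 0:           # condition is observed only at inspections
            cost += c_inspect
            if x >= fail_threshold:          # corrective maintenance (failure found)
                cost += c_cm
                x = 0.0
            elif x >= pm_threshold:          # preventive maintenance
                cost += c_pm
                x = 0.0
    return cost

costs = [simulate_one_run() for _ in range(n_runs)]
print(f"expected maintenance cost per horizon: {np.mean(costs):.1f}")
```

    Sweeping the preventive threshold and the inspection interval in such a simulation is one simple way to explore the cost trade-off the abstract describes.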

  6. Reverse engineering of wörner type drilling machine structure.

    NASA Astrophysics Data System (ADS)

    Wibowo, A.; Belly, I.; llhamsyah, R.; Indrawanto; Yuwana, Y.

    2018-03-01

    A product design needs to be modified based on the conditions of production facilities and existing resource capabilities without reducing the functional aspects of the product itself. This paper describes the reverse engineering process of the main structure of the wörner type drilling machine to obtain a machine structure design that can be made by resources with limited capability using simple processes. Structural, functional and work-mechanism analyses were performed to understand the function and role of each basic component. The drilling machine was dismantled and each of the basic components was measured to obtain sets of geometry and size data for each component. Geometric models of the structural components and of the machine assembly were built to facilitate the simulation process and machine performance analysis with reference to the ISO standard for drilling machines. A tolerance stackup analysis was also performed to determine the type and value of geometrical and dimensional tolerances, which could affect the ease with which the components can be manufactured and assembled.

  7. Prognostic Significance of Solid and Micropapillary Components in Invasive Lung Adenocarcinomas Measuring ≤3 cm.

    PubMed

    Matsuoka, Yuki; Yurugi, Yohei; Takagi, Yuzo; Wakahara, Makoto; Kubouchi, Yasuaki; Sakabe, Tomohiko; Haruki, Tomohiro; Araki, Kunio; Taniguchi, Yuji; Nakamura, Hiroshige; Umekita, Yoshihisa

    2016-09-01

    We aimed to analyze the clinical impact of solid and micropapillary components in a series of Japanese patients resected for ≤3 cm lung adenocarcinoma. A total of 115 patients with ≤3 cm lung adenocarcinomas were reviewed and classified according to the American Thoracic Society and the European Respiratory Society classification. The presence of solid (S+) or micropapillary component (MP+) was defined when the component constituted ≥1% of the entire tumor. The impact of these components on disease-free (DFS) and disease-specific (DSS) survival was analyzed. Thirty (26.1%) cases with S+ and 27 (23.5%) with MP+ were identified, and multivariate analysis indicated that S+ status significantly reduced the duration of DFS and DSS. In 86 patients of acinar- and papillary-predominant subgroups, S+ and/or MP+ had the most significant effect on DFS and DSS by multivariate analysis. S+ and/or MP+ status predict worse prognosis in patients with acinar- and papillary-predominant lung adenocarcinoma. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  8. [A novel method of multi-channel feature extraction combining multivariate autoregression and multiple-linear principal component analysis].

    PubMed

    Wang, Jinjia; Zhang, Yanna

    2015-02-01

    Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines a multivariate autoregressive (MVAR) model with multiple-linear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. Firstly, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of this work is the extension of the traditional single-channel feature extraction method to the multichannel case. We carried out experiments using the data groups IV-III and IV-I. The experimental results showed that the proposed method is feasible.
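    A rough sketch of this pipeline is shown below: a vector autoregressive model is fitted per trial, its coefficient matrices are flattened and reduced, and a naive Bayes classifier is trained on the scores. Ordinary PCA stands in for the paper's multiple-linear PCA, the trials are synthetic rather than the MEG/EEG competition data, and the model order and component counts are arbitrary.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples, order = 60, 4, 200, 3

def synth_trial(label):
    """Generate a toy multichannel trial whose dynamics depend on the class label."""
    x = rng.standard_normal((n_samples, n_channels))
    a = 0.5 if label == 0 else -0.5
    for t in range(1, n_samples):
        x[t] += a * x[t - 1]                 # class-dependent autoregressive dynamics
    return x

labels = rng.integers(0, 2, n_trials)
features = []
for y in labels:
    model = VAR(synth_trial(y)).fit(maxlags=order)   # fit an MVAR model of the given order
    features.append(model.coefs.ravel())             # flatten the (order, ch, ch) coefficients
features = np.array(features)

scores = PCA(n_components=5).fit_transform(features) # reduce coefficient dimensionality
clf = GaussianNB().fit(scores[:40], labels[:40])     # simple Bayes classifier
print("held-out accuracy:", clf.score(scores[40:], labels[40:]))
```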

  9. Equivalent water height extracted from GRACE gravity field model with robust independent component analysis

    NASA Astrophysics Data System (ADS)

    Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei

    2014-08-01

    The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations and used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). Smoothing radii of 300, 400, and 500 km are tested in the Gaussian smoothing kernel function to reduce noise in the observations. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two match the north-south striping and other noise. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH standard deviations for the 12 months of 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that sharp EWH changes in some areas, such as the Amazon, Mekong, and Zambezi basins, have an important global effect.
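    The separation of a geophysical mode from striping noise can be illustrated with ordinary spatial ICA; the sketch below uses scikit-learn's FastICA as a stand-in for the RICA variant in the paper, on synthetic monthly maps containing an annual-cycle signal and north-south stripes (both invented for illustration).

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_months, nlat, nlon = 24, 30, 60
lat = np.linspace(-90, 90, nlat)[:, None]
lon = np.linspace(-180, 180, nlon)[None, :]

# Synthetic spatial patterns: a smooth "geophysical" blob and north-south stripes
signal_map = np.exp(-((lat + 10) ** 2 + (lon - 60) ** 2) / 800.0)
stripe_map = np.sin(np.deg2rad(lon) * 18) * np.ones_like(lat)

t = np.arange(n_months)
annual = np.sin(2 * np.pi * t / 12)            # annual cycle drives the signal
stripes = rng.standard_normal(n_months)        # stripes vary erratically month to month

# Stack monthly maps into a (months x gridpoints) matrix with a little noise
maps = (np.outer(annual, signal_map.ravel())
        + np.outer(stripes, stripe_map.ravel())
        + 0.05 * rng.standard_normal((n_months, nlat * nlon)))

ica = FastICA(n_components=3, random_state=0)
spatial_ics = ica.fit_transform(maps.T)        # (gridpoints, 3): spatially independent maps
temporal_mixing = ica.mixing_                  # (months, 3): time course of each map
spatial_components = spatial_ics.T.reshape(3, nlat, nlon)

# The component whose time course best matches the annual cycle is the "geophysical" mode
best = np.argmax([abs(np.corrcoef(annual, temporal_mixing[:, k])[0, 1]) for k in range(3)])
print("component matching the annual signal:", best)
```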

  10. Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin using excitation-emission matrix (EEM) fluorescence and parallel factor analysis (PARAFAC).

    PubMed

    Singh, Shatrughan; D'Sa, Eurico J; Swenson, Erick M

    2010-07-15

    Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin, Louisiana, USA, was examined by excitation-emission matrix (EEM) fluorescence combined with parallel factor analysis (PARAFAC). CDOM absorption and fluorescence at 355 nm along an axial transect (36 stations) during March, April, and May 2008 showed an increasing trend from the marine end member to the upper basin, with mean CDOM absorption of 11.06 ± 5.01, 10.05 ± 4.23, and 11.67 ± 6.03 m(-1) and fluorescence of 0.80 ± 0.37, 0.78 ± 0.39, and 0.75 ± 0.51 RU, respectively. PARAFAC analysis identified two terrestrial humic-like components (components 1 and 2), one non-humic-like component (component 3), and one soil-derived humic acid-like component (component 4). The spatial variation of the components showed an increasing trend from station 1 (near the mouth of the basin) to station 36 (end member of the bay; upper basin). Deviations from this increasing trend were observed at a bayou channel with very high chlorophyll-a concentrations, especially for component 3 in May 2008, suggesting autochthonous production of CDOM. The variability of the components with salinity indicated conservative mixing along the middle part of the transect. Components 1 and 4 were found to be relatively constant, while components 2 and 3 revealed an inverse relationship over the sampling period. Total organic carbon showed an increasing trend for each of the components. An increase in humification and a decrease in fluorescence indices along the transect indicated an increase in terrestrially derived organic matter and reduced microbial activity from the lower to the upper basin. The use of these indices along with PARAFAC results improved dissolved organic matter characterization in the Barataria Basin. Copyright 2010 Elsevier B.V. All rights reserved.
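    As a rough illustration of the PARAFAC step, the sketch below decomposes a synthetic sample × emission × excitation cube, assuming the third-party tensorly package (and a recent version whose parafac returns a (weights, factors) pair). The rank of four simply mirrors the four components reported above, and the data are invented, not the Barataria Basin EEMs.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(3)
n_samples, n_emission, n_excitation, rank = 36, 50, 20, 4

# Synthetic EEM cube: sum of rank-one (concentration x emission x excitation) terms plus noise
conc = rng.random((n_samples, rank))                    # per-sample "concentrations"
em = np.abs(rng.standard_normal((n_emission, rank)))    # emission loadings
ex = np.abs(rng.standard_normal((n_excitation, rank)))  # excitation loadings
cube = np.einsum('ir,jr,kr->ijk', conc, em, ex)
cube += 0.01 * rng.random(cube.shape)

# Trilinear PARAFAC decomposition of the data cube
weights, factors = parafac(tl.tensor(cube), rank=rank, n_iter_max=200)
sample_scores, em_loadings, ex_loadings = factors
print("sample-mode scores:", sample_scores.shape)        # (36, 4): component scores per sample
```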

  11. Principal component similarity analysis of Raman spectra to study the effects of pH, heating, and kappa-carrageenan on whey protein structure.

    PubMed

    Alizadeh-Pasdar, Nooshin; Nakai, Shuryo; Li-Chan, Eunice C Y

    2002-10-09

    Raman spectroscopy was used to elucidate structural changes of beta-lactoglobulin (BLG), whey protein isolate (WPI), and bovine serum albumin (BSA), at 15% concentration, as a function of pH (5.0, 7.0, and 9.0), heating (80 degrees C, 30 min), and the presence of 0.24% kappa-carrageenan. Three data-processing techniques were used to assist in identifying significant changes in the Raman spectral data. Analysis of variance showed that of 12 characteristics examined in the Raman spectra, only a few were significantly affected by pH, heating, kappa-carrageenan, and their interactions. These included amide I (1658 cm(-1)) for WPI and BLG, alpha-helix for BLG and BSA, beta-sheet for BSA, CH stretching (2880 cm(-1)) for BLG and BSA, and CH stretching (2930 cm(-1)) for BSA. Principal component analysis reduced the dimensionality of the characteristics. Using principal component similarity analysis, heating and its interaction with kappa-carrageenan were identified as the most influential factors on the overall structure of the whey proteins.

  12. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  13. Instrument Psychometrics: Parental Satisfaction and Quality Indicators of Perinatal Palliative Care.

    PubMed

    Wool, Charlotte

    2015-10-01

    Despite a life-limiting fetal diagnosis, prenatal attachment often occurs in varying degrees resulting in role identification by an individual as a parent. Parents recognize quality care and report their satisfaction when interfacing with health care providers. The aim was to test an instrument measuring parental satisfaction and quality indicators with parents electing to continue a pregnancy after learning of a life-limiting fetal diagnosis. A cross sectional survey design gathered data using a computer-mediated platform. Subjects were parents (n=405) who opted to continue a pregnancy affected by a life-limiting diagnosis. Factor analysis using principal component analysis with Varimax rotation was used to validate the instrument, evaluate components, and summarize the explained variance achieved among quality indicator items. The Prenatal Scale was reduced to 37 items with a three-component solution explaining 66.19% of the variance and internal consistency reliability of 0.98. The Intrapartum Scale included 37 items with a four-component solution explaining 66.93% of the variance and a Cronbach α of 0.977. The Postnatal Scale was reduced to 44 items with a six-component solution explaining 67.48% of the variance. Internal consistency reliability was 0.975. The Parental Satisfaction and Quality Indicators of Perinatal Palliative Care Instrument is a valid and reliable measure for parent-reported quality care and satisfaction. Use of this instrument will enable clinicians and researchers to measure quality indicators and parental satisfaction. The instrument is useful for assessing, analyzing, and reporting data on quality for care delivered during the prenatal, intrapartum, and postnatal periods.

  14. Small Town Insurgency: The Struggle for Information Dominance to Reduce Gang Violence

    DTIC Science & Technology

    2010-12-01

    focuses on the importance of information dominance, there has been little research into component factors that might either promote, or inhibit, the... information dominance with respect to a counter-gang strategy. Through comparative analysis, our research suggests that improving relationships between

  15. Analysis of Wind Tunnel Lateral Oscillatory Data of the F-16XL Aircraft

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Murphy, Patrick C.; Szyba, Nathan M.

    2004-01-01

    Static and dynamic wind tunnel tests were performed on an 18% scale model of the F-16XL aircraft. These tests were performed over a wide range of angles of attack and sideslip with oscillation amplitudes from 5 deg. to 30 deg. and reduced frequencies from 0.073 to 0.269. Harmonic analysis was used to estimate Fourier coefficients and in-phase and out-of-phase components. For frequency dependent data from rolling oscillations, a two-step regression method was used to obtain unsteady models (indicial functions), and derivatives due to sideslip angle, roll rate and yaw rate from in-phase and out-of-phase components. Frequency dependence was found for angles of attack between 20 deg. and 50 deg. Reduced values of coefficient of determination and increased values of fit error were found for angles of attack between 35 deg. and 45 deg. An attempt to estimate model parameters from yaw oscillations failed, probably due to the low number of test cases at different frequencies.
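    The in-phase and out-of-phase components mentioned above come from harmonic analysis of forced-oscillation data. The sketch below shows the basic least-squares step on a synthetic rolling-oscillation record: the coefficients multiplying the sine (in phase with the motion) and cosine (in phase with the motion rate) of the forcing are recovered. All numerical values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic forced-oscillation test: roll angle phi(t) = amp * sin(w t)
amp_deg, k_reduced, V, b = 10.0, 0.1, 30.0, 1.0      # amplitude, reduced frequency, speed, semispan
omega = 2.0 * k_reduced * V / b                      # dimensional frequency from k = omega*b/(2V)
t = np.linspace(0.0, 20.0, 2000)
phi = np.deg2rad(amp_deg) * np.sin(omega * t)

# "Measured" rolling-moment coefficient with known in-phase / out-of-phase parts plus noise
C_in_true, C_out_true = -0.12, 0.04
Cl = (0.01 + C_in_true * np.sin(omega * t) + C_out_true * np.cos(omega * t)
      + 0.005 * rng.standard_normal(t.size))

# Least-squares harmonic analysis at the forcing frequency
A = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
coef, *_ = np.linalg.lstsq(A, Cl, rcond=None)
print(f"in-phase component:     {coef[1]: .4f} (true {C_in_true})")
print(f"out-of-phase component: {coef[2]: .4f} (true {C_out_true})")
```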

  16. Reduction of the dimension of neural network models in problems of pattern recognition and forecasting

    NASA Astrophysics Data System (ADS)

    Nasertdinova, A. D.; Bochkarev, V. V.

    2017-11-01

    Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method for addressing the problem of overfitting is proposed in this article. The method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and can be implemented using existing libraries for neural computing. The algorithm was tested on the problem of recognizing handwritten symbols from the MNIST database, as well as on time-series prediction tasks (series of the average monthly number of sunspots and series generated by the Lorenz system were used). It is shown that applying principal component analysis makes it possible to reduce the number of parameters of the neural network model while maintaining good results. The average error rate for the recognition of handwritten digits from the MNIST database was 1.12% (comparable to the results obtained using deep learning methods), while the number of parameters of the neural network could be reduced by a factor of up to 130.

  17. Metatarsal Shape and Foot Type: A Geometric Morphometric Analysis.

    PubMed

    Telfer, Scott; Kindig, Matthew W; Sangeorzan, Bruce J; Ledoux, William R

    2017-03-01

    Planus and cavus foot types have been associated with an increased risk of pain and disability. Improving our understanding of the geometric differences between bones in different foot types may provide insights into injury risk profiles and have implications for the design of musculoskeletal and finite-element models. In this study, we performed a geometric morphometric analysis on the geometry of metatarsal bones from 65 feet, segmented from computed tomography (CT) scans. These were categorized into four foot types: pes cavus, neutrally aligned, asymptomatic pes planus, and symptomatic pes planus. Generalized procrustes analysis (GPA) followed by permutation tests was used to determine significant shape differences associated with foot type and sex, and principal component analysis was used to find the modes of variation for each metatarsal. Significant shape differences were found between foot types for all the metatarsals (p < 0.01), most notably in the case of the second metatarsal which showed significant pairwise differences across all the foot types. Analysis of the principal components of variation showed pes cavus bones to have reduced cross-sectional areas in the sagittal and frontal planes. The first (p = 0.02) and fourth metatarsals (p = 0.003) were found to have significant sex-based differences, with first metatarsals from females shown to have reduced width, and fourth metatarsals from females shown to have reduced frontal and sagittal plane cross-sectional areas. Overall, these findings suggest that metatarsal bones have distinct morphological characteristics that are associated with foot type and sex, with implications for our understanding of anatomy and numerical modeling of the foot.
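    For readers unfamiliar with the pipeline, a compact generalized Procrustes alignment followed by PCA on the aligned landmark coordinates can be sketched as below. The landmark sets are random stand-ins for the segmented metatarsal geometries, and the simple iterate-to-the-mean scheme is one common way to implement GPA rather than the authors' exact procedure.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_bones, n_landmarks = 65, 40

# Stand-in landmark sets: one base shape plus shape noise, nuisance rotation, scale, translation
base = rng.standard_normal((n_landmarks, 3))
shapes = []
for _ in range(n_bones):
    s = base + 0.1 * rng.standard_normal((n_landmarks, 3))
    theta = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
    shapes.append(s @ R * rng.uniform(0.8, 1.2) + rng.uniform(-5, 5, size=3))
shapes = np.array(shapes)

def normalize(shape):
    """Remove location and scale (centre on the centroid, divide by centroid size)."""
    shape = shape - shape.mean(axis=0)
    return shape / np.linalg.norm(shape)

shapes = np.array([normalize(s) for s in shapes])
mean_shape = shapes[0]

# Generalized Procrustes analysis: repeatedly rotate every shape onto the current mean
for _ in range(10):
    shapes = np.array([s @ orthogonal_procrustes(s, mean_shape)[0] for s in shapes])
    mean_shape = normalize(shapes.mean(axis=0))

# PCA on the flattened, aligned coordinates gives the principal modes of shape variation
pca = PCA(n_components=5).fit(shapes.reshape(n_bones, -1))
print("variance explained by first 5 modes:", np.round(pca.explained_variance_ratio_, 3))
```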

  18. Diet components can suppress inflammation and reduce cancer risk.

    PubMed

    Hardman, W Elaine

    2014-06-01

    Epidemiology studies indicate that diet or specific dietary components can reduce the risk for cancer, cardiovascular disease and diabetes. An underlying cause of these diseases is chronic inflammation. Dietary components that are beneficial against disease seem to have multiple mechanisms of action and many also have a common mechanism of reducing inflammation, often via the NFκB pathway. Thus, a plant based diet can contain many components that reduce inflammation and can reduce the risk for developing all three of these chronic diseases. We summarize dietary components that have been shown to reduce cancer risk and two studies that show that dietary walnut can reduce cancer growth and development. Part of the mechanism for the anticancer benefit of walnut was by suppressing the activation of NFκB. In this brief review, we focus on reduction of cancer risk by dietary components and the relationship to suppression of inflammation. However, it should be remembered that most dietary components have multiple beneficial mechanisms of action that can be additive and that suppression of chronic inflammation should reduce the risk for all three chronic diseases.

  19. Diet components can suppress inflammation and reduce cancer risk

    PubMed Central

    2014-01-01

    Epidemiology studies indicate that diet or specific dietary components can reduce the risk for cancer, cardiovascular disease and diabetes. An underlying cause of these diseases is chronic inflammation. Dietary components that are beneficial against disease seem to have multiple mechanisms of action and many also have a common mechanism of reducing inflammation, often via the NFκB pathway. Thus, a plant based diet can contain many components that reduce inflammation and can reduce the risk for developing all three of these chronic diseases. We summarize dietary components that have been shown to reduce cancer risk and two studies that show that dietary walnut can reduce cancer growth and development. Part of the mechanism for the anticancer benefit of walnut was by suppressing the activation of NFκB. In this brief review, we focus on reduction of cancer risk by dietary components and the relationship to suppression of inflammation. However, it should be remembered that most dietary components have multiple beneficial mechanisms of action that can be additive and that suppression of chronic inflammation should reduce the risk for all three chronic diseases. PMID:24944766

  20. Statistical techniques applied to aerial radiometric surveys (STAARS): principal components analysis user's manual. [NURE program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, C.D.; Pirkle, F.L.; Schmidt, J.S.

    1981-01-01

    A Principal Components Analysis (PCA) program has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From this analysis a geologist may begin to interpret the data structure. Insight into geological processes underlying the data may be obtained.
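    A rough modern equivalent of that workflow (reduce multivariate radiometric channels to principal components, then flag outliers) can be sketched in a few lines. The channel names and the Mahalanobis-style outlier rule below are illustrative choices, not features of the original DEC-10 program.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)

# Stand-in aerial radiometric survey: K, U, Th count-rate channels plus a correlated total channel
n_obs = 500
base = rng.lognormal(mean=1.0, sigma=0.3, size=(n_obs, 3))      # K, U, Th
data = np.column_stack([base, base.sum(axis=1)])                 # total-count channel
data[::97] *= 3.0                                                # a few anomalous samples

pca = PCA()
scores = pca.fit_transform(np.log(data))          # log counts are closer to Gaussian
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))

# Flag outliers with a Mahalanobis-like distance in component space
std_scores = scores / np.sqrt(pca.explained_variance_)
d2 = (std_scores ** 2).sum(axis=1)
outliers = np.flatnonzero(d2 > np.percentile(d2, 99))
print("flagged observations:", outliers)
```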

  1. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1990-01-01

    A comprehensive boundary element method is presented for transient thermoelastic analysis of hot section Earth-to-Orbit engine components. This time-domain formulation requires discretization of only the surface of the component, and thus provides an attractive alternative to finite element analysis for this class of problems. In addition, steep thermal gradients, which often occur near the surface, can be captured more readily since with a boundary element approach there are no shape functions to constrain the solution in the direction normal to the surface. For example, the circular disc analysis indicates the high level of accuracy that can be obtained. In fact, on the basis of reduced modeling effort and improved accuracy, it appears that the present boundary element method should be the preferred approach for general problems of transient thermoelasticity.

  2. Development and validation of a questionnaire to evaluate patient satisfaction with diabetes disease management.

    PubMed

    Paddock, L E; Veloski, J; Chatterton, M L; Gevirtz, F O; Nash, D B

    2000-07-01

    To develop a reliable and valid questionnaire to measure patient satisfaction with diabetes disease management programs. Questions related to structure, process, and outcomes were categorized into 14 domains defining the essential elements of diabetes disease management. Health professionals confirmed the content validity. Face validity was established by a patient focus group. The questionnaire was mailed to 711 patients with diabetes who participated in a disease management program. To reduce the number of questionnaire items, a principal components analysis was performed using a varimax rotation. The Scree test was used to select significant components. To further assess reliability and validity, Cronbach's alpha and product-moment correlations were calculated for components having ≥3 items with loadings >0.50. The validated 73-item mailed satisfaction survey had a 34.1% response rate. Principal components analysis yielded 13 components with eigenvalues > 1.0. The Scree test proposed a 6-component solution (39 items), which explained 59% of the total variation. Internal consistency reliabilities computed for the first 6 components (alpha = 0.79-0.95) were acceptable. The final questionnaire, the Diabetes Management Evaluation Tool (DMET), was designed to assess patient satisfaction with diabetes disease management programs. Although more extensive testing of the questionnaire is appropriate, preliminary reliability and validity of the DMET has been demonstrated.
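    The item-reduction steps reported above (component eigenvalues for the Kaiser/Scree criteria and Cronbach's alpha for internal consistency) follow standard formulas. The sketch below applies them to synthetic Likert-style responses; varimax rotation is omitted for brevity and the item counts are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_respondents, n_items = 240, 20

# Synthetic 5-point Likert responses driven by two latent satisfaction factors
latent = rng.standard_normal((n_respondents, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, n_items))
raw = latent @ loadings + 0.8 * rng.standard_normal((n_respondents, n_items))
items = np.clip(np.round(3 + raw), 1, 5)

# Eigenvalues of the standardized items (Kaiser criterion / Scree inspection)
z = (items - items.mean(axis=0)) / items.std(axis=0)
eigvals = PCA().fit(z).explained_variance_
print("components with eigenvalue > 1:", int(np.sum(eigvals > 1.0)))

def cronbach_alpha(x):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```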

  3. Effects of legacy nuclear waste on the compositional diversity and distributions of sulfate-reducing bacteria in a terrestrial subsurface aquifer.

    PubMed

    Bagwell, Christopher E; Liu, Xuaduan; Wu, Liyou; Zhou, Jizhong

    2006-03-01

    The impact of legacy nuclear waste on the compositional diversity and distribution of sulfate-reducing bacteria in a heavily contaminated subsurface aquifer was examined. dsrAB clone libraries were constructed and restriction fragment length polymorphism (RFLP) analysis used to evaluate genetic variation between sampling wells. Principal component analysis identified nickel, nitrate, technetium, and organic carbon as the primary variables contributing to well-to-well geochemical variability, although comparative sequence analysis showed the sulfate-reducing bacteria community structure to be consistent throughout contaminated and uncontaminated regions of the aquifer. Only 3% of recovered dsrAB gene sequences showed apparent membership to the Deltaproteobacteria. The remainder of recovered sequences may represent novel, deep-branching lineages that, to our knowledge, do not presently contain any cultivated members; although corresponding phylotypes have recently been reported from several different marine ecosystems. These findings imply resiliency and adaptability of sulfate-reducing bacteria to extremes in environmental conditions, although the possibility for horizontal transfer of dsrAB is also discussed.

  4. New Insights into the Folding of a β-Sheet Miniprotein in a Reduced Space of Collective Hydrogen Bond Variables: Application to a Hydrodynamic Analysis of the Folding Flow

    PubMed Central

    Kalgin, Igor V.; Caflisch, Amedeo; Chekmarev, Sergei F.; Karplus, Martin

    2013-01-01

    A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted Principal Component Analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed and it is complemented by a hydrodynamic analysis. The latter, making use of “passive tracers” in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface, but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this “hydrogen atom” of protein folding. PMID:23621790

  5. Modal coupling procedures adapted to NASTRAN analysis of the 1/8-scale shuttle structural dynamics model. Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Zalesak, J.

    1975-01-01

    A dynamic substructuring analysis, utilizing the component modes technique, of the 1/8 scale space shuttle orbiter finite element model is presented. The analysis was accomplished in 3 phases, using NASTRAN RIGID FORMAT 3, with appropriate Alters, on the IBM 360-370. The orbiter was divided into 5 substructures, each of which was reduced to interface degrees of freedom and generalized normal modes. The reduced substructures were coupled to yield the first 23 symmetric free-free orbiter modes, and the eigenvectors in the original grid point degree of freedom lineup were recovered. A comparison was made with an analysis which was performed with the same model using the direct coordinate elimination approach. Eigenvalues were extracted using the inverse power method.
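    Since this entry reduces each substructure to interface degrees of freedom plus generalized normal modes, a toy Craig-Bampton style reduction may help make the transformation concrete. The spring-mass chain, the single interface DOF and the number of retained modes below are arbitrary illustrative choices, not properties of the shuttle model.

```python
import numpy as np
from scipy.linalg import eigh

# Toy substructure: chain of 10 unit masses and unit springs; the last DOF is the interface
n, n_modes = 10, 3
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
interior = np.arange(n - 1)          # interior DOFs
boundary = np.array([n - 1])         # boundary (interface) DOFs

Kii, Kib = K[np.ix_(interior, interior)], K[np.ix_(interior, boundary)]
Mii = M[np.ix_(interior, interior)]

# Constraint modes: static response of interior DOFs to a unit boundary displacement
Psi = -np.linalg.solve(Kii, Kib)

# Fixed-interface normal modes of the interior partition (keep the lowest few)
w2, Phi = eigh(Kii, Mii)
Phi = Phi[:, :n_modes]

# Craig-Bampton transformation: u = T @ [boundary displacements; modal coordinates]
T = np.zeros((n, len(boundary) + n_modes))
T[interior, :len(boundary)] = Psi
T[boundary, :len(boundary)] = np.eye(len(boundary))
T[np.ix_(interior, range(len(boundary), T.shape[1]))] = Phi

K_red, M_red = T.T @ K @ T, T.T @ M @ T
print("reduced matrices:", K_red.shape)     # (1 + n_modes) x (1 + n_modes)
```

    Coupling several such reduced substructures at their shared interface DOFs and solving the assembled eigenproblem is the essence of the component modes procedure described in the entry.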

  6. How does spatial extent of fMRI datasets affect independent component analysis decomposition?

    PubMed

    Aragri, Adriana; Scarabino, Tommaso; Seifritz, Erich; Comani, Silvia; Cirillo, Sossio; Tedeschi, Gioacchino; Esposito, Fabrizio; Di Salle, Francesco

    2006-09-01

    Spatial independent component analysis (sICA) of functional magnetic resonance imaging (fMRI) time series can generate meaningful activation maps and associated descriptive signals, which are useful to evaluate datasets of the entire brain or selected portions of it. Besides computational implications, variations in the input dataset combined with the multivariate nature of ICA may lead to different spatial or temporal readouts of brain activation phenomena. By reducing and increasing a volume of interest (VOI), we applied sICA to different datasets from real activation experiments with multislice acquisition and single or multiple sensory-motor task-induced blood oxygenation level-dependent (BOLD) signal sources with different spatial and temporal structure. Using receiver operating characteristics (ROC) methodology for accuracy evaluation and multiple regression analysis as benchmark, we compared sICA decompositions of reduced and increased VOI fMRI time-series containing auditory, motor and hemifield visual activation occurring separately or simultaneously in time. Both approaches yielded valid results; however, the results of the increased VOI approach were spatially more accurate compared to the results of the decreased VOI approach. This is consistent with the capability of sICA to take advantage of extended samples of statistical observations and suggests that sICA is more powerful with extended rather than reduced VOI datasets to delineate brain activity. (c) 2006 Wiley-Liss, Inc.

  7. Population Analysis of Disabled Children by Departments in France

    NASA Astrophysics Data System (ADS)

    Meidatuzzahra, Diah; Kuswanto, Heri; Pech, Nicolas; Etchegaray, Amélie

    2017-06-01

    In this study, a statistical analysis is performed by modelling the variation of the population of disabled children aged 0-19 years among French departments. The aim is to classify the departments according to their profile determinants (socioeconomic and behavioural profiles). The analysis focuses on two methods, principal component analysis (PCA) and multiple correspondence factorial analysis (MCA), to determine which is better for interpreting the correlations between the determinants of disability (the independent variables). The PCA proves to be the better method: it reduces the 14 determinants of disability to 4 axes, keeps 80% of the total information, and classifies the departments into 7 classes. The MCA reduces the determinants to 3 axes, retains only 30% of the information, and classifies them into 4 classes.

  8. Comparative analysis of different weight matrices in subspace system identification for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Shokravi, H.; Bakhary, NH

    2017-11-01

    Subspace System Identification (SSI) is considered one of the most reliable tools for the identification of system parameters. The performance of an SSI scheme is considerably affected by the structure of the associated identification algorithm. The weight matrix is a variable in SSI that is used to reduce the dimensionality of the state-space equation. Generally, one of the weight matrices of Principal Component (PC), Unweighted Principal Component (UPC) or Canonical Variate Analysis (CVA) is used in the structure of an SSI algorithm. An increasing number of studies in the field of structural health monitoring use SSI for damage identification. However, studies that evaluate the performance of the weight matrices, particularly with respect to accuracy, noise resistance, and time complexity, are very limited. In this study, the accuracy, noise-robustness, and time-efficiency of the weight matrices are compared using different qualitative and quantitative metrics. Three evaluation metrics, pole analysis, fit values and elapsed time, are used in the assessment process. A numerical model of a mass-spring-dashpot system and operational data are used in this paper. It is observed that the principal components obtained using the PC algorithm are more robust against noise uncertainty and give more stable results for the pole distribution. Furthermore, higher estimation accuracy is achieved using the UPC algorithm. CVA had the worst performance in the pole analysis and the time-efficiency analysis. The superior performance of the UPC algorithm in elapsed time is attributed to its use of unit weight matrices. The results demonstrate that the process of reducing dimensionality in CVA and PC did not enhance time efficiency but yielded improved modal identification for PC.

  9. Detection of resting state functional connectivity using partial correlation analysis: A study using multi-distance and whole-head probe near-infrared spectroscopy.

    PubMed

    Sakakibara, Eisuke; Homae, Fumitaka; Kawasaki, Shingo; Nishimura, Yukika; Takizawa, Ryu; Koike, Shinsuke; Kinoshita, Akihide; Sakurada, Hanako; Yamagishi, Mika; Nishimura, Fumichika; Yoshikawa, Akane; Inai, Aya; Nishioka, Masaki; Eriguchi, Yosuke; Matsuoka, Jun; Satomura, Yoshihiro; Okada, Naohiro; Kakiuchi, Chihiro; Araki, Tsuyoshi; Kan, Chiemi; Umeda, Maki; Shimazu, Akihito; Uga, Minako; Dan, Ippeita; Hashimoto, Hideki; Kawakami, Norito; Kasai, Kiyoto

    2016-11-15

    Multichannel near-infrared spectroscopy (NIRS) is a functional neuroimaging modality that enables easy-to-use and noninvasive measurement of changes in blood oxygenation levels. We developed a clinically applicable method for estimating resting state functional connectivity (RSFC) with NIRS using a partial correlation analysis to reduce the influence of extraneural components. Using NIRS with a multi-distance probe arrangement, we measured resting state brain activity for 8 min in 17 healthy participants. Independent component analysis was used to extract shallow and deep signals from the original NIRS data. Pearson's correlation calculated from original signals was significantly higher than that calculated from deep signals, while partial correlation calculated from original signals was comparable to that calculated from deep (cerebral-tissue) signals alone. To further test the validity of our method, we also measured 8 min of resting state brain activity using a whole-head NIRS arrangement consisting of 17 cortical regions in 80 healthy participants. Significant RSFC between neighboring, interhemispheric homologous, and some distant ipsilateral brain region pairs was revealed. Additionally, females exhibited higher RSFC between interhemispheric occipital region-pairs, in addition to higher connectivity between some ipsilateral pairs in the left hemisphere, when compared to males. The combined results of the two component experiments indicate that partial correlation analysis is effective in reducing the influence of extracerebral signals, and that NIRS is able to detect well-described resting state networks and sex-related differences in RSFC. Copyright © 2016 Elsevier Inc. All rights reserved.
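    The partial-correlation idea (correlating two channels after regressing out a shallow-tissue signal) can be illustrated in a few lines of NumPy. The shallow-signal regressor and synthetic channels below are placeholders, not the authors' probe data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 480                                        # e.g. 8 min sampled at 1 Hz

shallow = rng.standard_normal(n)               # systemic / scalp component seen by all channels
neural_a = rng.standard_normal(n)              # deep signals of two cortical regions
neural_b = 0.6 * neural_a + 0.8 * rng.standard_normal(n)   # genuinely correlated deep signals

chan_a = neural_a + 1.5 * shallow              # measured channels: deep plus shallow mixture
chan_b = neural_b + 1.5 * shallow

def partial_corr(x, y, control):
    """Correlate the residuals of x and y after regressing out the control signal."""
    design = np.column_stack([np.ones_like(control), control])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)[0]

print("ordinary correlation:", round(stats.pearsonr(chan_a, chan_b)[0], 3))
print("partial correlation :", round(partial_corr(chan_a, chan_b, shallow), 3))
```

    The ordinary correlation is inflated by the shared shallow signal, while the partial correlation recovers something close to the true deep-tissue coupling, which is the effect the study exploits.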

  10. Hyperspectral remote sensing for advanced detection of early blight (Alternaria solani) disease in potato (Solanum tuberosum) plants

    NASA Astrophysics Data System (ADS)

    Atherton, Daniel

    Early detection of disease and insect infestation within crops and precise application of pesticides can help reduce potential production losses, reduce environmental risk, and reduce the cost of farming. The goal of this study was the advanced detection of early blight (Alternaria solani) in potato (Solanum tuberosum) plants using hyperspectral remote sensing data captured with a handheld spectroradiometer. Hyperspectral reflectance spectra were captured 10 times over five weeks from plants grown to the vegetative and tuber bulking growth stages. The spectra were analyzed using principal component analysis (PCA), spectral change (ratio) analysis, partial least squares (PLS), cluster analysis, and vegetative indices. PCA successfully distinguished more heavily diseased plants from healthy and minimally diseased plants using two principal components. Spectral change (ratio) analysis provided wavelengths (490-510, 640, 665-670, 690, 740-750, and 935 nm) most sensitive to early blight infection followed by ANOVA results indicating a highly significant difference (p < 0.0001) between disease rating group means. In the majority of the experiments, comparisons of diseased plants with healthy plants using Fisher's LSD revealed more heavily diseased plants were significantly different from healthy plants. PLS analysis demonstrated the feasibility of detecting early blight infected plants, finding four optimal factors for raw spectra with the predictor variation explained ranging from 93.4% to 94.6% and the response variation explained ranging from 42.7% to 64.7%. Cluster analysis successfully distinguished healthy plants from all diseased plants except for the most mildly diseased plants, showing clustering analysis was an effective method for detection of early blight. Analysis of the reflectance spectra using the simple ratio (SR) and the normalized difference vegetative index (NDVI) was effective at differentiating all diseased plants from healthy plants, except for the most mildly diseased plants. Of the analysis methods attempted, cluster analysis and vegetative indices were the most promising. The results show the potential of hyperspectral remote sensing for the detection of early blight in potato plants.
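    The simple ratio and NDVI used above are direct functions of the reflectance spectrum. The snippet below extracts them from a stand-in spectroradiometer curve using red and near-infrared bands at roughly 670 nm and 800 nm; the band choices and the synthetic spectrum are illustrative, since the study does not list its exact index wavelengths.

```python
import numpy as np

# Stand-in reflectance spectrum from 350-1000 nm (a healthy-vegetation-like curve)
wavelengths = np.arange(350, 1001)
reflectance = 0.05 + 0.45 / (1 + np.exp(-(wavelengths - 715) / 15))        # red edge near 715 nm
reflectance += 0.03 * np.exp(-((wavelengths - 550) ** 2) / (2 * 25 ** 2))  # small green peak

def band(wl_nm, width=5):
    """Mean reflectance in a small window around the requested wavelength."""
    sel = np.abs(wavelengths - wl_nm) <= width
    return reflectance[sel].mean()

red, nir = band(670), band(800)
sr = nir / red                                   # simple ratio
ndvi = (nir - red) / (nir + red)                 # normalized difference vegetation index
print(f"SR = {sr:.2f}, NDVI = {ndvi:.3f}")
```

    Disease stress typically raises red reflectance and lowers near-infrared reflectance, so both indices fall for infected plants, which is why they separated diseased from healthy canopies in this study.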

  11. Effects of autonomic ganglion blockade on fractal and spectral components of blood pressure and heart rate variability in free-moving rats.

    PubMed

    Castiglioni, Paolo; Di Rienzo, Marco; Radaelli, Alberto

    2013-11-01

    Fractal analysis is a promising tool for assessing autonomic influences on heart rate (HR) and blood pressure (BP) variability. The temporal spectrum of scale coefficients, α(t), was recently proposed to describe cardiovascular fractal dynamics. The aim of our work is to evaluate sympathetic influences on cardiovascular variability by analyzing α(t) and the spectral powers of HR and BP after ganglionic blockade. BP was recorded in 11 rats before and after autonomic blockade by hexamethonium infusion (HEX). Systolic and diastolic BP, pulse pressure and pulse interval were derived beat-by-beat. Segments longer than 5 min were selected at baseline and HEX to estimate power spectra and α(t). Comparisons were made by paired t-test. HEX reduced all spectral components of systolic and diastolic BP, the reduction being particularly significant around the frequency of Mayer waves; it induced a reduction in α(t) coefficients at t < 2 s and an increase in coefficients at t > 8 s. HEX reduced only the slower components of the pulse interval power spectrum, but significantly decreased the faster scale coefficients (t < 8 s). HEX only marginally affected pulse pressure variability. Results indicate that the sympathetic outflow contributes to BP fractal dynamics with fractional Gaussian noise (α < 1) at longer scales and fractional Brownian motion (α > 1) at shorter scales. Ganglionic blockade also removes a fractional Brownian motion component at shorter scales from HR dynamics. The results may be explained by the characteristic time constants between sympathetic efferent activity and the cardiovascular effectors. Therefore, fractal analysis may complement spectral analysis with information on the correlation structure of the data. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang

    2016-12-01

    Centrifugal compressor often suffers various defects such as impeller cracking, resulting in forced outage of the total plant. Damage diagnostics and condition monitoring of such a turbomachinery system has become an increasingly important and powerful tool to prevent potential failure in components and reduce unplanned forced outage and further maintenance costs, while improving reliability, availability and maintainability of a turbomachinery system. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three advanced state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficient to remove possible imperfections. The ratio of posterior odds Bayesian approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. Power spectral density estimated by the Welch method is utilized to evaluate the effectiveness of Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics. The proposed methodology and generalized framework is demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time series and contour analyses of vibration signal and principal components.
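    A stripped-down version of the signal chain described above (wavelet packet decomposition, a per-band denoising decision, a Welch spectral check, and PCA across sensors) might look like the following. PyWavelets, SciPy and scikit-learn are assumed to be available, a simple energy threshold replaces the Bayesian hypothesis test, ordinary PCA stands in for probabilistic PCA, and the vibration data are synthetic.

```python
import numpy as np
import pywt
from scipy.signal import welch
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
fs, n_sensors, n_samples = 2048, 6, 4096
t = np.arange(n_samples) / fs

# Stand-in vibration data: a shared 120 Hz tone with sensor-specific noise
tone = np.sin(2 * np.pi * 120 * t)
signals = np.array([tone + 0.5 * rng.standard_normal(n_samples) for _ in range(n_sensors)])

def wavelet_cleanse(x, level=4):
    """Wavelet-packet decompose, zero out low-energy bands, and reconstruct."""
    wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=level)
    nodes = wp.get_level(level, order='freq')
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    for node, e in zip(nodes, energies):
        if e < 0.1 * energies.max():          # crude stand-in for the Bayesian test
            node.data = np.zeros_like(node.data)
    return wp.reconstruct(update=False)[:len(x)]

cleaned = np.array([wavelet_cleanse(x) for x in signals])

# Welch PSD of one sensor after cleansing (sanity check that the 120 Hz line survives)
f, p_cln = welch(cleaned[0], fs=fs, nperseg=1024)
print("dominant frequency after cleansing:", f[np.argmax(p_cln)])

# PCA across sensors reduces the multivariate record to a few principal components
scores = PCA(n_components=2).fit_transform(cleaned.T)
print("principal-component score matrix:", scores.shape)
```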

  13. A Simulation Investigation of Principal Component Regression.

    ERIC Educational Resources Information Center

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
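    Principal component regression itself is easy to demonstrate: regress the response on a few leading component scores of the predictors rather than on the collinear predictors directly. The sketch below uses scikit-learn on synthetic collinear data; it illustrates the technique the abstract investigates, not its simulation design.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(10)
n = 200

# Highly collinear predictors: three noisy copies of the same underlying variable
z = rng.standard_normal(n)
X = np.column_stack([z + 0.05 * rng.standard_normal(n) for _ in range(3)])
y = 2.0 * z + rng.standard_normal(n)

ols = LinearRegression().fit(X, y)
pcr = make_pipeline(PCA(n_components=1), LinearRegression()).fit(X, y)

# OLS coefficients are unstable under collinearity; PCR regresses on one stable component
print("OLS coefficients:", np.round(ols.coef_, 2))
print("PCR R^2:", round(pcr.score(X, y), 3))
```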

  14. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. Confidence ellipse was then applied to the principal components of each sample and used as the classification criteria. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
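    The confidence-ellipse classification described above amounts to a Mahalanobis-distance test against a chi-square threshold in the two-dimensional principal-component space. The sketch below builds per-class 95% ellipses from synthetic spectra and assigns a new spectrum to the first class whose ellipse contains it; the spectra and class structure are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
n_bands, n_per_class = 60, 40
wl = np.linspace(450, 750, n_bands)

def make_class(center_nm):
    """Synthetic reflectance spectra of one colored sample: a Gaussian peak plus noise."""
    peak = np.exp(-((wl - center_nm) ** 2) / (2 * 30 ** 2))
    return peak + 0.05 * rng.standard_normal((n_per_class, n_bands))

classes = {"red": make_class(650), "green": make_class(550), "blue": make_class(470)}
all_spectra = np.vstack(list(classes.values()))

pca = PCA(n_components=2).fit(all_spectra)          # shared 2-D principal-component space

# 95% confidence ellipse per class = class mean + covariance + chi-square(2) threshold
ellipses = {}
for name, spectra in classes.items():
    scores = pca.transform(spectra)
    ellipses[name] = (scores.mean(axis=0), np.cov(scores.T))
threshold = chi2.ppf(0.95, df=2)

def classify(spectrum):
    s = pca.transform(spectrum[None, :])[0]
    for name, (mu, cov) in ellipses.items():
        d2 = (s - mu) @ np.linalg.inv(cov) @ (s - mu)   # squared Mahalanobis distance
        if d2 <= threshold:
            return name
    return "unclassified"

print(classify(make_class(655)[0]))    # expected: "red"
```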

  15. Effectiveness of multi-component non-pharmacologic delirium interventions: A Meta-analysis

    PubMed Central

    Hshieh, Tammy T.; Yue, Jirong; Oh, Esther; Puelle, Margaret; Dowal, Sarah; Travison, Thomas; Inouye, Sharon K.

    2015-01-01

    Importance: Delirium, an acute disorder with high morbidity and mortality, is often preventable through multi-component non-pharmacologic strategies. The efficacy of these strategies for preventing subsequent adverse outcomes has been limited to small studies. Objective: Evaluate available evidence on multi-component non-pharmacologic delirium interventions in reducing incident delirium and preventing poor outcomes associated with delirium. Data Sources: PubMed, Google Scholar, ScienceDirect and Cochrane Database of Systematic Reviews from January 1, 1999–December 31, 2013. Study Selection: Studies examining the following outcomes were included: delirium incidence, falls, length of stay, rate of discharge to a long-term care institution, change in functional or cognitive status. Data Extraction and Synthesis: Two experienced physician reviewers independently and blindly abstracted data on outcome measures using a standardized approach. The reviewers conducted quality ratings based on the Cochrane Risk of Bias criteria for each study. Main Outcomes and Measures: We identified 14 interventional studies. Results for outcomes of delirium, falls, length of stay and institutionalization data were pooled for meta-analysis but heterogeneity limited meta-analysis of results for outcomes of functional and cognitive decline. Overall, eleven studies demonstrated significant reductions in delirium incidence (Odds Ratio 0.47, 95% Confidence Interval 0.38–0.58). The four randomized or matched (RMT) studies reduced delirium incidence by 44% (95% CI 0.42–0.76). Rate of falls decreased significantly among intervention patients in four studies (OR 0.38, 95% CI 0.25–0.60); in the two RMTs, the fall rate was reduced by 64% (95% CI 0.22–0.61). Lengths of stay and institutionalization rates also trended towards decreases in the intervention groups, mean difference −0.16 days shorter (95% CI −0.97–0.64) and odds of institutionalization 5% lower (OR 0.95, 95% CI 0.71–1.26) respectively. Among the higher quality RMTs, length of stay trended −0.33 days shorter (95% CI −1.38–0.72) and odds of institutionalization trended 6% lower (95% CI 0.69–1.30). Conclusions and Relevance: Multi-component non-pharmacologic delirium prevention interventions are effective in reducing delirium incidence and preventing falls, with a trend towards decreasing length of stay and avoiding institutionalization. Given the current focus on prevention of hospital-based complications and improved cost-effectiveness of care, this meta-analysis supports the use of these interventions to advance acute care for older persons. PMID:25643002

  16. Descriptive Characteristics of Surface Water Quality in Hong Kong by a Self-Organising Map

    PubMed Central

    An, Yan; Zou, Zhihong; Li, Ranran

    2016-01-01

    In this study, principal component analysis (PCA) and a self-organising map (SOM) were used to analyse a complex dataset obtained from the river water monitoring stations in the Tolo Harbor and Channel Water Control Zone (Hong Kong), covering the period of 2009–2011. PCA was initially applied to identify the principal components (PCs) among the nonlinear and complex surface water quality parameters. SOM followed PCA, and was implemented to analyze the complex relationships and behaviors of the parameters. The results reveal that PCA reduced the multidimensional parameters to four significant PCs which are combinations of the original ones. The positive and inverse relationships of the parameters were shown explicitly by pattern analysis in the component planes. It was found that PCA and SOM are efficient tools to capture and analyze the behavior of multivariable, complex, and nonlinear related surface water quality data. PMID:26761018

  17. Descriptive Characteristics of Surface Water Quality in Hong Kong by a Self-Organising Map.

    PubMed

    An, Yan; Zou, Zhihong; Li, Ranran

    2016-01-08

    In this study, principal component analysis (PCA) and a self-organising map (SOM) were used to analyse a complex dataset obtained from the river water monitoring stations in the Tolo Harbor and Channel Water Control Zone (Hong Kong), covering the period of 2009-2011. PCA was initially applied to identify the principal components (PCs) among the nonlinear and complex surface water quality parameters. SOM followed PCA, and was implemented to analyze the complex relationships and behaviors of the parameters. The results reveal that PCA reduced the multidimensional parameters to four significant PCs which are combinations of the original ones. The positive and inverse relationships of the parameters were shown explicitly by pattern analysis in the component planes. It was found that PCA and SOM are efficient tools to capture and analyze the behavior of multivariable, complex, and nonlinear related surface water quality data.

  18. Estimating the number of pure chemical components in a mixture by X-ray absorption spectroscopy.

    PubMed

    Manceau, Alain; Marcus, Matthew; Lenoir, Thomas

    2014-09-01

    Principal component analysis (PCA) is a multivariate data analysis approach commonly used in X-ray absorption spectroscopy to estimate the number of pure compounds in multicomponent mixtures. This approach seeks to describe a large number of multicomponent spectra as weighted sums of a smaller number of component spectra. These component spectra are in turn considered to be linear combinations of the spectra from the actual species present in the system from which the experimental spectra were taken. The dimension of the experimental dataset is given by the number of meaningful abstract components, as estimated by the cascade or variance of the eigenvalues (EVs), the factor indicator function (IND), or the F-test on reduced EVs. It is shown on synthetic and real spectral mixtures that the performance of the IND and F-test critically depends on the amount of noise in the data, and may result in considerable underestimation or overestimation of the number of components even for a signal-to-noise (s/n) ratio of the order of 80 (σ = 20) in a XANES dataset. For a given s/n ratio, the accuracy of the component recovery from a random mixture depends on the size of the dataset and number of components, which is not known in advance, and deteriorates for larger datasets because the analysis picks up more noise components. The scree plot of the EVs for the components yields one or two values close to the significant number of components, but the result can be ambiguous and its uncertainty is unknown. A new estimator, NSS-stat, which includes the experimental error to XANES data analysis, is introduced and tested. It is shown that NSS-stat produces superior results compared with the three traditional forms of PCA-based component-number estimation. A graphical user-friendly interface for the calculation of EVs, IND, F-test and NSS-stat from a XANES dataset has been developed under LabVIEW for Windows and is supplied in the supporting information. Its possible application to EXAFS data is discussed, and several XANES and EXAFS datasets are also included for download.
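    To make the traditional estimators concrete, the sketch below computes the eigenvalues and Malinowski's factor indicator function (IND) for a synthetic spectral mixture dataset; it follows the usual textbook definitions and is not the NSS-stat estimator introduced in the paper, which additionally weights the misfit by the experimental error.

```python
import numpy as np

rng = np.random.default_rng(12)
n_points, n_mixtures, n_pure = 400, 12, 3

# Synthetic XANES-like dataset: 12 mixture spectra built from 3 pure components plus noise
energy = np.linspace(0, 1, n_points)
pure = np.array([np.exp(-((energy - c) ** 2) / 0.01) for c in (0.3, 0.5, 0.7)])
fractions = rng.dirichlet(np.ones(n_pure), size=n_mixtures)
D = (fractions @ pure).T + 0.01 * rng.standard_normal((n_points, n_mixtures))

r, c = D.shape                                      # r data points per spectrum, c spectra
eigvals = np.sort(np.linalg.eigvalsh(D.T @ D))[::-1]

def indicator(n):
    """Malinowski's factor indicator function for n retained components."""
    re = np.sqrt(eigvals[n:].sum() / (r * (c - n)))  # real error with n factors
    return re / (c - n) ** 2

ind = np.array([indicator(n) for n in range(1, c)])
print("leading eigenvalues:", np.round(eigvals[:5], 2))
print("IND minimum at n =", int(np.argmin(ind)) + 1)  # estimated number of pure components
```

    As the abstract notes, the location of the IND minimum becomes unreliable once the noise level grows or the dataset gets large, which is the motivation for the error-aware NSS-stat estimator.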

  19. Optimal acetabular component orientation estimated using edge-loading and impingement risk in patients with metal-on-metal hip resurfacing arthroplasty.

    PubMed

    Mellon, Stephen J; Grammatopoulos, George; Andersen, Michael S; Pandit, Hemant G; Gill, Harinderjit S; Murray, David W

    2015-01-21

    Edge-loading in patients with metal-on-metal resurfaced hips can cause high serum metal ion levels, the development of soft-tissue reactions local to the joint called pseudotumours and ultimately, failure of the implant. Primary edge-loading is where contact between the femoral and acetabular components occurs at the edge/rim of the acetabular component whereas impingement of the femoral neck on the acetabular component's edge causes secondary or contrecoup edge-loading. Although the relationship between the orientation of the acetabular component and primary edge-loading has been identified, the contribution of acetabular component orientation to impingement and secondary edge-loading is less clear. Our aim was to estimate the optimal acetabular component orientation for 16 metal-on-metal hip resurfacing arthroplasty (MoMHRA) subjects with known serum metal ion levels. Data from motion analysis, subject-specific musculoskeletal modelling and Computed Tomography (CT) measurements were used to calculate the dynamic contact patch to rim (CPR) distance and impingement risk for 3416 different acetabular component orientations during gait, sit-to-stand, stair descent and static standing. For each subject, safe zones free from impingement and edge-loading (CPR <10%) were defined and, consequently, an optimal acetabular component orientation was determined (mean inclination 39.7° (SD 6.6°) mean anteversion 14.9° (SD 9.0°)). The results of this study suggest that the optimal acetabular component orientation can be determined from a patient's motion and anatomy. However, 'safe' zones of acetabular component orientation associated with reduced risk of dislocation and pseudotumour are also associated with a reduced risk of edge-loading and impingement. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. A Fully Non-metallic Gas Turbine Engine Enabled by Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2014-01-01

    The Non-Metallic Gas Turbine Engine project, funded by the NASA Aeronautics Research Institute (NARI), represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies for fabricating polymer matrix composite (PMC) and ceramic matrix composite (CMC) gas turbine engine components. The benefits of the proposed effort include: 50% weight reduction compared to metallic parts, reduced manufacturing costs due to less machining and no tooling requirements, reduced part count due to net shape single component fabrication, and rapid design change and production iterations. Two high payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high temperature capable polymer filaments. The first component is an acoustic panel treatment with a honeycomb structure, an integrated back sheet and a perforated front sheet. The second component is a compressor inlet guide vane. The CMC effort, which is starting at a lower technology readiness level, will use a binder jet process to fabricate silicon carbide test coupons and demonstration articles. The polymer and ceramic additive manufacturing efforts will advance from monolithic materials toward silicon carbide and carbon fiber reinforced composites for improved properties. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The proposed effort will be focused on a small 7000 lbf gas turbine engine. However, the concepts are equally applicable to large gas turbine engines. The proposed effort includes a multidisciplinary, multiorganization NASA - industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.

  1. Deformation Analysis of the Main Components in a Single Screw Compressor

    NASA Astrophysics Data System (ADS)

    Liu, Feilong; Liao, Xueli; Feng, Quanke; Van Den Broek, Martijn; De Paepe, Michel

    2015-08-01

    The single screw compressor is used in many fields such as air compression, the chemical industry and refrigeration. During operation, different gas pressures and temperatures applied on the components can cause different degrees of deformation, which leads to a difference between the thermally induced clearance and the designed clearance. However, limited research about clearance design is reported. In this paper, a temperature measurement instrument and a convective heat transfer model were described and used to establish the temperatures of a single screw air compressor's casing, screw rotor and star wheel. 3-D models of these three main components were built, and the gas-force deformation, thermal-structural deformation and coupled thermal-force deformation were computed using a finite element simulation method. Results show that the clearance between the bottom of the groove and the top of the star wheel is reduced by 0.066 mm, the clearance between the side of the groove and the star wheel is reduced by 0.015 mm, and the clearance between the cylinder and the rotor is reduced by 0.01 mm. It is suggested that these deformations should be taken into account during the design of these clearances.

  2. Empathic competencies in violent offenders☆

    PubMed Central

    Seidel, Eva-Maria; Pfabigan, Daniela Melitta; Keckeis, Katinka; Wucherer, Anna Maria; Jahn, Thomas; Lamm, Claus; Derntl, Birgit

    2013-01-01

    Violent offending has often been associated with a lack of empathy, but experimental investigations are rare. The present study aimed at clarifying whether violent offenders show a general empathy deficit or specific deficits regarding the separate subcomponents. To this end, we assessed three core components of empathy (emotion recognition, perspective taking, affective responsiveness) as well as skin conductance response (SCR) in a sample of 30 male violent offenders and 30 healthy male controls. Data analysis revealed reduced accuracy in violent offenders compared to healthy controls only in emotion recognition, and that a high number of violent assaults was associated with decreased accuracy in perspective taking for angry scenes. SCR data showed reduced physiological responses in the offender group specifically for fear and disgust stimuli during emotion recognition and perspective taking. In addition, higher psychopathy scores in the violent offender group were associated with reduced accuracy in affective responsiveness. This is the first study to show that mainly emotion recognition is deficient in violent offenders whereas the other components of empathy are rather unaffected. This divergent impact of violent offending on the subcomponents of empathy suggests that all three empathy components can be targeted by therapeutic interventions separately. PMID:24035702

  3. Inferior tilt fixation of the glenoid component in reverse total shoulder arthroplasty: A biomechanical study.

    PubMed

    Chae, S W; Lee, J; Han, S H; Kim, S-Y

    2015-06-01

    Glenoid component fixation with an inferior tilt has been suggested to decrease scapular notching, but this remains controversial. We aimed here to evaluate the effect of glenoid component inferior tilt in reverse total shoulder arthroplasty (RSA) on micromotion and loss of fixation of the glenoid component by biomechanical testing. Increased inferior reaming of the glenoid for inferiorly tilted implantation of the glenoid component will decrease glenoid bone stock and compromise the fixation of RSA. The micromotions of the glenoid components attached to 14 scapulae from fresh frozen cadavers were measured and compared between neutral and 10° inferior tilts in 0.7- and 1-body weight cyclic loading tests using digital-image analysis. The incidence of bone breakage or loss of fixation was assessed in the 1-body weight fatigue-loading test. Micromotion was higher with a 10° inferior tilt than with a neutral tilt during both the 0.7-body weight (36 ± 11 μm vs. 22 ± 5 μm; P = 0.028) and 1-body weight (44 ± 16 μm vs. 28 ± 9 μm; P = 0.045) cyclic loading. The incidence of bone breakage or loss of fixation was 17% and 60% with a neutral and 10° inferior tilt, respectively. Glenoid component inferior tilt fixation in RSA may reduce primary stability and increase mechanical failure of the glenoid component, thereby reducing longevity of the prosthesis. Accordingly, we recommend careful placement of the glenoid component when an inferior tilt is used. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  4. The Cost Analysis of Corrosion Protection Solutions for Steel Components in Terms of the Object Life Cycle Cost

    NASA Astrophysics Data System (ADS)

    Kowalski, Dariusz; Grzyl, Beata; Kristowski, Adam

    2017-09-01

    Steel materials, due to their numerous advantages - high availability, ease of processing and the possibility of forming almost any shape - are commonly used in construction for primary load-bearing systems and auxiliary structures. However, the major disadvantage of this material is its high susceptibility to corrosion, which depends strongly on the local conditions at the facility and on the type of corrosion protection system applied. The paper presents an analysis of the life cycle costs of barrier structures installed on bridges under road-lane conditions. Three anti-corrosion protection systems were considered and their essential cost components analyzed. It is shown that the costs associated with anti-corrosion protection during the maintenance of steel barriers over a period of 30 years can be reduced significantly. The possibility of using a new approach based on life cycle cost estimation for the anti-corrosion protection of steel elements is presented, and the relationship between the method of steel barrier protection, the scope of repair and renewal work, and costs is shown. The article proposes an optimal solution which, while reducing the cost of maintaining road infrastructure components with respect to corrosion protection, allows certain safety standards for steel barriers installed on bridges to be maintained.
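
    The life-cycle comparison behind such a study reduces to discounting the initial protection cost and the scheduled renewals over the analysis period. The sketch below is a generic net-present-value calculation with placeholder cost figures and discount rate, not the data or the exact cost model of the paper.

        def life_cycle_cost(initial_cost, renewals, discount_rate, horizon_years=30):
            """Present value of an anti-corrosion protection option over the object's life.

            renewals: (year, cost) pairs for scheduled repainting or renewal work.
            All figures passed in below are illustrative placeholders.
            """
            lcc = initial_cost
            for year, cost in renewals:
                if year <= horizon_years:
                    lcc += cost / (1.0 + discount_rate) ** year
            return lcc

        # Example: a paint system renewed every 10 years vs. a more durable system
        # renewed once, both over a 30-year barrier service period.
        paint = life_cycle_cost(100.0, [(10, 60.0), (20, 60.0), (30, 60.0)], 0.04)
        duplex = life_cycle_cost(160.0, [(25, 40.0)], 0.04)
        print(f"paint: {paint:.1f}  duplex: {duplex:.1f}")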

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanlon, Edward; Capece, John

    The Hendry County Sustainable Bio-Fuels Center (HCSBC) is introduced and its main components are explained. These primarily include (1) farming systems, (2) sustainability analysis, (3) economic analysis and (4) educational components. Each of these components is discussed in further detail, and the main researchers and their areas of responsibility are introduced. The main focus of this presentation is a new farming concept, which is an alternative to the current "two sides of the ditch" model, in which on one side are yield-maximizing, input-intensive, commodity price-dependent farms, while on the other side are publicly-financed, nutrient-removing treatment areas and water reservoirs trying to mitigate the externalized costs of food production systems and other human-induced problems. The proposed approach is rental of the land back to agricultural corporations during the restoration transition period in order to increase water storage (allowing for greater water flow-through and/or water storage on farms), addressing issues such as nutrient removal, using flood-tolerant crops and reducing soil subsidence. Various pros and cons of the proposed agricultural eco-services are discussed - the advantages include flexibility for participating farmers to achieve environmental outcomes at reduced cost and with innovative incentives; the drawbacks include the fact that the potential markets are not yet developed and that existing regulations may prevent agricultural producers from selling their services.

  6. Comparative proteomic analysis of normal and collagen IX null mouse cartilage reveals altered extracellular matrix composition and novel components of the collagen IX interactome.

    PubMed

    Brachvogel, Bent; Zaucke, Frank; Dave, Keyur; Norris, Emma L; Stermann, Jacek; Dayakli, Münire; Koch, Manuel; Gorman, Jeffrey J; Bateman, John F; Wilson, Richard

    2013-05-10

    Collagen IX is an integral cartilage extracellular matrix component important in skeletal development and joint function. Proteomic analysis and validation studies revealed novel alterations in collagen IX null cartilage. Matrilin-4, collagen XII, thrombospondin-4, fibronectin, βig-h3, and epiphycan are components of the in vivo collagen IX interactome. We applied a proteomics approach to advance our understanding of collagen IX ablation in cartilage. The cartilage extracellular matrix is essential for endochondral bone development and joint function. In addition to the major aggrecan/collagen II framework, the interacting complex of collagen IX, matrilin-3, and cartilage oligomeric matrix protein (COMP) is essential for cartilage matrix stability, as mutations in Col9a1, Col9a2, Col9a3, Comp, and Matn3 genes cause multiple epiphyseal dysplasia, in which patients develop early onset osteoarthritis. In mice, collagen IX ablation results in severely disturbed growth plate organization, hypocellular regions, and abnormal chondrocyte shape. This abnormal differentiation is likely to involve altered cell-matrix interactions but the mechanism is not known. To investigate the molecular basis of the collagen IX null phenotype we analyzed global differences in protein abundance between wild-type and knock-out femoral head cartilage by capillary HPLC tandem mass spectrometry. We identified 297 proteins in 3-day cartilage and 397 proteins in 21-day cartilage. Components that were differentially abundant between wild-type and collagen IX-deficient cartilage included 15 extracellular matrix proteins. Collagen IX ablation was associated with dramatically reduced COMP and matrilin-3, consistent with known interactions. Matrilin-1, matrilin-4, epiphycan, and thrombospondin-4 levels were reduced in collagen IX null cartilage, providing the first in vivo evidence for these proteins belonging to the collagen IX interactome. Thrombospondin-4 expression was reduced at the mRNA level, whereas matrilin-4 was verified as a novel collagen IX-binding protein. Furthermore, changes in TGFβ-induced protein βig-h3 and fibronectin abundance were found in the collagen IX knock-out but not associated with COMP ablation, indicating specific involvement in the abnormal collagen IX null cartilage. In addition, the more widespread expression of collagen XII in the collagen IX-deficient cartilage suggests an attempted compensatory response to the absence of collagen IX. Our differential proteomic analysis of cartilage is a novel approach to identify candidate matrix protein interactions in vivo, underpinning further analysis of mutant cartilage lacking other matrix components or harboring disease-causing mutations.

  7. Principal component analysis for designed experiments.

    PubMed

    Konishi, Tomokazu

    2015-01-01

    Principal component analysis is used to summarize matrix data, such as found in transcriptome, proteome or metabolome data and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within the data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight to all the samples in the matrix and treats them as independent. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of the groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components across experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. Together, these introduced options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models that specify each of the axes.
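
    A minimal sketch of the central idea, fixing the principal axes and center on a designated training set and then scoring later samples on those shared, scaled axes, is given below; the function names and the unit-variance scaling are illustrative assumptions rather than the author's exact procedure.

        import numpy as np

        def fit_shared_axes(training_matrix, n_components):
            """Center and principal axes derived from training data chosen by the design."""
            center = training_matrix.mean(axis=0)
            u, s, vt = np.linalg.svd(training_matrix - center, full_matrices=False)
            axes = vt[:n_components]                              # fixed, reusable axes
            scale = s[:n_components] / np.sqrt(len(training_matrix) - 1)
            return center, axes, scale

        def score(samples, center, axes, scale):
            """Project new samples onto the pre-arranged axes; scaling unifies the size unit."""
            return (samples - center) @ axes.T / scale

        # Samples from later experiments are scored on the same axes, which is what
        # makes the components comparable across experiments.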

  8. Analysis and modification of a single-mesh gear fatigue rig for use in diagnostic studies

    NASA Technical Reports Server (NTRS)

    Zakrajsek, James J.; Townsend, Dennis P.; Oswald, Fred B.; Decker, Harry J.

    1992-01-01

    A single-mesh gear fatigue rig was analyzed and modified for use in gear mesh diagnostic research. The fatigue rig allowed unwanted vibration to mask the test-gear vibration signal, making it difficult to perform diagnostic studies. Several possible sources and factors contributing to the unwanted components of the vibration signal were investigated. Sensor mounting location was found to have a major effect on the content of the vibration signal. In the presence of unwanted vibration sources, modal amplification made the unwanted components strong. A sensor location was found that provided a flatter frequency response, which resulted in a more useful vibration signal. A major rework was performed on the fatigue rig to reduce the influence of the most probable sources of the noise in the vibration signal. The slave gears were machined to reduce weight and increase tooth loading. The housing and the shafts were modified to reduce imbalance, looseness, and misalignment in the rotating components. These changes resulted in an improved vibration signal, with the test-gear mesh frequency now the dominant component in the signal. Also, with the unwanted sources eliminated, the sensor mounting location giving the most robust representation of the test-gear meshing energy was found to be at a point close to the test gears in the load zone of the bearings.
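
    The kind of spectral check used to confirm that the test-gear mesh frequency dominates the measured signal can be sketched as below; the sampling rate, shaft speed and tooth count are hypothetical values, not parameters of the NASA rig.

        import numpy as np

        fs = 50_000.0            # sampling rate, Hz (assumed)
        shaft_speed_hz = 50.0    # test-gear shaft speed, Hz (assumed)
        n_teeth = 28             # test-gear tooth count (assumed)
        mesh_freq = shaft_speed_hz * n_teeth

        def dominant_frequency(signal, fs):
            """Frequency of the largest spectral peak in an accelerometer record."""
            windowed = signal * np.hanning(len(signal))
            spectrum = np.abs(np.fft.rfft(windowed))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

        # After the rework, dominant_frequency(vibration, fs) should fall at (or at a
        # harmonic of) mesh_freq rather than at slave-gear, imbalance or looseness
        # frequencies.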

  9. [An analysis of nutritional and harmful components of vegetables grown in plastic greenhouses].

    PubMed

    Yao, H; Yan, W; Li, G; Chen, Y; Guo, W; Wang, G; Xu, Z; Feng, C; Liu, K; Jin, D

    1999-09-01

    To study the changes in nutritional and harmful components of vegetables grown in plastic greenhouses, the microclimate and air concentrations of carbon monoxide, carbon dioxide, fluoride and respirable particulate were measured in the greenhouses, and chlorophyll, total sugar, crude fiber, nitrite, fluoride, arsenic and some mineral elements in the vegetables were determined and compared with those of vegetables grown in open-air fields. The greenhouses showed lower wind speed and weaker illumination. Contents of chlorophyll a and b, total chlorophyll, reduced vitamin C and crude fiber in vegetables grown in greenhouses were all lower than in those grown in open-air fields, as were contents of potassium, calcium, magnesium, iron, zinc, copper and phosphorus. Lower wind speed and inadequate illumination in the greenhouses affected photosynthesis and water uptake in the vegetables, causing changes in their nutritional components. However, no contamination from coal burning was found in vegetables grown in greenhouses.

  10. Effects of a cognitive dual task on variability and local dynamic stability in sustained repetitive arm movements using principal component analysis: a pilot study.

    PubMed

    Longo, Alessia; Federolf, Peter; Haid, Thomas; Meulenbroek, Ruud

    2018-06-01

    In many daily jobs, repetitive arm movements are performed for extended periods of time under continuous cognitive demands. Even highly monotonous tasks exhibit an inherent motor variability and subtle fluctuations in movement stability. Variability and stability are different aspects of system dynamics, whose magnitude may be further affected by a cognitive load. Thus, the aim of the study was to explore and compare the effects of a cognitive dual task on the variability and local dynamic stability in a repetitive bimanual task. Thirteen healthy volunteers performed the repetitive motor task with and without a concurrent cognitive task of counting aloud backwards in multiples of three. Upper-body 3D kinematics were collected and postural reconfigurations (the variability related to the volunteer's postural changes) were determined through a principal component analysis-based procedure. Subsequently, the most salient component was selected for the analysis of (1) cycle-to-cycle spatial and temporal variability, and (2) local dynamic stability as reflected by the largest Lyapunov exponent. Finally, end-point variability was evaluated as a control measure. The dual cognitive task proved to increase the temporal variability and reduce the local dynamic stability, marginally decrease endpoint variability, and substantially lower the incidence of postural reconfigurations. Particularly, the latter effect is considered to be relevant for the prevention of work-related musculoskeletal disorders since reduced variability in sustained repetitive tasks might increase the risk of overuse injuries.
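
    One way to extract a principal movement component from upper-body kinematics and quantify cycle-to-cycle temporal variability is sketched below; the data layout, the zero-crossing cycle segmentation and the omission of the Lyapunov-exponent step (which would follow, for example, Rosenstein's algorithm) are assumptions, not the authors' exact pipeline.

        import numpy as np

        def principal_movement_component(kinematics):
            """First principal component score of a (n_frames, n_marker_coordinates) array."""
            centered = kinematics - kinematics.mean(axis=0)
            u, s, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt[0]            # time series of the most salient component

        def cycle_duration_variability(pc_score, fs):
            """Cycle-to-cycle temporal variability from upward zero crossings of the score."""
            crossings = np.where((pc_score[:-1] < 0) & (pc_score[1:] >= 0))[0]
            return (np.diff(crossings) / fs).std()

        # Local dynamic stability would additionally require delay-embedding pc_score
        # and estimating the largest Lyapunov exponent, which is omitted here.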

  11. Tensorial extensions of independent component analysis for multisubject FMRI analysis.

    PubMed

    Beckmann, C F; Smith, S M

    2005-03-01

    We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.
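
    For orientation, a bare-bones alternating-least-squares PARAFAC (CP) decomposition of a voxels x time x subjects array is sketched below; this is the generic three-way model against which tensor PICA is compared, not the probabilistic ICA estimation itself, and the rank and iteration count are arbitrary.

        import numpy as np

        def parafac_als(X, rank, n_iter=100, seed=0):
            """Rank-R CP/PARAFAC decomposition of a 3-way array X (I x J x K) by ALS."""
            rng = np.random.default_rng(seed)
            I, J, K = X.shape
            A = rng.standard_normal((I, rank))
            B = rng.standard_normal((J, rank))
            C = rng.standard_normal((K, rank))
            X1 = X.transpose(0, 2, 1).reshape(I, K * J)    # mode-1 unfolding
            X2 = X.transpose(1, 2, 0).reshape(J, K * I)    # mode-2 unfolding
            X3 = X.transpose(2, 1, 0).reshape(K, J * I)    # mode-3 unfolding
            kr = lambda P, Q: np.einsum('ir,jr->ijr', P, Q).reshape(-1, rank)  # Khatri-Rao
            for _ in range(n_iter):
                A = X1 @ kr(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
                B = X2 @ kr(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
                C = X3 @ kr(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
            return A, B, C    # spatial maps, time courses, subject/session loadings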

  12. A Fully Non-Metallic Gas Turbine Engine Enabled by Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2015-01-01

    The Non-Metallic Gas Turbine Engine project, funded by the NASA Aeronautics Research Institute, represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies to fabricate polymer matrix composite and ceramic matrix composite turbine engine components. The benefits include: 50% weight reduction compared to metallic parts, reduced manufacturing costs, reduced part count and rapid design iterations. Two high payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high temperature polymer filaments. The CMC effort uses a binder jet process to fabricate silicon carbide test coupons and demonstration articles. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The research project includes a multidisciplinary, multiorganization NASA - industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.

  13. Analysis of phenolic compounds for poultry feed by supercritical fluid chromatography

    USDA-ARS?s Scientific Manuscript database

    Phenolic compounds have generated interest as components in functional feed formulations due to their anti-oxidant, anti-microbial, and anti-fungal properties. These compounds may have greater significance in the future as the routine use of antibiotics is reduced and the prevalence of resistant bac...

  14. City of Minneapolis, Minnesota Municipal Tree Resource Analysis

    Treesearch

    E.G. McPherson; J.R. Simpson; P.J. Peper; S.E. Maco; S.L. Gardner; K.E. Vargas; S. Cozad; Q. Xiao

    2005-01-01

    Minneapolis, a vibrant city, renowned for its lakes, its livability, and its cultural wealth, maintains trees as an integral component of the urban infrastructure. Research indicates that healthy trees can mitigate impacts associated with the built environment by reducing stormwater runoff, energy consumption, and air pollutants. Trees improve urban life, making...

  15. Critical Analysis: A Comparison of Critical Thinking Changes in Psychology and Philosophy Classes

    ERIC Educational Resources Information Center

    Burke, Brian L.; Sears, Sharon R.; Kraus, Sue; Roberts-Cady, Sarah

    2014-01-01

    This study compared changes in psychology and philosophy classes in two distinct components of critical thinking (CT): general skills and personal beliefs. Participants were 128 undergraduates enrolled in CT in psychology, other psychology courses, or philosophy courses. CT and philosophy students significantly reduced beliefs in paranormal…

  16. The Importance of At-Risk Funding. Policy Analysis

    ERIC Educational Resources Information Center

    Parker, Emily; Griffith, Michael

    2016-01-01

    In recent decades, states and districts have moved toward making education more equitable. A key component of equity in education is providing additional funds for economically disadvantaged students, commonly referred to as "at-risk students." At-risk students are most often defined as students who qualify for free or reduced priced…

  17. Explaining Relationships among Student Outcomes and the School's Physical Environment

    ERIC Educational Resources Information Center

    Tanner, C. Kenneth

    2008-01-01

    This descriptive study investigated the possible effects of selected school design patterns on third-grade students' academic achievement. A reduced regression analysis revealed the effects of school design components (patterns) on ITBS achievement data, after including control variables, for a sample of third-grade students drawn from 24…

  18. The development and exploratory analysis of the Back Pain Attitudes Questionnaire (Back-PAQ)

    PubMed Central

    Darlow, Ben; Perry, Meredith; Mathieson, Fiona; Stanley, James; Melloh, Markus; Marsh, Reginald; Baxter, G David; Dowell, Anthony

    2014-01-01

    Objectives To develop an instrument to assess attitudes and underlying beliefs about back pain, and subsequently investigate its internal consistency and underlying structures. Design The instrument was developed by a multidisciplinary team of clinicians and researchers based on analysis of qualitative interviews with people experiencing acute and chronic back pain. Exploratory analysis was conducted using data from a population-based cross-sectional survey. Setting Qualitative interviews with community-based participants and subsequent postal survey. Participants Instrument development informed by interviews with 12 participants with acute back pain and 11 participants with chronic back pain. Data for exploratory analysis collected from New Zealand residents and citizens aged 18 years and above. 1000 participants were randomly selected from the New Zealand Electoral Roll. 602 valid responses were received. Measures The 34-item Back Pain Attitudes Questionnaire (Back-PAQ) was developed. Internal consistency was evaluated by the Cronbach α coefficient. Exploratory analysis investigated the structure of the data using Principal Component Analysis. Results The 34-item long form of the scale had acceptable internal consistency (α=0.70; 95% CI 0.66 to 0.73). Exploratory analysis identified five two-item principal components which accounted for 74% of the variance in the reduced data set: ‘vulnerability of the back’; ‘relationship between back pain and injury’; ‘activity participation while experiencing back pain’; ‘prognosis of back pain’ and ‘psychological influences on recovery’. Internal consistency was acceptable for the reduced 10-item scale (α=0.61; 95% CI 0.56 to 0.66) and the identified components (α between 0.50 and 0.78). Conclusions The 34-item long form of the scale may be appropriate for use in future cross-sectional studies. The 10-item short form may be appropriate for use as a screening tool, or an outcome assessment instrument. Further testing of the 10-item Back-PAQ's construct validity, reliability, responsiveness to change and predictive ability needs to be conducted. PMID:24860003

  19. [Methods of a posteriori identification of food patterns in Brazilian children: a systematic review].

    PubMed

    Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro

    2016-01-01

    The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and to analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and Pubmed databases. The key words were: Dietary pattern; Food pattern; Principal Components Analysis; Factor analysis; Cluster analysis; Reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one a cohort study. Five studies used the food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The exploratory method used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies have identified dietary patterns of children using the a posteriori approach; principal components factor analysis was the technique most used.
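
    For reference, the internal-consistency statistic quoted by these studies can be computed directly from a respondents x items matrix of food scores, as in the sketch below; the array layout and variable names are assumptions and no dataset from the review is implied.

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for a (n_respondents, n_items) matrix of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1)
            total_variance = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)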

  20. Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao

    2017-12-01

    In this paper, a fast channel modeling method is studied to solve the problem of computing the reflection channel gain for multiple input multiple output visible light communication (MIMO-VLC) systems. To limit the computational complexity associated with the number of reflections, no more than 3 reflections are taken into consideration. A higher-order reflection link is treated as a composition of multiple line-of-sight links, and a reflection residual component is introduced to characterize higher-order reflections (more than 2 reflections). Computer simulation results are presented for the point-to-point channel impulse response, received optical power and received signal-to-noise ratio. Theoretical analysis and simulation results show that the proposed method can effectively reduce the computational complexity of modeling higher-order reflections in channel modeling.
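
    As background for such channel models, the line-of-sight DC gain of a single Lambertian LED-to-photodiode link is commonly written as in the sketch below; the geometry values are placeholders, optical filter and concentrator gains are omitted, and the paper's reflection residual component for higher-order reflections is not reproduced.

        import numpy as np

        def los_channel_gain(d, phi, psi, area, half_power_angle, fov):
            """Generalized Lambertian line-of-sight DC gain for one LED-photodiode pair.

            d: distance [m]; phi: irradiance angle; psi: incidence angle [rad];
            area: detector area [m^2]; returns 0 outside the receiver field of view.
            """
            if psi > fov:
                return 0.0
            m = -np.log(2.0) / np.log(np.cos(half_power_angle))   # Lambertian order
            return (m + 1.0) * area / (2.0 * np.pi * d ** 2) * np.cos(phi) ** m * np.cos(psi)

        # Example link with assumed geometry: 2 m distance, 30 degree angles,
        # 1 cm^2 detector, 60 degree half-power angle, 70 degree field of view.
        h = los_channel_gain(2.0, np.radians(30), np.radians(30),
                             1e-4, np.radians(60), np.radians(70))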

  1. Comments on settling chamber design for quiet, blowdown wind tunnels

    NASA Technical Reports Server (NTRS)

    Beckwith, I. E.

    1981-01-01

    Transfer of an existing continuous-circuit supersonic wind tunnel to Langley and its operation there as a blowdown tunnel is planned. Flow disturbance requirements in the supply section and methods for reducing the high-level broadband acoustic disturbances present in typical blowdown tunnels are reviewed. Based on recent data and the analysis of two blowdown facilities at Langley, methods for reducing the total turbulence levels in the settling chamber, including both acoustic and vorticity modes, to less than one percent are recommended. The pertinent design details of the damping screens and honeycomb and the recommended minimum pressure drop across the porous components providing the required two orders of magnitude attenuation of acoustic noise levels are given. A suggestion for the support structure of these high pressure drop porous components is offered.

  2. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationship among the design variables based on physics, mechanics, material behavior models, interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of the new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on fundamental physical behavior of components and their relationship with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on utilizing failure rates derived from similar equipment or simply expert judgment.

  3. Bearing monitoring

    NASA Astrophysics Data System (ADS)

    Xu, Roger; Stevenson, Mark W.; Kwan, Chi-Man; Haynes, Leonard S.

    2001-07-01

    At Ford Motor Company, thrust bearings in drill motors are often damaged by metal chips. Since the vibration frequency is only a few hertz, it is very difficult to use accelerometers to pick up the vibration signals. Under the support of Ford and NASA, we propose to use a piezo film as a sensor to pick up the slow vibrations of the bearing. A neural net based fault detection algorithm is then applied to differentiate normal bearings from bad bearings. The first step involves a Fast Fourier Transform, which extracts the significant frequency components of the sensor signal. Principal Component Analysis is then used to further reduce the dimension of the frequency components by extracting the principal features within them. These features can then be used to indicate the status of the bearing. Experimental results are very encouraging.
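
    A schematic of the described pipeline (magnitude spectra from the piezo-film records, dimensionality reduction with PCA, then a small neural-network classifier), written with scikit-learn; the component count, network size and data layout are assumptions, not details of the Ford/NASA system.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        def spectral_features(records):
            """Magnitude spectra of piezo-film records arranged as (n_records, n_samples)."""
            return np.abs(np.fft.rfft(records, axis=1))

        def train_bearing_monitor(records, labels):
            """labels: 0 = normal bearing, 1 = damaged bearing (placeholder encoding)."""
            X = spectral_features(records)
            model = make_pipeline(PCA(n_components=5),
                                  MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
            return model.fit(X, labels)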

  4. Scaling Analysis Techniques to Establish Experimental Infrastructure for Component, Subsystem, and Integrated System Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabharwall, Piyush; O'Brien, James E.; McKellar, Michael G.

    2015-03-01

    Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial application and demonstration under NHES.

  5. Reduced Stress Tensor and Dissipation and the Transport of Lamb Vector

    NASA Technical Reports Server (NTRS)

    Wu, Jie-Zhi; Zhou, Ye; Wu, Jian-Ming

    1996-01-01

    We develop a methodology to ensure that the stress tensor, regardless of its number of independent components, can be reduced to an exactly equivalent one which has the same number of independent components as the surface force. It is applicable to the momentum balance if the shear viscosity is constant. A direct application of this method to the energy balance also leads to a reduction of the dissipation rate of kinetic energy. Following this procedure, significant saving in analysis and computation may be achieved. For turbulent flows, this strategy immediately implies that a given Reynolds stress model can always be replaced by a reduced one before putting it into computation. Furthermore, we show how the modeling of Reynolds stress tensor can be reduced to that of the mean turbulent Lamb vector alone, which is much simpler. As a first step of this alternative modeling development, we derive the governing equations for the Lamb vector and its square. These equations form a basis of new second-order closure schemes and, we believe, should be favorably compared to that of traditional Reynolds stress transport equation.
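
    For reference, the Lamb vector and the form of the constant-viscosity incompressible momentum equation in which it appears are standard and can be recorded as the following LaTeX fragment; the reduced stress tensor construction and the second-order closure derived in the report are not reproduced here.

        \[
        \boldsymbol{\ell} = \boldsymbol{\omega} \times \mathbf{u}, \qquad
        \boldsymbol{\omega} = \nabla \times \mathbf{u},
        \]
        \[
        \frac{\partial \mathbf{u}}{\partial t} + \boldsymbol{\ell}
          = -\nabla\!\left(\frac{p}{\rho} + \tfrac{1}{2}\lvert\mathbf{u}\rvert^{2}\right)
            + \nu \nabla^{2}\mathbf{u}.
        \]

    After Reynolds averaging, the divergence of the Reynolds stress enters this form through the mean turbulent Lamb vector plus a gradient of the turbulent kinetic energy, which is why modeling the mean Lamb vector can stand in for modeling the full Reynolds stress tensor.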

  6. Implementation of an integrating sphere for the enhancement of noninvasive glucose detection using quantum cascade laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.

    2018-05-01

    An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy levels. Using a smaller TE-cooled detector reduces form factor, creating a mobile sensor. Principal component analysis has predicted principal components of spectra taken from human subjects that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.
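
    A minimal principal-component-regression sketch of the described prediction step, assuming a matrix of backscattered QCL absorption spectra and reference glucose concentrations; the component count and the use of scikit-learn are assumptions.

        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        def fit_glucose_model(spectra, glucose, n_components=3):
            """Principal component regression: PCA scores used as the regressors."""
            model = make_pipeline(PCA(n_components=n_components), LinearRegression())
            return model.fit(spectra, glucose)

        # model.predict(new_spectra) then yields concentration estimates whose clinical
        # accuracy can be judged against reference measurements.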

  7. Feasibility Study of a Rotorcraft Health and Usage Monitoring System (HUMS): Usage and Structural Life Monitoring Evaluation

    NASA Technical Reports Server (NTRS)

    Dickson, B.; Cronkhite, J.; Bielefeld, S.; Killian, L.; Hayden, R.

    1996-01-01

    The objective of this study was to evaluate two techniques, Flight Condition Recognition (FCR) and Flight Load Synthesis (FLS), for usage monitoring and assess the potential benefits of extending the retirement intervals of life-limited components, thus reducing the operator's maintenance and replacement costs. Both techniques involve indirect determination of loads using measured flight parameters and subsequent fatigue analysis to calculate the life expended on the life-limited components. To assess the potential benefit of usage monitoring, the two usage techniques were compared to current methods of component retirement. In addition, comparisons were made with direct load measurements to assess the accuracy of the two techniques.
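
    The usage-based retirement calculation ultimately rests on a cumulative-damage computation of the kind sketched below (Miner's rule with a power-law S-N curve); the S-N constants and the load spectrum are placeholders, not values from the HUMS study.

        def fatigue_damage(load_cycles, sn_constant, sn_exponent):
            """Miner's-rule damage fraction for (stress_amplitude, n_cycles) pairs.

            Cycles to failure are modeled as N(S) = sn_constant * S**(-sn_exponent);
            the component's life is considered expended when the sum reaches 1.0.
            """
            damage = 0.0
            for stress, n in load_cycles:
                cycles_to_failure = sn_constant * stress ** (-sn_exponent)
                damage += n / cycles_to_failure
            return damage

        # FCR or FLS would supply the per-flight (stress, cycles) pairs inferred from
        # the measured flight parameters; the spectrum below is purely illustrative.
        spectrum = [(300.0, 1.0e4), (450.0, 2.0e3), (600.0, 1.5e2)]   # MPa, cycles
        print(fatigue_damage(spectrum, sn_constant=1.0e15, sn_exponent=4.0))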

  8. The Components of Community Awareness and Preparedness; its Effects on the Reduction of Tsunami Vulnerability and Risk

    NASA Astrophysics Data System (ADS)

    Tufekci, Duygu; Lutfi Suzen, Mehmet; Cevdet Yalciner, Ahmet

    2017-04-01

    The resilience of coastal communities against tsunamis is dependent on the preparedness of the communities. Preparedness covers social and structural components, which improve as awareness of tsunamis in the community increases. Therefore, proper evaluation of all components of preparedness will help communities to reduce the adverse effects of tsunamis and increase their overall resilience. On the other hand, the complexity of metropolitan life with its social and structural components necessitates explicit vulnerability assessments for proper determination of tsunami risk and the development of proper mitigation strategies and recovery plans. Assessing the vulnerability and resilience level of a region against tsunamis and efforts for reducing the tsunami risk are the key components of disaster management. Since increasing the awareness of coastal communities against tsunamis is one of the main objectives of disaster management, it should be considered as one of the parameters in tsunami risk analysis. In the method named MetHuVA (METU - Metropolitan Human Tsunami Vulnerability Assessment) proposed by Cankaya et al., (2016) and Tufekci et al., (2016), the awareness and preparedness level of the community is revealed to be an indispensable parameter with a great effect on tsunami risk. According to the results obtained from those studies, it is important that the awareness and preparedness parameter (n) be analyzed by considering all of its related components and their interactions. As awareness increases, vulnerability and risk are reduced. In this study, the components of the awareness and preparedness parameter (n) are analyzed in different categories by considering the administrative, social, educational, economic and structural preparedness of coastal communities. Hence the proposed awareness and preparedness parameter can properly be analyzed and further improvements can be achieved in vulnerability and risk analysis. Furthermore, the components of the awareness and preparedness parameter n are widely investigated in global and local practice, using categorization to determine different levels for coastal metropolitan areas with different cultures and different hazard perceptions. Moreover, consistency between the theoretical maximum and practical applications of parameter n is estimated, discussed and presented. In the applications, mainly the Bakirkoy district of Istanbul is analyzed and the results are presented. Acknowledgements: Partial support by 603839 ASTARTE Project of EU, UDAPC-12-14 project of AFAD, Turkey, 213M534 projects of TUBITAK, Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in the Marmara Region (JICA SATREPS - MarDiM Project), and Istanbul Metropolitan Municipality are acknowledged.

  9. Isotope Brayton electric power system for the 500 to 2500 watt range

    NASA Technical Reports Server (NTRS)

    Macosko, R. P.; Barna, G. J.; Block, H. B.; Ingle, B. D.

    1972-01-01

    An extensive study was conducted at the Lewis Research Center to evaluate an isotope Brayton electric power system for use in the 500 to 2500 W power range. Overall system simplicity was emphasized in order to reduce parasitic power losses and improve system reliability. Detailed parametric cycle analysis, conceptual component designs, and evaluation of system packaging were included. A single-loop system (gas) with six major components including one rotating unit was selected. Calculated net system efficiency varies from 23 to 28 percent over the power range.

  10. Fabrication and Testing of Ceramic Matrix Composite Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Effinger, M. R.; Clinton, R. C., Jr.; Dennis, J.; Elam, S.; Genge, G.; Eckel, A.; Jaskowiak, M. H.; Kiser, J. D.; Lang, J.

    2001-01-01

    NASA has established goals for Second and Third Generation Reusable Launch Vehicles. Emphasis has been placed on significantly improving safety and decreasing the cost of transporting payloads to orbit. Ceramic matrix composite (CMC) components are being developed by NASA to enable significant increases in safety and engine performance, while reducing costs. The development of the following CMC components is being pursued by NASA: (1) Simplex CMC Blisk; (2) Cooled CMC Nozzle Ramps; (3) Cooled CMC Thrust Chambers; and (4) CMC Gas Generator. These development efforts are application oriented, but have a strong underpinning of fundamental understanding of processing-microstructure-property relationships relative to structural analyses, nondestructive characterization, and material behavior analysis at the coupon, component, and system operation levels. As each effort matures, emphasis will be placed on optimizing and demonstrating material/component durability, ideally using a combined Building Block Approach and Build and Bust Approach.

  11. Effect of supercritical carbon dioxide decaffeination on volatile components of green teas.

    PubMed

    Lee, S; Park, M K; Kim, K H; Kim, Y-S

    2007-09-01

    Volatile components in regular and decaffeinated green teas were isolated by simultaneous steam distillation and solvent extraction (SDE), and then analyzed by GC-MS. A total of 41 compounds, including 8 alcohols, 15 terpene-type compounds, 10 carbonyls, 4 N-containing compounds, and 4 miscellaneous compounds, were found in regular and decaffeinated green teas. Among them, linalool and phenylacetaldehyde were quantitatively dominant in both regular and decaffeinated green teas. By a decaffeination process using supercritical carbon dioxide, most volatile components decreased. The more caffeine was removed, the more volatile components were reduced in green teas. In particular, relatively nonpolar components such as terpene-type compounds gradually decreased according to the decaffeination process. Aroma-active compounds in regular and decaffeinated green teas were also determined and compared by aroma extract dilution analysis (AEDA). Most greenish and floral flavor compounds such as hexanal, (E)-2-hexenal, and some unknown compounds disappeared or decreased after the decaffeination process.

  12. Food adulteration analysis without laboratory prepared or determined reference food adulterant values.

    PubMed

    Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A

    2014-04-01

    Quantitative analysis of food adulterants is an important health and economic issue that needs to be fast and simple. Spectroscopy has significantly reduced analysis time. However, still needed are preparations of analyte calibration samples matrix matched to prediction samples which can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that does not require laboratory prepared or reference analysis methods, and hence, is a greener calibration method. The PCTR method requires an analyte pure component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil is used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows including reference samples and is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.
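
    A schematic of the general idea, and explicitly not the authors' PCTR algorithm: treat the analyte's pure-component spectrum as a calibration row with unit response and the non-analyte spectra as rows with zero response, then solve a Tikhonov (ridge) regularized least-squares problem for the regression vector. The weighting and model selection of the actual method are not reproduced.

        import numpy as np

        def pure_component_ridge(pure_spectrum, non_analyte_spectra, lam):
            """Regression vector from a pure analyte spectrum plus non-analyte spectra."""
            X = np.vstack([pure_spectrum, non_analyte_spectra])
            y = np.concatenate([[1.0], np.zeros(len(non_analyte_spectra))])
            p = X.shape[1]
            b = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
            return b

        # Relative analyte signal for new sample spectra: samples @ b; lam would be
        # chosen by a model-selection criterion.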

  13. Analysis of self-recording in self-management interventions for stereotypy.

    PubMed

    Fritz, Jennifer N; Iwata, Brian A; Rolider, Natalie U; Camp, Erin M; Neidert, Pamela L

    2012-01-01

    Most treatments for stereotypy involve arrangements of antecedent or consequent events that are imposed entirely by a therapist. By contrast, results of some studies suggest that self-recording, a common component of self-management interventions, might be an effective and efficient way to reduce stereotypy. Because the procedure typically has included instructions to refrain from stereotypy, self-recording of the absence of stereotypy, and differential reinforcement of accurate recording, it is unclear which element or combination of elements produces reductions in stereotypy. We conducted a component analysis of a self-management intervention and observed that decreases in stereotypy might be attributable to instructional control or to differential reinforcement, but that self-recording per se had little effect on stereotypy.

  14. PVD TBC experience on GE aircraft engines

    NASA Astrophysics Data System (ADS)

    Maricocchi, A.; Bartz, A.; Wortman, D.

    1997-06-01

    The higher performance levels of modern gas turbine engines present significant challenges in the reliability of materials in the turbine. The increased engine temperatures required to achieve the higher performance levels reduce the strength of the materials used in the turbine sections of the engine. Various forms of thermal barrier coatings have been used for many years to increase the reliability of gas turbine engine components. Recent experience with the physical vapor deposition process using ceramic material has demonstrated success in extending the service life of turbine blades and nozzles. Engine test results of turbine components with a 125 μm (0.005 in.) PVD TBC have demonstrated component operating temperatures of 56 to 83 °C (100 to 150 °F) lower than non-PVD TBC components. Engine testing has also revealed that TBCs are susceptible to high angle particle impact damage. Sand particles and other engine debris impact the TBC surface at the leading edge of airfoils and fracture the PVD columns. As the impacting continues, the TBC erodes in local areas. Analysis of the eroded areas has shown a slight increase in temperature over a fully coated area; however, a significant temperature reduction was realized over an airfoil without TBC.

  15. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements such as typical mounting connections requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response analysis solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components and their individual behavior may essentially be linear when compared to the total assembled system. However, the joining of these linear subsystems using highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  16. Automatic Analyzers and Signal Indicators of Toxic and Dangerously Explosive Substances in Air,

    DTIC Science & Technology

    1980-01-09

    ...of air, thermo-conductometry and electroconductometric methods are also used. The thermo-conductometry method of analysis is based on a change of the... conductometry gas analyzers is very limited and is reduced in essence to the analysis of two-component mixtures or multicomponent ones, all of whose... differs. The main disadvantage of the thermo-conductometry gas analyzers is increased sensitivity to changes in the ambient conditions, in consequence of

  17. The effects of awareness training on tics in a young boy with Tourette syndrome, Asperger syndrome, and attention deficit hyperactivity disorder.

    PubMed

    Wiskow, Katie M; Klatt, Kevin P

    2013-01-01

    Previous research has shown habit reversal training (HRT) to be effective in reducing tics. In some studies, tics have been reduced by implementing only a few components of HRT. The current study investigated the first step, awareness training, for treating tics in a young boy with Asperger syndrome, Tourette syndrome, and attention deficit hyperactivity disorder. The results showed a reduction in all tics. © Society for the Experimental Analysis of Behavior.

  18. A Combined Finite-Element/Discrete-Particle Analysis of a Side-Vent-Channel-Based Concept for Improved Blast-Survivability of Light Tactical Vehicles

    DTIC Science & Technology

    2013-01-01

    design of side-vent-channels. The results obtained confirmed the beneficial effects of the side-vent-channels in reducing the blast momentum, although the extent of these effects is relatively small (3...products against the surrounding medium is associated with exchange of linear momentum and various energy components (e.g. potential, thermal

  19. Liquid chromatography-diode array detection-mass spectrometry for compositional analysis of low molecular weight heparins.

    PubMed

    Wang, Zhangjie; Li, Daoyuan; Sun, Xiaojun; Bai, Xue; Jin, Lan; Chi, Lianli

    2014-04-15

    Low molecular weight heparins (LMWHs) are important artificial preparations from heparin polysaccharide and are widely used as anticoagulant drugs. To analyze the structure and composition of LMWHs, identification and quantitation of their natural and modified building blocks are indispensable. We have established a novel reversed-phase high-performance liquid chromatography-diode array detection-electrospray ionization-mass spectrometry approach for compositional analysis of LMWHs. After being exhaustively digested and labeled with 2-aminoacridone, the structural motifs constructing LMWHs, including 17 components from dalteparin and 15 components from enoxaparin, were well separated, identified, and quantified. Besides the eight natural heparin disaccharides, many characteristic structures from dalteparin and enoxaparin, such as modified structures from the reducing end and nonreducing end, 3-O-sulfated tetrasaccharides, and trisaccharides, have been unambiguously identified based on their retention time and mass spectra. Compared with the traditional heparin compositional analysis methods, the approach described here is not only robust but also comprehensive because it is capable of identifying and quantifying nearly all components from lyase digests of LMWHs. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    PubMed

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  1. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide–Semiconductor Image Sensors

    PubMed Central

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-01-01

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components. PMID:28468324

  2. Dimensionality Reduction Through Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Kagan; Norvig, Peter (Technical Monitor)

    1999-01-01

    In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
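
    As a rough illustration of the contrast drawn in the record above, the following sketch compares an unsupervised PCA projection with a simple class-aware feature ranking (absolute correlation of each feature with the class label) feeding the same bagged ensemble; the synthetic dataset, scoring rule, and ensemble settings are assumptions and do not reproduce the authors' input-decimation algorithm.

    ```python
    # Sketch: class-aware feature selection vs. PCA before ensemble learning.
    # The scoring rule and ensemble settings are illustrative assumptions only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)

    # Unsupervised route: project onto the leading principal components.
    X_pca = PCA(n_components=8, random_state=0).fit_transform(X)

    # Class-aware route: keep the features most correlated with the class label.
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(corr)[-8:]
    X_sel = X[:, keep]

    ensemble = BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=15, random_state=0)
    for name, data in [("PCA features", X_pca), ("class-aware subset", X_sel)]:
        score = cross_val_score(ensemble, data, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {score:.3f}")
    ```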

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Lu; Albright, Austin P; Rahimpour, Alireza

    Wide-area-measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede the effective and efficient data analysis and storage of WAMS generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. Higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.
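
    The decomposition step itself would come from a multivariate EMD routine, which is only assumed here as a placeholder; the sketch below covers the two downstream steps named in the abstract, grouping components by mean-shift clustering and characterizing them through Hilbert spectral analysis, with all array shapes, the sampling rate, and the features chosen purely for illustration.

    ```python
    # Sketch of the post-decomposition steps described above. `imfs` stands in for
    # the intrinsic mode functions that a multivariate EMD routine (not shown,
    # assumed available) would produce for one frequency-measurement channel.
    import numpy as np
    from scipy.signal import hilbert
    from sklearn.cluster import MeanShift

    rng = np.random.default_rng(0)
    imfs = rng.standard_normal((6, 2000))        # placeholder IMFs: 6 components x 2000 samples
    fs = 30.0                                    # assumed sampling rate in Hz

    # Characterize each IMF by its mean instantaneous frequency (Hilbert spectral analysis)
    # and its amplitude spread.
    phase = np.unwrap(np.angle(hilbert(imfs, axis=1)), axis=1)
    inst_freq = np.diff(phase, axis=1) * fs / (2 * np.pi)
    features = np.column_stack([inst_freq.mean(axis=1), imfs.std(axis=1)])

    # Group the components with mean-shift clustering; low-frequency clusters would be
    # compressed together, higher-frequency clusters treated statistically as noise.
    labels = MeanShift().fit_predict(features)
    for k in np.unique(labels):
        members = np.where(labels == k)[0]
        print(f"cluster {k}: IMFs {members}, mean |f| = {np.abs(features[members, 0]).mean():.2f} Hz")
    ```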

  4. Use of multivariate statistics to identify unreliable data obtained using CASA.

    PubMed

    Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón

    2013-06-01

    In order to identify unreliable data in a dataset of motility parameters obtained from a pilot study acquired by a veterinarian with experience in boar semen handling, but without experience in the operation of a computer assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted and then incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml and analyzed in a CASA system. After standardization of the data, Chernoff faces were pictured for each measurement, and a principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of principal components for each individual measurement of semen samples were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects for treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained evaluations of the two first samples in each treatment, each one from a different boar. With the exception of one individual measurement, all other measurements in cluster 1 were the same as observed in abnormal Chernoff faces. Unreliable data in cluster 1 are probably related to the operator's inexperience with a CASA system. These findings could be used to objectively evaluate the skill level of an operator of a CASA system. This may be particularly useful in the quality control of semen analysis using CASA systems.
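
    A minimal sketch of the pipeline the abstract describes (standardization, PCA to two components, hierarchical clustering cut into three clusters); the motility matrix is a synthetic stand-in for the CASA parameters, and the cluster count simply mirrors the reported outcome.

    ```python
    # Sketch: standardization -> PCA -> hierarchical clustering on the first two PCs.
    # The motility matrix is a synthetic placeholder, not the study's CASA data.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    motility = rng.normal(size=(64, 9))          # 64 individual measurements x 9 CASA parameters

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(motility))

    # Ward linkage on the two PC scores, cut into three clusters as in the abstract.
    tree = linkage(scores, method="ward")
    clusters = fcluster(tree, t=3, criterion="maxclust")

    # Measurements falling in a small, isolated cluster would be flagged as unreliable.
    for k in np.unique(clusters):
        print(f"cluster {k}: {np.sum(clusters == k)} measurements")
    ```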

  5. Chemical and bioanalytical assessments on drinking water treatments by quaternized magnetic microspheres.

    PubMed

    Shi, Peng; Ma, Rong; Zhou, Qing; Li, Aimin; Wu, Bing; Miao, Yu; Chen, Xun; Zhang, Xuxiang

    2015-03-21

    This study aimed to compare the toxicity reduction performance of conventional drinking water treatment (CT) and a treatment (NT) with quaternized magnetic microspheres (NDMP) based on chemical analyses. Fluorescence excitation-emission-matrix combined with parallel factor analysis identified four components in source water of different rivers or lake, and the abundance of each component differed greatly among the different samples. Compared with the CT, the NT evidently reduced the concentrations of dissolved organic carbon, adsorbable organic halogens (AOX), bromide and disinfection by-products. Toxicological evaluation indicated that the NT completely eliminated the cytotoxicity, and greatly reduced the genotoxicity and oxidative stress of all raw water. In contrast, the CT increased the cytotoxicity of Taihu Lake and the Zhongshan River water, genotoxicity of Taihu Lake and the Mangshe River water, as well as the levels of superoxide dismutase and malondialdehyde of the Mangshe River water. Correlation analysis indicated that the AOX of the treated samples was significantly correlated with the genotoxicity and glutathione concentration, but exhibited no correlation with either of them for all the samples. As it can effectively reduce pollutant levels and the toxicities of drinking water, NDMP might be widely used for drinking water treatment in future. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Microbial-mediated method for metal oxide nanoparticle formation

    DOEpatents

    Rondinone, Adam J.; Moon, Ji Won; Love, Lonnie J.; Yeary, Lucas W.; Phelps, Tommy J.

    2015-09-08

    The invention is directed to a method for producing metal oxide nanoparticles, the method comprising: (i) subjecting a combination of reaction components to conditions conducive to microbial-mediated formation of metal oxide nanoparticles, wherein said combination of reaction components comprise: metal-reducing microbes, a culture medium suitable for sustaining said metal-reducing microbes, an effective concentration of one or more surfactants, a reducible metal oxide component containing one or more reducible metal species, and one or more electron donors that provide donatable electrons to said metal-reducing microbes during consumption of the electron donor by said metal-reducing microbes; and (ii) isolating said metal oxide nanoparticles, which contain a reduced form of said reducible metal oxide component. The invention is also directed to metal oxide nanoparticle compositions produced by the inventive method.

  7. The Panamanian health research system: a baseline analysis for the construction of a new phase

    PubMed Central

    2013-01-01

    Background In Panama, the health research system has been strengthened during recent years by the development of new financing opportunities, promotion of scientific and technological activities, and initiation of human capital training to ultimately improve competitiveness. However, aligning this system with the population’s health needs is a significant challenge. This study was designed to characterize the National Health Research System in Panama, aiming to understand it within a local context to facilitate policymaking. Methods The study was based on the analysis of operative and functional components of the National Health Research System, characterized by four specific components: stewardship, financing, creation and maintenance of resources, and production and use of research results. The analysis was based on official documents from key local institutions in the areas of science, technology and innovation management, and health and health research, as well as bibliographic databases. Results Panama’s National Health Research System is characterized by the presence of only two biomedical research institutes and reduced research activity in hospitals and universities, ambivalent governance, a low critical mass of researchers, reduced capacity to recruit new researchers, poor scientific production, and insufficient investment in science and technology. Conclusions The present study illustrates an approach to the context of the Panamanian Health Research System which characterizes the system as insufficient to accomplish its operative role of generating knowledge for new health interventions and input for innovations. In turn, this analysis emphasizes the need to develop a National Health Research Policy, which should include longer-term plans and a strategy to overcome the asymmetries and gaps between the different actors and components of the current system. PMID:24007409

  8. Low power pulsed MPD thruster system analysis and applications

    NASA Astrophysics Data System (ADS)

    Myers, Roger M.; Domonkos, Matthew; Gilland, James H.

    1993-06-01

    Pulsed MPD thruster systems were analyzed for application to solar-electric orbit transfer vehicles at power levels ranging from 10 to 40 kW. Potential system level benefits of pulsed propulsion technology include ease of power scaling without thruster performance changes, improved transportability from low power flight experiments to operational systems, and reduced ground qualification costs. Required pulsed propulsion system components include a pulsed applied-field MPD thruster, a pulse-forming network, a charge control unit, a cathode heater supply, and high speed valves. Mass estimates were obtained for each propulsion subsystem and spacecraft component. Results indicate that for payloads of 1000 and 2000 kg, pulsed MPD thrusters can reduce launch mass by between 1000 and 2500 kg relative to hydrogen arcjets, reducing launch vehicle class and launch cost. While the achievable mass savings depends on the trip time allowed for the mission, cases are shown in which the launch vehicle required for a mission is decreased from an Atlas IIAS to an Atlas I or Delta 7920.

  9. Does a minimally invasive approach affect positioning of components in unicompartmental knee arthroplasty? Early results with survivorship analysis.

    PubMed

    Cool, Steve; Victor, Jan; De Baets, Thierry

    2006-12-01

    Fifty unicompartmental knee arthroplasties (UKAs) were performed through a minimally invasive approach and were reviewed with an average follow-up of 3.7 years. This technique leads to reduced access to surgical landmarks. The purpose of this study was to evaluate whether correct component positioning is possible through this less invasive approach. Component positioning, femorotibial alignment and early outcomes were evaluated. We observed perfect tibial component position, but femoral component position was less consistent, especially in the sagittal plane. Femorotibial alignment in the coronal plane was within 2.5 degrees of the desired axis for 80% of the cases. Femoral component position in the sagittal plane was within a 10 degrees range of the ideal for 70% of the cases. The mean IKS Knee Function Score and Knee Score were 89/100 and 91/100 respectively. We observed two polyethylene dislocations, and one revision was performed for progressive patellofemoral arthrosis. According to our data, minimally invasive UKA does not conflict with component positioning although a learning curve needs to be respected, with femoral component positioning as the major obstacle.

  10. Use of principal components analysis and protein microarray to explore the association of HIV-1-specific IgG responses with disease progression.

    PubMed

    Gerns Storey, Helen L; Richardson, Barbra A; Singa, Benson; Naulikha, Jackie; Prindle, Vivian C; Diaz-Ochoa, Vladimir E; Felgner, Phil L; Camerini, David; Horton, Helen; John-Stewart, Grace; Walson, Judd L

    2014-01-01

    The role of HIV-1-specific antibody responses in HIV disease progression is complex and would benefit from analysis techniques that examine clusterings of responses. Protein microarray platforms facilitate the simultaneous evaluation of numerous protein-specific antibody responses, though excessive data are cumbersome in analyses. Principal components analysis (PCA) reduces data dimensionality by generating fewer composite variables that maximally account for variance in a dataset. To identify clusters of antibody responses involved in disease control, we investigated the association of HIV-1-specific antibody responses by protein microarray, and assessed their association with disease progression using PCA in a nested cohort design. Associations observed among collections of antibody responses paralleled protein-specific responses. At baseline, greater antibody responses to the transmembrane glycoprotein (TM) and reverse transcriptase (RT) were associated with higher viral loads, while responses to the surface glycoprotein (SU), capsid (CA), matrix (MA), and integrase (IN) proteins were associated with lower viral loads. Over 12 months greater antibody responses were associated with smaller decreases in CD4 count (CA, MA, IN), and reduced likelihood of disease progression (CA, IN). PCA and protein microarray analyses highlighted a collection of HIV-specific antibody responses that together were associated with reduced disease progression, and may not have been identified by examining individual antibody responses. This technique may be useful to explore multifaceted host-disease interactions, such as HIV coinfections.

  11. Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  12. Modal test/analysis correlation of Space Station structures using nonlinear sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlations. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
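
    Because both records above rest on Craig-Bampton component modal synthesis, a bare-bones numerical sketch of that reduction is given here for orientation: partition the component matrices into interior and boundary degrees of freedom, form static constraint modes and fixed-interface normal modes, and project. The matrices are small random stand-ins, not Space Station substructure models.

    ```python
    # Toy Craig-Bampton reduction on small random symmetric K, M matrices,
    # partitioned into interior (i) and boundary (b) DOFs. Sizes are illustrative only.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)
    n, nb, m = 12, 3, 4                            # total DOFs, boundary DOFs, kept interior modes
    A = rng.standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)                    # symmetric positive definite "stiffness"
    B = rng.standard_normal((n, n))
    M = B @ B.T + n * np.eye(n)                    # symmetric positive definite "mass"

    b = np.arange(nb)                              # boundary DOF indices
    i = np.arange(nb, n)                           # interior DOF indices
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]

    # Static constraint modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes: keep the lowest m interior modes.
    w2, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :m]

    # Craig-Bampton transformation T maps (boundary DOFs, modal coordinates) -> full DOFs.
    T = np.zeros((n, nb + m))
    T[b, :nb] = np.eye(nb)
    T[np.ix_(i, np.arange(nb))] = Psi
    T[np.ix_(i, np.arange(nb, nb + m))] = Phi

    K_red = T.T @ K @ T                            # reduced stiffness
    M_red = T.T @ M @ T                            # reduced mass
    print("reduced size:", K_red.shape)
    ```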

  13. Analysis of Self-Recording in Self-Management Interventions for Stereotypy

    ERIC Educational Resources Information Center

    Fritz, Jennifer N.; Iwata, Brian A.; Rolider, Natalie U.; Camp, Erin M.; Neidert, Pamela L.

    2012-01-01

    Most treatments for stereotypy involve arrangements of antecedent or consequent events that are imposed entirely by a therapist. By contrast, results of some studies suggest that self-recording, a common component of self-management interventions, might be an effective and efficient way to reduce stereotypy. Because the procedure typically has…

  14. City of Charleston, South Carolina Municipal Forest Resource Analysis

    Treesearch

    E.G. McPherson; J.R. Simpson; P.J. Peper; S.L. Gardner; K.E. Vargas; S.E. Maco; Q. Xiao

    2006-01-01

    Charleston, a charming Southern city appreciated for its rich history and culture, maintains trees as an integral component of the urban infrastructure (Figure 1). Research indicates that healthy trees can lessen impacts associated with the built environment by reducing stormwater runoff, energy consumption, and air pollutants. Trees improve urban life, making...

  15. City of Charlotte, North Carolina Municipal Forest Resource Analysis

    Treesearch

    E.G. McPherson; J.R. Simpson; P.J. Peper; S.L. Gardner; K.E. Vargas; S.E. Maco; Q. Xiao

    2005-01-01

    Charlotte, a vibrant Southern city appreciated for its rich history and cultural wealth, maintains trees as an integral component of the urban infrastructure (Figure 1). Research indicates that healthy trees can lessen impacts associated with the built environment by reducing stormwater runoff, energy consumption, and air pollutants. Trees improve urban life, making...

  16. VEHICLE MASS REDUCTION STUDY

    EPA Pesticide Factsheets

    Analysis of the potential to reduce light-duty vehicle mass through the application of low density or high strength materials, component consolidation, and changes to vehicle architecture. Find a holistic vehicle design approach that establishes a potential path for future feasible vehicle mass reduction in light-duty vehicles to meet more stringent GHG and Fuel Economy Standards.

  17. City of Berkeley, California Municipal Tree Resource Analysis

    Treesearch

    S.E. Maco; E.G. McPherson; J.R. Simpson; P.J. Peper; Q. Xiao

    2005-01-01

    Vibrant, renowned for its livability and cultural wealth, the city of Berkeley maintains trees as an integral component of the urban infrastructure. Research indicates that healthy trees can mitigate impacts associated with the built environment by reducing stormwater runoff, energy consumption, and air pollutants. Put simply, trees improve urban life, making Berkeley...

  18. Validation of the Marijuana Effect Expectancy Questionnaire-Brief

    ERIC Educational Resources Information Center

    Torrealday, O.; Stein, L. A. R.; Barnett, N.; Golembeske, C.; Lebeau, R.; Colby, S. M.; Monti, P. M.

    2008-01-01

    The purpose of this study was to evaluate a brief version of the Marijuana Effect Expectancy Questionnaire (MEEQ; Schafer & Brown, 1991). The original MEEQ was reduced to 6 items (MEEQ-B). Principal component analysis (PCA) was performed and two factors were identified (positive effects and negative effects) accounting for 52.3% of the variance.…

  19. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China.

    PubMed

    Zhang, Yuji; Li, Xiaoju; Mao, Lu; Zhang, Mei; Li, Ke; Zheng, Yinxia; Cui, Wangfei; Yin, Hongpo; He, Yanli; Jing, Mingxia

    2018-01-01

    The analysis of factors affecting the nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between factors and medication adherence in Xinjiang community-managed patients with hypertension based on the principal component analysis. A total of 1,916 community-managed patients with hypertension, selected randomly through a multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ²-test and a binary logistic regression model. This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Community management plays an important role in improving the patients' medication-taking behavior. Regular medication regimen instruction and better community management services at the community level have the potential to reduce nonadherence. Mild hypertensive patients should be monitored by community health care providers.

  20. Theoretical and experimental analysis of a linear accelerator endowed with single feed coupler with movable short-circuit.

    PubMed

    Dal Forno, Massimo; Craievich, Paolo; Penco, Giuseppe; Vescovo, Roberto

    2013-11-01

    The front-end injection systems of the FERMI@Elettra linac produce high brightness electron beams that define the performance of the Free Electron Laser. The photoinjector mainly consists of the radiofrequency (rf) gun and of two S-band rf structures which accelerate the beam. Accelerating structures endowed with a single feed coupler cause deflection and degradation of the electron beam properties, due to the asymmetry of the electromagnetic field. In this paper, a new type of single feed structure with movable short-circuit is proposed. It has the advantage of having only one waveguide input, and we propose a novel design in which the dipolar component is reduced. Moreover, the racetrack geometry makes it possible to reduce the quadrupolar component. This paper presents the microwave design and the analysis of the particle motion inside the linac. A prototype has been machined at the Elettra facility to verify the new coupler design, and the rf field has been measured by adopting the bead-pull method. The results are presented here, showing good agreement with expectations.

  1. Enhanced performance for differential detection in coherent Brillouin optical time-domain analysis sensors

    NASA Astrophysics Data System (ADS)

    Shao, Liyang; Zhang, Yunpeng; Li, Zonglei; Zhang, Zhiyong; Zou, Xihua; Luo, Bin; Pan, Wei; Yan, Lianshan

    2016-11-01

    Logarithmic detectors (LogDs) have been used in coherent Brillouin optical time-domain analysis (BOTDA) sensors to reduce the effect of phase fluctuation, demodulation complexities, and measurement time. However, because of the inherent properties of LogDs, a DC component at the level of hundreds of millivolts that prohibits high-gain signal amplification (SA) could be generated, resulting in unacceptable data acquisition (DAQ) inaccuracies and decoding errors in the process of prototype integration. By generating a reference light at a level similar to the probe light, differential detection can be applied to remove the DC component automatically using a differential amplifier before the DAQ process. Therefore, high-gain SA can be employed to reduce quantization errors. The signal-to-noise ratio of the weak Brillouin gain signal is improved from ~11.5 to ~21.8 dB. A BOTDA prototype is implemented based on the proposed scheme. The experimental results show that the measurement accuracy of the Brillouin frequency shift (BFS) is improved from ±1.9 to ±0.8 MHz at the end of a 40-km sensing fiber.

  2. Component analysis and heavy metal adsorption ability of extracellular polymeric substances (EPS) from sulfate reducing bacteria.

    PubMed

    Yue, Zheng-Bo; Li, Qing; Li, Chuan-chuan; Chen, Tian-hu; Wang, Jin

    2015-10-01

    Extracellular polymeric substances (EPS) play an important role in the treatment of acid mine drainage (AMD) by sulfate-reducing bacteria (SRB). In this paper, Desulfovibrio desulfuricans was used as the test strain to explore the effect of heavy metals on the components and adsorption ability of EPS. Fourier-transform infrared (FTIR) spectroscopy analysis results showed that heavy metals did not influence the type of functional groups of EPS. Potentiometric titration results indicated that the acidic constants (pKa) of the EPS fell into three ranges of 3.5-4.0, 5.9-6.7, and 8.9-9.8. The adsorption site concentrations of the surface functional groups also increased. Adsorption results suggested that EPS had a specific binding affinity for the dosed heavy metal, and that EPS extracted from the Zn(2+)-dosed system had a higher binding affinity for all heavy metals. Additionally, Zn(2+) decreased the inhibitory effects of Cd(2+) and Cu(2+) on the SRB. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Retrospective case-control non-inferiority analysis of intravenous lidocaine in a colorectal surgery enhanced recovery program.

    PubMed

    Naik, Bhiken I; Tsang, Siny; Knisely, Anne; Yerra, Sandeep; Durieux, Marcel E

    2017-01-31

    Enhanced recovery after surgery (ERAS) programs typically utilize multi-modal analgesia to reduce perioperative opioid consumption. Systemic lidocaine is used in several of these ERAS algorithms and has been shown to reduce opioid use after colorectal surgery. However, it is unclear how much the other components of an ERAS protocol contribute to the final outcome. Using a noninferiority analysis, we sought to assess the role of perioperative lidocaine in an ERAS program for colorectal surgery, using pain and opioid consumption as outcomes. We conducted a retrospective review of patients who had received intravenous lidocaine perioperatively during colorectal surgery. We matched them with patients who were managed using a multi-component ERAS protocol, which included perioperative lidocaine. We tested a joint hypothesis of noninferiority of lidocaine infusion to ERAS protocol in postoperative pain scores and opioid consumption. We assigned a noninferiority margin of 1 point (on an 11-point numerical rating scale) difference in pain and a ratio [mean (lidocaine) / mean (ERAS)] of 1.2 in opioid consumption, respectively. Fifty-two patients in the lidocaine group were matched with patients in the ERAS group. With regard to opioid consumption, in the overall [1.68 (1.43-1.98)] [odds ratio (95% confidence interval)] analysis and on postoperative day (POD) 1 [2.38 (1.74-3.31)] lidocaine alone was inferior to multi-modal analgesia. On POD 2 and beyond, although the mean odds ratio for opioid consumption was 1.43 (1.17-1.73), the lower limit extended beyond the pre-defined cut-off of 1.2, rendering the outcome inconclusive. For pain scores, lidocaine is non-inferior to ERAS [-0.17 (-1.08-0.74)] on POD 2 and beyond. Pain scores on POD 1 and in the overall cohort were inconclusive based on the noninferiority analysis. The addition of a multi-component ERAS protocol to intravenous lidocaine incrementally reduces opioid consumption, most evident on POD 1. For pain scores the data are inconclusive on POD 1; however, on POD 2 and beyond, lidocaine alone is non-inferior to an ERAS program with lidocaine. Opioid-related complications, including return of bowel function, were not different between the groups despite reduced opioid use in the ERAS group.

  4. The correlation between biofilm biopolymer composition and membrane fouling in submerged membrane bioreactors.

    PubMed

    Luo, Jinxue; Zhang, Jinsong; Tan, Xiaohui; McDougald, Diane; Zhuang, Guoqiang; Fane, Anthony G; Kjelleberg, Staffan; Cohen, Yehuda; Rice, Scott A

    2014-10-01

    Biofouling, the combined effect of microorganism and biopolymer accumulation, significantly reduces the process efficiency of membrane bioreactors (MBRs). Here, four biofilm components, alpha-polysaccharides, beta-polysaccharides, proteins and microorganisms, were quantified in MBRs. The biomass of each component was positively correlated with the transmembrane pressure increase in MBRs. Proteins were the most abundant biopolymer in biofilms and showed the fastest rate of increase. The spatial distribution and co-localization analysis of the biofouling components indicated at least 60% of the extracellular polysaccharide (EPS) components were associated with the microbial cells when the transmembrane pressure (TMP) entered the jump phase, suggesting that the EPS components were either secreted by the biofilm cells or that the deposition of these components facilitated biofilm formation. It is suggested that biofilm formation and the accumulation of EPS are intrinsically coupled, resulting in biofouling and loss of system performance. Therefore, strategies that control biofilm formation on membranes may result in a significant improvement of MBR performance.

  5. An In-Depth Cost Analysis for New Light-Duty Vehicle ...

    EPA Pesticide Factsheets

    Within the transportation sector, light-duty vehicles are the predominant source of greenhouse gas (GHG) emissions, principally exhaust CO2 and refrigerant leakage from vehicle air conditioners. EPA has contracted with FEV to estimate the costs of technologies that may be employed to reduce these emissions. The purpose of this work is to determine accurate costs for GHG-reducing technologies. This is of paramount importance in setting the appropriate GHG standards. EPA has contracted with FEV to perform this cost analysis through tearing down vehicles, engines and components, both with and without these technologies, and evaluating, part by part, the observed differences in size, weight, materials, machining steps, and other cost-affecting parameters.

  6. Using kinematic reduction for studying grasping postures. An application to power and precision grasp of cylinders.

    PubMed

    Jarque-Bou, N; Gracia-Ibáñez, V; Sancho-Bru, J L; Vergara, M; Pérez-González, A; Andrés, F J

    2016-09-01

    The kinematic analysis of human grasping is challenging because of the high number of degrees of freedom involved. The use of principal component and factorial analyses is proposed in the present study to reduce the hand kinematics dimensionality in the analysis of posture for ergonomic purposes, allowing for a comprehensive study without losing accuracy while also enabling velocity and acceleration analyses to be performed. A laboratory study was designed to analyse the effect of weight and diameter on the grasping posture for cylinders. This study measured the hand posture of six subjects when transporting cylinders of different weights and diameters with precision and power grasps. The hand posture was measured using a Vicon® motion-tracking system, and principal component analysis was applied to reduce the kinematics dimensionality. Different ANOVAs were performed on the reduced kinematic variables to check the effect of the weight and diameter of the cylinders, as well as that of the subject. The results show that the original twenty-three degrees of freedom of the hand were reduced to five, which were identified as digit arching, closeness, palmar arching, finger adduction and thumb opposition. Both cylinder diameter and weight significantly affected the precision grasping posture: diameter affects closeness, palmar arching and opposition, while weight affects digit arching, palmar arching and closeness. The power-grasping posture was mainly affected by the cylinder diameter, through digit arching, closeness and opposition. The grasping posture was largely affected by the subject factor, and this effect could not be attributed only to hand size. In conclusion, this kinematic reduction allowed the effects of cylinder diameter and weight to be identified in a comprehensive way, with diameter being more important than weight. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. A probability index for surface zonda wind occurrence at Mendoza city through vertical sounding principal components analysis

    NASA Astrophysics Data System (ADS)

    Otero, Federico; Norte, Federico; Araneo, Diego

    2018-01-01

    The aim of this work is to obtain an index for predicting the probability of occurrence of a zonda event at surface level from sounding data at Mendoza city, Argentina. To accomplish this goal, surface zonda wind events were first identified with an objective classification method (OCM) using only surface station values. Once the dates and onset times of the events were obtained, the closest prior sounding for each event was used in a principal component analysis (PCA) to identify the leading patterns of the vertical structure of the atmosphere preceding a zonda wind event. These components were used to construct the index model. For the PCA, an entry matrix of temperature (T) and dew point temperature (Td) anomalies for the standard levels between 850 and 300 hPa was built. The analysis yielded six significant components explaining 94% of the variance, and the leading patterns of weather conditions favorable for the development of the phenomenon were obtained. A zonda/non-zonda indicator c can be estimated by multiple logistic regression on the PCA component loadings, yielding a zonda probability index ĉ that is calculable from T and Td profiles and depends on the climatological features of the region. The index showed 74.7% efficiency. The same analysis was performed by adding surface values of T and Td from the Mendoza Aero station, increasing the index efficiency to 87.8%. The results revealed four significantly correlated PCs, with a major improvement in differentiating zonda cases and a reduction of the uncertainty interval.
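
    A hedged sketch of the index construction described above: fit a logistic regression of the zonda/non-zonda label on the retained principal component scores and read the fitted probability as the index ĉ. The sounding matrix, labels, and retained component count are placeholders, not the Mendoza dataset.

    ```python
    # Sketch: PCA on sounding anomalies, then logistic regression of zonda occurrence
    # on the component scores. Data are synthetic placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = rng.standard_normal((300, 14))     # 300 soundings x (7 levels x T and Td anomalies)
    y = rng.integers(0, 2, size=300)       # 1 = a surface zonda event followed, 0 = it did not

    scores = PCA(n_components=6).fit_transform(X)      # six retained components, as in the record

    model = LogisticRegression(max_iter=1000).fit(scores, y)
    c_hat = model.predict_proba(scores)[:, 1]          # zonda probability index for each sounding
    print("mean index on labelled zonda cases:", round(c_hat[y == 1].mean(), 3))
    ```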

  8. Diagnostics of wear in aeronautical systems

    NASA Technical Reports Server (NTRS)

    Wedeven, L. D.

    1979-01-01

    The use of appropriate diagnostic tools for aircraft oil wetted components is reviewed, noting that it can reduce direct operating costs through reduced unscheduled maintenance, particularly in helicopter engine and transmission systems where bearing failures are a significant cost factor. Engine and transmission wear modes are described, and diagnostic methods for oil and wear particle analysis, the spectrometric oil analysis program, chip detectors, ferrography, in-line oil monitors and radioactive isotope tagging are discussed, noting that they are effective over a limited range of particle sizes but complement each other if used in parallel. Fine filtration can potentially increase time between overhauls, but reduces the effectiveness of conventional oil monitoring techniques so that alternative diagnostic techniques must be used. It is concluded that the development of a diagnostic system should be parallel and integral with the development of a mechanical system.

  9. Bipolar electrode selection for a motor imagery based brain computer interface

    NASA Astrophysics Data System (ADS)

    Lou, Bin; Hong, Bo; Gao, Xiaorong; Gao, Shangkai

    2008-09-01

    A motor imagery based brain-computer interface (BCI) provides a non-muscular communication channel that enables people with paralysis to control external devices using their motor imagination. Reducing the number of electrodes is critical to improving the portability and practicability of the BCI system. A novel method is proposed to reduce the number of electrodes to a total of four by finding the optimal positions of two bipolar electrodes. Independent component analysis (ICA) is applied to find the source components of mu and alpha rhythms, and optimal electrodes are chosen by comparing the projection weights of sources on each channel. The results of eight subjects demonstrate the better classification performance of the optimal layout compared with traditional layouts, and the stability of this optimal layout over a one week interval was further verified.
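
    A rough sketch of the selection idea in this record: run ICA on multichannel EEG, pick the component with the strongest mu-band power, and rank channels by the magnitude of that component's mixing weights as candidates for a bipolar pair. The channel count, band limits, and synthetic signals are assumptions, not the authors' data or exact procedure.

    ```python
    # Sketch: ICA on synthetic multichannel EEG, then rank channels by the mixing
    # weights of the most mu-band-dominant component. All signals are placeholders.
    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(4)
    fs, n_ch, n_s = 250, 32, 5000
    eeg = rng.standard_normal((n_s, n_ch))             # placeholder EEG: samples x channels

    ica = FastICA(n_components=10, random_state=0)
    sources = ica.fit_transform(eeg)                   # samples x components
    mixing = ica.mixing_                               # channels x components

    # Score each component by its relative power in the mu band (8-13 Hz).
    f, psd = welch(sources, fs=fs, axis=0)
    mu = (f >= 8) & (f <= 13)
    mu_ratio = psd[mu].sum(axis=0) / psd.sum(axis=0)
    best = int(np.argmax(mu_ratio))

    # Channels with the largest absolute projection weights for that component are
    # candidates for the optimal bipolar electrode pair.
    ranked = np.argsort(np.abs(mixing[:, best]))[::-1]
    print("top channels for component", best, ":", ranked[:4])
    ```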

  10. Untargeted Identification of Wood Type-Specific Markers in Particulate Matter from Wood Combustion.

    PubMed

    Weggler, Benedikt A; Ly-Verdu, Saray; Jennerwein, Maximilian; Sippula, Olli; Reda, Ahmed A; Orasche, Jürgen; Gröger, Thomas; Jokiniemi, Jorma; Zimmermann, Ralf

    2016-09-20

    Residential wood combustion emissions are one of the major global sources of particulate and gaseous organic pollutants. However, the detailed chemical compositions of these emissions are poorly characterized due to their highly complex molecular compositions, nonideal combustion conditions, and sample preparation steps. In this study, the particulate organic emissions from a masonry heater using three types of wood logs, namely, beech, birch, and spruce, were chemically characterized using thermal desorption in situ derivatization coupled to a GCxGC-ToF/MS system. Untargeted data analyses were performed using the comprehensive measurements. Univariate and multivariate chemometric tools, such as analysis of variance (ANOVA), principal component analysis (PCA), and ANOVA simultaneous component analysis (ASCA), were used to reduce the data to highly significant and wood type-specific features. This study reveals substances not previously considered in the literature as meaningful markers for differentiation among wood types.

  11. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
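
    For orientation, the traditional ranking this record criticizes is simply the product of severity, occurrence, and detection ratings (RPN); the sketch below shows that conventional computation with invented failure modes, and deliberately does not reproduce the paper's fuzzy rule-based model.

    ```python
    # Conventional FMECA ranking by RPN = severity x occurrence x detection.
    # The failure modes and ratings are invented for illustration only.
    failure_modes = [
        {"mode": "source positioning error", "S": 9, "O": 3, "D": 4},
        {"mode": "timer malfunction",        "S": 8, "O": 2, "D": 3},
        {"mode": "operator data-entry slip", "S": 7, "O": 5, "D": 6},
    ]
    for fm in failure_modes:
        fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

    for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
        print(f'{fm["mode"]:28s} RPN = {fm["RPN"]}')
    ```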

  12. Nonlocal stability analysis of the MHD Kelvin-Helmholtz instability in a compressible plasma. [solar wind-magnetosphere interaction

    NASA Technical Reports Server (NTRS)

    Miura, A.; Pritchett, P. L.

    1982-01-01

    A general stability analysis is given of the Kelvin-Helmholtz instability, for the case of sheared MHD flow of finite thickness in a compressible plasma which allows for the arbitrary orientation of the magnetic field, velocity flow, and wave vector in the plane perpendicular to the velocity gradient. The stability problem is reduced to the solution of a single second-order differential equation including a gravitational term to represent the coupling between the Kelvin-Helmholtz mode and the interchange mode. Compressibility and a magnetic field component parallel to the flow are found to be stabilizing effects, with destabilization of only the fast magnetosonic mode in the transverse case, and the presence of both Alfven and slow magnetosonic components in the parallel case. Analysis results are used in a discussion of the stability of sheared plasma flow at the magnetopause boundary and in the solar wind.

  13. PM 2.5 and other pollutants -- Reduction of health impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marrack, D.

    The 1990 CAA projected a need to reduce the adverse human health and environmental impacts of exposure to particulates by regulatory reduction of anthropogenic emissions at point and area sources, solely on the basis of mass reductions. Ozone reduction would be achieved by reduction of total VOC and NOx emissions. The assumptions made about the biological effects of ambient air pollution were: that each observed health effect was the consequence of a single measured air pollutant, with pollutants treated as independent entities whose selective reduction would have a specific, identifiable health impact reduction; and that within the regulated classes PM-10, PM-2.5 and VOCs all components have equal biological impacts. Neither of these assumptions appears to be true. If the assumptions are not true, then potentially the same reductions in health impacts could be achieved by reducing the most offensive components, at possibly less cost than that required for reducing them all. Ambient pollutants are a complex matrix of dynamically interacting chemical and particle species, and their interactions continue as they are inhaled. Pollutant measurement systems measure the predominant stable components only. Small amounts of more reactive chemicals and radicals initially present in inhaled air, which contact respiratory tract lining cells and contribute to the bio-effects, are lost by the time pollutant analysis is attempted. Some of the specific anthropogenic emission components contributing to adverse health effects have been identified, and their significant role in triggering cardio-pulmonary dysfunction has now been elucidated. Methods for reducing their presence in emissions from anthropogenic processes, or for reducing their effects, will be considered. Reductions in specific reactive VOC species are another option. The basis for potential actions and their related biological processes will be discussed.

  14. PSD-95 regulates synaptic kainate receptors at mouse hippocampal mossy fiber-CA3 synapses.

    PubMed

    Suzuki, Etsuko; Kamiya, Haruyuki

    2016-06-01

    Kainate-type glutamate receptors (KARs) are the third class of ionotropic glutamate receptors, whose activation leads to unique roles in regulating synaptic transmission and circuit functions. In contrast to AMPA receptors (AMPARs), little is known about the mechanism of synaptic localization of KARs. PSD-95, a major scaffold protein of the postsynaptic density, is a candidate molecule that regulates synaptic KARs. Although PSD-95 was shown to bind directly to KAR subunits, it has not been tested whether PSD-95 regulates synaptic KARs in intact synapses. Using PSD-95 knockout mice, we directly investigated the role of PSD-95 in the KAR-mediated components of synaptic transmission at the hippocampal mossy fiber-CA3 synapse, one of the synapses with the highest density of KARs. Mossy fiber EPSCs consist of an AMPAR-mediated fast component and a KAR-mediated slower component, and the ratio was significantly reduced in PSD-95 knockout mice. The size of the KAR-mediated field EPSP was reduced relative to the size of the fiber volley. Analysis of KAR-mediated miniature EPSCs also suggested reduced synaptic KARs. All the evidence supports critical roles of PSD-95 in regulating synaptic KARs. Copyright © 2015 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.

  15. Intelligent technologies in process of highly-precise products manufacturing

    NASA Astrophysics Data System (ADS)

    Vakhidova, K. L.; Khakimov, Z. L.; Isaeva, M. R.; Shukhin, V. V.; Labazanov, M. A.; Ignatiev, S. A.

    2017-10-01

    One of the main control methods for the surface layer of bearing parts is the eddy current testing method. Surface layer defects of bearing parts, such as burns and cracks, are reflected in the results of rolling-surface scans. The previously developed method for detecting defects from the image of the raceway was quite effective, but the processing algorithm is complicated and takes about 12-16 s. The real non-stationary signals from an eddy current transducer (ECT) consist of short-time high-frequency and long-time low-frequency components; therefore, a transformation that provides different windows for different frequencies is used for their analysis. The wavelet transform meets these conditions. Based on the above, a methodology for automatically detecting and recognizing local defects in the surface layer of bearing parts has been developed on the basis of wavelet analysis using integral estimates. Some of the defects are recognized by the amplitude component; otherwise, an automatic transition to recognition by the phase component of the information signals (IS) is carried out. The use of intelligent technologies in the manufacture of bearing parts will, firstly, significantly improve the quality of bearings and, secondly, significantly improve production efficiency by reducing (eliminating) rejections in the manufacture of products, increasing the period of normal operation of the technological equipment (the inter-adjustment period), implementing a system of flexible facilities maintenance, and reducing production costs.

  16. Functional analysis and treatment of the diurnal bruxism of a 16-year-old girl with autism.

    PubMed

    Armstrong, Amy; Knapp, Vicki Madaus; McAdam, David B

    2014-01-01

    Bruxism is defined as the clenching and grinding of teeth. This study used a functional analysis to examine whether the bruxism of a 16-year-old girl with autism was maintained by automatic reinforcement or social consequences. A subsequent component analysis of the intervention package described by Barnoy, Najdowski, Tarbox, Wilke, and Nollet (2009) showed that a vocal reprimand (e.g., "stop grinding") effectively reduced the participant's bruxism. Results were maintained across time, and effects extended to novel staff members. © Society for the Experimental Analysis of Behavior.

  17. Lax representations for matrix short pulse equations

    NASA Astrophysics Data System (ADS)

    Popowicz, Z.

    2017-10-01

    The Lax representation for different matrix generalizations of Short Pulse Equations (SPEs) is considered. The four-dimensional Lax representations of four-component Matsuno, Feng, and Dimakis-Müller-Hoissen-Matsuno equations are obtained. The four-component Feng system is defined by generalization of the two-dimensional Lax representation to the four-component case. This system reduces to the original Feng equation, to the two-component Matsuno equation, or to the Yao-Zang equation. The three-component version of the Feng equation is presented. The four-component version of the Matsuno equation with its Lax representation is given. This equation reduces to the new two-component Feng system. The two-component Dimakis-Müller-Hoissen-Matsuno equations are generalized to the four-parameter family of the four-component SPE. The bi-Hamiltonian structure of this generalization, for special values of parameters, is defined. This four-component SPE in special cases reduces to the new two-component SPE.

  18. The development of Drink Less: an alcohol reduction smartphone app for excessive drinkers.

    PubMed

    Garnett, Claire; Crane, David; West, Robert; Brown, Jamie; Michie, Susan

    2018-05-04

    Excessive alcohol consumption poses a serious problem for public health. Digital behavior change interventions have the potential to help users reduce their drinking. In accordance with Open Science principles, this paper describes the development of a smartphone app to help individuals who drink excessively to reduce their alcohol consumption. Following the UK Medical Research Council's guidance and the Multiphase Optimization Strategy, development consisted of two phases: (i) selection of intervention components and (ii) design and development work to implement the chosen components into modules to be evaluated further for inclusion in the app. Phase 1 involved a scoping literature review, expert consensus study and content analysis of existing alcohol apps. Findings were integrated within a broad model of behavior change (Capability, Opportunity, Motivation-Behavior). Phase 2 involved a highly iterative process and used the "Person-Based" approach to promote engagement. From Phase 1, five intervention components were selected: (i) Normative Feedback, (ii) Cognitive Bias Re-training, (iii) Self-monitoring and Feedback, (iv) Action Planning, and (v) Identity Change. Phase 2 indicated that each of these components presented different challenges for implementation as app modules; all required multiple iterations and design changes to arrive at versions that would be suitable for inclusion in a subsequent evaluation study. The development of the Drink Less app involved a thorough process of component identification with a scoping literature review, expert consensus, and review of other apps. Translation of the components into app modules required a highly iterative process involving user testing and design modification.

  19. Effects of cumulative illness severity on hippocampal gray matter volume in major depression: a voxel-based morphometry study.

    PubMed

    Zaremba, Dario; Enneking, Verena; Meinert, Susanne; Förster, Katharina; Bürger, Christian; Dohm, Katharina; Grotegerd, Dominik; Redlich, Ronny; Dietsche, Bruno; Krug, Axel; Kircher, Tilo; Kugel, Harald; Heindel, Walter; Baune, Bernhard T; Arolt, Volker; Dannlowski, Udo

    2018-02-08

    Patients with major depression show reduced hippocampal volume compared to healthy controls. However, the contribution of patients' cumulative illness severity to hippocampal volume has rarely been investigated. It was the aim of our study to find a composite score of cumulative illness severity that is associated with hippocampal volume in depression. We estimated hippocampal gray matter volume using 3-tesla brain magnetic resonance imaging in 213 inpatients with acute major depression according to DSM-IV criteria (employing the SCID interview) and 213 healthy controls. Patients' cumulative illness severity was ascertained by six clinical variables via structured clinical interviews. A principal component analysis was conducted to identify components reflecting cumulative illness severity. Regression analyses and a voxel-based morphometry approach were used to investigate the influence of patients' individual component scores on hippocampal volume. Principal component analysis yielded two main components of cumulative illness severity: Hospitalization and Duration of Illness. While the component Hospitalization incorporated information from the intensity of inpatient treatment, the component Duration of Illness was based on the duration and frequency of illness episodes. We could demonstrate a significant inverse association of patients' Hospitalization component scores with bilateral hippocampal gray matter volume. This relationship was not found for Duration of Illness component scores. Variables associated with patients' history of psychiatric hospitalization seem to be accurate predictors of hippocampal volume in major depression and reliable estimators of patients' cumulative illness severity. Future studies should pay attention to these measures when investigating hippocampal volume changes in major depression.

  20. Chemical compositions, chromatographic fingerprints and antioxidant activities of Andrographis Herba.

    PubMed

    Zhao, Yang; Kao, Chun-Pin; Wu, Kun-Chang; Liao, Chi-Ren; Ho, Yu-Ling; Chang, Yuan-Shiun

    2014-11-10

    This paper describes the development of an HPLC-UV-MS method for quantitative determination of andrographolide and dehydroandrographolide in Andrographis Herba and establishment of its chromatographic fingerprint. The method was validated for linearity, limit of detection and quantification, inter- and intra-day precisions, repeatability, stability and recovery. All the validation results of quantitative determination and fingerprinting methods were satisfactory. The developed method was then applied to assay the contents of andrographolide and dehydroandrographolide and to acquire the fingerprints of all the collected Andrographis Herba samples. Furthermore, similarity analysis and principal component analysis were used to reveal the similarities and differences between the samples on the basis of the characteristic peaks. More importantly, the DPPH free radical-scavenging and ferric reducing capacities of the Andrographis Herba samples were assayed. By bivariate correlation analysis, we found that six compounds are positively correlated to DPPH free radical scavenging and ferric reducing capacities, and four compounds are negatively correlated to DPPH free radical scavenging and ferric reducing capacities.

  1. Going beyond Clustering in MD Trajectory Analysis: An Application to Villin Headpiece Folding

    PubMed Central

    Rajan, Aruna; Freddolino, Peter L.; Schulten, Klaus

    2010-01-01

    Recent advances in computing technology have enabled microsecond long all-atom molecular dynamics (MD) simulations of biological systems. Methods that can distill the salient features of such large trajectories are now urgently needed. Conventional clustering methods used to analyze MD trajectories suffer from various setbacks, namely (i) they are not data driven, (ii) they are unstable to noise and changes in cut-off parameters such as cluster radius and cluster number, and (iii) they do not reduce the dimensionality of the trajectories, and hence are unsuitable for finding collective coordinates. We advocate the application of principal component analysis (PCA) and a non-metric multidimensional scaling (nMDS) method to reduce MD trajectories and overcome the drawbacks of clustering. To illustrate the superiority of nMDS over other methods in reducing data and reproducing salient features, we analyze three complete villin headpiece folding trajectories. Our analysis suggests that the folding process of the villin headpiece is structurally heterogeneous. PMID:20419160

  2. Going beyond clustering in MD trajectory analysis: an application to villin headpiece folding.

    PubMed

    Rajan, Aruna; Freddolino, Peter L; Schulten, Klaus

    2010-04-15

    Recent advances in computing technology have enabled microsecond long all-atom molecular dynamics (MD) simulations of biological systems. Methods that can distill the salient features of such large trajectories are now urgently needed. Conventional clustering methods used to analyze MD trajectories suffer from various setbacks, namely (i) they are not data driven, (ii) they are unstable to noise and changes in cut-off parameters such as cluster radius and cluster number, and (iii) they do not reduce the dimensionality of the trajectories, and hence are unsuitable for finding collective coordinates. We advocate the application of principal component analysis (PCA) and a non-metric multidimensional scaling (nMDS) method to reduce MD trajectories and overcome the drawbacks of clustering. To illustrate the superiority of nMDS over other methods in reducing data and reproducing salient features, we analyze three complete villin headpiece folding trajectories. Our analysis suggests that the folding process of the villin headpiece is structurally heterogeneous.
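
    The two records above advocate PCA and non-metric MDS for reducing MD trajectories. The following is a minimal, hedged sketch of that general idea using scikit-learn, assuming the trajectory has already been aligned and flattened into an (n_frames, n_atoms*3) coordinate array; the array sizes and random stand-in coordinates are hypothetical, not villin data.

    ```python
    # Hedged sketch: PCA plus non-metric MDS on a (synthetic) trajectory array.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(500, 3 * 36))            # stand-in for aligned Cartesian coordinates

    # Linear dimension reduction: keep the components explaining most variance.
    pca = PCA(n_components=5)
    pc_proj = pca.fit_transform(frames)                 # (500, 5) candidate collective coordinates
    print("explained variance:", pca.explained_variance_ratio_.round(3))

    # Non-metric MDS preserves only the rank order of pairwise distances,
    # which is what makes it robust to noise in the trajectory.
    nmds = MDS(n_components=2, metric=False, dissimilarity="euclidean", random_state=0)
    embedding = nmds.fit_transform(frames[::10])        # subsample frames for speed
    print("embedding shape:", embedding.shape)
    ```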

  3. Evaluation of automated sample preparation, retention time locked gas chromatography-mass spectrometry and data analysis methods for the metabolomic study of Arabidopsis species.

    PubMed

    Gu, Qun; David, Frank; Lynen, Frédéric; Rumpel, Klaus; Dugardeyn, Jasper; Van Der Straeten, Dominique; Xu, Guowang; Sandra, Pat

    2011-05-27

    In this paper, automated sample preparation, retention time locked gas chromatography-mass spectrometry (GC-MS) and data analysis methods for the metabolomics study were evaluated. A miniaturized and automated derivatisation method using sequential oximation and silylation was applied to a polar extract of 4 types (2 types×2 ages) of Arabidopsis thaliana, a popular model organism often used in plant sciences and genetics. Automation of the derivatisation process offers excellent repeatability, and the time between sample preparation and analysis was short and constant, reducing artifact formation. Retention time locked (RTL) gas chromatography-mass spectrometry was used, resulting in reproducible retention times and GC-MS profiles. Two approaches were used for data analysis. XCMS followed by principal component analysis (approach 1) and AMDIS deconvolution combined with a commercially available program (Mass Profiler Professional) followed by principal component analysis (approach 2) were compared. Several features that were up- or down-regulated in the different types were detected. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Design and optimization of the CFRP mirror components

    NASA Astrophysics Data System (ADS)

    Wei, Lei; Zhang, Lei; Gong, Xiaoxue

    2017-09-01

    Because carbon fiber reinforced polymer (CFRP) has recently been developed and demonstrated as an effective material for lightweight telescope reflectors, the authors have extended its application to the design and fabrication of lightweight space camera mirrors. Through CFRP composite laminate design and optimization using finite element method (FEM) analysis, a spherical mirror with a diameter of φ316 mm, whose core-cell reinforcement is an isogrid configuration, was fabricated. Compared with the traditional approach of applying ultra-low-expansion glass (ULE) to the CFRP mirror surface, nickel electroplating of the surface effectively reduces the processing cost and difficulty of the CFRP mirror. FEM analysis shows that the first-order resonance frequency of the CFRP mirror components reaches 652.3 Hz. Under gravity loading combined with a +5°C temperature rise, the root-mean-square (RMS) surface-shape error with the optical axis horizontal is 5.74 nm, which meets the mechanical and optical requirements of the mirror components for the space camera.

  5. Autonomous learning in gesture recognition by using lobe component analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented through pattern recognition (PR). To ensure robot safety when gestures are used for robot control, the interface must be implemented reliably and accurately. As in other PR applications, the performance of gesture recognition depends largely on 1) feature selection (or model establishment) and 2) training from samples. For 1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to static arm gestures, and arm movement is not considered; these restrictions reduce misrecognition and are not unreasonable. For 2), a new biologically inspired network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, which correspond to high-probability concentrations of the neuronal input, are orientation-selective cells governed by the Hebbian rule and lateral inhibition. Because the LCA method balances learning between global and local features, large numbers of samples can be used efficiently in training.

  6. Improving the Reliability of Technological Subsystems Equipment for Steam Turbine Unit in Operation

    NASA Astrophysics Data System (ADS)

    Brodov, Yu. M.; Murmansky, B. E.; Aronson, R. T.

    2017-11-01

    The authors present an integrated approach to improving the reliability of the steam turbine unit (STU), together with examples of its implementation for the various STU technological subsystems. Based on statistical analysis of damage to individual turbine parts and components, on the development and application of modern repair methods and technologies, and on operational monitoring techniques, the critical components and elements of the equipment are identified and priorities are proposed for improving the reliability of STU equipment in operation. Results are presented from the analysis of malfunctions of equipment in various STU technological subsystems, operating as part of power units and at cross-linked thermal power plants, that resulted in turbine unit shutdown (failure). Proposals are formulated and justified for adjusting the maintenance and repair of turbine components and parts, condenser unit equipment, the regeneration subsystem, and the oil supply system; these measures increase operational reliability, reduce the cost of STU maintenance and repair, and optimize the timing and scope of repairs.

  7. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
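
    As a rough illustration of the fitting scheme sketched in the abstract above (principal components derived from SO2-free spectra plus an SO2 Jacobian, solved in one least-squares step), the snippet below uses entirely synthetic spectra; the array shapes, noise level, and linear-ramp Jacobian are invented and this is not the operational OMI code.

    ```python
    # Hedged sketch of the PCA-plus-Jacobian fitting idea, with synthetic data.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_wavelengths = 120
    background = rng.normal(size=(2000, n_wavelengths))   # SO2-free spectra (e.g., log radiance)
    jacobian = np.linspace(0.0, 1.0, n_wavelengths)        # dN/dSCD from a radiative-transfer model

    # 1. Principal components capture non-SO2 variability (physics + artifacts).
    pca = PCA(n_components=8).fit(background)
    pcs = pca.components_                                   # (8, n_wavelengths)

    # 2. Fit a measured spectrum as mean + PC combination + SCD * Jacobian.
    true_scd = 2.5
    measured = pca.mean_ + 0.3 * pcs[0] + true_scd * jacobian + 0.01 * rng.normal(size=n_wavelengths)

    design = np.column_stack([pcs.T, jacobian])             # unknowns: PC weights and SCD
    coef, *_ = np.linalg.lstsq(design, measured - pca.mean_, rcond=None)
    print("retrieved SCD:", coef[-1])                       # last coefficient is the SO2 column
    ```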

  8. Advanced oxide dispersion strengthened sheet alloys for improved combustor durability

    NASA Technical Reports Server (NTRS)

    Henricks, R. J.

    1981-01-01

    Burner design modifications that will take advantage of the improved creep and cyclic oxidation resistance of oxide dispersion strengthened (ODS) alloys while accommodating the reduced fatigue properties of these materials were evaluated based on preliminary analysis and life predictions, on construction and repair feasibility, and on maintenance and direct operating costs. Two designs - the film cooled, segmented louver and the transpiration cooled, segmented twin wall - were selected for low cycle fatigue (LCF) component testing. Detailed thermal and structural analysis of these designs established the strain range and temperature at critical locations, resulting in predicted lives of 10,000 cycles for MA 956 alloy. The ODS alloys, MA 956 and HDA 8077, demonstrated a 167 C (300 F) temperature advantage over Hastelloy X alloy in creep strength and oxidation resistance. The MA 956 alloy was selected for mechanical property and component test evaluations. The MA 956 alloy was superior to Hastelloy X in LCF component testing of the film cooled, segmented louver design.

  9. Some Interesting Applications of Probabilistic Techniques in Structural Dynamic Analysis of Rocket Engines

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.

    2014-01-01

    Numerical and analytical methods were developed to determine damage accumulation in specific engine components when speed variation is included. The Dither Life Ratio (DLR) was shown to be well over a factor of 2 for a specific example. The steady-state assumption was shown to be accurate for most turbopump cases, allowing rapid calculation of DLR. If hot-fire speed data are unknown, a Monte Carlo method was developed that uses speed statistics for similar engines. Application of these techniques allows the analyst to reduce both uncertainty and excess conservatism, and high values of DLR could allow a previously unacceptable part to pass HCF criteria without redesign. Given the benefit and ease of implementation, it is recommended that any finite-life turbomachine component analysis adopt these techniques. Probability values were calculated, compared, and evaluated for several industry-proposed methods for combining random and harmonic loads. Two new Excel macros were written to calculate the combined load for any specific probability level, and closed-form curve fits were generated for the widely used 3-sigma and 2-sigma probability levels. For the design of lightweight aerospace components, obtaining an accurate, reproducible, statistically meaningful answer is critical.

  10. Retrieval analysis of ceramic-coated metal-on-polyethylene total hip replacements.

    PubMed

    Khatkar, Harman; Hothi, Harry; de Villiers, Danielle; Lausmann, Christian; Kendoff, Daniel; Gehrke, Thorsten; Skinner, John; Hart, Alister

    2017-06-01

    Ceramic coatings have been used in metal-on-polyethylene (MOP) hips to reduce the risk of wear and also infection; the clinical efficacy of this remains unclear. This retrieval study sought to better understand the performance of coated bearing surfaces. Forty-three coated MOP components were analysed post-retrieval for evidence of coating loss and gross polyethylene wear. Coating loss was graded using a visual semi-quantitative protocol. Evidence of gross polyethylene wear was determined by radiographic analysis and visual inspection of the retrieved implants. All components with gross polyethylene wear (n = 10) were revised due to a malfunctioning acetabular component; 35 % (n = 15) of implants exhibited visible coating loss and the incidence of polyethylene wear in samples with coating loss was 54 %, significantly (p = 0.02) elevated compared to samples with intact coatings (14 %). In this study we found evidence of coating loss on metal femoral heads which was associated with increased wear of the corresponding polyethylene acetabular cups.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Kathryn D.

    Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geology repository concepts. A proof of principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogenous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)

  12. Residual stress prediction in a powder bed fusion manufactured Ti6Al4V hip stem

    NASA Astrophysics Data System (ADS)

    Barrett, Richard A.; Etienne, Titouan; Duddy, Cormac; Harrison, Noel M.

    2017-10-01

    Powder bed fusion (PBF) is a category of additive manufacturing (AM) that is particularly suitable for the production of 3D metallic components. In PBF, only material in the current build layer is at the required melt temperature, with the previously melted and solidified layers reducing in temperature, thus generating a significant thermal gradient within the metallic component, particularly for laser based PBF components. The internal thermal stresses are subsequently relieved in a post-processing heat-treatment step. Failure to adequately remove these stresses can result in cracking and component failure. A prototype hip stem was manufactured from Ti6Al4V via laser PBF but was found to have fractured during overseas shipping. This study examines the evolution of thermal stresses during the laser PBF manufacturing and heat treatment processes of the hip stem in a 2D finite element analysis (FEA) and compares it to an electron beam PBF process. A custom written script for the automatic conversion of a gross geometry finite element model into a thin layer-by-layer finite element model was developed. The build process, heat treatment (for laser PBF) and the subsequent cooling were simulated at the component level. The results demonstrate the effectiveness of the heat treatment in reducing PBF induced thermal stresses, and the concentration of stresses in the region that fractured.

  13. Structural Connectivity Changes Underlying Altered Working Memory Networks in Mild Cognitive Impairment: A Three-Way Image Fusion Analysis.

    PubMed

    Teipel, Stefan; Ehlers, Inga; Erbe, Anna; Holzmann, Carsten; Lau, Esther; Hauenstein, Karlheinz; Berger, Christoph

    2015-01-01

    Working memory impairment is among the earliest signs of cognitive decline in Alzheimer's disease (AD) and mild cognitive impairment (MCI). We aimed to study the functional and structural substrate of working memory impairment in early AD dementia and MCI. We studied a group of 12 MCI and AD subjects compared to 12 age- and gender-matched healthy elderly controls using diffusion tensor imaging (DTI), and functional magnetic resonance imaging (fMRI) during a 2-back versus 1-back letter recognition task. We performed a three-way image fusion analysis with joint independent component analysis of cortical activation during working memory, and DTI derived measures of fractional anisotropy (FA) and the mode of anisotropy. We found significant hypoactivation in posterior brain areas and relative hyperactivation in anterior brain areas during working memory in AD/MCI subjects compared to controls. Corresponding independent components from DTI data revealed reduced FA and reduced mode of anisotropy in intracortical projecting fiber tracts with posterior predominance and increased FA and increased mode along the corticospinal tract in AD/MCI compared to controls. Our findings suggest that impairments of structural fiber tract integrity accompany breakdown of posterior and relatively preserved anterior cortical activation during working memory performance in MCI/AD subjects. Copyright © 2014 by the American Society of Neuroimaging.

  14. Multidisciplinary Tool for Systems Analysis of Planetary Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2011-01-01

    Systems analysis of a planetary entry (SAPE), descent, and landing (EDL) is a multidisciplinary activity in nature. SAPE improves the performance of the systems analysis team by automating and streamlining the process, and this improvement can reduce the errors that stem from manual data transfer among discipline experts. SAPE is a multidisciplinary tool for systems analysis of planetary EDL for Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Titan. It performs EDL systems analysis for any planet, operates cross-platform (i.e., Windows, Mac, and Linux operating systems), uses existing software components and open-source software to avoid software licensing issues, performs low-fidelity systems analysis in one hour on a computer that is comparable to an average laptop, and keeps discipline experts in the analysis loop. SAPE uses Python, a platform-independent, open-source language, for integration and for the user interface. Development has relied heavily on the object-oriented programming capabilities that are available in Python. Modules are provided to interface with commercial and government off-the-shelf software components (e.g., thermal protection systems and finite-element analysis). SAPE currently includes the following analysis modules: geometry, trajectory, aerodynamics, aerothermal, thermal protection system, and interface for structural sizing.

  15. Observation and Analysis of In Situ Carbonaceous Matter in Naklha. Part 2

    NASA Technical Reports Server (NTRS)

    Gibson, E. K., Jr.; Clemett, S. J.; Thomas-Kerpta, K. L.; McKay, D. S.; Wentworth, S. J.; Robert, F.; Verchovsky, A. B.; Wright, I. P.; Pillinger, C. T.; Rice, T.

    2006-01-01

    The search for indigenous carbon components on Mars has been a challenge. The first attempt was the Viking GC-MS in situ experiment which gave inconclusive results at two sites on Mars [1]. After the discovery that the SNC meteorites were from Mars [2], [3-5] reported C isotopic compositional information which suggested a reduced C component present in the martian meteorites. [6 & 7] reported the presence of reduced C components (i.e., polycyclic aromatic hydrocarbons) associated with the carbonate globules in ALH84001. Jull et al. [8] noted in Nakhla there was an acid insoluble C component present with more than 75% of its C lacking any C-14, which is modern-day carbon. This C fraction was believed to be either indigenous martian or ancient meteoritic carbon. Fisk et al. [9, 10] have shown textural evidence along with C-enriched areas within fractures in Nakhla and ALH84001. To further understand the nature of possible indigenous reduced C components, we have carried out a variety of measurements on martian meteorites. For this presentation we will discuss only the Nakhla results. Interior samples from the Nakhla SNC meteorite, recently made available by the British Museum of Natural History, were analyzed. Petrographic examination [11, McKay et al., this volume] of Nakhla showed evidence of fractures (approx.0.5 micron wide) filled with dark brown to black dendritic material [Fig. 1] with characteristics similar to those observed by [10]. Iddingsite is also present along fractures in olivine. Fracture filling and dendritic material was examined by SEM-EDX, TEM-EDX, Focused Electron Beam microscopy, Laser Raman Spectroscopy, Nano-SIMS Ion Micro-probe, and Stepped-Combustion Static Mass Spectrometry.

  16. Structural Analysis of Pressurized Small Diameter Lines in a Random Vibration Environment

    NASA Technical Reports Server (NTRS)

    Davis, Mark; Ridnour, Andrew; Brethen, Mark

    2011-01-01

    The pressurization and propellant feed lines for the Ares 1 Upper Stage Reaction and Roll Control Systems (ReCS and RoCS) were required to be in a high g-load random vibration flight environment. The lines connected the system components and were filled with both liquid hydrazine and gaseous helium. They are considered small and varied between one fourth and one inch in diameter. The random vibration of the lines was considered to be base excitation through the mating components and mounting hardware. It was found that reducing the amount of support structure for the lines added flexibility to the system and improved the line stresses from random vibration, but caused higher stresses from the static g-loads. The locations and number of brackets were optimized by analyzing the mode shapes of the lines causing high stresses. The use of brackets that only constrain motion in the direction of concern further reduced the stresses in the lines. Finite element analysis was used to perform the analysis. The lines were pre-stressed by temperature and internal pressure with fluid and insulation included as non-structural mass. Base excitation was added to the model using Power Spectral Density (PSD) data for the expected flight loads. The random vibration and static g-load cases were combined to obtain the total stress in the lines. This approach advances the state of the art in line analysis by using FEA to predict the stresses in the lines and to optimize the entire system based on the expected flight environment. Adding flexibility to lines has been used in piping systems for temperature loads, but in flight environments flexibility has been limited by the static stresses. Adding flexibility to the system in a flight environment by reducing brackets has the benefit of reducing both stresses and weight.

  17. [CoCuMnOx Photocatalyzed Oxidation of Multi-component VOCs and Kinetic Analysis].

    PubMed

    Meng, Hai-long; Bo, Long-li; Liu, Jia-dong; Gao, Bo; Feng, Qi-qi; Tan, Na; Xie, Shuai

    2016-05-15

    Solar energy absorption coating CoCuMnOx was prepared by a co-precipitation method and applied to photodegrade multi-component VOCs including toluene, ethyl acetate and acetone under visible light irradiation. The photocatalytic oxidation performance of toluene, ethyl acetate and acetone was analyzed and the reaction kinetics of the VOCs were investigated synchronously. The research indicated that removal rates of single-component toluene, ethyl acetate and acetone were 57%, 62% and 58% respectively under conditions of 400 mg · m⁻³ initial concentration, 120 mm illumination distance, 1 g/350 cm² dosage of CoCuMnOx and 6 h of irradiation time by a 100 W tungsten halogen lamp. Due to the competition among different VOCs, removal efficiencies in the three-component mixture were reduced by 5%-26% as compared with the single VOCs. Degradation processes of the single-component VOC and three-component VOCs both fitted pseudo-first-order reaction kinetics, and the kinetic constants of toluene, ethyl acetate and acetone were 0.002, 0.0028 and 0.00233 min⁻¹ respectively under single-component conditions. Reaction rates of VOCs in the three-component mixture were 0.49-0.88 times those of the single components.
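
    For readers unfamiliar with pseudo-first-order fitting, the short example below shows how a rate constant of the magnitude quoted above can be recovered from concentration-time data by a linear fit of ln(C0/C) against t; the concentrations are simulated for illustration, not the published measurements.

    ```python
    # Hedged illustration: for C(t) = C0 * exp(-k t), the slope of ln(C0/C) vs t is k.
    import numpy as np

    t = np.array([0, 60, 120, 180, 240, 300, 360], dtype=float)   # min
    c0 = 400.0                                                     # mg m^-3 (stand-in value)
    k_true = 0.0028                                                # min^-1 (stand-in value)
    c = c0 * np.exp(-k_true * t)

    # Linear least-squares fit; the slope estimates the rate constant.
    k_fit, intercept = np.polyfit(t, np.log(c0 / c), 1)
    print(f"fitted k = {k_fit:.4f} min^-1")                        # ~0.0028
    ```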

  18. Structural and mechanical heterogeneity of the erythrocyte membrane reveals hallmarks of membrane stability.

    PubMed

    Picas, Laura; Rico, Félix; Deforet, Maxime; Scheuring, Simon

    2013-02-26

    The erythrocyte membrane, a metabolically regulated active structure that comprises lipid molecules, junctional complexes, and the spectrin network, enables the cell to undergo large passive deformations when passing through the microvascular system. Here we use atomic force microscopy (AFM) imaging and quantitative mechanical mapping at nanometer resolution to correlate structure and mechanics of key components of the erythrocyte membrane, crucial for cell integrity and function. Our data reveal structural and mechanical heterogeneity modulated by the metabolic state at unprecedented nanometer resolution. ATP-depletion, reducing skeletal junction phosphorylation in RBC cells, leads to membrane stiffening. Analysis of ghosts and shear-force opened erythrocytes show that, in the absence of cytosolic kinases, spectrin phosphorylation results in membrane stiffening at the extracellular face and a reduced junction remodeling in response to loading forces. Topography and mechanical mapping of single components at the cytoplasmic face reveal that, surprisingly, spectrin phosphorylation by ATP softens individual filaments. Our findings suggest that, besides the mechanical signature of each component, the RBC membrane mechanics is regulated by the metabolic state and the assembly of its structural elements.

  19. Characterization of an activation-tagged mutant uncovers a role of GLABRA2 in anthocyanin biosynthesis in Arabidopsis

    DOE PAGES

    Wang, Xiaoyu; Wang, Xianling; Hu, Qingnan; ...

    2015-06-17

    In Arabidopsis, anthocyanin biosynthesis is controlled by a MYB-bHLH-WD40 (MBW) transcriptional activator complex. The MBW complex activates the transcription of late biosynthesis genes in the flavonoid pathway, leading to the production of anthocyanins. A similar MBW complex regulates epidermal cell fate by activating the transcription of GLABRA2 (GL2), a homeodomain transcription factor required for trichome formation in shoots and non-hair cell formation in roots. Here we provide experimental evidence to show that GL2 also plays a role in regulating anthocyanin biosynthesis in Arabidopsis. From an activation-tagged mutagenized population of Arabidopsis plants, we isolated a dominant, gain-of-function mutant with reduced anthocyanins. Molecular cloning revealed that this phenotype is caused by an elevated expression of GL2, thus the mutant was named gl2-1D. Consistent with the view that GL2 acts as a negative regulator of anthocyanin biosynthesis, gl2-1D seedlings accumulated less whereas gl2-3 seedlings accumulated more anthocyanins in response to sucrose. Gene expression analysis indicated that expression of late, but not early, biosynthesis genes in the flavonoid pathway was dramatically reduced in gl2-1D but elevated in gl2-3 mutants. Further analysis showed that expression of some MBW component genes involved in the regulation of late biosynthesis genes was reduced in gl2-1D but elevated in gl2-3 mutants, and chromatin immunoprecipitation results indicated that some MBW component genes are targets of GL2. We also showed that GL2 functions as a transcriptional repressor. Altogether, these results indicate that GL2 negatively regulates anthocyanin biosynthesis in Arabidopsis by directly repressing the expression of some MBW component genes.

  20. Characterization of an activation-tagged mutant uncovers a role of GLABRA2 in anthocyanin biosynthesis in Arabidopsis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaoyu; Wang, Xianling; Hu, Qingnan

    In Arabidopsis, anthocyanin biosynthesis is controlled by a MYB-bHLH-WD40 (MBW) transcriptional activator complex. The MBW complex activates the transcription of late biosynthesis genes in the flavonoid pathway, leading to the production of anthocyanins. A similar MBW complex regulates epidermal cell fate by activating the transcription of GLABRA2 (GL2), a homeodomain transcription factor required for trichome formation in shoots and non-hair cell formation in roots. Here we provide experimental evidence to show that GL2 also plays a role in regulating anthocyanin biosynthesis in Arabidopsis. From an activation-tagged mutagenized population of Arabidopsis plants, we isolated a dominant, gain-of-function mutant with reduced anthocyanins. Molecular cloning revealed that this phenotype is caused by an elevated expression of GL2, thus the mutant was named gl2-1D. Consistent with the view that GL2 acts as a negative regulator of anthocyanin biosynthesis, gl2-1D seedlings accumulated less whereas gl2-3 seedlings accumulated more anthocyanins in response to sucrose. Gene expression analysis indicated that expression of late, but not early, biosynthesis genes in the flavonoid pathway was dramatically reduced in gl2-1D but elevated in gl2-3 mutants. Further analysis showed that expression of some MBW component genes involved in the regulation of late biosynthesis genes was reduced in gl2-1D but elevated in gl2-3 mutants, and chromatin immunoprecipitation results indicated that some MBW component genes are targets of GL2. We also showed that GL2 functions as a transcriptional repressor. Altogether, these results indicate that GL2 negatively regulates anthocyanin biosynthesis in Arabidopsis by directly repressing the expression of some MBW component genes.

  1. Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; King, Matt; Dai, Wujiao

    2018-05-01

    Spatially-correlated common mode error always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are common mode error (CME). An average reduction of ~40% in the RMS values was achieved in both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. The ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS observed velocities and four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of the various GIA model predictions.
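
    A hedged sketch of the kind of ICA-based spatiotemporal filtering described above: decompose stacked station time series, flag components whose spatial responses are nearly uniform across stations, and subtract them. The synthetic data, the number of components, and the 0.5 uniformity threshold are assumptions for illustration, not the authors' processing choices.

    ```python
    # Minimal ICA common-mode filtering sketch with synthetic station data.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    n_epochs, n_stations = 1500, 12
    cme = np.cumsum(rng.normal(size=n_epochs))          # a common-mode signal
    local = rng.normal(size=(n_epochs, n_stations))     # station-specific noise
    series = local + np.outer(cme, rng.uniform(0.8, 1.2, n_stations))

    ica = FastICA(n_components=5, random_state=0, whiten="unit-variance")
    sources = ica.fit_transform(series)                 # temporal components
    mixing = ica.mixing_                                 # (n_stations, n_components) spatial responses

    # Heuristic: a component is common-mode if its spatial response has
    # similar magnitude (and sign) at essentially all stations.
    norm = mixing / np.abs(mixing).max(axis=0)
    is_cme = np.abs(norm).min(axis=0) > 0.5
    filtered = series - sources[:, is_cme] @ mixing[:, is_cme].T
    print("flagged common-mode components:", np.where(is_cme)[0])
    ```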

  2. Texture-dependent motion signals in primate middle temporal area

    PubMed Central

    Gharaei, Saba; Tailby, Chris; Solomon, Selina S; Solomon, Samuel G

    2013-01-01

    Neurons in the middle temporal (MT) area of primate cortex provide an important stage in the analysis of visual motion. For simple stimuli such as bars and plaids some neurons in area MT – pattern cells – seem to signal motion independent of contour orientation, but many neurons – component cells – do not. Why area MT supports both types of receptive field is unclear. To address this we made extracellular recordings from single units in area MT of anaesthetised marmoset monkeys and examined responses to two-dimensional images with a large range of orientations and spatial frequencies. Component and pattern cell response remained distinct during presentation of these complex spatial textures. Direction tuning curves were sharpest in component cells when a texture contained a narrow range of orientations, but were similar across all neurons for textures containing all orientations. Response magnitude of pattern cells, but not component cells, increased with the spatial bandwidth of the texture. In addition, response variability in all neurons was reduced when the stimulus was rich in spatial texture. Fisher information analysis showed that component cells provide more informative responses than pattern cells when a texture contains a narrow range of orientations, but pattern cells had more informative responses for broadband textures. Component cells and pattern cells may therefore coexist because they provide complementary and parallel motion signals. PMID:24000175

  3. Analysis of the Laser Drilling Process for the Combination with a Single-Lip Deep Hole Drilling Process with Small Diameters

    NASA Astrophysics Data System (ADS)

    Biermann, Dirk; Heilmann, Markus

    Due to the trend toward downsizing of components, the industrial relevance of bore holes with small diameters and high length-to-diameter ratios also rises with the growing requirements on parts. In these applications, the combination of laser pre-drilling and single-lip deep hole drilling can shorten the process chain in machining components with non-planar surfaces, or can reduce tool wear in machining case-hardened materials. In this research, the combination of these processes was realized and investigated for the first time.

  4. Histological method for evaluation of the efficiency of Enerlit-Clima.

    PubMed

    Gol'dshtein, D V; Vikhlyantseva, E V; Sakharova, N K; Maevskii, E I; Pogorelov, A G; Uchitel', M L

    2004-08-01

    We propose a method of evaluation of anticlimacteric efficiency of a drug by its effect on the estrous cycle. The study was carried out on 9-month-old mice with retained, but notably reduced reproductive function. Analysis of the cell components of the estrous cycle was carried out on histological preparations of vaginal smears.

  5. High variable mixture ratio oxygen/hydrogen engine

    NASA Technical Reports Server (NTRS)

    Erickson, C. M.; Tu, W. H.; Weiss, A. H.

    1988-01-01

    The ability of an O2/H2 engine to operate over a range of high-propellant mixture ratios was previously shown to be advantageous in single stage to orbit (SSTO) vehicles. The results are presented for the analysis of high-performance engine power cycles operating over propellant mixture ratio ranges of 12 to 6 and 9 to 6. A requirement to throttle up to 60 percent of nominal thrust was superimposed as a typical throttle range to limit vehicle acceleration as propellant is expended. The object of the analysis was to determine areas of concern relative to component and engine operability or potential hazards resulting from the operating requirements and ranges of conditions that derive from the overall engine requirements. The SSTO mission necessitates a high-performance, lightweight engine. Therefore, staged combustion power cycles employing either dual fuel-rich preburners or dual mixed (fuel-rich and oxygen-rich) preburners were examined. Engine mass flow and power balances were made and major component operating ranges were defined. Component size and arrangement were determined through engine layouts for one of the configurations evaluated. Each component is being examined to determine if there are areas of concern with respect to component efficiency, operability, reliability, or hazard. The effects of reducing the maximum chamber pressure were investigated for one of the cycles.

  6. Use of Principal Components Analysis to Explain Controls on Nutrient Fluxes to the Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Rice, K. C.; Mills, A. L.

    2017-12-01

    The Chesapeake Bay watershed, on the east coast of the United States, encompasses about 166,000-square kilometers (km2) of diverse land use, which includes a mixture of forested, agricultural, and developed land. The watershed is now managed under a Total Daily Maximum Load (TMDL), which requires implementation of management actions by 2025 that are sufficient to reduce nitrogen, phosphorus, and suspended-sediment fluxes to the Chesapeake Bay and restore the bay's water quality. We analyzed nutrient and sediment data along with land-use and climatic variables in nine sub watersheds to better understand the drivers of flux within the watershed and to provide relevant management implications. The nine sub watersheds range in area from 300 to 30,000 km2, and the analysis period was 1985-2014. The 31 variables specific to each sub watershed were highly statistically significantly correlated, so Principal Components Analysis was used to reduce the dimensionality of the dataset. The analysis revealed that about 80% of the variability in the whole dataset can be explained by discharge, flux, and concentration of nutrients and sediment. The first two principal components (PCs) explained about 68% of the total variance. PC1 loaded strongly on discharge and flux, and PC2 loaded on concentration. The PC scores of both PC1 and PC2 varied by season. Subsequent analysis of PC1 scores versus PC2 scores, broken out by sub watershed, revealed management implications. Some of the largest sub watersheds are largely driven by discharge, and consequently large fluxes. In contrast, some of the smaller sub watersheds are more variable in nutrient concentrations than discharge and flux. Our results suggest that, given no change in discharge, a reduction in nutrient flux to the streams in the smaller watersheds could result in a proportionately larger decrease in fluxes of nutrients down the river to the bay, than in the larger watersheds.

  7. Assessment of the LC-2 Prelaunch Fatigue Spectra of the CM-to-SM Flange Weld

    NASA Technical Reports Server (NTRS)

    Dawicke, David S.; Newman, John A.

    2008-01-01

    The pad stay and rollout components of the Ares I-X life cycle can generate cyclic stress oscillations to the vehicle that could initiate and grow fatigue cracks from weld defects. The Ares I-X Project requested that a study be performed to determine if stabilization of the vehicle is required to reduce the stresses that could initiate and grow fatigue cracks at the flange-to-skin weld of the Command Module (CM) and Service Module (SM) interface. A fatigue crack growth analysis was conducted that used loads (LC-2) and stress analyses developed by the Ares I-X Project and utilized material data and analysis methods developed by a critical initial flaw size (CIFS) analysis conducted by NASA Engineering and Safety Center (NESC) for the Upper Stage Simulator (USS) of the Ares I-X vehicle. A full CIFS analysis for the CM-to-SM flange-to-skin weld was not performed because the full flight spectrum was not provided and was not necessary to answer the question posed by the Ares I-X Project. Instead, an approach was developed to determine if the crack growth due to the pad stay and rollout components of the flight spectrum would adversely influence the CIFS. The approach taken used a number of conservative assumptions that eliminated the need for high-fidelity analyses and additional material testing, but still provided a bounding solution for the uncertainties of the problem. The results from this analysis indicate that the LC-2 pad stay and rollout spectrum components would not produce significant fatigue crack growth on the CM-to-SM flange-to-skin weld. Thus, from a fatigue crack growth standpoint, no stabilization is required to reduce the LC-2 pad stay and rollout cyclic stresses on the CM-to-SM flange-to-skin weld.

  8. High wear resistance of femoral components coated with titanium nitride: a retrieval analysis.

    PubMed

    Fabry, Christian; Zietz, Carmen; Baumann, Axel; Ehall, Reinhard; Bader, Rainer

    2017-05-20

    The objective of this study was to evaluate the in vivo wear resistance of cobalt-chromium femoral components coated with titanium nitride (TiN). Our null hypothesis was that the surface damage and the thickness of the TiN coating do not correlate with the time in vivo. Twenty-five TiN-coated bicondylar femoral retrievals with a mean implantation period of 30.7 ± 11.7 months were subjected to an objective surface damage analysis with a semi-quantitative assessment method. A visual examination of scratches, indentations, notches and coating breakthroughs of the surfaces was performed. The roughness and the coating thickness of the TiN coating were evaluated in the main articulation regions. Narrow scratches and indentations in the range of low flexion angles on the retrieval surfaces were the most common modes of damage. There was no evidence of delamination on the articulation surface but rather at the bottom of isolated severe indentations or notches. An analysis of three retrievals revealed a coating breakthrough in the patellofemoral joint region, resulting from patella maltracking and a dislocation. The arithmetical mean roughness of the TiN surface slightly increased with the implantation period. In contrast, the maximum peak height of the roughness profile was reduced at the condyles of the retrieved components in comparison with new, unused surfaces. No significant association between the coating thickness and implantation period was determined. Moreover, the measured values were retained in the range of the initial coating thickness even after several years of in vivo service. As was demonstrated by the results of this study, the surface damage to the TiN coating did not deteriorate with the implantation period. The calculated damage scores and the measured coating thickness in particular both confirmed that the TiN coating provides low wear rates. Our findings support the use of wear-resistant TiN-coated components in total knee arthroplasty with the objective of reducing the risk of aseptic loosening. However, in terms of TiN-coated femoral components, particular attention should be paid to a correct patellar tracking in order to avoid wear propagation at the implant.

  9. Principal Component Analysis for Enhancement of Infrared Spectra Monitoring

    NASA Astrophysics Data System (ADS)

    Haney, Ricky Lance

    The issue of air quality within the aircraft cabin is receiving increasing attention from both pilot and flight attendant unions. This is due to exposure events caused by poor air quality that in some cases may have contained toxic oil components due to bleed air that flows from outside the aircraft and then through the engines into the aircraft cabin. Significant short and long-term medical issues for aircraft crew have been attributed to exposure. The need for air quality monitoring is especially evident in the fact that currently within an aircraft there are no sensors to monitor the air quality and potentially harmful gas levels (detect-to-warn sensors), much less systems to monitor and purify the air (detect-to-treat sensors) within the aircraft cabin. The specific purpose of this research is to utilize a mathematical technique called principal component analysis (PCA) in conjunction with principal component regression (PCR) and proportionality constant calculations (PCC) to simplify complex, multi-component infrared (IR) spectra data sets into a reduced data set used for determination of the concentrations of the individual components. Use of PCA can significantly simplify data analysis as well as improve the ability to determine concentrations of individual target species in gas mixtures where significant band overlap occurs in the IR spectrum region. Application of this analytical numerical technique to IR spectrum analysis is important in improving performance of commercial sensors that airlines and aircraft manufacturers could potentially use in an aircraft cabin environment for multi-gas component monitoring. The approach of this research is two-fold, consisting of a PCA application to compare simulation and experimental results with the corresponding PCR and PCC to determine quantitatively the component concentrations within a mixture. The experimental data sets consist of both two and three component systems that could potentially be present as air contaminants in an aircraft cabin. In addition, experimental data sets are analyzed for a hydrogen peroxide (H2O2) aqueous solution mixture to determine H2O2 concentrations at various levels that could be produced during use of a vapor phase hydrogen peroxide (VPHP) decontamination system. After the PCA application to two and three component systems, the analysis technique is further expanded to include the monitoring of potential bleed air contaminants from engine oil combustion. Simulation data sets created from database spectra were utilized to predict gas components and concentrations in unknown engine oil samples at high temperatures as well as time-evolved gases from the heating of engine oils.
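
    The record above combines PCA with principal component regression (PCR) to estimate gas concentrations from overlapping IR spectra. Below is a minimal sketch of generic PCR with scikit-learn on synthetic spectra; the pure-component spectra, mixture concentrations, and number of retained components are placeholders, not the author's calibration data.

    ```python
    # Hedged PCR sketch: PCA compresses spectra, regression maps scores to concentrations.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    wavenumbers = 400
    pure = rng.random((3, wavenumbers))                  # stand-in pure-component spectra (3 gases)
    conc_train = rng.random((60, 3))                     # known training concentrations
    spectra_train = conc_train @ pure + 0.01 * rng.normal(size=(60, wavenumbers))

    pcr = make_pipeline(PCA(n_components=5), LinearRegression())
    pcr.fit(spectra_train, conc_train)

    unknown = np.array([[0.2, 0.5, 0.3]]) @ pure          # a "measured" mixture spectrum
    print("estimated concentrations:", pcr.predict(unknown).round(2))
    ```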

  10. Computer-Delivered Interventions to Reduce College Student Drinking: A Meta-Analysis

    PubMed Central

    Carey, Kate B.; Scott-Sheldon, Lori A. J.; Elliott, Jennifer C.; Bolles, Jamie R.; Carey, Michael P.

    2009-01-01

    Aims This meta-analysis evaluates the efficacy and moderators of computer-delivered interventions (CDIs) to reduce alcohol use among college students. Methods We included 35 manuscripts with 43 separate interventions, and calculated both between-group and within-group effect sizes for alcohol consumption and alcohol-related problems. Effect sizes were calculated for short-term (≤ 5 weeks) and longer-term (≥ 6 weeks) intervals. All studies were coded for study descriptors, participant characteristics, and intervention components. Results The effects of CDIs depended on the nature of the comparison condition: CDIs reduced quantity and frequency measures relative to assessment-only controls, but rarely differed from comparison conditions that included alcohol content. Small-to-medium within-group effect sizes can be expected for CDIs at short- and longer-term follow-ups; these changes are less than or equivalent to the within-group effect sizes observed for more intensive interventions. Conclusions CDIs reduce the quantity and frequency of drinking among college students. CDIs are generally equivalent to alternative alcohol-related comparison interventions. PMID:19744139
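
    For orientation, the snippet below shows how between-group and within-group standardized mean differences of the kind pooled in this meta-analysis are typically computed; the drinking scores are invented, and the formulas are the common Cohen's d variants, not necessarily the exact estimators used by the authors.

    ```python
    # Hedged numerical example of standardized mean differences (invented data).
    import math

    def cohens_d_between(m1, sd1, n1, m2, sd2, n2):
        """Between-group d using the pooled standard deviation."""
        pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    def cohens_d_within(m_pre, m_post, sd_pre):
        """Within-group d: pre-post change scaled by the baseline SD."""
        return (m_pre - m_post) / sd_pre

    # CDI group drinks less at follow-up than assessment-only controls:
    print(round(cohens_d_between(12.0, 8.0, 150, 15.0, 9.0, 150), 2))   # ~ -0.35
    # Within the CDI group, drinks per week fall from baseline:
    print(round(cohens_d_within(15.0, 12.0, 9.0), 2))                    # ~ 0.33
    ```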

  11. Analysis and comparison of sleeping posture classification methods using pressure sensitive bed system.

    PubMed

    Hsia, C C; Liou, K J; Aung, A P W; Foo, V; Huang, W; Biswas, J

    2009-01-01

    Pressure ulcers are common problems for bedridden patients. Caregivers need to reposition the sleeping posture of a patient every two hours in order to reduce the risk of getting ulcers. This study presents the use of kurtosis and skewness estimation, principal component analysis (PCA) and support vector machines (SVMs) for sleeping posture classification using a cost-effective pressure-sensitive mattress that can help caregivers to make correct sleeping posture changes for the prevention of pressure ulcers.
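
    A hedged sketch of the pipeline named above (kurtosis/skewness features, PCA, SVM), using random stand-in pressure maps and toy posture labels; the feature definitions and classifier settings are assumptions for illustration only.

    ```python
    # Toy posture-classification pipeline: statistical features -> PCA -> SVM.
    import numpy as np
    from scipy.stats import kurtosis, skew
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    n_frames = 200
    frames = rng.random((n_frames, 32, 64))              # stand-in pressure maps (rows x cols)
    labels = rng.integers(0, 3, size=n_frames)           # 0=supine, 1=left, 2=right (toy labels)

    def features(frame):
        rows = frame.sum(axis=1)                          # pressure profile along the body axis
        cols = frame.sum(axis=0)
        return [kurtosis(rows), skew(rows), kurtosis(cols), skew(cols), frame.mean(), frame.std()]

    X = np.array([features(f) for f in frames])
    clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```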

  12. Efficient 3-D finite element failure analysis of compression loaded angle-ply plates with holes

    NASA Technical Reports Server (NTRS)

    Burns, S. W.; Herakovich, C. T.; Williams, J. G.

    1987-01-01

    Finite element stress analysis and the tensor polynomial failure criterion predict that failure always initiates at the interface between layers on the hole edge for notched angle-ply laminates loaded in compression. The angular location of initial failure is a function of the fiber orientation in the laminate. The dominant stress components initiating failure are shear. It is shown that approximate symmetry can be used to reduce the computer resources required for the case of uniaxial loading.

  13. Associations between Caries among Children and Household Sugar Procurement, Exposure to Fluoridated Water and Socioeconomic Indicators in the Brazilian Capital Cities

    PubMed Central

    Gonçalves, Michele Martins; Leles, Cláudio Rodrigues; Freire, Maria do Carmo Matias

    2013-01-01

    The objective of this ecological study was to investigate the association between caries experience in 5- and 12-year-old Brazilian children in 2010 and household sugar procurement in 2003 and the effects of exposure to water fluoridation and socioeconomic indicators. Sample units were all 27 Brazilian capital cities. Data were obtained from the National Surveys of Oral Health; the National Household Food Budget Survey; and the United Nations Program for Development. Data analysis included correlation coefficients, exploratory factor analysis, and linear regression. There were significant negative associations between caries experience and procurement of confectionery, fluoridated water, HDI, and per capita income. Procurement of confectionery and soft drinks was positively associated with HDI and per capita income. Exploratory factor analysis grouped the independent variables by reducing highly correlated variables into two uncorrelated component variables that explained 86.1% of total variance. The first component included income, HDI, water fluoridation, and procurement of confectionery, while the second included free sugar and procurement of soft drinks. Multiple regression analysis showed that caries is associated with the first component. Caries experience was associated with better socioeconomic indicators of a city and exposure to fluoridated water, which may affect the impact of sugars on the disease. PMID:24307900

  14. Dimensionality reduction for the quantitative evaluation of a smartphone-based Timed Up and Go test.

    PubMed

    Palmerini, Luca; Mellone, Sabato; Rocchi, Laura; Chiari, Lorenzo

    2011-01-01

    The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Lately instrumented versions of the test are being considered, where inertial sensors assess motion. To improve the pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to the locomotor performance, a dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components) which are not redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (i.e. healthy-Parkinson's disease, fallers-non fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone.
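
    The two-step reduction described above (PCA on the full parameter set, then selection of original parameters by their correlation with the leading components) can be sketched as follows; the parameter names and data are hypothetical stand-ins for smartphone-derived test measures.

    ```python
    # Hedged sketch: PCA followed by correlation-based selection of original parameters.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    names = ["total_time", "sit_to_stand_time", "turn_duration",
             "gait_rms_ap", "gait_rms_ml", "step_regularity"]      # hypothetical parameters
    X = StandardScaler().fit_transform(rng.normal(size=(49, len(names))))

    pca = PCA(n_components=2).fit(X)
    scores = pca.transform(X)

    # Keep, per retained component, the original parameter that correlates best with it.
    for k in range(scores.shape[1]):
        r = [abs(np.corrcoef(X[:, j], scores[:, k])[0, 1]) for j in range(len(names))]
        print(f"PC{k + 1}: best parameter = {names[int(np.argmax(r))]} (|r| = {max(r):.2f})")
    ```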

  15. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  16. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
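
    As a generic, hedged illustration of the kind of low-rank factor model discussed in the two patent records above, the snippet below factors a synthetic spectral image into a few non-negative spectral and abundance components using scikit-learn's NMF. It does not implement the patented spatial-simplicity constraints; it only shows the shape of the problem for an image whose pixels are each dominated by one phase.

    ```python
    # Generic non-negative factorization of a synthetic, spatially simple spectral image.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(8)
    n_pixels, n_channels, n_phases = 64 * 64, 200, 3
    true_spectra = rng.random((n_phases, n_channels))

    # "Spatially simple" ground truth: each pixel is dominated by a single phase.
    abundance = np.zeros((n_pixels, n_phases))
    abundance[np.arange(n_pixels), rng.integers(0, n_phases, n_pixels)] = 1.0
    cube = abundance @ true_spectra + 0.01 * rng.random((n_pixels, n_channels))

    model = NMF(n_components=n_phases, init="nndsvda", max_iter=500, random_state=0)
    est_abundance = model.fit_transform(cube)        # per-pixel concentrations
    est_spectra = model.components_                  # per-component spectra
    print("reconstruction error:", round(model.reconstruction_err_, 3))
    ```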

  17. The facial massage reduced anxiety and negative mood status, and increased sympathetic nervous activity.

    PubMed

    Hatayama, Tomoko; Kitamura, Shingo; Tamura, Chihiro; Nagano, Mayumi; Ohnuki, Koichiro

    2008-12-01

    The aim of this study was to clarify the effects of 45 min of facial massage on the activity of autonomic nervous system, anxiety and mood in 32 healthy women. Autonomic nervous activity was assessed by heart rate variability (HRV) with spectral analysis. In the spectral analysis of HRV, we evaluated the high-frequency components (HF) and the low- to high-frequency ratio (LF/HF ratio), reflecting parasympathetic nervous activity and sympathetic nervous activity, respectively. The State Trait Anxiety Inventory (STAI) and the Profile of Mood Status (POMS) were administered to evaluate psychological status. The score of STAI and negative scale of POMS were significantly reduced following the massage, and only the LF/HF ratio was significantly enhanced after the massage. It was concluded that the facial massage might refresh the subjects by reducing their psychological distress and activating the sympathetic nervous system.

  18. Biomechanical comparison of component position and hardware failure in the reverse shoulder prosthesis.

    PubMed

    Gutiérrez, Sergio; Greiwe, R Michael; Frankle, Mark A; Siegal, Steven; Lee, William E

    2007-01-01

    There has been renewed interest in reverse shoulder arthroplasty for the treatment of glenohumeral arthritis with concomitant rotator cuff deficiency. Failure of the prosthesis at the glenoid attachment site remains a concern. The purpose of this study was to examine glenoid component stability with regard to the angle of implantation. This investigation entailed a biomechanical analysis to evaluate forces and micromotion in glenoid components attached to 12 polyurethane blocks at -15 degrees, 0 degrees, and +15 degrees of superior and inferior tilt. The 15 degrees inferior tilt had the most uniform compressive forces and the least amount of tensile forces and micromotion when compared with the 0 degrees and 15 degrees superiorly tilted baseplate. Our results suggest that implantation with an inferior tilt will reduce the incidence of mechanical failure of the glenoid component in a reverse shoulder prosthesis.

  19. Reducing the Read Noise of the James Webb Space Telescope Near Infrared Spectrograph Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Rauscher, Bernard; Arendt, Richard G.; Fixsen, D. J.; Lindler, Don; Loose, Markus; Moseley, S. H.; Wilson, D. V.

    2012-01-01

    We describe a Wiener optimal approach to using the reference output and reference pixels that are built into Teledyne's HAWAII-2RG detector arrays. In this way, we are reducing the total noise per approximately 1000 second 88 frame up-the-ramp dark integration from about 6.5 e- rms to roughly 5 e- rms. Using a principal components analysis formalism, we achieved these noise improvements without altering the hardware in any way. In addition to being lower, the noise is also cleaner with much less visible correlation. For example, the faint horizontal banding that is often seen in HAWAII-2RG images is almost completely removed. Preliminary testing suggests that the relative gains are even higher when using non flight grade components. We believe that these techniques are applicable to most HAWAII-2RG based instruments.
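
    As a much simplified illustration of optimally weighting a reference channel before subtraction (the Wiener-style idea mentioned above), the snippet below derives a least-squares weight from the covariance between a science stream and a reference stream; the noise model and scalar weight are assumptions, and this is neither the flight pipeline nor the full principal-components formalism.

    ```python
    # Simplified reference-channel subtraction with a least-squares (Wiener-like) weight.
    import numpy as np

    rng = np.random.default_rng(6)
    n_samples = 4096
    common_noise = rng.normal(size=n_samples)                    # drift seen by both channels
    science = 0.9 * common_noise + 0.5 * rng.normal(size=n_samples)
    reference = common_noise + 0.3 * rng.normal(size=n_samples)  # reference output sees no sky signal

    c = np.cov(science, reference)                               # 2x2 sample covariance
    w = c[0, 1] / c[1, 1]                                        # optimal scalar weight
    cleaned = science - w * reference

    print("rms before:", science.std().round(3), "after:", cleaned.std().round(3))
    ```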

  20. Analysis of switching surges generated by current interruption in an energy-storage coil

    NASA Astrophysics Data System (ADS)

    Chowdhuri, P.

    1981-10-01

    The transient voltages which are generated when the current in a large magnetic energy storage coil is interrupted by a dc vacuum circuit breaker are analyzed. The effect of the various parameters in the circuit on the transient voltage is discussed. The self inductance of the dump resistor must be minimized to control the generated transient. Contrary to general belief, a capacitor across the coil is not an effective surge suppressor. In fact, the capacitor may excite oscillations of higher magnitude. However, a capacitor, in addition to a surge suppressor, may be used to modify the frequency components of the transient voltage so that these frequency components are not coincident with the natural frequencies of the coil. Otherwise, resonant oscillations inside the coil may attain damaging magnitudes. The capacitor would also reduce the steepness of the wavefront of the transient across the coil, thus reducing the nonlinear voltage distribution inside the coil.
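
    A back-of-the-envelope sketch of the two limiting cases discussed above (dump resistor alone versus capacitor alone); all component values are illustrative assumptions, not parameters from the paper.

```python
# Sketch of transient magnitudes when a coil current is interrupted.
import numpy as np

I0 = 10e3          # interrupted coil current, A (assumed)
L  = 0.5           # coil self-inductance, H (assumed)
R  = 2.0           # dump resistor, ohm (assumed)
C  = 10e-6         # suppression capacitor, F (assumed)

# Dump resistor alone: the coil current is forced through R, so the initial
# voltage across the coil is I0*R, decaying with time constant L/R.
v_peak_resistor = I0 * R
tau = L / R

# Capacitor alone: energy exchange 0.5*L*I0**2 = 0.5*C*V**2 gives a much larger
# peak (the "higher magnitude" oscillations the abstract warns about), ringing
# at the natural frequency of the L-C loop.
v_peak_capacitor = I0 * np.sqrt(L / C)
f_ring = 1.0 / (2.0 * np.pi * np.sqrt(L * C))

print(f"dump resistor: peak ~{v_peak_resistor / 1e3:.1f} kV, tau = {tau * 1e3:.0f} ms")
print(f"capacitor only: peak ~{v_peak_capacitor / 1e3:.0f} kV, ringing at {f_ring:.0f} Hz")
```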

  1. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
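
    The core idea of representing a depth-variant PSF stack with a few principal components can be sketched as follows; the Gaussian PSFs, sizes, and component count are illustrative assumptions, with NumPy/scikit-learn assumed.

```python
# Sketch: low-dimensional PCA representation of a stack of depth-variant PSFs.
import numpy as np
from sklearn.decomposition import PCA

size, depths = 33, 40
yy, xx = np.mgrid[-size // 2 + 1:size // 2 + 1, -size // 2 + 1:size // 2 + 1]
sigmas = np.linspace(1.5, 4.0, depths)            # aberration grows with depth (toy model)
psfs = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in sigmas])
psfs /= psfs.sum(axis=(1, 2), keepdims=True)      # normalize each PSF

X = psfs.reshape(depths, -1)                      # one flattened PSF per row
pca = PCA(n_components=4).fit(X)
X_hat = pca.inverse_transform(pca.transform(X))   # PSFs rebuilt from 4 components

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error with 4 components: {err:.2e}")
```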

  2. The development and exploratory analysis of the Back Pain Attitudes Questionnaire (Back-PAQ).

    PubMed

    Darlow, Ben; Perry, Meredith; Mathieson, Fiona; Stanley, James; Melloh, Markus; Marsh, Reginald; Baxter, G David; Dowell, Anthony

    2014-05-23

    To develop an instrument to assess attitudes and underlying beliefs about back pain, and subsequently investigate its internal consistency and underlying structures. The instrument was developed by a multidisciplinary team of clinicians and researchers based on analysis of qualitative interviews with people experiencing acute and chronic back pain. Exploratory analysis was conducted using data from a population-based cross-sectional survey. Qualitative interviews with community-based participants and subsequent postal survey. Instrument development informed by interviews with 12 participants with acute back pain and 11 participants with chronic back pain. Data for exploratory analysis collected from New Zealand residents and citizens aged 18 years and above. 1000 participants were randomly selected from the New Zealand Electoral Roll. 602 valid responses were received. The 34-item Back Pain Attitudes Questionnaire (Back-PAQ) was developed. Internal consistency was evaluated by the Cronbach α coefficient. Exploratory analysis investigated the structure of the data using Principal Component Analysis. The 34-item long form of the scale had acceptable internal consistency (α=0.70; 95% CI 0.66 to 0.73). Exploratory analysis identified five two-item principal components which accounted for 74% of the variance in the reduced data set: 'vulnerability of the back'; 'relationship between back pain and injury'; 'activity participation while experiencing back pain'; 'prognosis of back pain' and 'psychological influences on recovery'. Internal consistency was acceptable for the reduced 10-item scale (α=0.61; 95% CI 0.56 to 0.66) and the identified components (α between 0.50 and 0.78). The 34-item long form of the scale may be appropriate for use in future cross-sectional studies. The 10-item short form may be appropriate for use as a screening tool, or an outcome assessment instrument. Further testing of the 10-item Back-PAQ's construct validity, reliability, responsiveness to change and predictive ability needs to be conducted. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
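
    The two analyses reported above, Cronbach's alpha for internal consistency and PCA for exploratory structure, can be sketched as below; the response matrix is a random placeholder, not Back-PAQ data, and NumPy/scikit-learn are assumed.

```python
# Sketch: internal consistency (Cronbach's alpha) and exploratory PCA on item responses.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_respondents, n_items = 600, 34
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)  # 5-point items

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

pca = PCA(n_components=5).fit(responses)
print("variance explained by the first 5 components:",
      pca.explained_variance_ratio_.round(3))
```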

  3. Autonomous Aerobraking Using Thermal Response Surface Analysis

    NASA Technical Reports Server (NTRS)

    Prince, Jill L.; Dec, John A.; Tolson, Robert H.

    2007-01-01

    Aerobraking is a proven method of significantly increasing the science payload that can be placed into low Mars orbits when compared to an all propulsive capture. However, the aerobraking phase is long and has mission cost and risk implications. The main cost benefit is that aerobraking permits the use of a smaller and cheaper launch vehicle, but additional operational costs are incurred during the long aerobraking phase. Risk is increased due to the repeated thermal loading of spacecraft components and the multiple attitude and propulsive maneuvers required for successful aerobraking. Both the cost and risk burdens can be significantly reduced by automating the aerobraking operations phase. All of the previous Mars orbiter missions that have utilized aerobraking have increasingly relied on onboard calculations during aerobraking. Even though the temperature of spacecraft components has been the limiting factor, operational methods have relied on using a surrogate variable for mission control. This paper describes several methods, based directly on spacecraft component maximum temperature, for autonomously predicting the subsequent aerobraking orbits and prescribing apoapsis propulsive maneuvers to maintain the spacecraft within specified temperature limits. Specifically, this paper describes the use of thermal response surface analysis in predicting the temperature of the spacecraft components and the corresponding uncertainty in this temperature prediction.

  4. Reduced order model based on principal component analysis for process simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Y.; Malacina, A.; Biegler, L.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
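
    A hedged sketch of a PCA-based reduced order model of the kind described: the CFD "snapshots" are replaced by a cheap analytic field, and the input-to-coefficient map is a simple ridge regression; the function names, sampling ranges, and component count are assumptions, and the Aspen Plus/FLUENT coupling is outside the scope of the sketch.

```python
# Sketch: offline snapshot collection + PCA basis + regression = cheap online ROM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 500)                      # 1-D stand-in for a CFD mesh

def expensive_model(inlet_temp, flow_rate):
    """Placeholder for a CFD run returning a field over the mesh."""
    return inlet_temp * np.exp(-flow_rate * x) + 5.0 * np.sin(np.pi * x * flow_rate)

# Offline stage: sample the input domain and collect snapshots.
inputs = rng.uniform([300.0, 0.5], [400.0, 3.0], size=(60, 2))
snapshots = np.array([expensive_model(*p) for p in inputs])

pca = PCA(n_components=4).fit(snapshots)            # reduced basis
coeffs = pca.transform(snapshots)
rom = Ridge(alpha=1e-6).fit(inputs, coeffs)         # map inputs -> PCA coefficients

# Online stage: evaluate the ROM at a new operating point and compare to "truth".
new_point = np.array([[350.0, 1.7]])
field_rom = pca.inverse_transform(rom.predict(new_point))[0]
field_cfd = expensive_model(*new_point[0])
rel_err = np.linalg.norm(field_rom - field_cfd) / np.linalg.norm(field_cfd)
print(f"relative ROM error at the new operating point: {rel_err:.3e}")
```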

  5. Modeling and Analysis of the Hurricane Imaging Radiometer (HIRAD)

    NASA Technical Reports Server (NTRS)

    Mauro, Stephanie

    2013-01-01

    The Hurricane Imaging Radiometer (HIRad) is a payload carried by an unmanned aerial vehicle (UAV) at altitudes up to 60,000 ft with the purpose of measuring ocean surface wind speeds and near ocean surface rain rates in hurricanes. The payload includes several components that must maintain steady temperatures throughout the flight. Minimizing the temperature drift of these components allows for accurate data collection and conclusions to be drawn concerning the behavior of hurricanes. HIRad has flown on several different UAVs over the past two years during the fall hurricane season. Based on the data from the 2011 flight, a Thermal Desktop model was created to simulate the payload and reproduce the temperatures. Using this model, recommendations were made to reduce the temperature drift through the use of heaters controlled by resistance temperature detector (RTD) sensors. The suggestions made were implemented for the 2012 hurricane season and further data was collected. The implementation of the heaters reduced the temperature drift for a portion of the flight, but after a period of time, the temperatures rose. With this new flight data, the thermal model was updated and correlated. Detailed analysis was conducted to determine a more effective way to reduce the temperature drift. The final recommendations made were to adjust the set temperatures of the heaters for 2013 flights and implement hardware changes for flights beyond 2013.

  6. Thermal Modeling and Analysis of the Hurricane Imaging Radiometer (HIRad)

    NASA Technical Reports Server (NTRS)

    Mauro, Stephanie

    2013-01-01

    The Hurricane Imaging Radiometer (HIRad) is a payload carried by an unmanned aerial vehicle (UAV) at altitudes up to 60,000 ft with the purpose of measuring ocean surface wind speeds and near ocean surface rain rates in hurricanes. The payload includes several components that must maintain steady temperatures throughout the flight. Minimizing the temperature drift of these components allows for accurate data collection and conclusions to be drawn concerning the behavior of hurricanes. HIRad has flown on several different UAVs over the past two years during the fall hurricane season. Based on the data from the 2011 flight, a Thermal Desktop model was created to simulate the payload and reproduce the temperatures. Using this model, recommendations were made to reduce the temperature drift through the use of heaters controlled by resistance temperature detector (RTD) sensors. The suggestions made were implemented for the 2012 hurricane season and further data was collected. The implementation of the heaters reduced the temperature drift for a portion of the flight, but after a period of time, the temperatures rose. With this new flight data, the thermal model was updated and correlated. Detailed analysis was conducted to determine a more effective way to reduce the temperature drift. The final recommendations made were to adjust the set temperatures of the heaters for 2013 flights and implement hardware changes for flights beyond 2013.

  7. Factors Underlying Bursting Behavior in a Network of Cultured Hippocampal Neurons Exposed to Zero Magnesium

    PubMed Central

    Mangan, Patrick S.; Kapur, Jaideep

    2010-01-01

    Factors contributing to reduced magnesium-induced neuronal action potential bursting were investigated in primary hippocampal cell culture at high and low culture density. In nominally zero external magnesium medium, pyramidal neurons from high-density cultures produced recurrent spontaneous action potential bursts superimposed on prolonged depolarizations. These bursts were partially attenuated by the NMDA receptor antagonist D-APV. Pharmacological analysis of miniature excitatory postsynaptic currents (EPSCs) revealed 2 components: one sensitive to D-APV and another to the AMPA receptor antagonist DNQX. The components were kinetically distinct. Participation of NMDA receptors in reduced magnesium-induced synaptic events was supported by the localization of the NR1 subunit of the NMDA receptor with the presynaptic vesicular protein synaptophysin. Presynaptically, zero magnesium induced a significant increase in EPSC frequency likely attributable to increased neuronal hyperexcitability induced by reduced membrane surface charge screening. Mean quantal content was significantly increased in zero magnesium. Cells from low-density cultures did not exhibit action potential bursting in zero magnesium but did show increased EPSC frequency. Low-density neurons had less synaptophysin immunofluorescence and fewer active synapses as determined by FM1-43 analysis. These results demonstrate that multiple factors are involved in network bursting. Increased probability of transmitter release presynaptically, enhanced NMDA receptor-mediated excitability postsynaptically, and extent of neuronal interconnectivity contribute to initiation and maintenance of elevated network excitability. PMID:14534286

  8. Shuttle filter study. Volume 2: Contaminant generation and sensitivity studies

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Contaminant generation studies were conducted at the component level using two different methods, radioactive tracer technique and gravimetric analysis test procedure. Both of these were reduced to practice during this program. In the first of these methods, radioactively tagged components typical of those used in spacecraft were studied to determine their contaminant generation characteristics under simulated operating conditions. Because the purpose of the work was: (1) to determine the types and quantities of contaminants generated; and (2) to evaluate improved monitoring and detection schemes, no attempt was made to evaluate or qualify specific components. The components used in this test program were therefore not flight hardware items. Some of them had been used in previous tests; some were obsolete; one was an experimental device. In addition to the component tests, various materials of interest to contaminant and filtration studies were irradiated and evaluated for use as autotracer materials. These included test dusts, plastics, valve seat materials, and bearing cage materials.

  9. Directional connectivity of resting state human fMRI data using cascaded ICA-PDC analysis.

    PubMed

    Silfverhuth, Minna J; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Veijola, Juha; Tervonen, Osmo; Kiviniemi, Vesa

    2011-11-01

    Directional connectivity measures, such as partial directed coherence (PDC), give us means to explore effective connectivity in the human brain. By utilizing independent component analysis (ICA), the original data set was reduced for further PDC analysis. The aim was to test this cascaded ICA-PDC approach in causality studies of human functional magnetic resonance imaging (fMRI) data. Resting state group data was imaged from 55 subjects using a 1.5 T scanner (TR 1800 ms, 250 volumes). Temporal concatenation group ICA within a probabilistic ICA framework and further repeatability runs (n = 200) were undertaken. The reduced data set included the time series presentation of the following nine ICA components: secondary somatosensory cortex, inferior temporal gyrus, intracalcarine cortex, primary auditory cortex, amygdala, putamen and the frontal medial cortex, posterior cingulate cortex and precuneus, comprising the default mode network components. Re-normalized PDC (rPDC) values were computed to determine directional connectivity at the group level at each frequency. An integrative role was suggested for the precuneus, while the primary auditory cortex and the amygdala appeared to act as the major divergence regions. This study demonstrates the potential of the cascaded ICA-PDC approach in directional connectivity studies of human fMRI data.

  10. Dynamic analysis of Space Shuttle/RMS configuration using continuum approach

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Taylor, Lawrence W., Jr.

    1994-01-01

    The initial assembly of Space Station Freedom involves the Space Shuttle, its Remote Manipulation System (RMS) and the evolving Space Station Freedom. The dynamics of this coupled system involves both the structural and the control system dynamics of each of these components. The modeling and analysis of such an assembly is made even more formidable by kinematic and joint nonlinearities. The current practice of modeling such flexible structures is to use finite element modeling, in which the mass and interior dynamics are ignored between thousands of nodes for each major component. Only tens of the thousands of calculated modes are retained in the model. The components are then connected by approximating the boundary conditions and inserting the control system dynamics. In this paper continuum models are used instead of finite element models because of the improved accuracy, reduced number of model parameters, the avoidance of model order reduction, and the ability to represent the structural and control system dynamics in the same system of equations. Dynamic analysis of linear versions of the model is performed and compared with finite element model results. Additionally, the transfer matrix approach to continuum modeling is presented.

  11. Technical note: validation of a motion analysis system for measuring the relative motion of the intermediate component of a tripolar total hip arthroplasty prosthesis.

    PubMed

    Chen, Qingshan; Lazennec, Jean Yves; Guyen, Olivier; Kinbrum, Amy; Berry, Daniel J; An, Kai-Nan

    2005-07-01

    The tripolar total hip arthroplasty (THA) prosthesis has been suggested as a method to reduce the occurrence of hip dislocation and microseparation. Precisely measuring the motion of the intermediate component in vitro would provide fundamental knowledge for understanding its mechanism. The present study validates the accuracy and repeatability of a three-dimensional motion analysis system to quantitatively measure the relative motion of the intermediate component of tripolar total hip arthroplasty prostheses. Static and dynamic validations of the system were made by comparing the measurement to that of a potentiometer. Differences between the mean system-calculated angle and the angle measured by the potentiometer were within ±1°. The mean within-trial variability was less than 1°. The mean slope was 0.9-1.02 for different angular velocities. The dynamic noise was within 1°. The system was then applied to measure the relative motion of an eccentric THA prosthesis. The study shows that this motion analysis system provides an accurate and practical method for measuring the relative motion of the tripolar THA prosthesis in vitro, a necessary first step towards the understanding of its in vivo kinematics.

  12. Diffusion Modelling Reveals the Decision Making Processes Underlying Negative Judgement Bias in Rats.

    PubMed

    Hales, Claire A; Robinson, Emma S J; Houghton, Conor J

    2016-01-01

    Human decision making is modified by emotional state. Rodents exhibit similar biases during interpretation of ambiguous cues that can be altered by affective state manipulations. In this study, the impact of negative affective state on judgement bias in rats was measured using an ambiguous-cue interpretation task. Acute treatment with an anxiogenic drug (FG7142), and chronic restraint stress and social isolation both induced a bias towards more negative interpretation of the ambiguous cue. The diffusion model was fit to behavioural data to allow further analysis of the underlying decision making processes. To uncover the way in which parameters vary together in relation to affective state manipulations, independent component analysis was conducted on rate of information accumulation and distances to decision threshold parameters for control data. Results from this analysis were applied to parameters from negative affective state manipulations. These projected components were compared to control components to reveal the changes in decision making processes that are due to affective state manipulations. Negative affective bias in rodents induced by either FG7142 or chronic stress is due to a combination of more negative interpretation of the ambiguous cue, reduced anticipation of the high reward and increased anticipation of the low reward.

  13. Urea, the most abundant component in urine, cross-reacts with a commercial 8-OH-dG ELISA kit and contributes to overestimation of urinary 8-OH-dG.

    PubMed

    Song, Ming-Fen; Li, Yun-Shan; Ootsuyama, Yuko; Kasai, Hiroshi; Kawai, Kazuaki; Ohta, Masanori; Eguchi, Yasumasa; Yamato, Hiroshi; Matsumoto, Yuki; Yoshida, Rie; Ogawa, Yasutaka

    2009-07-01

    Urinary 8-OH-dG is commonly analyzed as a marker of oxidative stress. For its analysis, ELISA and HPLC methods are generally used, although discrepancies in the data obtained by these methods have often been discussed. To clarify this problem, we fractionated human urine by reverse-phase HPLC and assayed each fraction by the ELISA method. In addition to the 8-OH-dG fraction, a positive reaction was observed in the first eluted fraction. The components in this fraction were examined by the ELISA. Urea was found to be the responsible component in this fraction. Urea is present in high concentrations in the urine of mice, rats, and humans, and its level is influenced by many factors. Therefore, certain improvements, such as a correction based on urea content or urease treatment, are required for the accurate analysis of urinary 8-OH-dG by the ELISA method. In addition, performance of the ELISA at 4 degrees C reduced the recognition of urea considerably and improved the 8-OH-dG analysis.

  14. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers

    NASA Astrophysics Data System (ADS)

    Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément

    2015-07-01

    3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
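
    Components (iii) and (iv) above, feature selection followed by an efficient classifier, can be illustrated with a short scikit-learn pipeline; the feature matrix and labels are random stand-ins, and the selector/classifier choices are assumptions rather than the paper's exact configuration.

```python
# Sketch: feature selection + classification on per-point geometric features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_points, n_features = 5000, 21                    # 21 geometric features per 3D point
X = rng.standard_normal((n_points, n_features))
y = rng.integers(0, 5, n_points)                   # 5 semantic classes (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(SelectKBest(f_classif, k=10),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X_tr, y_tr)
# With random labels the score is near chance; real features would do better.
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```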

  15. Prediction of Knee Joint Contact Forces From External Measures Using Principal Component Prediction and Reconstruction.

    PubMed

    Saliba, Christopher M; Clouthier, Allison L; Brandon, Scott C E; Rainbow, Michael J; Deluzio, Kevin J

    2018-05-29

    Abnormal loading of the knee joint contributes to the pathogenesis of knee osteoarthritis. Gait retraining is a non-invasive intervention that aims to reduce knee loads by providing audible, visual, or haptic feedback of gait parameters. The computational expense of joint contact force prediction has limited real-time feedback to surrogate measures of the contact force, such as the knee adduction moment. We developed a method to predict knee joint contact forces using motion analysis and a statistical regression model that can be implemented in near real-time. Gait waveform variables were deconstructed using principal component analysis and a linear regression was used to predict the principal component scores of the contact force waveforms. Knee joint contact force waveforms were reconstructed using the predicted scores. We tested our method using a heterogeneous population of asymptomatic controls and subjects with knee osteoarthritis. The reconstructed contact force waveforms had mean (SD) RMS differences of 0.17 (0.05) bodyweight compared to the contact forces predicted by a musculoskeletal model. Our method successfully predicted subject-specific shape features of contact force waveforms and is a potentially powerful tool in biofeedback and clinical gait analysis.
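
    A hedged sketch of the principal component prediction/reconstruction idea: regress output-waveform PCA scores on input-waveform PCA scores, then rebuild the output waveform. The gait and force waveforms are synthetic placeholders, and the component counts are assumptions.

```python
# Sketch: predict contact-force waveforms via PCA score regression and reconstruction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_subjects, n_samples = 80, 101                     # waveforms over 0-100% of stance
t = np.linspace(0.0, 1.0, n_samples)
gait = np.array([np.sin(2 * np.pi * t + p) + 0.1 * rng.standard_normal(n_samples)
                 for p in rng.uniform(0, 1, n_subjects)])
force = 2.0 * gait**2 + 0.05 * rng.standard_normal((n_subjects, n_samples))

pca_in, pca_out = PCA(n_components=4), PCA(n_components=4)
scores_in = pca_in.fit_transform(gait)              # deconstruct input waveforms
scores_out = pca_out.fit_transform(force)           # deconstruct output waveforms

reg = LinearRegression().fit(scores_in, scores_out) # predict output PC scores

force_hat = pca_out.inverse_transform(reg.predict(pca_in.transform(gait)))
rms = np.sqrt(np.mean((force_hat - force) ** 2))
print(f"RMS reconstruction difference: {rms:.3f} (arbitrary units)")
```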

  16. Automated Classification and Analysis of Non-metallic Inclusion Data Sets

    NASA Astrophysics Data System (ADS)

    Abdulsalam, Mohammad; Zhang, Tongsheng; Tan, Jia; Webler, Bryan A.

    2018-05-01

    The aim of this study is to utilize principal component analysis (PCA), clustering methods, and correlation analysis to condense and examine large, multivariate data sets produced from automated analysis of non-metallic inclusions. Non-metallic inclusions play a major role in defining the properties of steel and their examination has been greatly aided by automated analysis in scanning electron microscopes equipped with energy dispersive X-ray spectroscopy. The methods were applied to analyze inclusions on two sets of samples: two laboratory-scale samples and four industrial samples from near-finished 4140 alloy steel components with varying machinability. The laboratory samples had well-defined inclusion chemistries, composed of MgO-Al2O3-CaO, spinel (MgO-Al2O3), and calcium aluminate inclusions. The industrial samples contained MnS inclusions as well as (Ca,Mn)S + calcium aluminate oxide inclusions. PCA could be used to reduce inclusion chemistry variables to a 2D plot, which revealed inclusion chemistry groupings in the samples. Clustering methods were used to automatically classify inclusion chemistry measurements into groups, i.e., no user-defined rules were required.
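
    A minimal sketch of the workflow described above, projecting inclusion chemistry measurements to 2-D with PCA and grouping them with k-means; the synthetic compositions and cluster count are assumptions, not measured inclusion data.

```python
# Sketch: PCA chemistry map + k-means classification of inclusion measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
# Three synthetic "chemistry groups" in (Mg, Al, Ca, Mn, S) fraction space.
centers = np.array([[0.3, 0.5, 0.2, 0.0, 0.0],     # spinel-like
                    [0.1, 0.4, 0.5, 0.0, 0.0],     # calcium aluminate-like
                    [0.0, 0.0, 0.1, 0.45, 0.45]])  # (Ca,Mn)S-like
X = np.vstack([c + 0.03 * rng.standard_normal((200, 5)) for c in centers])

X_std = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(X_std)   # 2-D chemistry map
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} inclusions, "
          f"mean PC1 = {scores[labels == k, 0].mean():+.2f}")
```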

  17. Reference Models for Structural Technology Assessment and Weight Estimation

    NASA Technical Reports Server (NTRS)

    Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd

    2005-01-01

    Previously, the Exploration Concepts Branch of NASA Langley Research Center had developed techniques for automating preliminary-design-level launch vehicle airframe structural analysis in order to enhance historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was narrow in terms of the variety of structural and vehicle general arrangement alternatives it could handle. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA based JAVA processing procedures and associated process control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for the analysis of components ranging from something as simple as a stiffened flat panel, through curved panels, fuselage and cryogenic tank components, flight control surfaces, and wings, to full air and space vehicle general arrangements.

  18. Estimating the vibration level of an L-shaped beam using power flow techniques

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.

  19. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
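
    A hedged sketch of the baseline KECA step described above, ranking kernel eigenpairs by their contribution to the Renyi entropy estimate rather than by variance; the OKECA rotation and kernel-parameter optimization from the paper are not reproduced, and the kernel width and data are illustrative assumptions.

```python
# Sketch: kernel entropy component analysis (entropy-ranked kernel eigenpairs).
import numpy as np

rng = np.random.default_rng(7)
X = np.vstack([rng.standard_normal((100, 2)) + c for c in ([0, 0], [4, 4])])

sigma = 1.0
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / (2.0 * sigma**2))                  # RBF kernel matrix

lam, E = np.linalg.eigh(K)                          # eigh returns ascending eigenvalues
lam, E = lam[::-1], E[:, ::-1]                      # sort descending by eigenvalue

# Entropy contribution of each eigenpair: lambda_i * (1^T e_i)^2
entropy = lam * (E.sum(axis=0) ** 2)
order = np.argsort(entropy)[::-1]                   # re-rank by entropy, not variance

m = 2
idx = order[:m]
scores = E[:, idx] * np.sqrt(np.clip(lam[idx], 0.0, None))   # KECA projections

print("eigenvalue rank of the two entropy-selected components:", idx)
print("projected data shape:", scores.shape)
```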

  20. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis is often using linear mixture model representing samples as additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of: 96.2 (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of: 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness constrained factorization on a sample-by-sample basis. As opposed to that, existing methods factorize complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary. Yet, they will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification. That is not the case with standard factorization methods. Moreover, the component selected by proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from disease and control specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally, it increases prediction accuracy. PMID:22208882

  1. Does Kinematic Alignment and Flexion of a Femoral Component Designed for Mechanical Alignment Reduce the Proximal and Lateral Reach of the Trochlea?

    PubMed

    Brar, Abheetinder S; Howell, Stephen M; Hull, Maury L; Mahfouz, Mohamed R

    2016-08-01

    Kinematically aligned total knee arthroplasty uses a femoral component designed for mechanical alignment (MA) and sets the component in more internal, valgus, and flexion rotation than MA. It is unknown how much kinematic alignment (KA) and flexion of the femoral component reduce the proximal and lateral reach of the trochlea; two reductions that could increase the risk of abnormal patella tracking. We simulated MA and KA of the femoral component in 0° of flexion on 20 3-dimensional bone models of normal femurs. The mechanically and kinematically aligned components were then aligned in 5°, 10°, and 15° of flexion and downsized until the flange contacted the anterior femur. The reductions in the proximal and lateral reach from the proximal point of the trochlea of the MA component set in 0° of flexion were computed. KA at 0° of flexion did not reduce the proximal reach and reduced the lateral reach an average of 3 mm. Flexion of the MA and KA femoral component 5°, 10°, and 15° reduced the proximal reach an average of 4 mm, 8 mm, and 12 mm, respectively (0.8 mm/degree of flexion), and reduced the lateral reach an average of 1 mm and 4 mm regardless of the degree of flexion, respectively. Arthroplasty surgeons and biomechanical engineers striving to optimize patella tracking might consider developing surgical techniques to minimize flexion of the femoral component when performing KA and MA total knee arthroplasty to promote early patella engagement and consider designing a femoral component with a trochlea shaped specifically for KA. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Preservation of micro-architecture and angiogenic potential in a pulmonary acellular matrix obtained using intermittent intra-tracheal flow of detergent enzymatic treatment.

    PubMed

    Maghsoudlou, Panagiotis; Georgiades, Fanourios; Tyraskis, Athanasios; Totonelli, Giorgia; Loukogeorgakis, Stavros P; Orlando, Giuseppe; Shangaris, Panicos; Lange, Peggy; Delalande, Jean-Marie; Burns, Alan J; Cenedese, Angelo; Sebire, Neil J; Turmaine, Mark; Guest, Brogan N; Alcorn, John F; Atala, Anthony; Birchall, Martin A; Elliott, Martin J; Eaton, Simon; Pierro, Agostino; Gilbert, Thomas W; De Coppi, Paolo

    2013-09-01

    Tissue engineering of autologous lung tissue aims to become a therapeutic alternative to transplantation. Efforts published so far in creating scaffolds have used harsh decellularization techniques that damage the extracellular matrix (ECM), deplete its components and take up to 5 weeks to perform. The aim of this study was to create a lung natural acellular scaffold using a method that will reduce the time of production and better preserve scaffold architecture and ECM components. Decellularization of rat lungs via the intratracheal route removed most of the nuclear material when compared to the other entry points. An intermittent inflation approach that mimics lung respiration yielded an acellular scaffold in a shorter time with an improved preservation of pulmonary micro-architecture. Electron microscopy demonstrated the maintenance of an intact alveolar network, with no evidence of collapse or tearing. Pulsatile dye injection via the vasculature indicated an intact capillary network in the scaffold. Morphometry analysis demonstrated a significant increase in alveolar fractional volume, with alveolar size analysis confirming that alveolar dimensions were maintained. Biomechanical testing of the scaffolds indicated an increase in resistance and elastance when compared to fresh lungs. Staining and quantification for ECM components showed a presence of collagen, elastin, GAG and laminin. The intratracheal intermittent decellularization methodology could be translated to sheep lungs, demonstrating a preservation of ECM components, alveolar and vascular architecture. Decellularization treatment and methodology preserves lung architecture and ECM whilst reducing the production time to 3 h. Cell seeding and in vivo experiments are necessary to proceed towards clinical translation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Lessons Learned from Application of System and Software Level RAMS Analysis to a Space Control System

    NASA Astrophysics Data System (ADS)

    Silva, N.; Esper, A.

    2012-01-01

    The work presented in this article represents the results of applying RAMS analysis to a critical space control system, both at system and software levels. The system level RAMS analysis allowed the assignment of criticalities to the high level components, which was further refined by a tailored software level RAMS analysis. The importance of the software level RAMS analysis in the identification of new failure modes and its impact on the system level RAMS analysis is discussed. Recommendations of changes in the software architecture have also been proposed in order to reduce the criticality of the SW components to an acceptable minimum. The dependability analysis was performed in accordance with ECSS-Q-ST-80, which had to be tailored and complemented in some aspects. This tailoring will also be detailed in the article, and lessons learned from its application will be shared, highlighting its importance for space systems safety evaluations. The paper presents the applied techniques, the relevant results obtained, the effort required for performing the tasks and the planned strategy for ROI estimation, as well as the soft skills required and acquired during these activities.

  4. Continental hydrology loading observed by VLBI measurements

    NASA Astrophysics Data System (ADS)

    Eriksson, David; MacMillan, D. S.

    2014-07-01

    Variations in continental water storage lead to loading deformation of the crust with typical peak-to-peak variations at very long baseline interferometry (VLBI) sites of 3-15 mm in the vertical component and 1-2 mm in the horizontal component. The hydrology signal at VLBI sites has annual and semi-annual components and clear interannual variations. We have calculated the hydrology loading series using mass loading distributions derived from the global land data assimilation system (GLDAS) hydrology model and alternatively from a global grid of equal-area gravity recovery and climate experiment (GRACE) mascons. In the analysis of the two weekly VLBI 24-h R1 and R4 network sessions from 2003 to 2010 the baseline length repeatabilities are reduced in 79 % (80 %) of baselines when GLDAS (GRACE) loading corrections are applied. Site vertical coordinate repeatabilities are reduced in about 80 % of the sites when either GLDAS or GRACE loading is used. In the horizontal components, reduction occurs in 70-80 % of the sites. Estimates of the annual site vertical amplitudes were reduced for 16 out of 18 sites if either loading series was applied. We estimated loading admittance factors for each site and found that the average admittances were 1.01 ± 0.05 for GRACE and 1.39 ± 0.07 for GLDAS. The standard deviations of the GRACE admittances and GLDAS admittances were 0.31 and 0.68, respectively. For sites that have been observed in a set of sufficiently temporally dense daily sessions, the average correlation between VLBI vertical monthly averaged series and GLDAS or GRACE loading series was 0.47 and 0.43, respectively.

  5. Neurophysiological correlates of abnormal somatosensory temporal discrimination in dystonia.

    PubMed

    Antelmi, Elena; Erro, Roberto; Rocchi, Lorenzo; Liguori, Rocco; Tinazzi, Michele; Di Stasio, Flavio; Berardelli, Alfredo; Rothwell, John C; Bhatia, Kailash P

    2017-01-01

    Somatosensory temporal discrimination threshold is often prolonged in patients with dystonia. Previous evidence suggested that this might be caused by impaired somatosensory processing in the time domain. Here, we tested if other markers of reduced inhibition in the somatosensory system might also contribute to abnormal somatosensory temporal discrimination in dystonia. Somatosensory temporal discrimination threshold was measured in 19 patients with isolated cervical dystonia and 19 age-matched healthy controls. We evaluated temporal somatosensory inhibition using paired-pulse somatosensory evoked potentials, spatial somatosensory inhibition by measuring the somatosensory evoked potentials interaction between simultaneous stimulation of the digital nerves in thumb and index finger, and Gamma-aminobutyric acid-ergic (GABAergic) sensory inhibition using the early and late components of high-frequency oscillations in digital nerves somatosensory evoked potentials. When compared with healthy controls, dystonic patients had longer somatosensory temporal discrimination thresholds, reduced suppression of cortical and subcortical paired-pulse somatosensory evoked potentials, less spatial inhibition of simultaneous somatosensory evoked potentials, and a smaller area of the early component of the high-frequency oscillations. A logistic regression analysis found that paired pulse suppression of the N20 component at an interstimulus interval of 5 milliseconds and the late component of the high-frequency oscillations were independently related to somatosensory temporal discrimination thresholds. "Dystonia group" was also a predictor of enhanced somatosensory temporal discrimination threshold, indicating a dystonia-specific effect that independently influences this threshold. Increased somatosensory temporal discrimination threshold in dystonia is related to reduced activity of inhibitory circuits within the primary somatosensory cortex. © 2016 International Parkinson and Movement Disorder Society.

  6. Effect of Valsartan on Cerebellar Adrenomedullin System Dysregulation During Hypertension.

    PubMed

    Figueira, Leticia; Israel, Anita

    2017-02-01

    Adrenomedullin (AM) and its receptor components, the calcitonin-receptor-like receptor (CRLR) and the receptor activity-modifying proteins (RAMP1, RAMP2, and RAMP3), are expressed in the cerebellum. Cerebellar AM, AM binding sites and receptor components are altered during hypertension, suggesting a role for cerebellar AM in blood pressure regulation. Thus, we assessed the effect of valsartan on the expression of AM and its receptor components in the cerebellar vermis of Wistar Kyoto (WKY) and spontaneously hypertensive (SHR) rats. Additionally, we evaluated AM action on superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx) activity, and thiobarbituric acid reactive substances (TBARS) production in cerebellar vermis. Animals were treated with valsartan or vehicle for 11 days. Rats were sacrificed by decapitation; cerebellar vermis was dissected; and AM, CRLR, RAMP1, RAMP2, and RAMP3 expression was quantified by Western blot analysis. CAT, SOD, and GPx activity was determined spectrophotometrically and blood pressure by non-invasive plethysmography. We demonstrate that AM and RAMP2 expression was lower in the cerebellum of SHR rats, while CRLR, RAMP1, and RAMP3 expression was higher than in WKY rats. AM reduced cerebellar CAT, SOD, GPx activities, and TBARS production in WKY rats, but not in SHR rats. Valsartan reduced blood pressure and reversed the altered expression of AM and its receptor components, as well as the loss of the capacity of AM to reduce antioxidant enzyme activity and TBARS production in SHR rats. These findings demonstrate that valsartan is able to reverse the dysregulation of the cerebellar adrenomedullinergic system; and they suggest that an altered AM system in the cerebellum could represent the primary abnormality leading to hypertension.

  7. Velo and REXAN - Integrated Data Management and High Speed Analysis for Experimental Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleese van Dam, Kerstin; Carson, James P.; Corrigan, Abigail L.

    2013-01-10

    The Chemical Imaging Initiative at the Pacific Northwest National Laboratory (PNNL) is creating a ‘Rapid Experimental Analysis’ (REXAN) Framework, based on the concept of reusable component libraries. REXAN allows developers to quickly compose and customize high throughput analysis pipelines for a range of experiments, as well as supporting the creation of multi-modal analysis pipelines. In addition, PNNL has coupled REXAN with its collaborative data management and analysis environment Velo to create easy-to-use data management and analysis environments for experimental facilities. This paper discusses the benefits of Velo and REXAN in the context of three examples: (1) PNNL high-resolution mass spectrometry, reducing analysis times from hours to seconds and enabling the analysis of much larger data samples (100 KB to 40 GB) at the same time; (2) ALS X-ray tomography, reducing analysis times of combined STXM and EM data collected at the ALS from weeks to minutes, decreasing manual work and increasing the data volumes that can be analysed in a single step; and (3) multi-modal nano-scale analysis of STXM and TEM data, providing a semi-automated process for particle detection. The creation of REXAN has significantly shortened the development time for these analysis pipelines. The integration of Velo and REXAN has significantly increased the scientific productivity of the instruments and their users by creating easy-to-use data management and analysis environments with greatly reduced analysis times and improved analysis capabilities.

  8. Low power pulsed MPD thruster system analysis and applications

    NASA Astrophysics Data System (ADS)

    Myers, Roger M.; Domonkos, Matthew; Gilland, James H.

    1993-09-01

    Pulsed magnetoplasmadynamic (MPD) thruster systems were analyzed for application to solar-electric orbit transfer vehicles at power levels ranging from 10 to 40 kW. Potential system level benefits of pulsed propulsion technology include ease of power scaling without thruster performance changes, improved transportability from low power flight experiments to operational systems, and reduced ground qualification costs. Required pulsed propulsion system components include a pulsed applied-field MPD thruster, a pulse-forming network, a charge control unit, a cathode heater supply, and high speed valves. Mass estimates were obtained for each propulsion subsystem and spacecraft component using off-the-shelf technology whenever possible. Results indicate that for payloads of 1000 and 2000 kg pulsed MPD thrusters can reduce launch mass by between 1000 and 2500 kg over those achievable with hydrogen arcjets, which can be used to reduce launch vehicle class and the associated launch cost. While the achievable mass savings depends on the trip time allowed for the mission, cases are shown in which the launch vehicle required for a mission is decreased from an Atlas IIAS to an Atlas I or Delta 7920.

  9. Low power pulsed MPD thruster system analysis and applications

    NASA Technical Reports Server (NTRS)

    Myers, Roger M.; Domonkos, Matthew; Gilland, James H.

    1993-01-01

    Pulsed magnetoplasmadynamic (MPD) thruster systems were analyzed for application to solar-electric orbit transfer vehicles at power levels ranging from 10 to 40 kW. Potential system level benefits of pulsed propulsion technology include ease of power scaling without thruster performance changes, improved transportability from low power flight experiments to operational systems, and reduced ground qualification costs. Required pulsed propulsion system components include a pulsed applied-field MPD thruster, a pulse-forming network, a charge control unit, a cathode heater supply, and high speed valves. Mass estimates were obtained for each propulsion subsystem and spacecraft component using off-the-shelf technology whenever possible. Results indicate that for payloads of 1000 and 2000 kg pulsed MPD thrusters can reduce launch mass by between 1000 and 2500 kg over those achievable with hydrogen arcjets, which can be used to reduce launch vehicle class and the associated launch cost. While the achievable mass savings depends on the trip time allowed for the mission, cases are shown in which the launch vehicle required for a mission is decreased from an Atlas IIAS to an Atlas I or Delta 7920.

  10. Complexity of free energy landscapes of peptides revealed by nonlinear principal component analysis.

    PubMed

    Nguyen, Phuong H

    2006-12-01

    Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with the standard linear principal component analysis (PCA). In particular, we compared the two methods according to (1) their ability to reduce dimensionality and (2) the efficient representation of peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA does. For example, to achieve a similar error in representing the original beta-hairpin data in a low-dimensional space, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as a function of the first two principal components obtained from PCA, we obtained relatively well-structured free energy landscapes. In contrast, the free energy landscapes of NLPCA are much more complicated, exhibiting many states which are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study also showed that many states in the PCA maps are mixtures of several peptide conformations, while those of the NLPCA maps are purer. This finding suggests that the NLPCA should be used to capture the essential features of the systems. (c) 2006 Wiley-Liss, Inc.
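
    The PCA-versus-NLPCA comparison can be illustrated, very loosely, by comparing the reconstruction error of a 2-component linear PCA with that of a small autoencoder that has a 2-unit bottleneck; the autoencoder is only a crude stand-in for the hierarchical NLPCA of Saegusa et al., and the data are points on a noisy nonlinear curve rather than peptide conformations.

```python
# Sketch: linear PCA vs a small autoencoder (crude NLPCA stand-in) on curved data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
t = rng.uniform(-3, 3, 500)
X = np.column_stack([t, np.sin(t), t**2 / 3.0]) + 0.05 * rng.standard_normal((500, 3))

pca = PCA(n_components=2).fit(X)
err_pca = np.mean((X - pca.inverse_transform(pca.transform(X))) ** 2)

auto = MLPRegressor(hidden_layer_sizes=(16, 2, 16), activation="tanh",
                    max_iter=5000, random_state=0)
auto.fit(X, X)                                       # train the network to reproduce its input
err_nlpca = np.mean((X - auto.predict(X)) ** 2)

print(f"mean squared reconstruction error  PCA: {err_pca:.4f}  autoencoder: {err_nlpca:.4f}")
```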

  11. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images.

    PubMed

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung

    2018-01-01

    We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement was tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ±standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey's formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and other two-component formulas including the ellipsoidal and Linskey's formulas allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas.

  12. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows could greatly reduce the computational requirements of RSRPCA. Second, the RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate negative effects of sampled anomaly pixels and that purifies the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The algorithm of inexact augmented Lagrange multiplier is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complemental subspace of the purified randomized column subspace of the background and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms four comparison methods both in detection performance and in computational time.
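
    A hedged sketch of the robust PCA building block named above, implemented as textbook principal component pursuit with an inexact augmented Lagrange multiplier scheme; this is not the full RSRPCA algorithm (the randomized column sampling and structured projections are omitted), and the synthetic data and parameter choices are assumptions.

```python
# Sketch: robust PCA (low-rank + sparse decomposition) via an inexact ALM iteration.
import numpy as np

def shrink(M, tau):
    """Elementwise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, n_iter=200, tol=1e-7):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)            # low-rank background update
        S = shrink(D - L + Y / mu, lam / mu)         # sparse anomaly update
        resid = D - L - S
        Y += mu * resid
        if np.linalg.norm(resid) / (np.linalg.norm(D) + 1e-12) < tol:
            break
    return L, S

rng = np.random.default_rng(9)
background = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 50))   # rank-5 background
anomalies = np.zeros((100, 50))
anomalies[rng.integers(0, 100, 30), rng.integers(0, 50, 30)] = 10.0
L, S = rpca(background + anomalies)
print("entries flagged as anomalous (|S| > 1):", int(np.sum(np.abs(S) > 1.0)))
```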

  13. Job characteristics and burnout: The moderating roles of emotional intelligence, motivation and pay among bank employees.

    PubMed

    Salami, Samuel O; Ajitoni, Sunday O

    2016-10-01

    This study investigated the prediction of burnout from job characteristics, emotional intelligence, motivation and pay among bank employees. It also examined the interactions of emotional intelligence, motivation, pay and job characteristics in the prediction of burnout. Data obtained from 230 (Males = 127, Females = 103) bank employees were analysed using Pearson's Product Moment Correlation and multiple regression analysis. Results showed that these variables jointly and separately negatively predicted burnout components. The results further indicated that emotional intelligence, motivation and pay separately interacted with some job characteristic components to negatively predict some burnout components. The findings imply that emotional intelligence, motivation and pay could be considered by counsellors when designing interventions to reduce burnout among bank employees. © 2015 International Union of Psychological Science.

  14. Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.

    PubMed

    Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin

    2017-11-01

    Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) acquisitions provide better insight into brain dynamics. Some artefacts due to simultaneous acquisition pose a threat to the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data of 11 participants. For the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. The real data were recorded under two conditions, i.e. resting state (eyes-closed dataset) and task-influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely average artefact subtraction (AAS), optimal basis set (OBS) and combined independent component analysis and principal component analysis (ICA-PCA), the statistical analyses of the results showed that the proposed method has better performance, and the differences were significant for all quantitative parameters except the power and sample entropy. The proposed method does not require any reference signal, prior information or assumption to extract the BCG artefact. It will be very useful in circumstances where a reference signal is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
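
    The abstract does not spell out how the EMD and PCA stages are combined, so the sketch below is only a loose single-channel illustration of the idea (decompose into intrinsic mode functions, remove the leading principal component of the IMF matrix, reconstruct). It assumes the third-party PyEMD package (distributed as EMD-signal) and scikit-learn are installed, and it is not the authors' algorithm.

    ```python
    import numpy as np
    from PyEMD import EMD                      # pip install EMD-signal (assumed available)
    from sklearn.decomposition import PCA

    def reduce_bcg(eeg_channel, n_remove=1):
        """Loose EMD + PCA clean-up for one EEG channel: decompose into IMFs,
        zero the leading principal component(s) of the IMF matrix (assumed here
        to capture the quasi-periodic BCG artefact), and reconstruct."""
        imfs = EMD().emd(eeg_channel)          # shape: (n_imfs, n_samples)
        pca = PCA()
        scores = pca.fit_transform(imfs.T)     # samples x components
        scores[:, :n_remove] = 0.0             # drop assumed artefact component(s)
        cleaned_imfs = pca.inverse_transform(scores).T
        return cleaned_imfs.sum(axis=0)

    # Usage with a synthetic 250 Hz signal (10 Hz alpha-like tone + slow artefact).
    t = np.arange(0, 10, 1.0 / 250)
    signal = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 1.2 * t)
    clean = reduce_bcg(signal)
    ```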

  15. Maximizing efficiency on trauma surgeon rounds.

    PubMed

    Ramaniuk, Aliaksandr; Dickson, Barbara J; Mahoney, Sean; O'Mara, Michael S

    2017-01-01

    Rounding by trauma surgeons is a complex multidisciplinary team-based process in the inpatient setting. Implementation of lean methodology aims to increase understanding of the value stream and eliminate non-value-added (NVA) components. We hypothesized that analysis of trauma rounds with education and intervention would improve surgeon efficacy. The setting was a Level 1 trauma center with 4300 admissions per year; the average non-intensive care unit census was 55. Five full-time attending trauma surgeons were evaluated. Value-added (VA) and NVA components of rounding were identified. The components of each patient interaction during daily rounds were documented. Summary data were presented to the surgeons. An action plan of improvement was provided at group and individual interventions. Change plans were presented to the multidisciplinary team. Data were recollected 6 months after the intervention. The percentage of interactions with NVA components decreased (16.0% to 10.7%, P = 0.0001). There was no change between the two periods in time of evaluation of individual patients (4.0 and 3.5 min, P = 0.43). Overall time to complete rounds did not change. There was a reduction in the number of interactions containing NVA components (odds ratio = 2.5). The trauma surgeons were able to reduce the NVA components of rounds. We did not see a decrease in rounding time or individual patient time. This implies that surgeons were able to reinvest freed time into patient care, or that the NVA components were somehow not increasing process time. Direct intervention for isolated improvements can be effective in the rounding process, and efforts should be focused upon improving the value of time spent rather than reducing time invested. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Reducing occupational sitting: Workers' perspectives on participation in a multi-component intervention.

    PubMed

    Hadgraft, Nyssa T; Willenberg, Lisa; LaMontagne, Anthony D; Malkoski, Keti; Dunstan, David W; Healy, Genevieve N; Moodie, Marj; Eakin, Elizabeth G; Owen, Neville; Lawler, Sheleigh P

    2017-05-30

    Office workers spend much of their time sitting, which is now understood to be a risk factor for several chronic diseases. This qualitative study examined participants' perspectives following their involvement in a cluster randomised controlled trial of a multi-component intervention targeting prolonged workplace sitting (Stand Up Victoria). The intervention incorporated a sit-stand workstation, individual health coaching and organisational support strategies. The aim of the study was to explore the acceptability of the intervention, barriers and facilitators to reducing workplace sitting, and perceived effects of the intervention on workplace culture, productivity and health-related outcomes. Semi-structured interviews (n = 21 participants) and two focus groups (n = 7) were conducted with intervention participants at the conclusion of the 12 month trial and thematic analysis was used to analyse the data. Questions covered intervention acceptability, overall impact, barriers and facilitators to reducing workplace sitting, and perceived impact on productivity and workplace culture. Overall, participants had positive intervention experiences, perceiving that reductions in workplace sitting were associated with improved health and well-being with limited negative impact on work performance. While sit-stand workstations appeared to be the primary drivers of change, workstation design and limited suitability of standing for some job tasks and situations were perceived as barriers to their use. Social support from team leaders and other participants was perceived to facilitate behavioural changes and a shift in norms towards increased acceptance of standing in the workplace. Multi-component interventions to reduce workplace sitting, incorporating sit-stand workstations, are acceptable and feasible; however, supportive social and environmental conditions are required to support participant engagement. Best practice approaches to reduce workplace sitting should address the multiple levels of influence on behaviour, including factors that may act as barriers to behavioural change.

  17. Discovery and Characterization of the 3-Hydroxyacyl-ACP Dehydratase Component of the Plant Mitochondrial Fatty Acid Synthase System

    PubMed Central

    Okazaki, Yozo; Lithio, Andrew; Jin, Huanan

    2017-01-01

    We report the characterization of the Arabidopsis (Arabidopsis thaliana) 3-hydroxyacyl-acyl carrier protein dehydratase (mtHD) component of the mitochondrial fatty acid synthase (mtFAS) system, encoded by AT5G60335. The mitochondrial localization and catalytic capability of mtHD were demonstrated with a green fluorescent protein transgenesis experiment and by in vivo complementation and in vitro enzymatic assays. RNA interference (RNAi) knockdown lines with reduced mtHD expression exhibit traits typically associated with mtFAS mutants, namely a miniaturized morphological appearance, reduced lipoylation of lipoylated proteins, and altered metabolomes consistent with the reduced catalytic activity of lipoylated enzymes. These alterations are reversed when mthd-rnai mutant plants are grown in a 1% CO2 atmosphere, indicating the link between mtFAS and photorespiratory deficiency due to the reduced lipoylation of glycine decarboxylase. In vivo biochemical feeding experiments illustrate that sucrose and glycolate are the metabolic modulators that mediate the alterations in morphology and lipid accumulation. In addition, both mthd-rnai and mtkas mutants exhibit reduced accumulation of 3-hydroxytetradecanoic acid (i.e. a hallmark of lipid A-like molecules) and abnormal chloroplastic starch granules; these changes are not reversible by the 1% CO2 atmosphere, demonstrating two novel mtFAS functions that are independent of photorespiration. Finally, RNA sequencing analysis revealed that mthd-rnai and mtkas mutants are nearly equivalent to each other in altering the transcriptome, and these analyses further identified genes whose expression is affected by a functional mtFAS system but independent of photorespiratory deficiency. These data demonstrate the nonredundant nature of the mtFAS system, which contributes unique lipid components needed to support plant cell structure and metabolism. PMID:28202596

  18. Glutathione and glutamate in schizophrenia: a 7T MRS study.

    PubMed

    Kumar, Jyothika; Liddle, Elizabeth B; Fernandes, Carolina C; Palaniyappan, Lena; Hall, Emma L; Robson, Siân E; Simmonite, Molly; Fiesal, Jan; Katshu, Mohammad Z; Qureshi, Ayaz; Skelton, Michael; Christodoulou, Nikolaos G; Brookes, Matthew J; Morris, Peter G; Liddle, Peter F

    2018-06-22

    In schizophrenia, abnormal neural metabolite concentrations may arise from cortical damage following neuroinflammatory processes implicated in acute episodes. Inflammation is associated with increased glutamate, whereas the antioxidant glutathione may protect against inflammation-induced oxidative stress. We hypothesized that patients with stable schizophrenia would exhibit a reduction in glutathione, glutamate, and/or glutamine in the cerebral cortex, consistent with a post-inflammatory response, and that this reduction would be most marked in patients with "residual schizophrenia", in whom an early stage with positive psychotic symptoms has progressed to a late stage characterized by long-term negative symptoms and impairments. We recruited 28 patients with stable schizophrenia and 45 healthy participants matched for age, gender, and parental socio-economic status. We measured glutathione, glutamate and glutamine concentrations in the anterior cingulate cortex (ACC), left insula, and visual cortex using 7T proton magnetic resonance spectroscopy (MRS). Glutathione and glutamate were significantly correlated in all three voxels. Glutamine concentrations across the three voxels were significantly correlated with each other. Principal components analysis (PCA) produced three clear components: an ACC glutathione-glutamate component; an insula-visual glutathione-glutamate component; and a glutamine component. Patients with stable schizophrenia had significantly lower scores on the ACC glutathione-glutamate component, an effect driven almost entirely by the sub-group of patients with residual schizophrenia. All three metabolite concentration values in the ACC were significantly reduced in this group. These findings are consistent with the hypothesis that excitotoxicity during the acute phase of illness leads to reduced glutathione and glutamate in the residual phase of the illness.

  19. Civilian Surge: Key to Complex Operations

    DTIC Science & Technology

    2008-12-01

    Division, the unit’s combat operations were reduced by 60 percent over a period of 8 months, enabling Soldiers to focus on improving security, health ... improving the usefulness of existing conflict early warning tools and integrating them with the analysis, prevention, and response components of S/CRS ... force protection procedures. Integrated Stabilization Assistance Programs: Since 2005, S/CRS has provided technical assistance consultations to

  20. Principal Cluster Axes: A Projection Pursuit Index for the Preservation of Cluster Structures in the Presence of Data Reduction

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.; Henson, Robert

    2012-01-01

    A measure of "clusterability" serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space.…

  1. Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation

    NASA Astrophysics Data System (ADS)

    Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno

    2014-05-01

    A universal problem in the calibration of hydrological models is the equifinality of different parameter sets derived from calibration against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation in the model. However, discharge data contain additional information that can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation and tackling equifinality. We developed a model-dependent approach (MDA), which calibrates the modelled runoff components against the FSD components, and a model-independent approach (MIA), which compares the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of models that standard GLUE would accept as behavioural but that underestimate the contribution of soil water to river discharge. Both MDA and MIA reduce the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest for MDA and only minor for MIA. Besides further validation of FSD, the next steps include extending the study to different catchments and to other hydrological models with a similar structure.
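
    A schematic of the model-dependent (MDA) screening idea, under assumed data structures: each candidate parameter set supplies simulated base flow, inter-flow and surface runoff series, and it is kept as behavioural only if every component and the total runoff reach an efficiency threshold. The Nash-Sutcliffe measure and the 0.5 limit used here are illustrative, not the paper's actual criteria.

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency of a simulated series against observations."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def behavioural_mask(sim_components, obs_components, threshold=0.5):
        """GLUE-style screening: sim_components is a list (one entry per parameter
        set) of dicts mapping component names to simulated series; obs_components
        maps the same names to the FSD-derived observed series."""
        keep = []
        for sims in sim_components:
            effs = [nse(sims[name], obs_components[name]) for name in obs_components]
            effs.append(nse(sum(sims.values()), sum(obs_components.values())))  # total runoff
            keep.append(min(effs) >= threshold)
        return np.array(keep)
    ```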

  2. Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing.

    PubMed

    Grant, Aileen; Dreischulte, Tobias; Guthrie, Bruce

    2017-01-07

    Two to 4% of emergency hospital admissions are caused by preventable adverse drug events. The estimated costs of such avoidable admissions in England were £530 million in 2015. The data-driven quality improvement in primary care (DQIP) intervention was designed to prompt review of patients at risk from currently prescribed non-steroidal anti-inflammatory drugs (NSAIDs) and anti-platelets and was found to be effective at reducing this prescribing. A process evaluation was conducted in parallel with the trial, and this paper reports the analysis which aimed to explore response to the intervention delivered to clusters in relation to participants' perceptions about which intervention elements were active in changing their practice. Data were generated through in-depth interviews with key staff, exploring participants' perceptions of the intervention components. Analysis was iterative, using the framework technique and drawing on normalisation process theory. All the primary components of the intervention were perceived as active, but at different stages of implementation: financial incentives primarily supported recruitment; education motivated the GPs to initiate implementation; the informatics tool facilitated sustained implementation. Participants perceived the primary components as interdependent. Intervention subcomponents also varied in whether and when they were active. For example, run charts providing feedback of change in prescribing over time were ignored in the informatics tool, but were motivating in some practices in the regular e-mailed newsletter. The high-risk NSAID and anti-platelet prescribing targeted was accepted as important by all interviewees, and this shared understanding was a key wider context underlying intervention effectiveness. This was a novel use of process evaluation data which examined whether and how the individual intervention components were effective from the perspective of the professionals delivering changed care to patients. These findings are important for reproducibility and roll-out of the intervention. ClinicalTrials.gov, NCT01425502.

  3. The Effect of Variable End of Charge Battery Management on Small-Cell Batteries

    NASA Technical Reports Server (NTRS)

    Neubauer, Jeremy S.; Bennetti, Andrea; Pearson, Chris; Simmons, Nick; Reid, Concha; Manzo, Michelle

    2007-01-01

    Batteries are critical components for spacecraft, supplying power to all electrical systems during solar eclipse. These components must be lightweight due to launch vehicle limitations and the desire to fly heavier, more capable payloads, and must show excellent capacity retention with age to support the ever growing durations of space missions. ABSL's heritage Lithium Ion cell, the ABSL 18650HC, is an excellent low mass solution to this problem that has been proven capable of supporting long mission durations. The NASA Glenn Research Center recently proposed and initiated a test to study the effects of reduced end of charge voltage on aging of the ABSL 18650HC and other Lithium Ion cells. This paper presents the testing details, a method to analyze and compare capacity fade between the different cases, and a preliminary analysis of the to-date performance of ABSL's cells. This initial analysis indicates that employing reduced end of charge techniques could double the life capabilities of the ABSL 18650HC cell. Accordingly, continued investigation is recommended, particularly at higher depths of discharge to better assess the method's potential mass savings for short duration missions.

  4. [An ADAA model and its analysis method for agronomic traits based on the double-cross mating design].

    PubMed

    Xu, Z C; Zhu, J

    2000-01-01

    According to the double-cross mating design and using principles of Cockerham's general genetic model, a genetic model with additive, dominance and epistatic effects (ADAA model) was proposed for the analysis of agronomic traits. Components of genetic effects were derived for different generations. Monte Carlo simulations were conducted to analyze the ADAA model and its reduced AD model using different generations. It was indicated that genetic variance components could be estimated without bias by the MINQUE(1) method and that genetic effects could be predicted effectively by the AUP method; at least three generations (including parents, F1 of single crosses and F1 of double-crosses) were necessary for analyzing the ADAA model, and only two generations (including parents and F1 of double-crosses) were enough for the reduced AD model. When epistatic effects were taken into account, a new approach for predicting the heterosis of agronomic traits of double-crosses was given on the basis of unbiased prediction of the genotypic merits of parents and their crosses. In addition, genotype × environment interaction effects and interaction heterosis due to G × E interaction were discussed briefly.

  5. Spectroscopic study of honey from Apis mellifera from different regions in Mexico

    NASA Astrophysics Data System (ADS)

    Frausto-Reyes, C.; Casillas-Peñuelas, R.; Quintanar-Stephano, JL; Macías-López, E.; Bujdud-Pérez, JM; Medina-Ramírez, I.

    2017-05-01

    The objective of this study was to analyze Mexican honey from Apis mellifera by Raman and UV-Vis-NIR spectroscopic techniques, using representative samples of different botanical origins (unifloral and multifloral) and from diverse climates. Raman spectroscopy together with principal component analysis showed potential for determining the floral origin of honey independently of the sampling region. To this end, the effect of heating the honey was analyzed; heating greatly reduced the fluorescence background in the Raman spectra, which allowed the fructose and glucose peaks to be visualized. Using UV-Vis-NIR spectroscopy, a characteristic transmittance spectrum profile was obtained for each honey type. In addition, to characterize color objectively, CIE Yxy and CIE L*a*b* colorimetric records were obtained for each honey type. Applying principal component analysis and correlating the components with the chromaticity coordinates allowed the honey samples to be classified in a single plot by cutoff wavelength, maximum transmittance, tone and lightness. The results show that it is possible to obtain a spectroscopic record of honeys with specific characteristics by reducing the effects of fluorescence.

  6. Evaluation of a Voluntary Worksite Weight Loss Program on Metabolic Syndrome.

    PubMed

    Earnest, Conrad P; Church, Timothy S

    2015-11-01

    Health care costs increase with the presence of metabolic syndrome and present a significant burden to companies throughout the world. Identifying effective behavioral programs within the workplace can reduce health care costs. We examined the effect of a voluntary worksite program on weight loss and metabolic syndrome. Participants (N = 3880, from 93 companies) volunteered within their workplaces to participate in a 10-week weight loss program (Naturally Slim) focused on self-monitoring, eating behaviors, understanding hunger signals, reducing refined carbohydrate and sugar intake, and increasing protein intake to 25%-30%. Primary outcomes included weight loss and metabolic syndrome prevalence. Secondary analyses examined the individual components of metabolic syndrome and a categorical analysis within each World Health Organization body mass index category. Overall, women and men lost 9.4 (-4.8%) and 13.2 pounds (-5.8%), respectively. Each metabolic risk factor for both genders had a significant improvement but men exhibited the largest relative improvement for each risk factor. At baseline, 43% of women and 52% of men presented with metabolic syndrome, which was reduced to 30% in women and 26% in men (P < 0.001 for each) at the conclusion of the program. Secondary analysis demonstrated that individuals with greater baseline levels of metabolic dysfunction had larger metabolic improvements, similar benefits to risk factors across baseline body mass index categories, and the greater the weight loss, the greater the metabolic benefit. Our results demonstrate that a worksite program targeting core behavioral skills associated with weight loss is an effective strategy to reduce weight and improve the components of metabolic syndrome amongst at-risk employees.

  7. [The influence of oil heat treatment on wood decay resistance by Fourier infrared spectrum analysis].

    PubMed

    Wang, Ya-Mei; Ma, Shu-Ling; Feng, Li-Qun

    2014-03-01

    Wood preservative treatment can mitigate defects of plantation wood, such as its susceptibility to decay and insect attack. Among preservation methods, heat treatment is environmentally friendly and pollution-free, and it can also improve the decay resistance and dimensional stability of wood. In this study, poplar and Mongolian Scots pine were heat-treated using soybean oil as the heat-conducting medium, and the treated wood was studied for indoor decay resistance; the wood chemical components before and after treatment, the effect of heat treatment on decay resistance, and the main mechanisms of action were analyzed by Fourier transform infrared spectroscopy. Results showed that the mass loss rate of poplar fell from 19.37% to 5% and that of Mongolian Scots pine fell from 8.23% to 3.15%, so oil heat treatment can effectively improve decay resistance. Infrared spectrum analysis shows that the heat treatment largely reduced the wood's hydrophilic groups, such as hydroxyl groups, decreasing its absorbing capacity and reducing the moisture needed by wood-rotting fungi; during the heat treatment, wood chemical components such as cellulose and hemicellulose were degraded, reducing the nutrient sources needed for the growth of wood-rotting fungi. Wood decay fungi can grow in and degrade wood because wood provides favourable living conditions, such as nutrients, water and oxygen. The cellulose and hemicellulose in wood are the main nutrition sources of wood decay fungi, so oil heat treatment improves the decay resistance of wood by reducing these nutrient sources.

  8. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
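
    A minimal sketch of the fixed-eigenvector variant described above: a temporal basis is derived once from analytically modelled thermal responses and then applied to measured image sequences, so no per-dataset eigendecomposition is needed. The centring and normalisation choices below are assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def model_eigenvectors(model_responses, n_components=5):
        """Derive a fixed temporal basis from analytically modelled thermal
        responses (rows = time samples, columns = modelled cases)."""
        X = model_responses - model_responses.mean(axis=0, keepdims=True)
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, :n_components]             # time x components

    def project_sequence(thermal_cube, basis):
        """Project a measured thermal sequence (time x rows x cols) onto the
        fixed basis; each returned slice is one principal-component image."""
        t, r, c = thermal_cube.shape
        flat = thermal_cube.reshape(t, r * c)
        flat = flat - flat.mean(axis=0, keepdims=True)
        scores = basis.T @ flat                # components x pixels
        return scores.reshape(-1, r, c)
    ```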

  9. High efficiency direct detection of ions from resonance ionization of sputtered atoms

    DOEpatents

    Gruen, Dieter M.; Pellin, Michael J.; Young, Charles E.

    1986-01-01

    A method and apparatus are provided for trace and other quantitative analysis with high efficiency of a component in a sample, with the analysis involving the removal by ion or other bombardment of a small quantity of ion and neutral atom groups from the sample, the conversion of selected neutral atom groups to photoions by laser initiated resonance ionization spectroscopy, the selective deflection of the photoions for separation from original ion group emanating from the sample, and the detection of the photoions as a measure of the quantity of the component. In some embodiments, the original ion group is accelerated prior to the RIS step for separation purposes. Noise and other interference are reduced by shielding the detector from primary and secondary ions and deflecting the photoions sufficiently to avoid the primary and secondary ions.

  10. High efficiency direct detection of ions from resonance ionization of sputtered atoms

    DOEpatents

    Gruen, D.M.; Pellin, M.J.; Young, C.E.

    1985-01-16

    A method and apparatus are provided for trace and other quantitative analysis with high efficiency of a component in a sample, with the analysis involving the removal by ion or other bombardment of a small quantity of ion and neutral atom groups from the sample, the conversion of selected neutral atom groups to photoions by laser initiated resonance ionization spectroscopy, the selective deflection of the photoions for separation from original ion group emanating from the sample, and the detection of the photoions as a measure of the quantity of the component. In some embodiments, the original ion group is accelerated prior to the RIS step for separation purposes. Noise and other interference are reduced by shielding the detector from primary and secondary ions and deflecting the photoions sufficiently to avoid the primary and secondary ions.

  11. Coupled parametric design of flow control and duct shape

    NASA Technical Reports Server (NTRS)

    Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)

    2009-01-01

    A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.

  12. Separation of the long-term thermal effects from the strain measurements in the Geodynamics Laboratory of Lanzarote

    NASA Astrophysics Data System (ADS)

    Venedikov, A. P.; Arnoso, J.; Cai, W.; Vieira, R.; Tan, S.; Velez, E. J.

    2006-01-01

    A 12-year series (1992-2004) of strain measurements recorded in the Geodynamics Laboratory of Lanzarote is investigated. Through a tidal analysis, the non-tidal component of the data is separated in order to use it for studying signals useful for monitoring the volcanic activity on the island. This component contains various perturbations of meteorological and oceanic origin, which should be eliminated in order to make the useful signals discernible. The paper is devoted to the estimation and elimination of the effect of the air temperature inside the station, which strongly dominates the strainmeter data. For solving this task, a regression model is applied, which includes a linear relation with the temperature and time-dependent polynomials. The regression also includes a set of parameters that enter nonlinearly; these are estimated by a properly applied Bayesian approach. The results obtained are: a regression coefficient of the strain data on temperature equal to (-367.4 ± 0.8) × 10⁻⁹ °C⁻¹, the curve of the non-tidal component corrected for the effect of the temperature, and a polynomial approximation of the corrected curve. The technique used here can be helpful to investigators in the domain of earthquake and volcano monitoring. However, the fundamental and extremely difficult problem of what kind of signals in the corrected curves might be useful in this field is not considered here.
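
    The regression idea can be illustrated with an ordinary least-squares stand-in for the Bayesian estimation used in the paper: fit the strain series with a linear temperature term plus a low-order polynomial drift in time, and subtract only the temperature part. The polynomial order and data layout below are assumptions.

    ```python
    import numpy as np

    def remove_temperature_effect(time, strain, temperature, poly_order=3):
        """Fit strain = a*T + polynomial drift in time (plain least squares, not
        the paper's Bayesian estimation) and return the temperature-corrected
        residual curve together with the regression coefficient a."""
        time = np.asarray(time, dtype=float)
        columns = [temperature] + [time ** k for k in range(poly_order + 1)]
        A = np.column_stack(columns)
        coeffs, *_ = np.linalg.lstsq(A, strain, rcond=None)
        corrected = strain - coeffs[0] * temperature   # remove only the thermal part
        return corrected, coeffs[0]
    ```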

  13. A modular method for evaluating the performance of picture archiving and communication systems.

    PubMed

    Sanders, W H; Kant, L A; Kudrimoti, A

    1993-08-01

    Modeling can be used to predict the performance of picture archiving and communication system (PACS) configurations under various load conditions at an early design stage. This is important because choices made early in the design of a system can have a significant impact on the performance of the resulting implementation. Because PACS consist of many types of components, it is important to do such evaluations in a modular manner, so that alternative configurations and designs can be easily investigated. Stochastic activity networks (SANs) and reduced base model construction methods can aid in doing this. SANs are a model type particularly suited to the evaluation of systems in which several activities may be in progress concurrently, and each activity may affect the others through the results of its completion. Together with SANs, reduced base model construction methods provide a means to build highly modular models, in which models of particular components can be easily reused. In this article, we investigate the use of SANs and reduced base model construction techniques in evaluating PACS. Construction and solution of the models is done using UltraSAN, a graphic-oriented software tool for model specification, analysis, and simulation. The method is illustrated via the evaluation of a realistically sized PACS for a typical United States hospital of 300 to 400 beds, and the derivation of system response times and component utilizations.

  14. Facile hydrothermal preparation of titanium dioxide decorated reduced graphene oxide nanocomposite

    PubMed Central

    Chang, Betty Yea Sze; Huang, Nay Ming; An’amt, Mohd Nor; Marlinda, Abdul Rahman; Norazriena, Yusoff; Muhamad, Muhamad Rasat; Harrison, Ian; Lim, Hong Ngee; Chia, Chin Hua

    2012-01-01

    A simple single-stage approach, based on the hydrothermal technique, has been introduced to synthesize reduced graphene oxide/titanium dioxide nanocomposites. The titanium dioxide nanoparticles are formed at the same time as the graphene oxide is reduced to graphene. The triethanolamine used in the process has two roles. It acts as a reducing agent for the graphene oxide as well as a capping agent, allowing the formation of titanium dioxide nanoparticles with a narrow size distribution (~20 nm). Transmission electron micrographs show that the nanoparticles are uniformly distributed on the reduced graphene oxide nanosheet. Thermogravimetric analysis shows the nanocomposites have an enhanced thermal stability over the original components. The potential applications for this technology were demonstrated by the use of a reduced graphene oxide/titanium dioxide nanocomposite-modified glassy carbon electrode, which enhanced the electrochemical performance compared to a conventional glassy carbon electrode when interacting with mercury(II) ions in potassium chloride electrolyte. PMID:22848166

  15. Automated urinalysis: first experiences and a comparison between the Iris iQ200 urine microscopy system, the Sysmex UF-100 flow cytometer and manual microscopic particle counting.

    PubMed

    Shayanfar, Noushin; Tobler, Ulrich; von Eckardstein, Arnold; Bestmann, Lukas

    2007-01-01

    Automated analysis of insoluble urine components can reduce the workload of conventional microscopic examination of urine sediment and is possibly helpful for standardization. We compared the diagnostic performance of two automated urine sediment analyzers and combined dipstick/automated urine analysis with that of the traditional dipstick/microscopy algorithm. A total of 332 specimens were collected and analyzed for insoluble urine components by microscopy and automated analyzers, namely the Iris iQ200 (Iris Diagnostics) and the UF-100 flow cytometer (Sysmex). The coefficients of variation for day-to-day quality control of the iQ200 and UF-100 analyzers were 6.5% and 5.5%, respectively, for red blood cells. We reached accuracy ranging from 68% (bacteria) to 97% (yeast) for the iQ200 and from 42% (bacteria) to 93% (yeast) for the UF-100. The combination of dipstick and automated urine sediment analysis increased the sensitivity of screening to approximately 98%. We conclude that automated urine sediment analysis is sufficiently precise and improves the workflow in a routine laboratory. In addition, it allows sediment analysis of all urine samples and thereby helps to detect pathological samples that would have been missed in the conventional two-step procedure according to the European guidelines. Although it is not a substitute for microscopic sediment examination, it can, when combined with dipstick testing, reduce the number of specimens submitted to microscopy. Visual microscopy is still required for some samples, namely, dysmorphic erythrocytes, yeasts, Trichomonas, oval fat bodies, differentiation of casts and certain crystals.

  16. Prediction of genomic breeding values for dairy traits in Italian Brown and Simmental bulls using a principal component approach.

    PubMed

    Pintus, M A; Gaspa, G; Nicolazzi, E L; Vicario, D; Rossoni, A; Ajmone-Marsan, P; Nardone, A; Dimauro, C; Macciotta, N P P

    2012-06-01

    The large number of markers available compared with the number of phenotypes represents one of the main issues in genomic selection. In this work, principal component analysis was used to reduce the number of predictors for calculating genomic breeding values (GEBV). Bulls of 2 cattle breeds farmed in Italy (634 Brown and 469 Simmental) were genotyped with the 54K Illumina beadchip (Illumina Inc., San Diego, CA). After data editing, 37,254 and 40,179 single nucleotide polymorphisms (SNP) were retained for Brown and Simmental, respectively. Principal component analysis carried out on the SNP genotype matrix extracted 2,257 and 3,596 new variables in the 2 breeds, respectively. Bulls were sorted by birth year to create reference and prediction populations. The effect of principal components on deregressed proofs in reference animals was estimated with a BLUP model. Results were compared with those obtained by using SNP genotypes as predictors with either the BLUP or Bayes_A method. Traits considered were milk, fat, and protein yields, fat and protein percentages, and somatic cell score. The GEBV were obtained for the prediction population by blending direct genomic predictions and pedigree indexes. No substantial differences were observed in the squared correlations between GEBV and EBV in prediction animals among the 3 methods in the 2 breeds. The principal component analysis method allowed for a reduction of about 90% in the number of independent variables when predicting direct genomic values, with a substantial decrease in calculation time and without loss of accuracy. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
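
    A sketch of the dimension-reduction step, with ridge regression standing in for the BLUP model actually used: principal components are extracted from the reference SNP matrix, deregressed proofs are regressed on the scores, and prediction animals are projected onto the same axes. The 99% variance cut-off and the regularisation strength are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge

    def pc_genomic_prediction(geno_ref, pheno_ref, geno_new, var_explained=0.99):
        """geno_ref/geno_new: animals x SNP genotype matrices (numpy arrays);
        pheno_ref: deregressed proofs for the reference animals."""
        pca = PCA(n_components=var_explained, svd_solver="full")  # keep ~99% of variance
        scores_ref = pca.fit_transform(geno_ref)                  # animals x components
        model = Ridge(alpha=1.0).fit(scores_ref, pheno_ref)       # stand-in for BLUP
        scores_new = pca.transform(geno_new)
        return model.predict(scores_new)                          # direct genomic predictions
    ```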

  17. Nonlinear transient chirp signal modeling of the aortic and pulmonary components of the second heart sound.

    PubMed

    Xu, J; Durand, L G; Pibarot, P

    2000-10-01

    This paper describes a new approach based on the time-frequency representation of transient nonlinear chirp signals for modeling the aortic (A2) and the pulmonary (P2) components of the second heart sound (S2). It is demonstrated that each component is a narrow-band signal with decreasing instantaneous frequency defined by its instantaneous amplitude and its instantaneous phase. Each component is also a polynomial phase signal, the instantaneous phase of which can be accurately represented by a polynomial of order thirty. A dechirping approach is used to obtain the instantaneous amplitude of each component while reducing the effect of the background noise. The analysis-synthesis procedure is applied to 32 isolated A2 and 32 isolated P2 components recorded in four pigs with pulmonary hypertension. The mean ± standard deviation of the normalized root-mean-squared error (NRMSE) and the correlation coefficient (rho) between the original and the synthesized signal components were: NRMSE = 2.1 ± 0.3% and rho = 0.97 ± 0.02 for A2, and NRMSE = 2.52 ± 0.5% and rho = 0.96 ± 0.02 for P2. These results confirm that each component can be modeled as a mono-component nonlinear chirp signal of short duration with an energy distribution concentrated along its decreasing instantaneous frequency.
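
    A compact sketch of the analysis-synthesis idea, assuming SciPy is available: the analytic signal gives the instantaneous amplitude and unwrapped phase of an isolated component, the phase is fitted with a polynomial (order 30 as in the paper, although such a high order can be numerically delicate), and the component is resynthesised from the fit. The paper's dechirping step for noise reduction is omitted here.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def model_component(x, phase_order=30):
        """Model one isolated S2 component as a nonlinear chirp and return the
        synthesised signal and its normalised RMS error against the original."""
        analytic = hilbert(x)
        amp = np.abs(analytic)                      # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))       # instantaneous phase
        t = np.linspace(0.0, 1.0, len(x))           # normalised time for conditioning
        poly = np.polynomial.Polynomial.fit(t, phase, phase_order)
        synth = amp * np.cos(poly(t))
        nrmse = np.linalg.norm(x - synth) / np.linalg.norm(x)
        return synth, nrmse
    ```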

  18. An evaluation of multiple-schedule variations to reduce high-rate requests in the picture exchange communication system.

    PubMed

    Landa, Robin; Hanley, Gregory P

    2016-06-01

    Using procedures similar to those of Tiger, Hanley, and Heal (2006), we compared two multiple-schedule variations (S+/S- and S+ only) to treat high-rate requests for edible items in the Picture Exchange Communication System (PECS). Two individuals with autism participated, after they showed persistent requests for edible items after PECS training. Stimulus control was achieved only with the multiple schedule that involved presentation of a discriminative stimulus during reinforcement components and its removal during extinction components (S+ only). Discriminated requests were maintained for the 1 participant who experienced schedule thinning. © 2016 Society for the Experimental Analysis of Behavior.

  19. Ceramic applications in turbine engines. [for improved component performance and reduced fuel usage

    NASA Technical Reports Server (NTRS)

    Hudson, M. S.; Janovicz, M. A.; Rockwood, F. A.

    1980-01-01

    Ceramic material characterization and the testing of ceramic nozzle vanes, turbine tip shrouds, and regenerator disks at 36 °C above the baseline engine turbine inlet temperature (TIT), together with the associated design, analysis, fabrication, and development activities, are described. The design of ceramic components for the next-generation engine, to be operated at 2070 °F, was completed. Coupons simulating the critical 2070 °F rotor blade were hot-spin tested to failure with sufficient margin to qualify sintered silicon nitride and sintered silicon carbide, validating both the attachment design and the finite element strength analysis. Progress made in increasing strength, minimizing variability, and developing nondestructive evaluation techniques is reported.

  20. Use of sEMG in identification of low level muscle activities: features based on ICA and fractal dimension.

    PubMed

    Naik, Ganesh R; Kumar, Dinesh K; Arjunan, Sridhar

    2009-01-01

    This paper experimentally verified and compared sEMG (surface electromyogram) features based on ICA (independent component analysis) and fractal dimension (FD) for the identification of low-level forearm muscle activities. The fractal dimension was used as a feature as reported in the literature. The normalized feature values were used as training and testing vectors for an artificial neural network (ANN) in order to reduce inter-experimental variations. The identification accuracy using the FD of four-channel sEMG was 58%, and it increased to 96% when the signals were separated into their independent components using ICA.

  1. Modality-Spanning Deficits in Attention-Deficit/Hyperactivity Disorder in Functional Networks, Gray Matter, and White Matter

    PubMed Central

    Kessler, Daniel; Angstadt, Michael; Welsh, Robert C.

    2014-01-01

    Previous neuroimaging investigations in attention-deficit/hyperactivity disorder (ADHD) have separately identified distributed structural and functional deficits, but interconnections between these deficits have not been explored. To unite these modalities in a common model, we used joint independent component analysis, a multivariate, multimodal method that identifies cohesive components that span modalities. Based on recent network models of ADHD, we hypothesized that altered relationships between large-scale networks, in particular, default mode network (DMN) and task-positive networks (TPNs), would co-occur with structural abnormalities in cognitive regulation regions. For 756 human participants in the ADHD-200 sample, we produced gray and white matter volume maps with voxel-based morphometry, as well as whole-brain functional connectomes. Joint independent component analysis was performed, and the resulting transmodal components were tested for differential expression in ADHD versus healthy controls. Four components showed greater expression in ADHD. Consistent with our a priori hypothesis, we observed reduced DMN-TPN segregation co-occurring with structural abnormalities in dorsolateral prefrontal cortex and anterior cingulate cortex, two important cognitive control regions. We also observed altered intranetwork connectivity in DMN, dorsal attention network, and visual network, with co-occurring distributed structural deficits. There was strong evidence of spatial correspondence across modalities: For all four components, the impact of the respective component on gray matter at a region strongly predicted the impact on functional connectivity at that region. Overall, our results demonstrate that ADHD involves multiple, cohesive modality spanning deficits, each one of which exhibits strong spatial overlap in the pattern of structural and functional alterations. PMID:25505309

  2. COMPUTATIONAL ANALYSIS OF SWALLOWING MECHANICS UNDERLYING IMPAIRED EPIGLOTTIC INVERSION

    PubMed Central

    Pearson, William G.; Taylor, Brandon K; Blair, Julie; Martin-Harris, Bonnie

    2015-01-01

    Objective: Determine swallowing mechanics associated with the first and second epiglottic movements, that is, movement to horizontal and full inversion respectively, in order to provide a clinical interpretation of impaired epiglottic function. Study Design: Retrospective cohort study. Methods: A heterogeneous cohort of patients with swallowing difficulties was identified (n=92). Two speech-language pathologists reviewed 5 ml thin and 5 ml pudding videofluoroscopic swallow studies per subject, and assigned epiglottic component scores of 0=complete inversion, 1=partial inversion, and 2=no inversion, forming three groups of videos for comparison. Coordinates mapping minimum and maximum excursion of the hyoid, pharynx, larynx, and tongue base during pharyngeal swallowing were recorded using ImageJ software. A canonical variate analysis with post-hoc discriminant function analysis of coordinates was performed using MorphoJ software to evaluate mechanical differences between groups. Eigenvectors characterizing swallowing mechanics underlying impaired epiglottic movements were visualized. Results: Nineteen of 184 video-swallows were rejected for poor quality (n=165). A Goodman-Kruskal index of predictive association showed no correlation between epiglottic component scores and etiologies of dysphagia (λ=.04). A two-way analysis of variance by epiglottic component scores showed no significant interaction effects between sex and age (f=1.4, p=.25). Discriminant function analysis demonstrated statistically significant mechanical differences between epiglottic component scores: 1&2, representing the first epiglottic movement (Mahalanobis distance=1.13, p=.0007); and 0&1, representing the second epiglottic movement (Mahalanobis distance=0.83, p=.003). Eigenvectors indicate that laryngeal elevation and tongue base retraction underlie both epiglottic movements. Conclusion: Results suggest that reduced tongue base retraction and laryngeal elevation underlie impaired first and second epiglottic movements. The styloglossus, hyoglossus and long pharyngeal muscles are implicated as targets for rehabilitation in dysphagic patients with impaired epiglottic inversion.

  3. Ambient Ultrafine Particle Ingestion Alters Gut Microbiota in Association with Increased Atherogenic Lipid Metabolites

    PubMed Central

    Li, Rongsong; Yang, Jieping; Saffari, Arian; Jacobs, Jonathan; Baek, Kyung In; Hough, Greg; Larauche, Muriel H.; Ma, Jianguo; Jen, Nelson; Moussaoui, Nabila; Zhou, Bill; Kang, Hanul; Reddy, Srinivasa; Henning, Susanne M.; Campen, Matthew J.; Pisegna, Joseph; Li, Zhaoping; Fogelman, Alan M.; Sioutas, Constantinos; Navab, Mohamad; Hsiai, Tzung K.

    2017-01-01

    Ambient particulate matter (PM) exposure is associated with atherosclerosis and inflammatory bowel disease. Ultrafine particles (UFP, dp < 0.1–0.2 μm) are redox active components of PM. We hypothesized that orally ingested UFP promoted atherogenic lipid metabolites in both the intestine and plasma via altered gut microbiota composition. Low density lipoprotein receptor-null (Ldlr−/−) mice on a high-fat diet were orally administered with vehicle control or UFP (40 μg/mouse/day) for 3 days a week. After 10 weeks, UFP-ingested mice developed macrophage and neutrophil infiltration in the intestinal villi, accompanied by elevated cholesterol but reduced coprostanol levels in the cecum, as well as elevated atherogenic lysophosphatidylcholine (LPC 18:1) and lysophosphatidic acids (LPAs) in the intestine and plasma. At the phylum level, Principal Component Analysis revealed significant segregation of microbiota compositions, which was validated by beta diversity analysis. UFP-exposed mice developed increased abundance of Verrucomicrobia but decreased Actinobacteria, Cyanobacteria, and Firmicutes, as well as reduced microbiome diversity. Spearman’s analysis negatively correlated Actinobacteria with cecal cholesterol and intestinal and plasma LPC 18:1, and Firmicutes and Cyanobacteria with plasma LPC 18:1. Thus, ultrafine particle ingestion alters gut microbiota composition, accompanied by increased atherogenic lipid metabolites. These findings implicate the gut-vascular axis in an atherosclerosis model.

  4. Evaluating the efficacy of fully automated approaches for the selection of eye blink ICA components

    PubMed Central

    Pontifex, Matthew B.; Miskovic, Vladimir; Laszlo, Sarah

    2017-01-01

    Independent component analysis (ICA) offers a powerful approach for the isolation and removal of eye blink artifacts from EEG signals. Manual identification of the eye blink ICA component by inspection of scalp map projections, however, is prone to error, particularly when non-artifactual components exhibit topographic distributions similar to the blink. The aim of the present investigation was to determine the extent to which automated approaches for selecting eye blink related ICA components could be utilized to replace manual selection. We evaluated popular blink selection methods relying on spatial features [EyeCatch()], combined stereotypical spatial and temporal features [ADJUST()], and a novel method relying on time-series features alone [icablinkmetrics()] using both simulated and real EEG data. The results of this investigation suggest that all three methods of automatic component selection are able to accurately identify eye blink related ICA components at or above the level of trained human observers. However, icablinkmetrics(), in particular, appears to provide an effective means of automating ICA artifact rejection while at the same time eliminating human errors inevitable during manual component selection and false positive component identifications common in other automated approaches. Based upon these findings, best practices for 1) identifying artifactual components via automated means and 2) reducing the accidental removal of signal-related ICA components are discussed. PMID:28191627
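
    In the spirit of the time-series criterion (not the actual icablinkmetrics() implementation, which is a MATLAB/EEGLAB tool), a generic sketch: flag ICA components whose time courses correlate strongly with a blink reference such as a frontal channel or an averaged blink template. The 0.8 threshold is an assumption.

    ```python
    import numpy as np

    def flag_blink_components(sources, blink_reference, threshold=0.8):
        """sources: array of ICA component time courses (components x samples);
        blink_reference: 1-D series of the same length, e.g. a frontal channel.
        Returns a boolean mask of components flagged as blink-related."""
        flags = []
        for comp in sources:
            r = np.corrcoef(comp, blink_reference)[0, 1]
            flags.append(abs(r) >= threshold)
        return np.array(flags)
    ```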

  5. A cyclostationary multi-domain analysis of fluid instability in Kaplan turbines

    NASA Astrophysics Data System (ADS)

    Pennacchi, P.; Borghesani, P.; Chatterton, S.

    2015-08-01

    Hydraulic instabilities represent a critical problem for Francis and Kaplan turbines, reducing their useful life due to increase of fatigue on the components and cavitation phenomena. Whereas an exhaustive list of publications on computational fluid-dynamic models of hydraulic instability is available, the possibility of applying diagnostic techniques based on vibration measurements has not been investigated sufficiently, also because the appropriate sensors seldom equip hydro turbine units. The aim of this study is to fill this knowledge gap and to exploit fully, for this purpose, the potentiality of combining cyclostationary analysis tools, able to describe complex dynamics such as those of fluid-structure interactions, with order tracking procedures, allowing domain transformations and consequently the separation of synchronous and non-synchronous components. This paper will focus on experimental data obtained on a full-scale Kaplan turbine unit, operating in a real power plant, tackling the issues of adapting such diagnostic tools for the analysis of hydraulic instabilities and proposing techniques and methodologies for a highly automated condition monitoring system.

  6. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    NASA Astrophysics Data System (ADS)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for the noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
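
    Assuming that "Zero Component Analysis" here corresponds to the usual ZCA whitening transform applied to the training patches, a minimal NumPy sketch of that preprocessing step follows; the epsilon regulariser is an assumption.

    ```python
    import numpy as np

    def zca_whiten(patches, eps=1e-5):
        """ZCA whitening of image patches (rows = patches, columns = pixels):
        decorrelates the pixels while keeping the result close to the original
        image space, which is the usual way ZCA preconditions training data."""
        X = patches - patches.mean(axis=0, keepdims=True)
        cov = X.T @ X / X.shape[0]
        eigvals, eigvecs = np.linalg.eigh(cov)
        W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
        return X @ W
    ```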

  7. Visible micro-Raman spectroscopy of single human mammary epithelial cells exposed to x-ray radiation.

    PubMed

    Delfino, Ines; Perna, Giuseppe; Lasalvia, Maria; Capozzi, Vito; Manti, Lorenzo; Camerlingo, Carlo; Lepore, Maria

    2015-03-01

    A micro-Raman spectroscopy investigation has been performed in vitro on single human mammary epithelial cells after irradiation by graded x-ray doses. Analysis by principal component analysis (PCA) and interval-PCA (i-PCA) methods has allowed us to point out the small differences in the Raman spectra induced by irradiation. This experimental approach has enabled us to delineate radiation-induced changes in protein, nucleic acid, lipid, and carbohydrate content. In particular, the dose dependence of PCA and i-PCA components has been analyzed. Our results have confirmed that micro-Raman spectroscopy coupled with properly chosen data analysis methods is a very sensitive technique to detect early molecular changes at the single-cell level following exposure to ionizing radiation. This would help in developing innovative approaches to monitor cancer radiotherapy outcomes so as to reduce the overall radiation dose and minimize damage to the surrounding healthy cells, both aspects being of great importance in the field of radiation therapy.

  8. Analysis and modification of theory for impact of seaplanes on water

    NASA Technical Reports Server (NTRS)

    Mayo, Wilbur L

    1945-01-01

    An analysis of available theory on seaplane impact and a proposed modification thereto are presented. In previous methods the overall momentum of the float and virtual mass has been assumed to remain constant during the impact but the present analysis shows that this assumption is rigorously correct only when the resultant velocity of the float is normal to the keel. The proposed modification chiefly involves consideration of the fact that forward velocity of the seaplane float causes momentum to be passed into the hydrodynamic downwash (an action that is the entire consideration in the case of the planing float) and consideration of the fact that, for an impact with trim, the rate of penetration is determined not only by the velocity component normal to the keel but also by the velocity component parallel to the keel, which tends to reduce the penetration. Experimental data for planing, oblique impact, and vertical drop are used to show that the accuracy of the proposed theory is good.

  9. Development of neural network techniques for finger-vein pattern classification

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Da; Liu, Chiung-Tsiung; Tsai, Yi-Jang; Liu, Jun-Ching; Chang, Ya-Wen

    2010-02-01

    A personal identification system using finger-vein patterns and neural network techniques is proposed in the present study. In the proposed system, the finger-vein patterns are captured by a device that transmits near-infrared light through the finger and records the patterns for signal analysis and classification. The biometric verification system combines feature extraction using principal component analysis with pattern classification using both a back-propagation network and an adaptive neuro-fuzzy inference system. Finger-vein features are first extracted by principal component analysis to reduce the computational burden and remove noise residing in the discarded dimensions. The features are then used in pattern classification and identification. To verify the effectiveness of the proposed adaptive neuro-fuzzy inference system for pattern classification, it is compared with the back-propagation network. The experimental results indicate that the proposed system using the adaptive neuro-fuzzy inference system achieves better performance than the back-propagation network for personal identification using finger-vein patterns.
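
    A minimal sketch of the PCA-then-classifier pipeline described above, using scikit-learn's MLPClassifier as a stand-in for the back-propagation network (the adaptive neuro-fuzzy system is not sketched); image sizes, label counts and the number of retained components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64 * 48))        # stand-in for flattened finger-vein images
y = rng.integers(0, 10, size=200)          # stand-in identity labels

model = make_pipeline(
    PCA(n_components=20),                  # reduce dimensionality and suppress noise
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```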

  10. An XMM-Newton Monitoring Campaign of the Accretion Flow in IGR J16318-4848

    NASA Technical Reports Server (NTRS)

    Mushotzky, Richard (Technical Monitor); Nicastro, Fabrizio

    2005-01-01

    This grant is associated with a successful XMM-Newton AO3 observational proposal to monitor the spectrum of the X-ray loud component of the recently discovered binary system IGR J16318-4848, to study the conditions of the accretion flows (and their evolution) in binary systems. All four EPIC-PN and MOS observations of the target have now been performed (the last of the four only 3 months ago). The four observations were logarithmically spaced so as to cover timescales from days to months. Data from all four pointings have now been reduced using the XMM-Newton data reduction pipeline, and spectra and lightcurves of the target have been extracted. For the first three observations we have already performed the observation-by-observation data analysis, fitting the single EPIC spectra with spectral models that include an intrinsic continuum power law (reduced at low energy by neutral absorption), a 6.4 keV iron emission line (detected in all spectra with varying intensity) and a Compton-reflection component, the latter also detected in all spectra although at lower significance. The analysis of the fourth and last observation of our monitoring campaign has just recently begun. Next, we will (1) stack together the four observations of IGR J16318-4848 to obtain high-accuracy estimates of the average spectral parameters of this object, and then (2) proceed to the time-evolving analysis of three spectral parameters: (a) Gamma, the slope of the intrinsic continuum; (b) W(FeK), the equivalent width of the 6.4 keV iron emission line; and (c) R, the relative amount of Compton reflection. Through this time-resolved spectroscopic analysis we hope to constrain (a) the physical state of the accreting matter and its relation to the X-ray output, and (b) the evolution of the accretion flow geometry, distribution and covering factor.

  11. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which would then, e.g., have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
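
    A minimal sketch of the ICA-plus-LDA idea described above: decompose multichannel EEG into independent components, derive a feature vector per component, classify components as artifact or brain signal, and reconstruct the cleaned recording without the artifact components. The per-component features and the labels here are simplified stand-ins, not the image-processing features or expert labels used in the paper; channel count and signal construction are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
true_sources = rng.laplace(size=(25, 5000))          # non-Gaussian stand-in sources
mixing = rng.normal(size=(25, 25))
eeg = mixing @ true_sources                          # 25 channels x 5000 samples (assumed)

ica = FastICA(n_components=25, random_state=0)
sources = ica.fit_transform(eeg.T).T                 # independent components, shape (25, 5000)

def ic_features(ic):
    # crude per-component descriptors; the paper uses image-based features instead
    return [kurtosis(ic), np.ptp(ic), np.std(np.diff(ic))]

X = np.array([ic_features(ic) for ic in sources])
y = np.arange(len(X)) % 2                            # stand-in expert labels (artifact / EEG)

clf = LinearDiscriminantAnalysis().fit(X, y)
keep = clf.predict(X) == 0                           # keep components classified as EEG
clean = ica.inverse_transform((sources * keep[:, None]).T).T   # zero out artifact ICs
print("channels x samples after cleaning:", clean.shape)
```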

  12. Bacterial community succession during pig manure and wheat straw aerobic composting covered with a semi-permeable membrane under slight positive pressure.

    PubMed

    Ma, Shuangshuang; Fang, Chen; Sun, Xiaoxi; Han, Lujia; He, Xueqin; Huang, Guangqun

    2018-07-01

    Bacteria play an important role in organic matter degradation and maturity during aerobic composting. This study analyzed composting with or without a membrane cover in laboratory-scale aerobic composting reactor systems. 16S rRNA gene analysis was used to study the bacterial community succession during composting. The richness of the bacterial community decreased and the diversity increased after covering with a semi-permeable membrane and applying a slight positive pressure. Principal components analysis based on operational taxonomic units could distinguish the main composting phases. Linear Discriminant Analysis Effect Size analysis indicated that covering with a semi-permeable membrane reduced the relative abundance of anaerobic Clostridiales and pathogenic Pseudomonas and increased the abundance of Cellvibrionales. In membrane-covered aerobic composting systems, the relative abundance of some bacteria could be affected, especially anaerobic bacteria. Covering could effectively promote fermentation, reduce emissions and ensure organic fertilizer quality. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Guided filter and principal component analysis hybrid method for hyperspectral pansharpening

    NASA Astrophysics Data System (ADS)

    Qu, Jiahui; Li, Yunsong; Dong, Wenqian

    2018-01-01

    Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution by integrating an HS image with a panchromatic (PAN) image. A guided filter (GF) and principal component analysis (PCA) hybrid HS pansharpening method is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image. The first principal component (PC1) channel concentrates the spatial information of the HS image. Unlike the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial information difference between the HS image and the enhanced PAN image. Then, in order to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial information difference is injected into the PC1 channel after multiplication by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms several other state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
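
    A minimal sketch of classical PCA-based pansharpening as a baseline for the method above: the first principal component of the (upsampled) hyperspectral cube is replaced by a histogram-matched panchromatic band, then the transform is inverted. The guided-filter detail extraction and the tradeoff parameter of the paper are not reproduced here; array sizes and random data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
hs = rng.random((64, 64, 30))                    # upsampled HS cube: rows x cols x bands (assumed)
pan = rng.random((64, 64))                       # panchromatic image at the same spatial size

X = hs.reshape(-1, hs.shape[2])                  # pixels x bands
pca = PCA(n_components=hs.shape[2])
pcs = pca.fit_transform(X)                       # principal-component channels

pc1 = pcs[:, 0]
pan_flat = pan.ravel()
# match the PAN band's mean and variance to PC1 before substitution
pan_matched = (pan_flat - pan_flat.mean()) / pan_flat.std() * pc1.std() + pc1.mean()
pcs[:, 0] = pan_matched

fused = pca.inverse_transform(pcs).reshape(hs.shape)
print("fused cube shape:", fused.shape)
```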

  14. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is on par with SPIHT. Furthermore, it is observed that SLCCA generally performs best on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  15. Brassinosteroid control of shoot gravitropism interacts with ethylene and depends on auxin signaling components.

    PubMed

    Vandenbussche, Filip; Callebert, Pieter; Zadnikova, Petra; Benkova, Eva; Van Der Straeten, Dominique

    2013-01-01

    To reach favorable conditions for photosynthesis, seedlings grow upward when deprived of light upon underground germination. To direct their growth, they use their negative gravitropic capacity. Negative gravitropism is under tight control of multiple hormones. By counting the number of standing plants in a population or by real-time monitoring of the reorientation of gravistimulated seedlings of Arabidopsis thaliana, we evaluated the negative gravitropism of ethylene- or brassinosteroid (BR)-treated plants. Transcriptomic data on AUX/IAA genes were gathered for meta-analysis, and subsequent mutant analysis was performed. Ethylene and BR have opposite effects in regulating shoot gravitropism. Lack of BR enhances gravitropic reorientation in 2-d-old seedlings, whereas ethylene does not. Lack of ethylene signaling results in enhanced BR sensitivity. Ethylene and BRs regulate overlapping sets of AUX/IAA genes. BRs regulate a wider range of auxin signaling components than ethylene. Upward growth in seedlings depends strongly on the internal hormonal balance. Endogenous ethylene stimulates, whereas BRs reduce, negative gravitropism in a manner that depends on the function of different, yet overlapping, sets of auxin signaling components.

  16. Analysis of a new PM motor design for a rotary dynamic blood Pump.

    PubMed

    Xu, L; Wang, F; Fu, M; Medvedev, A; Smith, W A; Golding, L A

    1997-01-01

    The permanent magnet (PM) motor for a rotary dynamic blood pump requires high power density, to match the motor size to the limited pump space, and high efficiency, to reduce the size and weight of the associated batteries. The motor also serves as a passive axial magnetic thrust bearing, reacting the hydraulic force, and provides a stabilizing force for the radial journal bearing. This article presents an analysis of a new PM motor for the blood pump application. High power density is achieved by using the Halbach magnetic array, and high efficiency is accomplished by optimizing the rotor magnet assembly and the stator slots/windings. While both radial and axial forces are greatly enhanced, pulsating components of the torque and force are also significantly reduced.

  17. Analysis of the performance of the drive system and diffuser of the Langley unitary plan wind tunnel

    NASA Technical Reports Server (NTRS)

    Hasel, L. E.; Stallings, R. L.

    1981-01-01

    A broad program was initiated at the Langley Research Center in 1973 to reduce the energy consumption of the laboratory. As a part of this program, the performance characteristics of the Unitary Plan Wind Tunnel were reexamined to determine whether potential methods for increasing the operating efficiencies of the tunnel could be formulated. The results of that study are summarized. The performance characteristics of the drive system components and the variable-geometry diffuser system of the tunnel are documented and analyzed. Several potential methods for reducing the energy requirements of the facility are discussed.

  18. Reduction of hexavalent chromium using Aerva lanata L.: elucidation of reduction mechanism and identification of active principles.

    PubMed

    Poonkuzhali, K; Rajeswari, V; Saravanakumar, T; Viswanathamurthi, P; Park, Seung-Moon; Govarthanan, M; Sathishkumar, P; Palvannan, T

    2014-05-15

    Treatment of effluent discharges to protect the environment from non-biodegradable metal contaminants using plant extracts is an efficient technique. The reduction of hexavalent chromium by the abundantly available weed Aerva lanata L. was investigated using a batch equilibrium technique. The variables studied were Cr(VI) concentration, Aerva lanata L. dose, contact time, pH, temperature and agitation speed. Cyclic voltammetry and ICP-MS analysis confirmed the reduction of Cr(VI) to Cr(III). Electrochemical analysis proved that the chromium was not degraded; only its valency was changed. ICP-MS analysis showed that 100 ng/L of hexavalent chromium was reduced to 97.01 ng/L of trivalent chromium. These results suggest that components present in Aerva lanata L. are responsible for the reduction of Cr(VI) to Cr(III). The prime components ferulic acid, kaempferol and β-carboline present in Aerva lanata L. may be responsible for the reduction of Cr(VI), as evident from LC-MS analysis. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Loneliness Literacy Scale: Development and Evaluation of an Early Indicator for Loneliness Prevention.

    PubMed

    Honigh-de Vlaming, Rianne; Haveman-Nies, Annemien; Bos-Oude Groeniger, Inge; Hooft van Huysduynen, Eveline J C; de Groot, Lisette C P G M; Van't Veer, Pieter

    2014-01-01

    To develop and evaluate the Loneliness Literacy Scale for the assessment of short-term outcomes of a loneliness prevention programme among Dutch elderly persons. Scale development was based on evidence from the literature and on the experiences of local stakeholders and representatives of the target group. The scale was pre-tested among 303 elderly persons aged 65 years and over. Principal component analysis and internal consistency analysis were used to affirm the scale structure, reduce the number of items and assess the reliability of the constructs. Linear regression analysis was conducted to evaluate the association between the literacy constructs and loneliness. The four constructs "motivation", "self-efficacy", "perceived social support" and "subjective norm" derived from principal component analysis captured 56% of the original variance. Cronbach's coefficient α was above 0.7 for each construct. The constructs "self-efficacy" and "perceived social support" were positively, and "subjective norm" negatively, associated with loneliness. To our knowledge, this is the first study to develop a short-term indicator for loneliness prevention. The indicator addresses the need to evaluate public health interventions closer to the intervention activities.

  20. Relationships between NIR spectra and sensory attributes of Thai commercial fish sauces.

    PubMed

    Ritthiruangdej, Pitiporn; Suwonsichon, Thongchai

    2007-07-01

    Twenty Thai commercial fish sauces were characterized by sensory descriptive analysis and near-infrared (NIR) spectroscopy. The main objectives were i) to investigate the relationships between sensory attributes and NIR spectra of the samples and ii) to characterize the sensory characteristics of fish sauces based on NIR data. A generic descriptive analysis with 12 trained panelists was used to characterize the sensory attributes. These attributes consisted of 15 descriptors: brown color, 5 aromatics (sweet, caramelized, fermented, fishy, and musty), 4 tastes (sweet, salty, bitter, and umami), 3 aftertastes (sweet, salty and bitter) and 2 flavors (caramelized and fishy). The results showed that the Thai fish sauce samples exhibited significant differences in all sensory attribute values (p < 0.05). NIR transflectance spectra were obtained from 1100 to 2500 nm. Prior to investigation of the relationships between sensory attributes and NIR spectra, principal component analysis (PCA) was applied to reduce the dimensionality of the spectral data from 622 wavelengths to two uncorrelated components (NIR1 and NIR2), which explained 92% and 7% of the total variation, respectively. NIR1 was highly correlated with the wavelength regions of 1100-1544, 1774-2062, 2092-2308, and 2358-2440 nm, while NIR2 was highly correlated with the wavelength regions of 1742-1764, 2066-2088, and 2312-2354 nm. Subsequently, the relationships among these two components and all sensory attributes were also investigated by PCA. The results showed that the first three principal components (PCs), named the fishy flavor component (PC1), the sweet component (PC2) and the bitterness component (PC3), respectively, explained a total of 66.86% of the variation. NIR1 was mainly correlated with the sensory attributes of fishy aromatic, fishy flavor and sweet aftertaste on PC1. In addition, PCA using only the factor loadings of NIR1 and NIR2 could be used to classify the samples into three groups with high, medium and low degrees of fishy aromatic, fishy flavor and sweet aftertaste.
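
    A minimal sketch of the dimension-reduction step described above: PCA compresses NIR spectra (many wavelengths per sample) to two scores whose explained-variance shares can then be related to sensory attributes. The spectra here are random stand-ins on an assumed 622-point wavelength grid.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
spectra = rng.normal(size=(20, 622))       # 20 fish-sauce samples x 622 wavelengths (assumed)

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)        # NIR1 and NIR2 scores per sample
print("explained variance ratio:", pca.explained_variance_ratio_)
print("scores shape:", scores.shape)       # (20, 2), ready to correlate with sensory data
```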

  1. Microbial biofilm studies of the Environmental Control and Life Support System water recovery test for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Obenhuber, D. C.; Huff, T. L.; Rodgers, E. B.

    1991-01-01

    Analysis of biofilm accumulation, studies of iodine disinfection of biofilm, and the potential for microbially influenced corrosion in the water recovery test (WRT) are presented. The analysis of WRT components showed the presence of biofilms and organic deposits in selected tubing. Water samples from the WRT contained sulfate-reducing and acid-producing organisms implicated in corrosion processes. Corrosion of an aluminum alloy was accelerated in the presence of these water samples, but stainless steel corrosion rates were not accelerated.

  2. Analysis of XMM-Newton Data from Extended Sources and the Diffuse X-Ray Background

    NASA Technical Reports Server (NTRS)

    Snowden, Steven

    2011-01-01

    Reduction of X-ray data from extended objects and the diffuse background is a complicated process that requires attention to the details of the instrumental response as well as an understanding of the multiple background components. We present methods and software that we have developed to reduce data from XMM-Newton EPIC imaging observations for both the MOS and PN instruments. The software has now been included in the Science Analysis System (SAS) package available through the XMM-Newton Science Operations Center (SOC).

  3. DOSY Analysis of Micromolar Analytes: Resolving Dilute Mixtures by SABRE Hyperpolarization.

    PubMed

    Reile, Indrek; Aspers, Ruud L E G; Tyburn, Jean-Max; Kempf, James G; Feiters, Martin C; Rutjes, Floris P J T; Tessari, Marco

    2017-07-24

    DOSY is an NMR spectroscopy technique that resolves resonances according to the analytes' diffusion coefficients. It has found use in correlating NMR signals and estimating the number of components in mixtures. Applications of DOSY in dilute mixtures are, however, held back by excessively long measurement times. We demonstrate herein, how the enhanced NMR sensitivity provided by SABRE hyperpolarization allows DOSY analysis of low-micromolar mixtures, thus reducing the concentration requirements by at least 100-fold. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Incorporating resource protection constraints in an analysis of landscape fuel-treatment effectiveness in the northern Sierra Nevada, CA, USA

    Treesearch

    Christopher B. Dow; Brandon M. Collins; Scott L. Stephens

    2016-01-01

    Finding novel ways to plan and implement landscape-level forest treatments that protect sensitive wildlife and other key ecosystem components, while also reducing the risk of large-scale, high-severity fires, can prove to be difficult. We examined alternative approaches to landscape-scale fuel-treatment design for the same landscape. These approaches included two...

  5. Comparative Proteomic Analysis of Normal and Collagen IX Null Mouse Cartilage Reveals Altered Extracellular Matrix Composition and Novel Components of the Collagen IX Interactome*

    PubMed Central

    Brachvogel, Bent; Zaucke, Frank; Dave, Keyur; Norris, Emma L.; Stermann, Jacek; Dayakli, Münire; Koch, Manuel; Gorman, Jeffrey J.; Bateman, John F.; Wilson, Richard

    2013-01-01

    The cartilage extracellular matrix is essential for endochondral bone development and joint function. In addition to the major aggrecan/collagen II framework, the interacting complex of collagen IX, matrilin-3, and cartilage oligomeric matrix protein (COMP) is essential for cartilage matrix stability, as mutations in Col9a1, Col9a2, Col9a3, Comp, and Matn3 genes cause multiple epiphyseal dysplasia, in which patients develop early onset osteoarthritis. In mice, collagen IX ablation results in severely disturbed growth plate organization, hypocellular regions, and abnormal chondrocyte shape. This abnormal differentiation is likely to involve altered cell-matrix interactions but the mechanism is not known. To investigate the molecular basis of the collagen IX null phenotype we analyzed global differences in protein abundance between wild-type and knock-out femoral head cartilage by capillary HPLC tandem mass spectrometry. We identified 297 proteins in 3-day cartilage and 397 proteins in 21-day cartilage. Components that were differentially abundant between wild-type and collagen IX-deficient cartilage included 15 extracellular matrix proteins. Collagen IX ablation was associated with dramatically reduced COMP and matrilin-3, consistent with known interactions. Matrilin-1, matrilin-4, epiphycan, and thrombospondin-4 levels were reduced in collagen IX null cartilage, providing the first in vivo evidence for these proteins belonging to the collagen IX interactome. Thrombospondin-4 expression was reduced at the mRNA level, whereas matrilin-4 was verified as a novel collagen IX-binding protein. Furthermore, changes in TGFβ-induced protein βig-h3 and fibronectin abundance were found in the collagen IX knock-out but not associated with COMP ablation, indicating specific involvement in the abnormal collagen IX null cartilage. In addition, the more widespread expression of collagen XII in the collagen IX-deficient cartilage suggests an attempted compensatory response to the absence of collagen IX. Our differential proteomic analysis of cartilage is a novel approach to identify candidate matrix protein interactions in vivo, underpinning further analysis of mutant cartilage lacking other matrix components or harboring disease-causing mutations. PMID:23530037

  6. Utilization of independent component analysis for accurate pathological ripple detection in intracranial EEG recordings recorded extra- and intra-operatively

    PubMed Central

    Shimamoto, Shoichi; Waldman, Zachary J.; Orosz, Iren; Song, Inkyung; Bragin, Anatol; Fried, Itzhak; Engel, Jerome; Staba, Richard; Sharan, Ashwini; Wu, Chengyuan; Sperling, Michael R.; Weiss, Shennan A.

    2018-01-01

    Objective: To develop and validate a detector that identifies ripple (80–200 Hz) events in intracranial EEG (iEEG) recordings in a referential montage and utilizes independent component analysis (ICA) to eliminate or reduce high-frequency artifact contamination, and to investigate the correspondence of detected ripples and the seizure onset zone (SOZ). Methods: iEEG recordings from 16 patients were first band-pass filtered (80–600 Hz) and Infomax ICA was next applied to derive the first independent component (IC1). IC1 was subsequently pruned, and an artifact index was derived to reduce the identification of high-frequency events introduced by the reference electrode signal. A Hilbert detector identified ripple events in the processed iEEG recordings using amplitude and duration criteria. The identified ripple events were further classified and characterized as true or false ripples on spikes, or ripples on oscillations, by applying a topographical analysis to their time-frequency plots, confirmed by visual inspection. Results: The signal-to-noise ratio was improved by pruning IC1. The precision of the detector for ripple events was 91.27 ± 4.3%, and the sensitivity of the detector was 79.4 ± 3.0% (N = 16 patients, 5842 ripple events). The sensitivity and precision of the detector were equivalent in iEEG recordings obtained during sleep or intra-operatively. Across all the patients, true ripple-on-spike rates, and also the rates of false ripples on spikes generated by filter ringing, classified the seizure onset zone (SOZ) with an area under the receiver operating curve (AUROC) of >76%. The magnitude and spectral content of true ripples on spikes generated in the SOZ were distinct from those of ripples generated in the NSOZ (p < .001). Conclusions: Utilizing ICA to analyze iEEG recordings in a referential montage provides many benefits to the study of high-frequency oscillations. The ripple rates and properties defined using this approach may accurately delineate the seizure onset zone. Significance: Strategies to improve the spatial resolution of intracranial EEG and reduce artifact can help improve the clinical utility of HFO biomarkers. PMID:29113719
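
    A minimal sketch of a Hilbert-envelope ripple detector of the kind described above: band-pass filter a single channel to the ripple band, take the analytic-signal amplitude, and keep segments that exceed an amplitude threshold for a minimum duration. The sampling rate, thresholds and test signal are assumptions, and the ICA-based artifact pruning of the paper is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2000.0                                          # sampling rate in Hz (assumed)
rng = np.random.default_rng(8)
t = np.arange(0, 10, 1 / fs)
signal = rng.normal(scale=5.0, size=t.size)          # background iEEG-like noise
burst = (t > 4.0) & (t < 4.05)
signal[burst] += 30.0 * np.sin(2 * np.pi * 120 * t[burst])   # injected 120 Hz ripple

b, a = butter(4, [80 / (fs / 2), 200 / (fs / 2)], btype="band")
ripple_band = filtfilt(b, a, signal)                 # isolate the 80-200 Hz band
envelope = np.abs(hilbert(ripple_band))              # analytic-signal amplitude

threshold = envelope.mean() + 3 * envelope.std()     # amplitude criterion (assumed)
above = envelope > threshold
min_len = int(0.006 * fs)                            # duration criterion: >= 6 ms (assumed)

events, start = [], None
for i, flag in enumerate(above):
    if flag and start is None:
        start = i
    elif not flag and start is not None:
        if i - start >= min_len:
            events.append((start / fs, i / fs))      # (onset, offset) in seconds
        start = None
print("detected ripple events (s):", events)
```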

  7. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  8. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  9. A Risk-Based Approach for Aerothermal/TPS Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak

    2007-01-01

    The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.

  10. Effect of cooking time on some nutrient and antinutrient components of bambaragroundnut seeds.

    PubMed

    Omoikhoje, Stanley Omoh; Aruna, Mohammed Bashiru; Bamgbose, Adeyemi Mustapha

    2009-02-01

    The proximate composition, gross energy, mineral composition, percentage sugar, oligosaccharides and antinutrient substances of bambaragroundnut seeds subjected to different cooking times were determined. The seeds were cooked for 30, 60, 90 and 120 min. Results of the proximate analysis showed that only the ether extract and ash were significantly (P < 0.05) reduced as the cooking time increased. In contrast, gross energy values significantly (P < 0.05) increased with increased cooking time. Amongst the mineral elements assayed, calcium, magnesium and iron were significantly (P < 0.05) increased, while phosphorus, potassium, sodium and copper were reduced (P > 0.05) with increased cooking time. Percentage sucrose and glucose of bambaragroundnut seeds were significantly (P < 0.05) lowest in the raw form, but increased progressively with increased cooking time. Raffinose and stachyose levels were reduced significantly (P < 0.05) by increased cooking time, with the least value in seeds cooked for 120 min. Trypsin inhibitor, hemagglutinin and tannin were completely eliminated in seeds cooked for 60 min or longer, but the phytin level was reduced significantly (P < 0.05) by cooking. For significant detoxification of antinutrient substances and for optimal bioavailability of the component nutrients of bambaragroundnut seeds, an optimum cooking time of 60 min at 100 degrees C is therefore recommended.

  11. Evaluation of Bufadienolides as the Main Antitumor Components in Cinobufacin Injection for Liver and Gastric Cancer Therapy.

    PubMed

    Wei, Xiaolu; Si, Nan; Zhang, Yuefei; Zhao, Haiyu; Yang, Jian; Wang, Hongjie; Wang, Lianmei; Han, Linyu; Bian, Baolin

    2017-01-01

    Cinobufacin injection, also known as huachansu, is a preparation of Cinobufacini made from Cinobufacin extract liquid. Although Cinobufacin injection has been shown to shrink liver and gastric tumors, improving patient survival and quality of life, the effective components in Cinobufacin remain elusive. In this study, we aimed to screen antitumor components from Cinobufacin injection in order to elucidate the most effective antitumor components for the treatment of liver and gastric cancers. High performance liquid chromatography (HPLC) and LC-MS/MS analysis were used to separate and determine the components in Cinobufacin injection. Inhibition rates of the various components of Cinobufacin injection on liver and gastric cancer cells were determined with the MTT assay; hepatocellular carcinoma and gastric cancer models were used to assess the antitumor effect of the compounds in vivo. The major constituents of Cinobufacin injection include peptides, nucleic acids, tryptamines and bufotalins. The MTT assay revealed that bufadienolides had the best antitumor activity, with peptides being the second most effective components. Bufadienolides showed significant inhibition of gastric and hepatocellular tumor growth in vivo. Bufadienolides are the most effective components in Cinobufacini injection for the treatment of liver and gastric cancers. This discovery can greatly facilitate further research into improving the therapeutic effects of Cinobufacin injection while reducing its adverse reactions.

  12. Convergence Acceleration and Documentation of CFD Codes for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Marquart, Jed E.

    2005-01-01

    The development and analysis of turbomachinery components for industrial and aerospace applications has been greatly enhanced in recent years through the advent of computational fluid dynamics (CFD) codes and techniques. Although the use of this technology has greatly reduced the time required to perform analysis and design, there still remains much room for improvement in the process. In particular, there is a steep learning curve associated with most turbomachinery CFD codes, and the computation times need to be reduced in order to facilitate their integration into standard work processes. Two turbomachinery codes have recently been developed by Dr. Daniel Dorney (MSFC) and Dr. Douglas Sondak (Boston University). These codes are entitled Aardvark (for 2-D and quasi-3-D simulations) and Phantom (for 3-D simulations). The codes utilize the General Equation Set (GES), structured grid methodology, and overset O- and H-grids. The codes have been used with success by Drs. Dorney and Sondak, as well as others within the turbomachinery community, to analyze engine components and other geometries. One of the primary objectives of this study was to establish a set of parametric input values that enhance convergence rates for steady-state simulations and reduce the runtime required for unsteady cases. The goal is to reduce the turnaround time for CFD simulations, thus permitting more design parametrics to be run within a given time period. In addition, other code enhancements to reduce runtimes were investigated and implemented. The other primary goal of the study was to develop enhanced user's manuals for Aardvark and Phantom. These manuals are intended to answer most questions for new users, as well as provide valuable detailed information for the experienced user. The existence of detailed user's manuals will enable new users to become proficient with the codes, as well as reduce the dependency of new users on the code authors. In order to achieve these objectives, the following tasks were accomplished: 1) parametric study of preconditioning parameters and other code inputs; 2) code modifications to reduce runtimes; 3) investigation of compiler options to reduce code runtime; and 4) development/enhancement of user's manuals for Aardvark and Phantom.

  13. Co-pyrolysis characteristics and kinetic analysis of organic food waste and plastic.

    PubMed

    Tang, Yijing; Huang, Qunxing; Sun, Kai; Chi, Yong; Yan, Jianhua

    2018-02-01

    In this work, typical organic food waste (soybean protein (SP)) and typical chlorine enriched plastic waste (polyvinyl chloride (PVC)) were chosen as principal MSW components and their interaction during co-pyrolysis was investigated. Results indicate that the interaction accelerated the reaction during co-pyrolysis. The activation energies needed were 2-13% lower for the decomposition of mixture compared with linear calculation while the maximum reaction rates were 12-16% higher than calculation. In the fixed-bed experiments, interaction was observed to reduce the yield of tar by 2-69% and promote the yield of char by 13-39% compared with linear calculation. In addition, 2-6 times more heavy components and 61-93% less nitrogen-containing components were formed for tar derived from mixtures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Pharmaceutical identifier confirmation via DART-TOF.

    PubMed

    Easter, Jacob L; Steiner, Robert R

    2014-07-01

    Pharmaceutical analysis comprises a large amount of the casework in forensic controlled substances laboratories. In order to reduce the time of analysis for pharmaceuticals, a Direct Analysis in Real Time ion source coupled with an accurate mass time-of-flight (DART-TOF) mass spectrometer was used to confirm identity. DART-TOF spectral data for pharmaceutical samples were analyzed and evaluated by comparison to standard spectra. Identical mass pharmaceuticals were differentiated using collision induced dissociation fragmentation, present/absent ions, and abundance comparison box plots; principal component analysis (PCA) and linear discriminant analysis (LDA) were used for differentiation of identical mass mixed drug spectra. Mass assignment reproducibility and robustness tests were performed on the DART-TOF spectra. Impacts on the forensic science community include a decrease in analysis time over the traditional gas chromatograph/mass spectrometry (GC/MS) confirmations, better laboratory efficiency, and simpler sample preparation. Using physical identifiers and the DART-TOF to confirm pharmaceutical identity will eliminate the use of GC/MS and effectively reduce analysis time while still complying with accepted analysis protocols. This will prove helpful in laboratories with large backlogs and will simplify the confirmation process. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Prototype of an Interface for Hyphenating Distillation with Gas Chromatography and Mass Spectrometry

    PubMed Central

    Tang, Ya-Ru; Yang, Hui-Hsien; Urban, Pawel L.

    2017-01-01

    Chemical analysis of complex matrices containing hundreds of compounds is challenging. Two-dimensional separation techniques provide an efficient way to reduce the complexity of mixtures analyzed by mass spectrometry (MS). For example, gasoline is a mixture of numerous compounds, which can be fractionated by distillation techniques. However, coupling conventional distillation with other separations as well as MS is not straightforward. We have established an automatic system for online coupling of simple microscale distillation with gas chromatography (GC) and electron ionization MS. The developed system incorporates an interface between the distillation condenser and the injector of a fused silica capillary GC column. Development of this multidimensional separation (distillation-GC-MS) was preceded by a series of preliminary off-line experiments. In the developed technique, the components with different boiling points are fractionated and instantly analyzed by GC-MS. The obtained data sets illustrate the dynamics of the distillation process. An important advantage of the distillation-GC-MS technique is that raw samples can be analyzed directly without removal of the non-volatile matrix residues that could contaminate the GC injection port and the column. Distilling the samples immediately before injection onto the GC column may reduce possible matrix effects, especially in the early phase of separation, when molecules with different volatilities co-migrate. It can also reduce losses of highly volatile components during fraction collection and transfer. The two separation steps are partly orthogonal, which can slightly increase the selectivity of the entire analysis. PMID:28337400

  16. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis.

    PubMed

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-09-29

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation of two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold through gradual deformation of each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration in eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on clinical dataset registration. Experiments show that the proposed method effectively reduces movements while preserving the topology of contrast-enhancing structures and provides improved registration performance.
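
    A minimal sketch of robust PCA by principal component pursuit (a low-rank-plus-sparse decomposition), the kind of step used above to separate motion-related components from contrast-induced intensity changes. This is a generic ADMM-style solver, not the authors' implementation; the matrix shape, parameters and synthetic data are assumptions.

```python
import numpy as np

def robust_pca(M, tol=1e-6, max_iter=500):
    """Decompose M into a low-rank part L and a sparse part S (M ~= L + S)."""
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(M).sum() + 1e-12)
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # singular-value thresholding for the low-rank update
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # soft thresholding for the sparse update
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # stand-in for a DCE-MRI time series: voxels as rows, frames as columns
    low_rank = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 40))
    sparse = (rng.random((500, 40)) < 0.05) * rng.normal(scale=5.0, size=(500, 40))
    L, S = robust_pca(low_rank + sparse)
    print("rank of recovered low-rank part:", np.linalg.matrix_rank(L, tol=1e-3))
```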

  17. A major haemorrhage protocol improves the delivery of blood component therapy and reduces waste in trauma massive transfusion.

    PubMed

    Khan, Sirat; Allard, Shubha; Weaver, Anne; Barber, Colin; Davenport, Ross; Brohi, Karim

    2013-05-01

    Major haemorrhage protocols (MHP) are required as part of damage control resuscitation regimens in modern trauma care. The primary objectives of this study were to ascertain whether an MHP improved blood product administration and reduced waste compared to a traditional massive transfusion protocol (MTP). Datasets on adult trauma admissions 1 year prior to and 1 year post implementation of an MHP at a Level 1 trauma centre were obtained from the trauma registry. Demographic and clinical data were collected prospectively, including mechanism of injury, physiological observations, ICU admission and length of stay. The volume of blood components (packed red blood cells, platelets, cryoprecipitate and fresh frozen plasma) issued, transfused, returned to stock and wasted within the first 24 h was gathered retrospectively. Over the 2-year study period 2986 patient records were available for analysis. Forty patients required a transfusion of 10+ units of packed red blood cells in the MTP group vs. 56 patients post MHP implementation. The administration of blood component therapy improved significantly post MHP implementation: the FFP:PRBC ratio improved from 1:3 to 1:2 (p<0.01) and the CRYO:PRBC ratio improved from 1:10 to 1:7 (p<0.05). We reported a significant reduction in the waste of platelets, from 14% to 2% (p<0.01). Outcomes also improved: median hospital length of stay was reduced from 54 days to 26 days (p<0.05). Implementation of an MHP results in improved delivery of blood components and a reduction in the waste of blood products compared to the older MTP model. In combination with educational programmes, an MHP can significantly improve blood product administration and patient outcomes in trauma haemorrhage. Level III diagnostic test study. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  18. Change Mechanisms of Schema-Centered Group Psychotherapy with Personality Disorder Patients

    PubMed Central

    Tschacher, Wolfgang; Zorn, Peter; Ramseyer, Fabian

    2012-01-01

    Background: This study addressed the temporal properties of personality disorders and their treatment by schema-centered group psychotherapy. It investigated the change mechanisms of psychotherapy using a novel method by which psychotherapy can be modeled explicitly in the temporal domain. Methodology and Findings: 69 patients were assigned to a specific schema-centered behavioral group psychotherapy, 26 to social skills training as a control condition. The largest diagnostic subgroups were narcissistic and borderline personality disorder. Both treatments offered 30 group sessions of 100 min duration each, at a frequency of two sessions per week. Therapy process was described by components resulting from principal component analysis of patients' session-reports that were obtained after each session. These patient-assessed components were Clarification, Bond, Rejection, and Emotional Activation. The statistical approach focused on time-lagged associations of components using time-series panel analysis. This method provided a detailed quantitative representation of therapy process. It was found that Clarification played a core role in schema-centered psychotherapy, reducing rejection and regulating the emotions of patients. This was also a change mechanism linked to therapy outcome. Conclusions/Significance: The introduced process-oriented methodology allowed us to highlight the mechanisms by which psychotherapeutic treatment became effective. Additionally, process models depicted the actual patterns that differentiated specific diagnostic subgroups. Time-series analysis explores Granger causality, a non-experimental approximation of causality based on temporal sequences. This methodology, resting upon naturalistic data, can explicate mechanisms of action in psychotherapy research and illustrate the temporal patterns underlying personality disorders. PMID:22745811

  19. A Genome-Wide RNAi Screen for Modifiers of the Circadian Clock in Human Cells

    PubMed Central

    Zhang, Eric E.; Liu, Andrew C.; Hirota, Tsuyoshi; Miraglia, Loren J.; Welch, Genevieve; Pongsawakul, Pagkapol Y.; Liu, Xianzhong; Atwood, Ann; Huss, Jon W.; Janes, Jeff; Su, Andrew I.; Hogenesch, John B.; Kay, Steve A.

    2009-01-01

    Summary: Two decades of research identified more than a dozen clock genes and defined a biochemical feedback mechanism of circadian oscillator function. To identify additional clock genes and modifiers, we conducted a genome-wide siRNA screen in a human cellular clock model. Knockdown of nearly a thousand genes reduced rhythm amplitude. Potent effects on period length or increased amplitude were less frequent; we found hundreds of these and confirmed them in secondary screens. Characterization of a subset of these genes demonstrated a dosage-dependent effect on oscillator function. Protein interaction network analysis showed that dozens of gene products directly or indirectly associate with known clock components. Pathway analysis revealed that these genes are overrepresented for components of insulin and hedgehog signaling, the cell cycle, and folate metabolism. Coupled with data showing that many of these pathways are clock-regulated, we conclude that the clock is interconnected with many aspects of cellular function. PMID:19765810

  20. Insights into nucleon structure from parton distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melnitchouk, Wally

    2017-05-01

    We review recent progress in understanding the substructure of the nucleon from global QCD analysis of parton distribution functions (PDFs). New high-precision data on W-boson production in p̄p collisions have significantly reduced the uncertainty on the d/u PDF ratio at large values of x, indirectly constraining models of the medium modification of bound nucleons. Drell-Yan data from pp and pd scattering reveal new information on the d̄-ū asymmetry, clarifying the role of chiral symmetry breaking in the nucleon. In the strange sector, a new chiral SU(3) analysis finds a valence-like component of the strange-quark PDF, giving rise to a nontrivial s-s̄ asymmetry at moderate x values. We also review recent analyses of charm in the nucleon, which have found conflicting indications of the size of the nonperturbative charm component.

  1. Aerothermal modeling. Executive summary

    NASA Technical Reports Server (NTRS)

    Kenworthy, M. K.; Correa, S. M.; Burrus, D. L.

    1983-01-01

    One of the significant ways in which the performance level of aircraft turbine engines has been improved is by the use of advanced materials and cooling concepts that allow a significant increase in turbine inlet temperature, with attendant thermodynamic cycle benefits. Further cycle improvements have been achieved with higher-pressure-ratio compressors. The higher turbine inlet temperatures and compressor pressure ratios, with correspondingly higher-temperature cooling air, have created a very hostile environment for the hot section components. To provide the technology needed to reduce hot section maintenance costs, NASA has initiated the Hot Section Technology (HOST) program. One key element of this overall program is the Aerothermal Modeling Program. The overall objective of this program is to evolve and validate improved analysis methods for use in the design of aircraft turbine engine combustors. The use of such combustor analysis capabilities can be expected to provide significant improvement in the life and durability characteristics of both combustor and turbine components.

  2. Hydrodynamic Stability of Multicomponent Droplet Gasification in Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Aharon, I.; Shaw, B. D.

    1995-01-01

    This investigation addresses the problem of hydrodynamic stability of a two-component droplet undergoing spherically-symmetrical gasification. The droplet components are assumed to have characteristic liquid species diffusion times that are large relative to characteristic droplet surface regression times. The problem is formulated as a linear stability analysis, with a goal of predicting when spherically-symmetric droplet gasification can be expected to be hydrodynamically unstable from surface-tension gradients acting along the surface of a droplet which result from perturbations. It is found that for the conditions assumed in this paper (quasisteady gas phase, no initial droplet temperature gradients, diffusion-dominated gasification), surface tension gradients do not play a role in the stability characteristics. In addition, all perturbations are predicted to decay such that droplets were hydrodynamically stable. Conditions are identified, however, that deserve more analysis as they may lead to hydrodynamic instabilities driven by capillary effects.

  3. The Coach-Athlete Relationship Questionnaire (CART-Q): development and initial validation.

    PubMed

    Jowett, Sophia; Ntoumanis, Nikos

    2004-08-01

    The purpose of the present study was to develop and validate a self-report instrument that measures the nature of the coach-athlete relationship. Jowett et al.'s (Jowett & Meek, 2000; Jowett, in press) qualitative case studies and relevant literature were used to generate items for an instrument that measures affective, cognitive, and behavioral aspects of the coach-athlete relationship. Two studies were carried out in an attempt to assess content, predictive, and construct validity, as well as internal consistency, of the Coach-Athlete Relationship Questionnaire (CART-Q), using two independent British samples. Principal component analysis and confirmatory factor analysis were used to reduce the number of items, identify principal components, and confirm the latent structure of the CART-Q. Results supported the multidimensional nature of the coach-athlete relationship. The latent structure of the CART-Q was underlined by the latent variables of coaches' and athletes' Closeness (emotions), Commitment (cognitions), and Complementarity (behaviors).

  4. Method for phosphorothioate antisense DNA sequencing by capillary electrophoresis with UV detection.

    PubMed

    Froim, D; Hopkins, C E; Belenky, A; Cohen, A S

    1997-11-01

    The progress of antisense DNA therapy demands development of reliable and convenient methods for sequencing short single-stranded oligonucleotides. A method of phosphorothioate antisense DNA sequencing analysis using UV detection coupled to capillary electrophoresis (CE) has been developed based on a modified chain termination sequencing method. The proposed method reduces the sequencing cost since it uses affordable CE-UV instrumentation and requires no labeling with minimal sample processing before analysis. Cycle sequencing with ThermoSequenase generates quantities of sequencing products that are readily detectable by UV. Discrimination of undesired components from sequencing products in the reaction mixture, previously accomplished by fluorescent or radioactive labeling, is now achieved by bringing concentrations of undesired components below the UV detection range which yields a 'clean', well defined sequence. UV detection coupled with CE offers additional conveniences for sequencing since it can be accomplished with commercially available CE-UV equipment and is readily amenable to automation.

  5. Method for phosphorothioate antisense DNA sequencing by capillary electrophoresis with UV detection.

    PubMed Central

    Froim, D; Hopkins, C E; Belenky, A; Cohen, A S

    1997-01-01

    The progress of antisense DNA therapy demands development of reliable and convenient methods for sequencing short single-stranded oligonucleotides. A method of phosphorothioate antisense DNA sequencing analysis using UV detection coupled to capillary electrophoresis (CE) has been developed based on a modified chain termination sequencing method. The proposed method reduces the sequencing cost since it uses affordable CE-UV instrumentation and requires no labeling with minimal sample processing before analysis. Cycle sequencing with ThermoSequenase generates quantities of sequencing products that are readily detectable by UV. Discrimination of undesired components from sequencing products in the reaction mixture, previously accomplished by fluorescent or radioactive labeling, is now achieved by bringing concentrations of undesired components below the UV detection range which yields a 'clean', well defined sequence. UV detection coupled with CE offers additional conveniences for sequencing since it can be accomplished with commercially available CE-UV equipment and is readily amenable to automation. PMID:9336449

  6. Diffusion Modelling Reveals the Decision Making Processes Underlying Negative Judgement Bias in Rats

    PubMed Central

    Hales, Claire A.; Robinson, Emma S. J.; Houghton, Conor J.

    2016-01-01

    Human decision making is modified by emotional state. Rodents exhibit similar biases during interpretation of ambiguous cues that can be altered by affective state manipulations. In this study, the impact of negative affective state on judgement bias in rats was measured using an ambiguous-cue interpretation task. Acute treatment with an anxiogenic drug (FG7142), and chronic restraint stress and social isolation both induced a bias towards more negative interpretation of the ambiguous cue. The diffusion model was fit to behavioural data to allow further analysis of the underlying decision making processes. To uncover the way in which parameters vary together in relation to affective state manipulations, independent component analysis was conducted on rate of information accumulation and distances to decision threshold parameters for control data. Results from this analysis were applied to parameters from negative affective state manipulations. These projected components were compared to control components to reveal the changes in decision making processes that are due to affective state manipulations. Negative affective bias in rodents induced by either FG7142 or chronic stress is due to a combination of more negative interpretation of the ambiguous cue, reduced anticipation of the high reward and increased anticipation of the low reward. PMID:27023442
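
    The analysis pattern described above (fit ICA on control-condition diffusion-model parameters, then project the manipulation-condition parameters onto the same components) can be sketched as follows. This is a minimal illustration using scikit-learn's FastICA with synthetic parameter values, not the authors' code or data.

```python
# Minimal sketch: ICA over drift-diffusion parameters (drift rate and
# decision-threshold distances). All numbers are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Rows = control-condition fits, columns = [drift_rate, dist_to_upper, dist_to_lower]
control_params = rng.normal(loc=[0.8, 1.2, 1.2], scale=0.2, size=(40, 3))

ica = FastICA(n_components=2, random_state=0)
ica.fit(control_params)                      # learn components on control data only

# Project parameters from an affective-state manipulation onto the same components
manipulation_params = rng.normal(loc=[0.5, 1.0, 1.4], scale=0.2, size=(40, 3))
projected = ica.transform(manipulation_params)

# Compare component scores between conditions
print(projected.mean(axis=0) - ica.transform(control_params).mean(axis=0))
```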

  7. Classification of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulphides by principal component analysis and artificial neural networks.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2013-01-08

    Artificial neural network (ANN) and hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for the classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh-modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was used. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh-modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
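
    A minimal sketch of the hybrid PCA-ANN idea: keep the principal components that explain 95% of the spectral variance and feed the scores to a small neural network. Scikit-learn is assumed here, and the spectra, labels and layer sizes are placeholders rather than the authors' setup.

```python
# Hedged sketch of a PCA -> neural-network classification chain.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((120, 500))          # 120 spectra x 500 mass fragments (placeholder)
y = rng.integers(0, 4, size=120)    # 4 mineral classes (placeholder)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),         # keep PCs explaining 95% of the variance
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```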

  8. Supercritical Fluid Chromatography of Drugs: Parallel Factor Analysis for Column Testing in a Wide Range of Operational Conditions

    PubMed Central

    Al-Degs, Yahya; Andri, Bertyl; Thiébaut, Didier; Vial, Jérôme

    2017-01-01

    Retention mechanisms involved in supercritical fluid chromatography (SFC) are influenced by interdependent parameters (temperature, pressure, chemistry of the mobile phase, and nature of the stationary phase), a complexity which makes the selection of a proper stationary phase for a given separation a challenging step. For the first time in SFC studies, Parallel Factor Analysis (PARAFAC) was employed to evaluate the chromatographic behavior of eight different stationary phases over a wide range of chromatographic conditions (temperature, pressure, and gradient elution composition). Design of Experiments was used to optimize experiments involving 14 pharmaceutical compounds present in biological and/or environmental samples and with dissimilar physicochemical properties. The results showed the superiority of PARAFAC for the analysis of the three-way (column × drug × condition) data array over unfolding the multiway array into matrices and performing several classical principal component analyses. Thanks to the PARAFAC components, similarity in column function, the chromatographic trends of the drugs, and correlations between separation conditions could be simply depicted: columns were grouped according to their H-bonding forces, while gradient composition was the dominant factor for condition classification. Also, the number of drugs could be efficiently reduced for column classification, as some of them exhibited similar behavior, as shown by hierarchical clustering based on the PARAFAC components. PMID:28695040
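
    A hedged sketch of a three-way PARAFAC decomposition of a (column × drug × condition) array, using the TensorLy library as an assumed tool (the paper does not name its software); the retention data and the chosen rank are placeholders.

```python
# Illustrative PARAFAC decomposition of a small three-way array.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(2)
retention = tl.tensor(rng.random((8, 14, 6)))   # 8 columns, 14 drugs, 6 conditions (placeholder)

weights, factors = parafac(retention, rank=3, n_iter_max=500, tol=1e-8)
col_scores, drug_scores, cond_scores = factors

# Columns with similar loadings on the PARAFAC components behave similarly
print("column loadings:\n", col_scores)
```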

  9. Spectral BRDF-based determination of proper measurement geometries to characterize color shift of special effect coatings.

    PubMed

    Ferrero, Alejandro; Rabal, Ana; Campos, Joaquín; Martínez-Verdú, Francisco; Chorro, Elísabet; Perales, Esther; Pons, Alicia; Hernanz, María Luisa

    2013-02-01

    A reduced set of measurement geometries allows the spectral reflectance of special effect coatings to be predicted for any other geometry. A physical model based on flake-related parameters has been used to determine nonredundant measurement geometries for the complete description of the spectral bidirectional reflectance distribution function (BRDF). The analysis of experimental spectral BRDF was carried out by means of principal component analysis. From this analysis, a set of nine measurement geometries was proposed to characterize special effect coatings. It was shown that, for two different special effect coatings, these geometries provide a good prediction of their complete color shift.

  10. Chemistry and in vitro antioxidant activity of volatile oil and oleoresins of black pepper (Piper nigrum).

    PubMed

    Kapoor, I P S; Singh, Bandana; Singh, Gurdip; De Heluani, Carola S; De Lampasona, M P; Catalan, Cesar A N

    2009-06-24

    Essential oil and oleoresins (ethanol and ethyl acetate) of Piper nigrum were extracted using Clevenger and Soxhlet apparatus, respectively. GC-MS analysis of the pepper essential oil showed the presence of 54 components representing about 96.6% of the total weight. beta-Caryophyllene (29.9%) was found to be the major component, along with limonene (13.2%), beta-pinene (7.9%), sabinene (5.9%), and several other minor components. The major component of both the ethanol and ethyl acetate oleoresins was found to be piperine (63.9 and 39.0%, respectively), with many other components present in lesser amounts. The antioxidant activities of the essential oil and oleoresins were evaluated against mustard oil using peroxide, p-anisidine, and thiobarbituric acid values. Both the oil and the oleoresins showed strong antioxidant activity in comparison with butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT), but lower than that of propyl gallate (PG). In addition, their inhibitory action by the FTC method, scavenging capacity against DPPH (2,2'-diphenyl-1-picrylhydrazyl radical), and reducing power were also determined, confirming the strong antioxidant capacity of both the essential oil and the oleoresins of pepper.

  11. Two-component end mills with multilayer composite nano-structured coatings as a viable alternative to monolithic carbide end mills

    NASA Astrophysics Data System (ADS)

    Vereschaka, Alexey; Mokritskii, Boris; Mokritskaya, Elena; Sharipov, Oleg; Oganyan, Maksim

    2018-03-01

    The paper deals with the challenges of the application of two-component end mills, which represent a combination of a carbide cutting part and a shank made of cheaper structural material. The calculations of strains and deformations of composite mills were carried out in comparison with solid carbide mills, with the use of the finite element method. The study also involved the comparative analysis of accuracy parameters of machining with monolithic mills and two-component mills with various shank materials. As a result of the conducted cutting tests in milling aluminum alloy with monolithic and two-component end mills with specially developed multilayer composite nano-structured coatings, it has been found that the use of such coatings can reduce strains and, correspondingly, deformations, which can improve the accuracy of machining. Thus, the application of two-component end mills with multilayer composite nano-structured coatings can provide a reduction in the cost of machining while maintaining or even improving the tool life and machining accuracy parameters.

  12. A New First Break Picking for Three-Component VSP Data Using Gesture Sensor and Polarization Analysis

    PubMed Central

    Li, Huailiang; Tuo, Xianguo; Shen, Tong; Wang, Ruili; Courtois, Jérémie; Yan, Minhao

    2017-01-01

    A new first break picking method for three-component (3C) vertical seismic profiling (VSP) data is proposed to improve the estimation accuracy of first arrivals; it adopts gesture detection calibration and polarization analysis based on the eigenvalues of the covariance matrix. This study addresses the problem that VSP data must be calibrated using the azimuth and dip angle of the geophones, because the orientation of geophones deployed in a borehole is random, which can make the first break picking unreliable. First, a gesture-measuring module is integrated into the seismometer to rapidly obtain high-precision gesture data (including azimuth and dip angle information). By re-rotating and re-projecting with these gesture data, the seismic dataset of each component is calibrated to the direction consistent with the vibrator shot orientation. Calibrating each component waveform to the same virtual reference component improves the reliability of the original data, and the corresponding first break is adjusted accordingly. After the 3C data calibration, an automatic first break picking algorithm based on the autoregressive Akaike information criterion (AR-AIC) is adopted to evaluate the first break. Furthermore, to enhance the accuracy of the first break picking, the polarization attributes of the 3C VSP recordings, computed from the maximum eigenvalue of the covariance matrix, are used to constrain the scanning segment of the AR-AIC picker. The contrast between pre-calibration and post-calibration results on field data shows that calibration further improves the quality of the 3C VSP waveform, which is favorable for subsequent picking. Compared to the short-term average to long-term average (STA/LTA) and AR-AIC algorithms, the proposed method, combined with polarization analysis, can significantly reduce the picking error. Applications to actual field experiments have also confirmed that the proposed method may be more suitable for the first break picking of 3C VSP. A test using synthesized 3C seismic data with low SNR indicates that the first break is picked with an error between 0.75 ms and 1.5 ms. Accordingly, the proposed method can reduce the picking error for 3C VSP data. PMID:28925981
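
    The covariance-eigenvalue step of the polarization analysis can be illustrated with a short numpy sketch; the three-component window is synthetic, and the rectilinearity measure shown is one common choice rather than necessarily the authors' exact formulation.

```python
# Minimal sketch: eigen-decompose the covariance matrix of a 3C window to
# estimate rectilinearity, which can be used to constrain a picker's scan window.
import numpy as np

rng = np.random.default_rng(3)
window = rng.normal(size=(3, 256))                  # 3 components (Z, X, Y) x samples (synthetic)

cov = np.cov(window)                                # 3x3 covariance matrix
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # lambda1 >= lambda2 >= lambda3

# A common rectilinearity measure: close to 1 for strongly polarized P arrivals
rectilinearity = 1.0 - (eigvals[1] + eigvals[2]) / (2.0 * eigvals[0])
print("rectilinearity:", rectilinearity)
```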

  13. 10Be in late deglacial climate simulated by ECHAM5-HAM - Part 2: Isolating the solar signal from 10Be deposition

    NASA Astrophysics Data System (ADS)

    Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.

    2013-10-01

    This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally, and at two specific locations, the GRIP site at Summit, Central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30 yr time slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 BP and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather- or climate-driven noise in the 10Be deposition flux during different stages of climate. The production signal varies on lower frequencies, dominated by the 11 yr solar cycle within the 30 yr time scale of these experiments. The climatic noise is of higher frequencies. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. Then, we apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, which is an effective method for adaptively decomposing a time series into different frequency components. The low frequency components and the long term trend represent production and have reduced noise compared to the entire frequency spectrum of the deposition. The high frequency components represent climate-driven noise related to the seasonal cycle of e.g. precipitation and are closely connected to the high frequencies of precipitation. These results firstly show that the 10Be atmospheric production signal is preserved in the deposition flux to the surface even during climates very different from today's, both in global data and at two specific locations. Secondly, noise can be effectively reduced from 10Be deposition data by simply applying the EOF analysis in the case of a reasonably large number of available data sets, or by decomposing the individual data sets to filter out high-frequency fluctuations.
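
    The EOF step amounts to a principal component analysis of the space-time anomaly field, which can be computed via an SVD; the sketch below uses a random placeholder deposition field, not the ECHAM5-HAM output.

```python
# Sketch of an EOF analysis of a (time x grid-cell) deposition field via SVD.
import numpy as np

rng = np.random.default_rng(4)
dep = rng.random((30, 2048))                  # 30 years x 2048 grid cells (placeholder)

anom = dep - dep.mean(axis=0)                 # remove the time mean at each cell
u, s, vt = np.linalg.svd(anom, full_matrices=False)

explained = s**2 / np.sum(s**2)
pc1_timeseries = u[:, 0] * s[0]               # temporal signal of the leading EOF
eof1_pattern = vt[0]                          # its spatial pattern
print("variance explained by EOF1:", explained[0])
```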

  14. Effects of drying processes on starch-related physicochemical properties, bioactive components and antioxidant properties of yam flours.

    PubMed

    Chen, Xuetao; Li, Xia; Mao, Xinhui; Huang, Hanhan; Wang, Tingting; Qu, Zhuo; Miao, Jing; Gao, Wenyuan

    2017-06-01

    The effects of five different drying processes, air drying (AD), sulphur fumigation drying (SFD), hot air drying (HAD), freeze drying (FD) and microwave drying (MWD), on the starch-related properties and antioxidant activity of yams were studied. From the results of scanning electron microscopy (SEM), polarized optical microscopy (POM), X-ray diffraction (XRD), and Fourier transform infrared (FT-IR) spectroscopy, the MWD sample was found to contain gelatinized starch granules. The FD yam had more slowly digestible starch (SDS) and resistant starch (RS) compared with yams processed by the other modern drying methods. The bioactive components and the reducing power of the dried yams were lower than those of fresh yam. When the five dried samples were compared by principal component analysis, the HAD and SFD samples had the highest comprehensive principal component values. Based on our results, HAD would be a better method for yam drying than the more traditional SFD. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Non-rigid image registration using a statistical spline deformation model.

    PubMed

    Loeckx, Dirk; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul

    2003-07-01

    We propose a statistical spline deformation model (SSDM) as a method to solve non-rigid image registration. Within this model, the deformation is expressed using a statistically trained B-spline deformation mesh. The model is trained by principal component analysis of a training set. This approach makes it possible to reduce the number of degrees of freedom needed for non-rigid registration by retaining only the most significant modes of variation observed in the training set. User-defined transformation components, such as affine modes, are merged with the principal components into a unified framework. Optimization proceeds along the transformation components rather than along the individual spline coefficients. The concept of SSDMs is applied to the temporal registration of thorax CR-images using pattern intensity as the registration measure. Our results show that, using 30 training pairs, a reduction of 33% is possible in the number of degrees of freedom without deterioration of the result. The same accuracy as without SSDMs is still achieved after a reduction of up to 66% of the degrees of freedom.
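
    The statistical training step can be sketched as a PCA over vectors of B-spline deformation coefficients, after which a deformation is parameterised by a short vector of mode weights; the grid size and training data below are placeholders, not the thorax CR setup.

```python
# Hedged sketch: PCA over training deformation coefficients, keeping dominant modes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# 30 training registrations, each a flattened grid of B-spline coefficients (placeholder)
train_coeffs = rng.normal(size=(30, 3 * 16 * 16))

pca = PCA(n_components=0.95)                 # keep modes explaining 95% of the variance
pca.fit(train_coeffs)
print("retained modes:", pca.n_components_, "of", train_coeffs.shape[1], "DOFs")

# A new deformation is parameterised by a short vector of mode weights
weights = np.zeros(pca.n_components_)
deformation = pca.mean_ + weights @ pca.components_
```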

  16. Gas Sensitivity and Sensing Mechanism Studies on Au-Doped TiO2 Nanotube Arrays for Detecting SF6 Decomposed Components

    PubMed Central

    Zhang, Xiaoxing; Yu, Lei; Tie, Jing; Dong, Xingchen

    2014-01-01

    The analysis of SF6 decomposition gases is an efficient diagnostic approach for detecting partial discharge in gas-insulated switchgear (GIS) and thereby assessing the operating state of power equipment. This paper applied an Au-doped TiO2 nanotube array sensor (Au-TiO2 NTAs) to detect SF6 decomposed components. The electrochemical constant potential method was adopted for the Au-TiO2 NTAs fabrication, and a series of experiments was conducted on the characteristic SF6 decomposed gases for a thorough investigation of the sensing performance. The sensing characteristic curves of intrinsic and Au-doped TiO2 NTAs were compared to study the mechanism of the gas sensing response. The results indicated that the doped Au could change the gas sensing selectivity of the TiO2 nanotube arrays towards SF6 decomposed components, as well as reduce the working temperature of the TiO2 NTAs. PMID:25330053

  17. Analysis of the impacts of well yield and groundwater depth on irrigated agriculture

    NASA Astrophysics Data System (ADS)

    Foster, T.; Brozović, N.; Butler, A. P.

    2015-04-01

    Previous research has found that irrigation water demand is relatively insensitive to water price, suggesting that increased pumping costs due to declining groundwater levels will have limited effects on agricultural water management practices. However, non-linear changes in well yields as aquifer saturated thickness is reduced may have large impacts on irrigated production that are currently neglected in projections of the long-term sustainability of groundwater-fed irrigation. We conduct empirical analysis of observation data and numerical simulations for case studies in Nebraska, USA, to compare the impacts of changes in well yield and groundwater depth on agricultural production. Our findings suggest that declining well pumping capacities reduce irrigated production areas and profits significantly, whereas increased pumping costs reduce profits but have minimal impacts on the intensity of groundwater-fed irrigation. We suggest, therefore, that management of the dynamic relationship between well yield and saturated thickness should be a core component of policies designed to enhance long-term food security and support adaptation to climate change.

  18. Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.

    PubMed

    Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark

    2015-09-01

    "Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. ADHD children showed greater variability, driven by elevations in exponential (tau), but not normal (sigma) components of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different than controls, reducing lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance. © 2012 SAGE Publications.

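    An ex-Gaussian decomposition of response times into normal (mu, sigma) and exponential (tau) components, as used in the study above, can be sketched with scipy's exponnorm distribution, which is parameterised by K = tau/sigma, loc = mu and scale = sigma; the response-time sample below is synthetic.

```python
# Hedged sketch of an ex-Gaussian fit to response times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
mu_true, sigma_true, tau_true = 0.45, 0.05, 0.15
rts = rng.normal(mu_true, sigma_true, 500) + rng.exponential(tau_true, 500)  # synthetic RTs (s)

K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale       # recover the ex-Gaussian parameters
print(f"mu={mu:.3f}  sigma={sigma:.3f}  tau={tau:.3f}")
```
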
  19. Imaging and Analysis of Void-defects in Solder Joints Formed in Reduced Gravity using High-Resolution Computed Tomography

    NASA Technical Reports Server (NTRS)

    Easton, John W.; Struk, Peter M.; Rotella, Anthony

    2008-01-01

    As a part of efforts to develop an electronics repair capability for long duration space missions, techniques and materials for soldering components on a circuit board in reduced gravity must be developed. This paper presents results from testing solder joint formation in low gravity on a NASA Reduced Gravity Research Aircraft. The results presented include joints formed using eutectic tin-lead solder and one of the following fluxes: (1) a no-clean flux core, (2) a rosin flux core, and (3) a solid solder wire with external liquid no-clean flux. The solder joints are analyzed with a computed tomography (CT) technique which images the interior of the entire solder joint. This replaced an earlier technique that required the solder joint to be destructively ground down to reveal a single plane, which was subsequently analyzed. The CT analysis technique is described and results are presented; implications for future testing as well as for the overall electronics repair effort are discussed.

  20. Atmospheric concentrations, sources and gas-particle partitioning of PAHs in Beijing after the 29th Olympic Games.

    PubMed

    Ma, Wan-Li; Sun, De-Zhi; Shen, Wei-Guo; Yang, Meng; Qi, Hong; Liu, Li-Yan; Shen, Ji-Min; Li, Yi-Fan

    2011-07-01

    A comprehensive sampling campaign was carried out to study the atmospheric concentration of polycyclic aromatic hydrocarbons (PAHs) in Beijing and to evaluate the effectiveness of source control strategies in reducing PAH pollution after the 29th Olympic Games. The sub-cooled liquid vapor pressure (log P(L)(o))-based model and the octanol-air partition coefficient (K(oa))-based model were applied to each seasonal dataset. Regression analysis among log K(P), log P(L)(o) and log K(oa) exhibited highly significant correlations for all four seasons. Source factors were identified by principal component analysis and their contributions were further estimated by multiple linear regression. Pyrogenic sources and coke oven emission were identified as the major sources for both the non-heating and heating seasons. Compared with the literature, the mean PAH concentrations before and after the 29th Olympic Games were reduced by more than 60%, indicating that the source control measures were effective in reducing PAH pollution in Beijing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. The wave-based substructuring approach for the efficient description of interface dynamics in substructuring

    NASA Astrophysics Data System (ADS)

    Donders, S.; Pluymers, B.; Ragnarsson, P.; Hadjit, R.; Desmet, W.

    2010-04-01

    In the vehicle design process, design decisions are more and more based on virtual prototypes. Due to competitive and regulatory pressure, vehicle manufacturers are forced to improve product quality, to reduce time-to-market and to launch an increasing number of design variants on the global market. To speed up the design iteration process, substructuring and component mode synthesis (CMS) methods are commonly used, involving the analysis of substructure models and the synthesis of the substructure analysis results. Substructuring and CMS enable efficient decentralized collaboration across departments and make it possible to benefit from parallel computing environments. However, traditional CMS methods become prohibitively inefficient when substructures are coupled along large interfaces, i.e. with a large number of degrees of freedom (DOFs) at the interface between substructures. The reason is that the analysis of substructures involves the calculation of a number of enrichment vectors, one for each interface degree of freedom (DOF). Since large interfaces are common in vehicles (e.g. the continuous line connections that join the body to the windshield, roof or floor), this interface bottleneck poses a clear limitation in the vehicle noise, vibration and harshness (NVH) design process. There is therefore a need to describe the interface dynamics more efficiently. This paper presents a wave-based substructuring (WBS) approach, which reduces the interface representation between substructures in an assembly by expressing the interface DOFs in terms of a limited set of basis functions ("waves"). As the number of basis functions can be much lower than the number of interface DOFs, this greatly facilitates the substructure analysis procedure and results in faster design predictions. The waves are calculated once from a full nominal assembly analysis, but these nominal waves can be re-used for the assembly of modified components. The WBS approach thus enables efficient structural modification predictions of the global modes, so that efficient vibro-acoustic design modification, optimization and robust design become possible. The results show that wave-based substructuring offers a clear benefit for vehicle design modifications, by improving both the speed of component reduction processes and the efficiency and accuracy of design iteration predictions, as compared to conventional substructuring approaches.
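
    The core reduction idea, replacing many interface DOFs by a few basis vectors ("waves") and projecting the interface operator onto that basis, can be illustrated with a small numpy sketch; the matrices are random stand-ins for assembled finite element operators, not a vehicle model, and the wave basis here is simply an orthonormalised random set rather than one extracted from a nominal assembly analysis.

```python
# Conceptual sketch of projecting an interface problem onto a reduced "wave" basis.
import numpy as np

rng = np.random.default_rng(12)
n_interface, n_waves = 300, 12
A = rng.normal(size=(n_interface, n_interface))
K = A @ A.T + n_interface * np.eye(n_interface)    # symmetric positive definite stand-in
f = rng.normal(size=n_interface)

# Basis of interface "waves" (placeholder; in WBS this comes from a nominal assembly run)
W, _ = np.linalg.qr(rng.normal(size=(n_interface, n_waves)))

K_red = W.T @ K @ W                                # reduced (12x12) interface operator
u_red = np.linalg.solve(K_red, W.T @ f)
u_approx = W @ u_red                               # back-projected interface response
print("reduced DOFs:", n_waves, "of", n_interface)
```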

  2. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become increasingly popular in computer vision because of its importance in most medical analysis tasks. This paper proposes a comparison between the different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to enhance the visual appearance of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in each leukemia image. The different colour components of the RGB and HSI colour models have been analyzed to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images, compared with the other colour components of the RGB and HSI colour models.
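
    A minimal sketch of clustering the saturation component into three groups (nucleus, other cells, background); ordinary k-means from scikit-learn stands in for the moving k-means variant used in the paper, and the image is a random placeholder.

```python
# Sketch: compute HSI saturation from an RGB image and cluster it into 3 groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
rgb = rng.random((64, 64, 3))                          # placeholder RGB image in [0, 1]

# HSI saturation: S = 1 - 3*min(R,G,B) / (R+G+B)
eps = 1e-8
saturation = 1.0 - 3.0 * rgb.min(axis=2) / (rgb.sum(axis=2) + eps)

labels = KMeans(n_clusters=3, n_init=10, random_state=7).fit_predict(
    saturation.reshape(-1, 1)
)
segmented = labels.reshape(saturation.shape)
print("pixels per cluster:", np.bincount(labels))
```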

  3. Determination of effective loss factors in reduced SEA models

    NASA Astrophysics Data System (ADS)

    Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.

    2017-01-01

    The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structure elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. An accurate definition of the complete system can lead to excessively large models as the size and complexity increase. This fact can also raise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. The reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). For structures of high complexity, the accessibility of components can be restricted, for instance internal equipment or panels. In these cases the use of PIM to carry out an experimental SEA analysis is not possible. New methods are presented for this case in combination with the reduced SEA models. These methods allow some of the model loss factors that could not be obtained through PIM to be defined. The methods are validated with a numerical analysis case and are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.

  4. Efficient principal component analysis for multivariate 3D voxel-based mapping of brain functional imaging data sets as applied to FDG-PET and normal aging.

    PubMed

    Zuendorf, Gerhard; Kerrouche, Nacer; Herholz, Karl; Baron, Jean-Claude

    2003-01-01

    Principal component analysis (PCA) is a well-known technique for reduction of dimensionality of functional imaging data. PCA can be looked at as the projection of the original images onto a new orthogonal coordinate system with lower dimensions. The new axes explain the variance in the images in decreasing order of importance, showing correlations between brain regions. We used an efficient, stable and analytical method to work out the PCA of Positron Emission Tomography (PET) images of 74 normal subjects using [(18)F]fluoro-2-deoxy-D-glucose (FDG) as a tracer. Principal components (PCs) and their relation to age effects were investigated. Correlations between the projections of the images on the new axes and the age of the subjects were carried out. The first two PCs could be identified as being the only PCs significantly correlated to age. The first principal component, which explained 10% of the data set variance, was reduced only in subjects of age 55 or older and was related to loss of signal in and adjacent to ventricles and basal cisterns, reflecting expected age-related brain atrophy with enlarging CSF spaces. The second principal component, which accounted for 8% of the total variance, had high loadings from prefrontal, posterior parietal and posterior cingulate cortices and showed the strongest correlation with age (r = -0.56), entirely consistent with previously documented age-related declines in brain glucose utilization. Thus, our method showed that the effect of aging on brain metabolism has at least two independent dimensions. This method should have widespread applications in multivariate analysis of brain functional images. Copyright 2002 Wiley-Liss, Inc.
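
    The analysis pattern, PCA on flattened voxel data followed by correlating each subject's component scores with age, can be sketched as follows; the voxel data and ages are synthetic, and scikit-learn's PCA stands in for the efficient analytical implementation described in the paper.

```python
# Hedged sketch: PCA of subject x voxel data, then correlate PC scores with age.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
n_subjects, n_voxels = 74, 5000
images = rng.normal(size=(n_subjects, n_voxels))       # placeholder FDG-PET voxel values
ages = rng.uniform(20, 80, size=n_subjects)            # placeholder ages

pca = PCA(n_components=10)
scores = pca.fit_transform(images)                     # subject projections on the PCs

for k in range(2):
    r, p = pearsonr(scores[:, k], ages)
    print(f"PC{k + 1}: variance={pca.explained_variance_ratio_[k]:.2%}, r={r:.2f}, p={p:.3f}")
```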

  5. Reducing Artifacts in TMS-Evoked EEG

    NASA Astrophysics Data System (ADS)

    Fuertes, Juan José; Travieso, Carlos M.; Álvarez, A.; Ferrer, M. A.; Alonso, J. B.

    Transcranial magnetic stimulation induces weak currents within the cranium to activate neuronal firing, and the response is recorded using electroencephalography in order to study the brain directly. However, different artifacts contaminate the results. The goal of this study is to process these artifacts and reduce them digitally. Electromagnetic, blink and auditory artifacts are considered, and Signal-Space Projection, Independent Component Analysis and Wiener filtering methods are used to reduce them. The latter two provide a successful solution for electromagnetic artifacts. The other artifacts, processed with Signal-Space Projection, are reduced, but the method modifies the signal as well. Nonetheless, the signals are modified in an exactly known way, and the vector used for the projection is retained so that it can be taken into account when analyzing the resulting signals. A system combining the proposed methods would improve the quality of the information presented to physicians.
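
    Signal-Space Projection itself reduces to simple linear algebra: estimate an artifact topography v and apply the projector P = I - v v^T to the multichannel data, keeping v so that the known signal modification can be accounted for later. The numpy sketch below uses synthetic data and an illustrative artifact estimate, not the study's recordings.

```python
# Minimal numpy sketch of Signal-Space Projection.
import numpy as np

rng = np.random.default_rng(9)
n_channels, n_samples = 32, 1000
eeg = rng.normal(size=(n_channels, n_samples))                 # synthetic multichannel EEG

# Artifact subspace estimated, e.g., from epochs time-locked to the artifact
artifact_epochs = rng.normal(size=(n_channels, 200)) + np.linspace(0, 5, n_channels)[:, None]
u, _, _ = np.linalg.svd(artifact_epochs, full_matrices=False)
v = u[:, :1]                                   # dominant artifact topography (retained)

projector = np.eye(n_channels) - v @ v.T       # removes the artifact direction
cleaned = projector @ eeg                      # note: signal components along v are lost too
print("rank reduction:", n_channels - np.linalg.matrix_rank(projector))
```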

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forno, Massimo Dal (Department of Engineering and Architecture, University of Trieste, Trieste); Craievich, Paolo

    The front-end injection systems of the FERMI@Elettra linac produce high brightness electron beams that define the performance of the Free Electron Laser. The photoinjector mainly consists of the radiofrequency (rf) gun and of two S-band rf structures which accelerate the beam. Accelerating structures endowed with a single feed coupler cause deflection and degradation of the electron beam properties, due to the asymmetry of the electromagnetic field. In this paper, a new type of single feed structure with a movable short-circuit is proposed. It has the advantage of having only one waveguide input, and we propose a novel design in which the dipolar component is reduced. Moreover, the racetrack geometry allows the quadrupolar component to be reduced. This paper presents the microwave design and the analysis of the particle motion inside the linac. A prototype has been machined at the Elettra facility to verify the new coupler design, and the rf field has been measured using the bead-pull method. The results are presented here and show good agreement with expectations.

  7. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, colour features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components, in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
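
    The KPCA reduction step that precedes the ESN can be sketched with scikit-learn's KernelPCA; the feature matrix, kernel and component count below are placeholders rather than the flotation data or the authors' settings.

```python
# Hedged sketch of the KPCA dimensionality-reduction step.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
X = rng.random((200, 24))                     # 200 samples x 24 colour/texture/process features (placeholder)

X_std = StandardScaler().fit_transform(X)
kpca = KernelPCA(n_components=6, kernel="rbf", gamma=0.1)
X_reduced = kpca.fit_transform(X_std)         # nonlinear principal components fed to the ESN
print("reduced shape:", X_reduced.shape)
```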

  8. CHANGES SDSS: the development of a Spatial Decision Support System for analysing changing hydro-meteorological risk

    NASA Astrophysics Data System (ADS)

    van Westen, Cees; Bakker, Wim; Zhang, Kaixi; Jäger, Stefan; Assmann, Andre; Kass, Steve; Andrejchenko, Vera; Olyazadeh, Roya; Berlin, Julian; Cristal, Irina

    2014-05-01

    Within the framework of the EU FP7 Marie Curie Project CHANGES (www.changes-itn.eu) and the EU FP7 Copernicus project INCREO (http://www.increo-fp7.eu), a spatial decision support system is under development with the aim of analysing the effect of risk reduction planning alternatives on reducing risk now and in the future, and of supporting decision makers in selecting the best alternatives. The Spatial Decision Support System will be composed of a number of integrated components. The risk assessment component allows spatial risk analysis to be carried out with different degrees of complexity, ranging from simple exposure (overlay of hazard and assets maps) to quantitative analysis (using different hazard types, temporal scenarios and vulnerability curves) resulting in risk curves. The platform does not include a component to calculate hazard maps, and existing hazard maps are used as input data for the risk component. The second component of the SDSS is a risk reduction planning component, which forms the core of the platform. This component includes the definition of risk reduction alternatives (related to disaster response planning, risk reduction measures and spatial planning), links back to the risk assessment module to calculate the new level of risk if the measure is implemented, and provides a cost-benefit (or cost-effectiveness / spatial multi-criteria evaluation) component to compare the alternatives and decide on the optimal one. The third component of the SDSS is a temporal scenario component, which allows future scenarios to be defined in terms of climate change, land use change and population change, together with the time periods for which these scenarios will be made. The component does not generate these scenarios but uses input maps for the effect of the scenarios on the hazard and assets maps. The last component is a communication and visualization component, which can compare scenarios and alternatives, not only in the form of maps, but also in other forms (risk curves, tables, graphs). The envisaged users of the platform are organizations involved in planning risk reduction measures that have staff capable of visualizing and analysing spatial data at a municipal scale.

  9. Reliability and Creep/Fatigue Analysis of a CMC Component

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Gyekenyesi, John Z.; Gyekenyesi, John P.

    2007-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling; thus leading to increased engine efficiencies. There is a need for convenient design tools that can accommodate various loading conditions and material data with their associated uncertainties to estimate the minimum predicted life as well as the failure probabilities of a structural component. This paper presents a review of the life prediction and probabilistic analyses performed for a CMC turbine stator vane. A computer code, NASALife, is used to predict the life of a 2-D woven silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) turbine stator vane due to a mission cycle which induces low cycle fatigue and creep. The output from this program includes damage from creep loading, damage due to cyclic loading and the combined damage due to the given loading cycle. Results indicate that the trends predicted by NASALife are as expected for the loading conditions used for this study. In addition, a combination of woven composite micromechanics, finite element structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Results indicate that reducing the scatter in proportional limit strength of the vane material has the greatest effect in improving the overall reliability of the CMC vane.

  10. Effect of Raw Crushed Garlic (Allium sativum L.) on Components of Metabolic Syndrome.

    PubMed

    Choudhary, Prema Ram; Jani, Rameshchandra D; Sharma, Megh Shyam

    2017-09-28

    Metabolic syndrome consists of a group of risk factors characterized by abdominal obesity, hypertension, atherogenic dyslipidemia, hyperglycemia, and prothrombotic and proinflammatory conditions. Raw garlic homogenate has been reported to reduce serum lipid levels in animal models; however, no precise studies have been performed to evaluate the effect of raw crushed garlic (Allium sativum L.) on components of metabolic syndrome. Therefore, the present study was designed to investigate the effect of raw crushed garlic on components of metabolic syndrome. A total of 40 metabolic syndrome patients were randomly selected from the diabetic center of SP Medical College, Bikaner, Rajasthan, India. They underwent treatment with 100 mg/kg body weight raw crushed garlic 2 times a day with a standard diet for 4 weeks; their anthropometric and serum biochemical variables were measured at both the beginning and the end of the study. Statistical analysis was performed using IBM SPSS version 20, and Student's paired "t" test was used to compare variables before and after treatment with the garlic preparation. Raw crushed garlic significantly reduced components of metabolic syndrome including waist circumference (p < .05), systolic and diastolic blood pressure (p < .001), triglycerides (p < .01), and fasting blood glucose (p < .0001), and significantly increased serum high-density lipoprotein cholesterol (p < .0001). There was no significant difference found in the body mass index (p > .05) of patients with metabolic syndrome after consumption of raw crushed garlic for 4 weeks. Raw crushed garlic has beneficial effects on components of metabolic syndrome; therefore, it can be used as an accompanying remedy for prevention and treatment of patients with metabolic syndrome.

  11. Investigation of ozone and peroxone impacts on natural organic matter character and biofiltration performance using fluorescence spectroscopy.

    PubMed

    Peleato, Nicolás M; Sidhu, Balsher Singh; Legge, Raymond L; Andrews, Robert C

    2017-04-01

    Impacts of ozonation alone as well as an advanced oxidation process of ozone plus hydrogen peroxide (H2O2 + O3) on organic matter prior to and following biofiltration were studied at pilot scale. Three biofilters were operated in parallel to assess the effects of varying pre-treatment types and dosages. Conventionally treated water (coagulation/flocculation/sedimentation) was fed to one control biofilter, while the remaining two received water with varying applied doses of O3 or H2O2 + O3. Changes in organic matter were characterized using parallel factor analysis (PARAFAC) and fluorescence peak shifts. Intensities of all PARAFAC components were reduced by pre-oxidation; however, individual humic-like components were observed to be impacted to varying degrees upon exposure to O3 or H2O2 + O3. While the control biofilter uniformly reduced fluorescence of all PARAFAC components, three of the humic-like components were produced by biofiltration only when pre-oxidation was applied. A fluorescence red shift, which occurred with the application of O3 or H2O2 + O3, was attributed to a relative increase in carbonyl-containing components based on previously reported results. A subsequent blue shift in fluorescence caused by biofiltration which received pre-oxidized water indicated that biological treatment readily utilized organics produced by pre-oxidation. The results provide an understanding of the impacts of organic matter character and pre-oxidation on biofiltration efficiency for organic matter removal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Post-blasting seismicity in Rudna copper mine, Poland - source parameters analysis.

    NASA Astrophysics Data System (ADS)

    Caputa, Alicja; Rudziński, Łukasz; Talaga, Adam

    2017-04-01

    A serious hazard in Polish copper mines is high seismicity and the corresponding rockbursts. Many methods are used to reduce the seismic hazard; among them, one of the most effective is preventive blasting in potentially hazardous mining panels. The method is expected to provoke small to moderate tremors (up to M2.0) and in this way reduce stress accumulation in the rock mass. This work presents an analysis of post-blasting events in the Rudna copper mine, Poland. Using full moment tensor (MT) inversion and seismic spectra analysis, we try to find characteristic features of post-blasting seismic sources. Source parameters estimated for post-blasting events are compared with the parameters of not-provoked mining events that occurred in the vicinity of the provoked sources. Our studies show that the focal mechanisms of events which occurred after blasts have similar MT decompositions, namely they are characterized by a rather strong isotropic component compared with the isotropic component of not-provoked events. Source parameters obtained from spectral analysis also show that provoked seismicity has a specific source physics; among other indicators, this is visible in the S- to P-wave energy ratio, which is higher for not-provoked events. The comparison of all our results reveals three possible groups of sources: a) events occurring just after blasts, b) events occurring from 5 min to 24 h after blasts, and c) not-provoked seismicity (more than 24 h after blasting). Acknowledgements: This work was supported within statutory activities No3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.

  13. A National Evaluation of the Nighttime and Passenger Restriction Components of Graduated Driver Licensing

    PubMed Central

    Fell, James C.; Todd, Michael; Voas, Robert B.

    2011-01-01

    Introduction The high crash rate of youthful novice drivers has been recognized for half a century. Over the last decade, graduated driver licensing (GDL) systems, which extend the period of supervised driving and limit the novice's exposure to higher-risk conditions (such as nighttime driving), have effectively reduced crash involvements of novice drivers. Method This study used data from the Fatality Analysis Reporting System (FARS) and the implementation dates of GDL laws in a state-by-year panel study to evaluate the effectiveness of two key elements of GDL laws: nighttime restrictions and passenger limitations. Results Nighttime restrictions were found to reduce 16- and 17-year-old driver involvements in nighttime fatal crashes by an estimated 10% and 16- and 17-year-old drinking drivers in nighttime fatal crashes by 13%. Passenger restrictions were found to reduce 16- and 17-year-old driver involvements in fatal crashes with teen passengers by an estimated 9%. Conclusions These results confirm the effectiveness of these provisions in GDL systems. Impact on Public Health The results of this study indicate that nighttime restrictions and passenger limitations are very important components of any GDL law. PMID:22017831

  14. Pepper seed variety identification based on visible/near-infrared spectral technology

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Meng, Zhijun; Fan, Pengfei; Cai, Jichen

    2016-11-01

    Pepper is an important fruit vegetable, and with the expansion of the hybrid pepper planting area, detection of pepper seed purity has become especially important. This research used visible/near-infrared (VIS/NIR) spectral technology to determine the variety of single pepper seeds, with the hybrid pepper seeds "Zhuo Jiao NO.3", "Zhuo Jiao NO.4" and "Zhuo Jiao NO.5" chosen as research samples. VIS/NIR spectral data of 80 "Zhuo Jiao NO.3", 80 "Zhuo Jiao NO.4" and 80 "Zhuo Jiao NO.5" pepper seeds were collected, and the original spectral data were pretreated with standard normal variate (SNV) transformation, first derivative (FD), and Savitzky-Golay (SG) convolution smoothing methods. Principal component analysis (PCA) was adopted to reduce the dimension of the spectral data and extract principal components. According to the distributions of the first principal component (PC1) versus the second principal component (PC2), PC1 versus the third principal component (PC3), and PC2 versus PC3 in the corresponding two-dimensional planes, distribution areas of the three varieties of pepper seeds were delineated in each plane, and the discriminant accuracy of PCA was tested by observing where the principal component scores of the validation-set samples fell. This study combined PCA and linear discriminant analysis (LDA) to identify single pepper seed varieties; the results showed that with the FD preprocessing method, the discriminant accuracy for pepper seed varieties was 98% for the validation set. It is concluded that VIS/NIR spectral technology is feasible for the identification of single pepper seed varieties.
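
    A hedged sketch of the classification chain, Savitzky-Golay first-derivative pretreatment followed by PCA and LDA, using scipy and scikit-learn; the spectra, labels and parameter choices are placeholders, not the reported experiment.

```python
# Sketch: SG first-derivative preprocessing, then a PCA + LDA classification pipeline.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(11)
spectra = rng.random((240, 300))              # 240 seeds x 300 wavelengths (placeholder)
labels = np.repeat([0, 1, 2], 80)             # three hypothetical varieties

deriv = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(model, deriv, labels, cv=5).mean())
```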

  15. Vacuum system design and tritium inventory for the TFTR charge exchange diagnostic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medley, S.S.

    The charge exchange diagnostic for the TFTR comprises two analyzer systems which contain a total of twenty independent mass/energy analyzers and one diagnostic neutral beam tentatively rated at 80 keV, 15 A. The associated vacuum systems were analyzed using the Vacuum System Transient Simulator (VSTS) computer program, which models the transient transport of multi-gas species through complex networks of ducts, valves, traps, vacuum pumps, and other related vacuum system components. In addition to providing improved design performance at reduced cost, the analysis yields estimates for the exchange of tritium from the torus to the diagnostic components and of the diagnostic working gases to the torus.

  16. Isotope Brayton electric power system for the 500 to 2500 watt range.

    NASA Technical Reports Server (NTRS)

    Macosko, R. P.; Barna, G. J.; Block, H. B.; Ingle, B. D.

    1972-01-01

    An extensive study was conducted at the Lewis Research Center to evaluate an isotope Brayton electric power system for use in the 500 to 2500 W power range. The study emphasized overall system simplicity in order to reduce parasitic power losses and improve system reliability. The study included detailed parametric cycle analysis, conceptual component designs, and evaluation of system packaging. The study has resulted in the selection of a single-loop system (gas) with six major components including one rotating unit. Calculated net system efficiency varies from 23 to 28% over the power range. The use of the Pu-238 heat source being developed for the Multi-Hundred-Watt Radioisotope Thermoelectric Generator program was assumed.

  17. Overview of thermal barrier coatings in diesel engines

    NASA Technical Reports Server (NTRS)

    Yonushonis, T. M.

    1995-01-01

    An understanding of delamination mechanisms in thermal barrier coatings has been developed for diesel applications through nondestructive evaluation, structural analysis modeling and engine evaluation of various thermal barrier coatings. This knowledge has resulted in improved thermal barrier coatings which survive abusive cyclic fatigue tests in high output diesel engines. Significant efforts are still required to improve the plasma spray processing capability and the economics for complex geometry diesel engine components. Data obtained from advanced diesel engines on the effect of thermal barrier coatings on engine fuel economy and emissions have not been encouraging. Although the underlying metal component temperatures have been reduced through the use of thermal barrier coatings, engine efficiency and emissions trends have not been promising.

  18. Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components

    NASA Astrophysics Data System (ADS)

    Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.

    Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function provided the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. Testing every possible combination of component, level of potential damage, and repair-processing option would be an expensive and time-consuming undertaking, which prohibits broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in the remanufacturing of high value, high demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the development of the CLP models and the validation efforts, comparing simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and the optimal time to remanufacture, (2) the qualification of used gearbox components for the remanufacturing process, and (3) the performance of remanufactured components.

  19. Studies Related to Computer-Assisted Instruction. Semi-Annual Progress Report on Contract Nonr-624(18) October 1, 1968 through March 31, 1969.

    ERIC Educational Resources Information Center

    Glaser, Robert

    A study of response latency in a drill-and-practice task showed that variability in latency measures could be reduced by the use of self-pacing procedures, but not by the detailed analysis of latency into separate components. Experiments carried out on instructional history variables in teaching a mirror image, oblique line discrimination, showed…

  20. System Engineering Analysis of Topside Cranes Installed on AD, AR, and AS Class Ships

    DTIC Science & Technology

    1982-02-06

    4 severity CASREPs. Water or moisture in pumps or motors accounted for five CASREPs; moisture in a transformer caused a class C fire, which resulted... Components of Bridge Cranes, Monorail Hoist Systems, and Side Port Hoists. Associated equipment: accumulators, ladders, speed reducers, brakes, load blocks, switches, bridge locking devices, tow bars, bumpers, monorails, tracks, collector assemblies, motors (electrical and hydraulic), trolley buses, controllers, trolleys.

  1. Dynamic analysis of flexible mechanical systems using LATDYN

    NASA Technical Reports Server (NTRS)

    Wu, Shih-Chin; Chang, Che-Wei; Housner, Jerrold M.

    1989-01-01

    A 3-D, finite element based simulation tool for flexible multibody systems is presented. Hinge degrees of freedom are built into the equations of motion to reduce the number of geometric constraints. Because the formulation is finite element based, the approach avoids the difficulty of selecting deformation modes for flexible components that arises in the assumed-mode method. The tool is applied to simulate a practical space structure deployment problem. The example results demonstrate the capability of the code and the approach.

  2. Development of a gluten-free rice noodle by utilizing protein-polyphenol interaction between soy protein isolate and extract of Acanthopanax sessiliflorus.

    PubMed

    Lee, Da-Som; Kim, Yang; Song, Youngwoon; Lee, Ji-Hye; Lee, Suyong; Yoo, Sang-Ho

    2016-02-01

    The potential of the protein-polyphenol interaction was applied to crosslinking-reinforced protein networks in gluten-free rice noodles. Specifically, the inter-component interaction between soy protein isolate and an extract of Acanthopanax sessiliflorus fruit (ogaja) was examined with a view to improving noodle quality. In a component-interaction model system, a mixture of soy protein isolate (SPI) and ogaja extract (OE) induced a drastic increase in absorbance at 660 nm through haze formation, while the major anthocyanin of ogaja, cyanidin-3-O-sambubioside, interacted only sparsely with SPI or gelatin. Individual or combined treatment of rice dough with SPI and OE decreased all the viscosity parameters in rapid visco analysis. However, SPI-OE treatment significantly increased all the texture parameters of rice dough derived from Mixolab(®) analysis (P < 0.05). Incorporation of SPI in rice dough significantly reduced the endothermic ΔH, and SPI-OE treatment decreased this value further. The SPI-OE interaction significantly increased the tensile properties of the cooked noodle and decreased cooking loss by 53.7% compared with the untreated rice noodle. SPI-OE treatment caused a considerable reinforcement of the network, as shown by the reduced cooking loss, and suggested the potential of protein-polyphenol interactions for gluten-free rice noodle production. © 2015 Society of Chemical Industry.

  3. Fibrinogen concentrate as first-line therapy in aortic surgery reduces transfusion requirements in patients with platelet counts over or under 100×109/L

    PubMed Central

    Solomon, Cristina; Rahe-Meyer, Niels

    2015-01-01

    Background Administration of fibrinogen concentrate, targeting improved maximum clot firmness (MCF) of the thromboelastometric fibrin-based clot quality test (FIBTEM) is effective as first-line haemostatic therapy in aortic surgery. We performed a post-hoc analysis of data from a randomised, placebo-controlled trial of fibrinogen concentrate, to investigate whether fibrinogen concentrate reduced transfusion requirements for patients with platelet counts over or under 100×109/L. Material and methods Aortic surgery patients with coagulopathic bleeding after cardiopulmonary bypass were randomised to receive either fibrinogen concentrate (n=29) or placebo (n=32). Platelet count was measured upon removal of the aortic clamp, and coagulation and haematology parameters were measured peri-operatively. Transfusion of allogeneic blood components was recorded and compared between groups. Results After cardiopulmonary bypass, haemostatic and coagulation parameters worsened in all groups; plasma fibrinogen level (determined by the Clauss method) decreased by 43–58%, platelet count by 53–64%, FIBTEM maximum clot firmness (MCF) by 38–49%, FIBTEM maximum clot elasticity (MCE) by 43–54%, extrinsically activated test (EXTEM) MCF by 11–22%, EXTEM MCE by 25–41% and the platelet component of the clot by 23–39%. Treatment with fibrinogen concentrate (mean dose 7–9 g in the 4 groups) significantly reduced post-operative allogeneic blood component transfusion requirements when compared to placebo both for patients with a platelet count ≥100×109/L and for patients with a platelet count <100×109/L. Discussion FIBTEM-guided administration of fibrinogen concentrate reduced transfusion requirements when used as a first-line haemostatic therapy during aortic surgery in patients with platelet counts over or under 100×109/L. PMID:25369608

  4. Genomic, proteomic, and biochemical analysis of the organohalide respiratory pathway in Desulfitobacterium dehalogenans.

    PubMed

    Kruse, Thomas; van de Pas, Bram A; Atteia, Ariane; Krab, Klaas; Hagen, Wilfred R; Goodwin, Lynne; Chain, Patrick; Boeren, Sjef; Maphosa, Farai; Schraa, Gosse; de Vos, Willem M; van der Oost, John; Smidt, Hauke; Stams, Alfons J M

    2015-03-01

    Desulfitobacterium dehalogenans is able to grow by organohalide respiration using 3-chloro-4-hydroxyphenyl acetate (Cl-OHPA) as an electron acceptor. We used a combination of genome sequencing, biochemical analysis of redox active components, and shotgun proteomics to study elements of the organohalide respiratory electron transport chain. The genome of Desulfitobacterium dehalogenans JW/IU-DC1(T) consists of a single circular chromosome of 4,321,753 bp with a GC content of 44.97%. The genome contains 4,252 genes, including six rRNA operons and six predicted reductive dehalogenases. One of the reductive dehalogenases, CprA, is encoded by a well-characterized cprTKZEBACD gene cluster. Redox active components were identified in concentrated suspensions of cells grown on formate and Cl-OHPA or formate and fumarate, using electron paramagnetic resonance (EPR), visible spectroscopy, and high-performance liquid chromatography (HPLC) analysis of membrane extracts. In cell suspensions, these components were reduced upon addition of formate and oxidized after addition of Cl-OHPA, indicating involvement in organohalide respiration. Genome analysis revealed genes that likely encode the identified components of the electron transport chain from formate to fumarate or Cl-OHPA. Data presented here suggest that the first part of the electron transport chain from formate to fumarate or Cl-OHPA is shared. Electrons are channeled from an outward-facing formate dehydrogenase via menaquinones to a fumarate reductase located at the cytoplasmic face of the membrane. When Cl-OHPA is the terminal electron acceptor, electrons are transferred from menaquinones to outward-facing CprA, via an as-yet-unidentified membrane complex, and potentially an extracellular flavoprotein acting as an electron shuttle between the quinol dehydrogenase membrane complex and CprA. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  5. Genomic, Proteomic, and Biochemical Analysis of the Organohalide Respiratory Pathway in Desulfitobacterium dehalogenans

    PubMed Central

    van de Pas, Bram A.; Atteia, Ariane; Krab, Klaas; Hagen, Wilfred R.; Goodwin, Lynne; Chain, Patrick; Boeren, Sjef; Maphosa, Farai; Schraa, Gosse; de Vos, Willem M.; van der Oost, John; Smidt, Hauke

    2014-01-01

    Desulfitobacterium dehalogenans is able to grow by organohalide respiration using 3-chloro-4-hydroxyphenyl acetate (Cl-OHPA) as an electron acceptor. We used a combination of genome sequencing, biochemical analysis of redox active components, and shotgun proteomics to study elements of the organohalide respiratory electron transport chain. The genome of Desulfitobacterium dehalogenans JW/IU-DC1T consists of a single circular chromosome of 4,321,753 bp with a GC content of 44.97%. The genome contains 4,252 genes, including six rRNA operons and six predicted reductive dehalogenases. One of the reductive dehalogenases, CprA, is encoded by a well-characterized cprTKZEBACD gene cluster. Redox active components were identified in concentrated suspensions of cells grown on formate and Cl-OHPA or formate and fumarate, using electron paramagnetic resonance (EPR), visible spectroscopy, and high-performance liquid chromatography (HPLC) analysis of membrane extracts. In cell suspensions, these components were reduced upon addition of formate and oxidized after addition of Cl-OHPA, indicating involvement in organohalide respiration. Genome analysis revealed genes that likely encode the identified components of the electron transport chain from formate to fumarate or Cl-OHPA. Data presented here suggest that the first part of the electron transport chain from formate to fumarate or Cl-OHPA is shared. Electrons are channeled from an outward-facing formate dehydrogenase via menaquinones to a fumarate reductase located at the cytoplasmic face of the membrane. When Cl-OHPA is the terminal electron acceptor, electrons are transferred from menaquinones to outward-facing CprA, via an as-yet-unidentified membrane complex, and potentially an extracellular flavoprotein acting as an electron shuttle between the quinol dehydrogenase membrane complex and CprA. PMID:25512312

  6. Construction of a Cyber Attack Model for Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varuttamaseni, Athi; Bari, Robert A.; Youngblood, Robert

    The consideration of how one piece of compromised digital equipment can impact neighboring equipment is critical to understanding the progression of cyber attacks. The degree of influence that one component may have on another depends on a variety of factors, including the sharing of resources such as network bandwidth or processing power, the level of trust between components, and the inclusion of segmentation devices such as firewalls. The interactions among components via mechanisms that are unique to the digital world are not usually considered in traditional PRA. This means potential sequences of events that may occur during an attack may be missed if one were to only look at conventional accident sequences. This paper presents a method where, starting from the initial attack vector, the progression of a cyber attack can be modeled. The propagation of the attack is modeled by considering certain attributes of the digital components in the system. These attributes determine the potential vulnerability of a component to a class of attack and the capability gained by the attackers once they are in control of the equipment. The use of attributes allows similar components (components with the same set of attributes) to be modeled in the same way, thereby reducing the computing resources required for analysis of large systems.

  7. Numerical simulation of machining distortions on a forged aerospace component following a one and a multi-step approaches

    NASA Astrophysics Data System (ADS)

    Prete, Antonio Del; Franchi, Rodolfo; Antermite, Fabrizio; Donatiello, Iolanda

    2018-05-01

    Residual stresses appear in a component as a consequence of thermo-mechanical processes (e.g., ring rolling), casting and heat treatments. When such components are machined, distortions arise from the redistribution of the residual stresses left in the material by the foregoing process history. Excessive distortions can lead to a large number of scrap parts. Since dimensional accuracy directly affects engine efficiency, dimensional control of aerospace components is a non-trivial issue. This paper addresses the distortion of large, thin-walled aero-engine components in nickel superalloys. In order to estimate distortions of the inner diameters after internal turning operations, a 3D Finite Element Method (FEM) analysis was developed for a real industrial test case. The entire process history was taken into account by developing FEM models of the ring rolling process and heat treatments. Three different ring rolling strategies were studied, and the combination of process parameters that yields the best dimensional accuracy was identified. Furthermore, grain size evolution and recrystallization during manufacturing were investigated numerically using a semi-empirical Johnson-Mehl-Avrami-Kolmogorov (JMAK) model. The volume subtractions were simulated by Boolean trimming, and both a one-step and a multi-step analysis were performed. The multi-step procedure made it possible to choose the material removal sequence that minimizes machining distortions.
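
    The semi-empirical JMAK model mentioned above predicts the recrystallized volume fraction as a function of time. A minimal sketch of the Avrami equation is given below; the rate constant k and exponent n are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    def jmak_fraction(t, k, n):
        """Recrystallized volume fraction X(t) from the Avrami (JMAK) equation.

        X(t) = 1 - exp(-k * t**n), where k is a temperature-dependent rate
        constant and n is the Avrami exponent; both are fitted to data.
        """
        return 1.0 - np.exp(-k * np.power(t, n))

    # Illustrative values only (not from the paper): k = 0.02 s^-n, n = 2.5.
    t = np.linspace(0.0, 10.0, 6)           # time, s
    print(jmak_fraction(t, k=0.02, n=2.5))  # fraction recrystallized at each time
    ```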

  8. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    NASA Astrophysics Data System (ADS)

    Thompson, A. P.; Swiler, L. P.; Trott, C. R.; Foiles, S. M.; Tucker, G. J.

    2015-03-01

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
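
    The abstract states that SNAP assumes a linear relation between atom energy and bispectrum components, with coefficients obtained by weighted least-squares regression against the QM training set. The sketch below illustrates that fitting step on synthetic arrays; it is not the LAMMPS implementation, and all array names are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: B[i, j] = j-th bispectrum component summed over atoms
    # of configuration i; e[i] = reference QM energy; w[i] = per-configuration weight.
    n_cfg, n_bispec = 200, 30
    B = rng.normal(size=(n_cfg, n_bispec))
    true_beta = rng.normal(size=n_bispec)
    e = B @ true_beta + 0.01 * rng.normal(size=n_cfg)
    w = rng.uniform(0.5, 2.0, size=n_cfg)

    # Weighted least squares: scale rows by sqrt(w) and solve the ordinary problem.
    sqrt_w = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(sqrt_w * B, sqrt_w[:, 0] * e, rcond=None)

    print(np.allclose(beta, true_beta, atol=0.05))  # linear coefficients recovered
    ```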

  9. Hydrogen Compressor Reliability Investigation and Improvement. Cooperative Research and Development Final Report, CRADA Number CRD-13-514

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terlip, Danny

    2016-03-28

    Diaphragm compressors have become the primary source of on-site hydrogen compression for hydrogen fueling stations around the world. NREL and PDC have undertaken two studies aimed at improving hydrogen compressor operation and reducing the cost contribution to dispensed fuel. The first study identified the failure mechanisms associated with mechanical compression to reduce the maintenance and down-time. The second study will investigate novel station configurations to maximize hydrogen usage and compressor lifetime. This partnership will allow for the simulation of operations in the field and a thorough analysis of the component failure to improve the reliability of diaphragm compression.

  10. Reduction of police vehicle accidents through mechanically aided supervision

    PubMed Central

    Larson, Lynn D.; Schnelle, John F.; Kirchner, Robert; Carr, Adam F.; Domash, Michele; Risley, Todd R.

    1980-01-01

    Tachograph recorders were installed in 224 vehicles of a metropolitan police department to monitor vehicle operation in an attempt to reduce the rate of accidents. Police sergeants reviewed each tachograph chart and provided feedback to officers regarding their driving performance. Reliability checks and additional feedback procedures were implemented so that upper level supervisors monitored and controlled the performance of field sergeants. The tachograph intervention and components of the feedback system nearly eliminated personal injury accidents and sharply reduced accidents caused by officer negligence. A cost-benefit analysis revealed that the savings in vehicle repair and injury claims outweighed the equipment and operating costs. PMID:16795634

  11. [Analysis of the 4th generation outer space bred Angelica dahurica by FTIR spectroscopy].

    PubMed

    Zhu, Yan-ying; Wu, Peng-le; Liu, Mei-yi; Wang, Zhi-zhou; Guo, Xi-hua; Guan, Ying

    2012-03-01

    The major components of the 4th-generation outer-space-bred Angelica dahurica and of the ground-grown control group were determined and analyzed by Fourier transform infrared (FTIR) spectroscopy and second-derivative spectra, given the large mutations induced in the plants by space mutagenesis. The results show that the content of coumarin (1741 cm(-1)), the main active component of the space-bred Angelica dahurica, increased, and that the contents of protein (1459, 1419 cm(-1)) and fat (930 cm(-1)) increased slightly, whereas the contents of starch and dietary fiber decreased drastically. There are obvious differences between the peak values of the second-derivative spectra of the plants, revealing that the outer-space Angelica dahurica contained an amine component at 1279 cm(-1). Space mutation breeding is thus favorable for breeding Angelica with improved characteristics.

  12. Improvement of Automated Identification of the Heart Wall in Echocardiography by Suppressing Clutter Component

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2013-07-01

    For the facilitation of analysis and elimination of the operator dependence in estimating the myocardial function in echocardiography, we have previously developed a method for automated identification of the heart wall. However, there are misclassified regions because the magnitude-squared coherence (MSC) function of echo signals, which is one of the features in the previous method, is sensitively affected by the clutter components such as multiple reflection and off-axis echo from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we proposed a method to suppress the effect of the clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using our proposed method in the longitudinal axis view of the heart.
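
    The magnitude-squared coherence (MSC) feature referred to above can be computed from Welch-style spectral estimates. The sketch below shows MSC between two synthetic echo signals using scipy.signal.coherence; the adaptive MTI filtering step of the paper is not reproduced, and the signals are illustrative.

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0                        # sampling rate, Hz (illustrative)
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(1)

    # Two synthetic echo signals sharing a 40 Hz component plus independent noise.
    x = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.normal(size=t.size)
    y = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * rng.normal(size=t.size)

    f, msc = coherence(x, y, fs=fs, nperseg=256)  # magnitude-squared coherence
    print(f[np.argmax(msc)], msc.max())           # peak coherence near 40 Hz
    ```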

  13. Development of advanced high temperature in-cylinder components and tribological systems for low heat rejection diesel engines, phase 1

    NASA Astrophysics Data System (ADS)

    Kroeger, C. A.; Larson, H. J.

    1992-03-01

    Analysis and concept design work completed in Phase 1 have identified a low heat rejection engine configuration with the potential to meet the Heavy Duty Transport Technology program specific fuel consumption goal of 152 g/kW-hr. The proposed engine configuration incorporates low heat rejection, in-cylinder components designed for operation at 24 MPa peak cylinder pressure. Water cooling is eliminated by selective oil cooling of the components. A high temperature lubricant will be required due to increased in-cylinder operating temperatures. A two-stage turbocharger air system with intercooling and aftercooling was selected to meet engine boost and BMEP requirements. A turbocompound turbine stage is incorporated for exhaust energy recovery. The concept engine cost was estimated to be 43 percent higher compared to a Caterpillar 3176 engine. The higher initial engine cost is predicted to be offset by reduced operating costs due to the lower fuel consumption.

  14. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    NASA Astrophysics Data System (ADS)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    A reliability model for a high-temperature, high-pressure multi-stage decompression control valve (HMDCV) was established based on stress-strength interference theory, and a temperature correction coefficient was introduced to revise the material fatigue limit at elevated temperature. The reliability of the key high-risk components and the fatigue sensitivity curve of each component were calculated and analyzed by combining the fatigue-life analysis of the control valve with reliability theory. The proportional impact of each component on fatigue failure of the control valve system was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life of the main pressure-bearing parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on system reliability; stress concentration in the key parts of the control valve can be reduced during the design process by improving the structure.
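
    Under stress-strength interference theory, reliability is the probability that strength exceeds stress. A minimal sketch is given below, assuming independent, normally distributed stress and strength and a hypothetical temperature correction coefficient that derates the fatigue limit; the numbers are illustrative and not taken from the valve analysis.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        """R = P(strength > stress) for independent normal strength and stress."""
        z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return norm.cdf(z)

    # Illustrative numbers only. An assumed temperature correction coefficient
    # kT < 1 derates the room-temperature fatigue limit.
    kT = 0.85
    mu_S, sd_S = kT * 480.0, 40.0   # corrected fatigue strength, MPa
    mu_L, sd_L = 320.0, 35.0        # operating stress, MPa
    print(interference_reliability(mu_S, sd_S, mu_L, sd_L))
    ```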

  15. Development of advanced high temperature in-cylinder components and tribological systems for low heat rejection diesel engines, phase 1

    NASA Technical Reports Server (NTRS)

    Kroeger, C. A.; Larson, H. J.

    1992-01-01

    Analysis and concept design work completed in Phase 1 have identified a low heat rejection engine configuration with the potential to meet the Heavy Duty Transport Technology program specific fuel consumption goal of 152 g/kW-hr. The proposed engine configuration incorporates low heat rejection, in-cylinder components designed for operation at 24 MPa peak cylinder pressure. Water cooling is eliminated by selective oil cooling of the components. A high temperature lubricant will be required due to increased in-cylinder operating temperatures. A two-stage turbocharger air system with intercooling and aftercooling was selected to meet engine boost and BMEP requirements. A turbocompound turbine stage is incorporated for exhaust energy recovery. The concept engine cost was estimated to be 43 percent higher compared to a Caterpillar 3176 engine. The higher initial engine cost is predicted to be offset by reduced operating costs due the lower fuel consumption.

  16. Apparatus and method to reduce wear and friction between CMC-to-metal attachment and interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cairo, Ronald Ralph; Parolini, Jason Robert; Delvaux, John McConnell

    An apparatus to reduce wear and friction at a CMC-to-metal attachment and interface, including a metal layer configured for insertion at the surface interface between a CMC component and a metal component. The surface interface of the metal layer is compliant relative to asperities of the surface interface of the CMC component. The coefficient of friction between the surface interface of the CMC component and the metal component is about 1.0 or less at an operating temperature between about 300 °C and about 325 °C and at a limiting temperature of the metal component.

  17. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images

    PubMed Central

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He

    2018-01-01

    Purpose We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement was tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Results Overall tumor size was 4.80 ± 6.8 mL (mean ±standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey’s formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion The ice cream cone method and other two-component formulas including the ellipsoidal and Linskey’s formulas allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
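
    For orientation, the two simplest one-component estimators compared in the study can be written directly from three orthogonal diameters, as sketched below; the two-component ice cream cone formula itself is not reproduced here, and the diameters shown are illustrative values only.

    ```python
    from math import pi

    def cuboidal_volume(a, b, c):
        """Cuboidal (A x B x C) estimate from three orthogonal diameters."""
        return a * b * c

    def ellipsoidal_volume(a, b, c):
        """Ellipsoidal estimate: pi/6 * A * B * C."""
        return pi / 6.0 * a * b * c

    # Illustrative diameters in cm (not patient data).
    a, b, c = 2.4, 1.8, 2.0
    print(cuboidal_volume(a, b, c), ellipsoidal_volume(a, b, c))  # mL (cm^3)
    ```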

  18. Multivariate Genetic Correlates of the Auditory Paired Stimuli-Based P2 Event-Related Potential in the Psychosis Dimension From the BSNIP Study.

    PubMed

    Mokhtari, Mohammadreza; Narayanan, Balaji; Hamm, Jordan P; Soh, Pauline; Calhoun, Vince D; Ruaño, Gualberto; Kocherla, Mohan; Windemuth, Andreas; Clementz, Brett A; Tamminga, Carol A; Sweeney, John A; Keshavan, Matcheri S; Pearlson, Godfrey D

    2016-05-01

    The complex molecular etiology of psychosis in schizophrenia (SZ) and psychotic bipolar disorder (PBP) is not well defined, presumably due to their multifactorial genetic architecture. Neurobiological correlates of psychosis can be identified through genetic associations of intermediate phenotypes such as event-related potential (ERP) from auditory paired stimulus processing (APSP). Various ERP components of APSP are heritable and aberrant in SZ, PBP and their relatives, but their multivariate genetic factors are less explored. We investigated the multivariate polygenic association of ERP from 64-sensor auditory paired stimulus data in 149 SZ, 209 PBP probands, and 99 healthy individuals from the multisite Bipolar-Schizophrenia Network on Intermediate Phenotypes study. Multivariate association of 64-channel APSP waveforms with a subset of 16,999 single nucleotide polymorphisms (SNPs) (reduced from a 1-million-SNP array) was examined using parallel independent component analysis (Para-ICA). Biological pathways associated with the genes were assessed using enrichment-based analysis tools. Para-ICA identified 2 ERP components, of which one was significantly correlated with a genetic network comprising multiple linearly coupled gene variants that explained ~4% of the ERP phenotype variance. Enrichment analysis revealed epidermal growth factor, endocannabinoid signaling, glutamatergic synapse and maltohexaose transport associated with P2 component of the N1-P2 ERP waveform. This ERP component also showed deficits in SZ and PBP. Aberrant P2 component in psychosis was associated with gene networks regulating several fundamental biologic functions, either general or specific to nervous system development. The pathways and processes underlying the gene clusters play a crucial role in brain function, plausibly implicated in psychosis. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  19. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. This is therefore not only a multi-physics problem but also a multi-scale problem (spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that, even though computational power is growing steadily, the overall computational cost of a RISMC analysis may not be viable for certain cases. One solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analysis by reducing the number of simulation runs to perform and by employing surrogate models in place of the actual simulation codes. This report focuses on reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (µs instead of hours or days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
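
    The surrogate-modeling idea described above can be illustrated in a few lines: fit a fast regressor to a small number of expensive simulation runs and query the regressor instead of the code. The sketch below uses a Gaussian-process surrogate from scikit-learn as one possible choice; it is a toy stand-in, not the RAVEN/RELAP-7 workflow of the report.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(x):
        """Stand-in for a long-running thermo-hydraulic code (illustrative only)."""
        return np.sin(3.0 * x) + 0.1 * x**2

    # A small design of experiments: a few expensive runs.
    x_train = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
    y_train = expensive_simulation(x_train).ravel()

    # Fit the surrogate once, then evaluate it cheaply wherever needed.
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    surrogate.fit(x_train, y_train)

    x_query = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
    y_pred, y_std = surrogate.predict(x_query, return_std=True)  # mean and uncertainty
    print(y_pred[:3], y_std[:3])
    ```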

  20. Component-Level Electronic-Assembly Repair (CLEAR) Analysis of the Problem Reporting and Corrective Action (PRACA) Database of the International Space Station On-Orbit Electrical Systems

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.

    2011-01-01

    The NASA Constellation Program is investigating and developing technologies to support human exploration of the Moon and Mars. The Component-Level Electronic-Assembly Repair (CLEAR) task is part of the Supportability Project managed by the Exploration Technology Development Program. CLEAR is aimed at enabling a flight crew to diagnose and repair electronic circuits in space yet minimize logistics spares, equipment, and crew time and training. For insight into actual space repair needs, in early 2008 the project examined the operational experience of the International Space Station (ISS) program. CLEAR examined the ISS on-orbit Problem Reporting and Corrective Action database for electrical and electronic system problems. The ISS has higher than predicted reliability yet, as expected, it has persistent problems. A goal was to identify which on-orbit electrical problems could be resolved by a component-level replacement. A further goal was to identify problems that could benefit from the additional diagnostic and test capability that a component-level repair capability could provide. The study indicated that many problems stem from a small set of root causes that also represent distinct component problems. The study also determined that there are certain recurring problems where the current telemetry instrumentation and built-in tests are unable to completely resolve the problem. As a result, the root cause is listed as unknown. Overall, roughly 42 percent of on-orbit electrical problems on ISS could be addressed with a component-level repair. Furthermore, 63 percent of on-orbit electrical problems on ISS could benefit from additional external diagnostic and test capability. These results indicate that in situ component-level repair in combination with diagnostic and test capability can be expected to increase system availability and reduce logistics. The CLEAR approach can increase the flight crew's ability to act decisively to resolve problems while reducing dependency on Earth-supplied logistics for future Constellation Program missions.

  1. Testing for intracycle determinism in pseudoperiodic time series.

    PubMed

    Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A

    2008-06-01

    A determinism test is proposed based on the well-known method of the surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in the pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series and the results show the applicability of the proposed test.
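
    The surrogate-data step of such a test is commonly implemented with phase-randomized Fourier surrogates, which preserve the power spectrum while destroying any intracycle determinism. A minimal sketch is given below; the paper's specific detrending and deseasonalizing preprocessing is not reproduced.

    ```python
    import numpy as np

    def phase_randomized_surrogate(x, rng=None):
        """Surrogate with the same power spectrum as x but randomized Fourier phases."""
        rng = np.random.default_rng() if rng is None else rng
        X = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=X.size)
        phases[0] = 0.0                      # keep the mean (zero-frequency) term real
        if x.size % 2 == 0:
            phases[-1] = 0.0                 # keep the Nyquist term real
        return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

    rng = np.random.default_rng(2)
    signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.2 * rng.normal(size=1000)
    surrogate = phase_randomized_surrogate(signal, rng)
    print(signal.std(), surrogate.std())     # similar second-order statistics
    ```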

  2. Lattice Boltzmann for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael; Kocheemoolayil, Joseph; Kiris, Cetin

    2017-01-01

    Increase predictive use of High-Fidelity Computational Aero-Acoustics (CAA) capabilities for NASA's next-generation aviation concepts. CFD has been utilized substantially in analysis and design for steady-state problems (RANS), but computational resources are extremely challenged by high-fidelity unsteady problems (e.g., unsteady loads, buffet boundary, jet and installation noise, fan noise, active flow control, airframe noise). Novel techniques are needed to reduce the computational resources consumed by current high-fidelity CAA, to enable routine acoustic analysis of aircraft components at full-scale Reynolds number from first principles, and to achieve an order-of-magnitude reduction in wall time to solution.

  3. Mindfulness Based Stress Reduction for Academic Evaluation Anxiety: A Naturalistic Longitudinal Study.

    PubMed

    Dundas, Ingrid; Thorsheim, Torbjørn; Hjeltnes, Aslak; Binder, Per Einar

    2016-04-02

    Mindfulness based stress reduction (MBSR) for academic evaluation anxiety and self-confidence in 70 help-seeking bachelor's and master's students was examined. A repeated measures analysis of covariance on the 46 students who completed pretreatment and posttreatment measures (median age = 24 years, 83% women) showed that evaluation anxiety and self-confidence improved. A growth curve analysis with all 70 original participants showed reductions in both cognitive and emotional components of evaluation anxiety, and that reduction continued postintervention. Although more research is needed, this study indicates that MBSR may reduce evaluation anxiety.

  4. Mindfulness Based Stress Reduction for Academic Evaluation Anxiety: A Naturalistic Longitudinal Study

    PubMed Central

    Dundas, Ingrid; Thorsheim, Torbjørn; Hjeltnes, Aslak; Binder, Per Einar

    2016-01-01

    Mindfulness based stress reduction (MBSR) for academic evaluation anxiety and self-confidence in 70 help-seeking bachelor’s and master’s students was examined. A repeated measures analysis of covariance on the 46 students who completed pretreatment and posttreatment measures (median age = 24 years, 83% women) showed that evaluation anxiety and self-confidence improved. A growth curve analysis with all 70 original participants showed reductions in both cognitive and emotional components of evaluation anxiety, and that reduction continued postintervention. Although more research is needed, this study indicates that MBSR may reduce evaluation anxiety. PMID:27227169

  5. Characterisation of landfill leachate by EEM-PARAFAC-SOM during physical-chemical treatment by coagulation-flocculation, activated carbon adsorption and ion exchange.

    PubMed

    Oloibiri, Violet; De Coninck, Sam; Chys, Michael; Demeestere, Kristof; Van Hulle, Stijn W H

    2017-11-01

    The combination of fluorescence excitation-emission matrices (EEM), parallel factor analysis (PARAFAC) and self-organizing maps (SOM) is shown to be a powerful tool in the follow up of dissolved organic matter (DOM) removal from landfill leachate by physical-chemical treatment consisting of coagulation, granular activated carbon (GAC) and ion exchange. Using PARAFAC, three DOM components were identified: C1 representing humic/fulvic-like compounds; C2 representing tryptophan-like compounds; and C3 representing humic-like compounds. Coagulation with ferric chloride (FeCl3) at a dose of 7 g/L reduced the maximum fluorescence of C1, C2 and C3 by 52%, 17% and 15% respectively, while polyaluminium chloride (PACl) reduced C1 only by 7% at the same dose. DOM removal during GAC and ion exchange treatment of raw and coagulated leachate exhibited different profiles. At less than 2 bed volumes (BV) of treatment, the humic components C1 and C3 were rapidly removed, whereas at BV ≥ 2 the tryptophan-like component C2 was preferentially removed. Overall, leachate treated with coagulation + 10.6 BV GAC + 10.6 BV ion exchange showed the highest removal of C1 (39% - FeCl3, 8% - PACl), C2 (74% - FeCl3, 68% - PACl) and no C3 removal; whereas only 52% C2 and no C1 and C3 removal was observed in raw leachate treated with 10.6 BV GAC + 10.6 BV ion exchange only. Analysis of PARAFAC-derived components with SOM revealed that coagulation, GAC and ion exchange can treat leachate at least 50% longer than only GAC and ion exchange before the fluorescence composition of leachate remains unchanged. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Relative potency of varenicline or fluvoxamine to reduce responding for ethanol versus food depends on the presence or absence of concurrently earned food.

    PubMed

    Ginsburg, Brett C; Lamb, Richard J

    2014-03-01

    Varenicline, a nicotinic partial agonist, selectively reduces ethanol (EtOH)- versus sucrose-maintained behavior when tested in separate groups, yet like the indirect agonist fluvoxamine, this selectively inverts when EtOH and food are concurrently available. Here, we extend these findings by examining varenicline and fluvoxamine effects under a multiple concurrent schedule where food and EtOH are concurrently available in different components: Component 1 where the food fixed-ratio was 25 and Component 2 where the food fixed-ratio was 75. The EtOH fixed-ratio was always 5. Food-maintained responding predominated in Component 1, while EtOH-maintained responding predominated in Component 2. In a second experiment, varenicline effects were assessed under a multiple schedule where food, then EtOH, then again food were available in separate 5-minute components with fixed-ratios of 5 for each reinforcement. In the multiple concurrent schedule, varenicline was more potent at reducing food- versus EtOH-maintained responding in both components and reduced EtOH-maintained responding more potently during Component 1 (when food was almost never earned) than in Component 2 (where food was often earned). Fluvoxamine was similarly potent at reducing food- and EtOH-maintained responding. Under the multiple schedule, varenicline, like fluvoxamine, more potently decreases EtOH- versus food maintained responding when only food or EtOH is available in separate components. These results demonstrate that selective effects on drug- versus alternative-maintained behavior depend on the schedule arrangement, and assays in which EtOH or an alternative is the only programmed reinforcement may overestimate the selectivity of treatments to decrease EtOH self-administration. Thus selective effects obtained under one assay may not generalize to another. Better understanding the behavioral mechanisms responsible for these results may help to guide pharmaco-therapeutic development for substance use disorders.

  7. Anthropometric profile of combat athletes via multivariate analysis.

    PubMed

    Burdukiewicz, Anna; Pietraszewska, Jadwiga; Stachoń, Aleksandra; Andrzejewska, Justyna

    2017-11-07

    Athletic success is a complex phenotype influenced by multiple factors, from sport-specific skills to anthropometric characteristics. Considering the latter, the literature has repeatedly indicated that athletes possess distinct physical characteristics depending on the practiced discipline. The aim of the present study was to apply univariate and multivariate methods to assess a wide range of morphometric and somatotypic characteristics in male combat athletes. Biometric data were obtained from 206 male university-level practitioners of judo, jiu-jitsu, karate, kickboxing, taekwondo, and wrestling. Measures included height- and length-based variables, breadths, circumferences, and skinfolds. Body proportions and somatotype, using Sheldon's method of somatotyping as modified by Heath and Carter, were then determined. Body fat percentage was assessed by bioelectrical impedance analysis using tetrapolar hand-to-foot electrodes. Data were subjected to a wide array of statistical analyses. The results show between-group differences in the magnitudes of the analyzed characteristics. While mesomorphy was the dominant component of each group somatotype, enhanced ectomorphy was observed in those disciplines that require a high level of agility. Principal component analysis reduced the multivariate dimensionality of the data to three components (characterizing body size, height-based measures, and the anthropometric structure of the upper extremities) that explained the majority of data variance. The development of a sport-specific anthropometric profile via height- and mass-based and morphometric and somatotypic variables can aid in the design of training protocols and the identification of athlete markers as well as serve as a diagnostic criterion in predicting combat athlete performance.
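
    The dimensionality-reduction step described above (standardized measures projected onto three principal components) can be sketched as follows with scikit-learn; the data matrix here is synthetic and the variable count is a placeholder, not the study's measurement list.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    # Placeholder matrix: 206 athletes x 12 anthropometric measures (synthetic).
    X = rng.normal(size=(206, 12))

    X_std = StandardScaler().fit_transform(X)  # z-score each measure
    pca = PCA(n_components=3).fit(X_std)       # keep three components

    print(pca.explained_variance_ratio_)       # share of variance per component
    scores = pca.transform(X_std)              # athlete scores on the 3 components
    print(scores.shape)                        # (206, 3)
    ```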

  8. Distortion control in 20MnCr5 bevel gears after liquid nitriding process to maintain precision dimensions

    NASA Astrophysics Data System (ADS)

    Mahendiran, M.; Kavitha, M.

    2018-02-01

    Robotic and automotive gears are generally very high-precision components with tight tolerance limits. Bevel gears are widely used, dimensionally close-tolerance components that need stability, without any backlash or distortion, for smooth and trouble-free operation. Nitriding is carried out to enhance the wear resistance of the surface. The aim of this paper is to reduce distortion in the liquid nitriding process, even though plasma nitriding is usually preferred for high-precision components. Various trials were conducted to optimize the process parameters, considering a pre-adjusted dimensional setting to allow for nominal nitrided-layer growth. Surface cleaning, suitable fixtures and stress-relieving operations were also used to optimize the process. Microstructural analysis and Vickers hardness testing were carried out to analyze the phase changes and the variation in surface hardness and case depth. A CNC gear testing machine was used to determine the distortion level. A white layer of about 10-15 μm was found within the case depth of 250 ± 3.5 μm, with an average surface hardness of 670 HV. Hence the economical liquid nitriding process was successfully used to produce a high-hardness, wear-resistant coating on 20MnCr5 material with less distortion and a reduced need for secondary grinding for dimensional control.

  9. The development and implementation of the Chronic Care Management Programme in Counties Manukau.

    PubMed

    Wellingham, John; Tracey, Jocelyn; Rea, Harold; Gribben, Barry

    2003-02-21

    To develop an effective and efficient process for the seamless delivery of care for targeted patients with specific chronic diseases. To reduce inexplicable variation and maximise use of available resources by implementing evidence-based care processes. To develop a programme that is acceptable and applicable to the Counties Manukau region. A model for the management of people with chronic diseases was developed. Model components and potential interventions were piloted. For each disease project, a return on investment was calculated and external evaluation was undertaken. The initial model was subsequently modified and individual disease projects aligned to it. The final Chronic Care Management model, agreed in September 2001, described a single common process. Key components were the targeting of high risk patients, organisation of cost effective interventions into a system of care, and an integrated care server acting as a data warehouse with a rules engine, providing flags and reminders. Return on investment analysis suggested potential savings for each disease component from $277 to $980 per person per annum. For selected chronic diseases, introduction of an integrated chronic care management programme, based on internationally accepted best practice processes and interventions can make significant savings, reducing morbidity and improving the efficiency of health delivery in the Counties Manukau region.

  10. Dysfunctional error-related processing in female psychopathy

    PubMed Central

    Steele, Vaughn R.; Edwards, Bethany G.; Bernat, Edward M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Neurocognitive studies of psychopathy have predominantly focused on male samples. Studies have shown that female psychopaths exhibit similar affective deficits as their male counterparts, but results are less consistent across cognitive domains including response modulation. As such, there may be potential gender differences in error-related processing in psychopathic personality. Here we investigate response-locked event-related potential (ERP) components [the error-related negativity (ERN/Ne) related to early error-detection processes and the error-related positivity (Pe) involved in later post-error processing] in a sample of incarcerated adult female offenders (n = 121) who performed a response inhibition Go/NoGo task. Psychopathy was assessed using the Hare Psychopathy Checklist-Revised (PCL-R). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Consistent with previous research performed in psychopathic males, female psychopaths exhibited specific deficiencies in the neural correlates of post-error processing (as indexed by reduced Pe amplitude) but not in error monitoring (as indexed by intact ERN/Ne amplitude). Specifically, psychopathic traits reflecting interpersonal and affective dysfunction remained significant predictors of both time-domain and PCA measures reflecting reduced Pe mean amplitude. This is the first evidence to suggest that incarcerated female psychopaths exhibit similar dysfunctional post-error processing as male psychopaths. PMID:26060326

  11. Stigma-reducing components in direct-to-consumer prescription ads: onset controllability, offset controllability, and recategorization.

    PubMed

    An, Soontae; Kang, Hannah

    2011-01-01

    This study analyzed direct-to-consumer (DTC) print ads for stigmatized illnesses from 1998 to 2008. Attribution theory and recategorization theory were used as theoretical frames to assess whether those DTC ads contained message components to reduce stigma. DTC ads for 10 stigmatized illnesses in National Geographic, Better Homes and Gardens, Ladies' Home Journal, and Time were analyzed for the presence of onset controllability, offset controllability, and recategorization. Results showed that only 3.7% of ads offered the three message components together and, in fact, 21% of the ads did not contain any of the stigma-reducing message elements. Recategorization cue was the most prevalent component, while cues for onset and offset controllability were relatively less frequent, indicating the lack of educational components. Copyright © Taylor & Francis Group, LLC

  12. Evaluation of methodologies for assessing the overall diet: dietary quality scores and dietary pattern analysis.

    PubMed

    Ocké, Marga C

    2013-05-01

    This paper aims to describe different approaches for studying the overall diet with advantages and limitations. Studies of the overall diet have emerged because the relationship between dietary intake and health is very complex with all kinds of interactions. These cannot be captured well by studying single dietary components. Three main approaches to study the overall diet can be distinguished. The first method is researcher-defined scores or indices of diet quality. These are usually based on guidelines for a healthy diet or on diets known to be healthy. The second approach, using principal component or cluster analysis, is driven by the underlying dietary data. In principal component analysis, scales are derived based on the underlying relationships between food groups, whereas in cluster analysis, subgroups of the population are created with people that cluster together based on their dietary intake. A third approach includes methods that are driven by a combination of biological pathways and the underlying dietary data. Reduced rank regression defines linear combinations of food intakes that maximally explain nutrient intakes or intermediate markers of disease. Decision tree analysis identifies subgroups of a population whose members share dietary characteristics that influence (intermediate markers of) disease. It is concluded that all approaches have advantages and limitations and essentially answer different questions. The third approach is still more in an exploration phase, but seems to have great potential with complementary value. More insight into the utility of conducting studies on the overall diet can be gained if more attention is given to methodological issues.

  13. Numerical analysis of a fluidic oscillator

    NASA Astrophysics Data System (ADS)

    Hoettges, Stefan; Schenkel, Torsten; Oertel, Herbert

    2010-11-01

    The technology of fluid logic or fluidic has its origins in 1959 when scientists were looking for alternatives to electronics to realize measuring or automatic control tasks. In recent years interest in fluidic components has been renewed. Possible applications of fluidic oscillators have been tested in flow control, to reduce or eliminate separation regions, to avoid resonance noise in the flow past cavities, to improve combustion processes or for efficient cooling of turbine blades or electronic components. The oscillatory motion of the jet is achieved only by suitable shaping of the nozzle geometry and fluid-dynamic interactions, hence no moving components or external sources of energy are necessary. Therefore fluidic oscillators can be used in extreme environmental conditions, such as high temperatures, aggressive media or within electromagnetic fields. In the present study the working principle of the fluidic oscillator has been identified using three-dimensional unsteady RANS simulations and stability analysis. The numerical models used have been validated successfully against experimental data. Furthermore the effects of changes in inlet velocity, geometry and working fluid on the oscillation frequency have been investigated. Based on the results a new dimensionless number has been derived in order to characterize the unsteady behavior of the fluidic oscillator.

  14. Urbanization and human health in urban India: institutional analysis of water-borne diseases in Ahmedabad.

    PubMed

    Saravanan, V S; Ayessa Idenal, Marissa; Saiyed, Shahin; Saxena, Deepak; Gerke, Solvay

    2016-10-01

    Diseases are rapidly urbanizing. Ageing infrastructures, high levels of inequality, poor urban governance, rapidly growing economies and highly dense and mobile populations all create environments rife for water-borne diseases. This article analyzes the role of institutions as crosscutting entities among a myriad of factors that breed water-borne diseases in the city of Ahmedabad, India. It applies 'path dependency' and a 'rational choice' perspective to understand the factors facilitating the breeding of diseases. This study is based on household surveys of approximately 327 households in two case study wards and intermittent interviews with key informants over a period of 2 years. Principal component analysis is applied to reduce the data and convert a set of observations, which potentially correlate with each other, into components. Institutional analyses behind these components reveal the role of social actors in exploiting the deeply rooted inefficiencies affecting urban health. This has led to a vicious cycle; breaking this cycle requires understanding the political dynamics that underlie the exposure and prevalence of diseases to improve urban health. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Photovoltaic solar panels of crystalline silicon: Characterization and separation.

    PubMed

    Dias, Pablo Ribeiro; Benevit, Mariana Gonçalves; Veit, Hugo Marcelo

    2016-03-01

    Photovoltaic panels have a limited lifespan, and estimates show that large amounts of solar modules will be discarded as electronic waste in the near future. In order to retrieve important raw materials and reduce production costs and environmental impacts, recycling such devices is important. Initially, this article investigates which components of silicon photovoltaic modules are recyclable through their characterization using X-ray fluorescence, X-ray diffraction, energy dispersion spectroscopy and atomic absorption spectroscopy. Next, different separation methods are tested to favour further recycling processes. The glass was identified as soda-lime glass, the metallic filaments were identified as tin-lead coated copper, the panel cells were made of silicon with silver filaments attached, and the modules' frames were identified as aluminium, all of which are recyclable. Moreover, three different component segregation methods have been studied. Mechanical milling followed by sieving was able to separate silver from copper, while chemical separation using sulphuric acid was able to detach the semiconductor material. A thermogravimetric analysis was performed to evaluate the use of a pyrolysis step prior to component removal. The analysis showed that all polymeric fractions present degrade at 500 °C. © The Author(s) 2016.

  16. Managing manifest diseases, but not health risks, saved PepsiCo money over seven years.

    PubMed

    Caloyeras, John P; Liu, Hangsheng; Exum, Ellen; Broderick, Megan; Mattke, Soeren

    2014-01-01

    Workplace wellness programs are increasingly popular. Employers expect them to improve employee health and well-being, lower medical costs, increase productivity, and reduce absenteeism. To test whether such expectations are warranted, we evaluated the cost impact of the lifestyle and disease management components of PepsiCo's wellness program, Healthy Living. We found that seven years of continuous participation in one or both components was associated with an average reduction of $30 in health care cost per member per month. When we looked at each component individually, we found that the disease management component was associated with lower costs and that the lifestyle management component was not. We estimate disease management to reduce health care costs by $136 per member per month, driven by a 29 percent reduction in hospital admissions. Workplace wellness programs may reduce health risks, delay or avoid the onset of chronic diseases, and lower health care costs for employees with manifest chronic disease. But employers and policy makers should not take for granted that the lifestyle management component of such programs can reduce health care costs or even lead to net savings.

  17. A novel quantitative analysis method of three-dimensional fluorescence spectra for vegetable oils contents in edible blend oil

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei

    2015-04-01

    Edible blend oil is a mixture of vegetable oils. A well-formulated blend oil can meet the daily requirement for the two essential fatty acids and thereby provide balanced nutrition. Each vegetable oil has a different composition, so the vegetable oil contents of an edible blend oil determine its nutritional components. A high-precision quantitative analysis method to detect the vegetable oil contents in blend oil is therefore necessary to ensure balanced nutrition. Three-dimensional fluorescence spectroscopy offers high selectivity, high sensitivity, and high efficiency. Efficient extraction and full use of the information in three-dimensional fluorescence spectra improve the accuracy of the measurement. A novel quantitative analysis method based on Quasi-Monte Carlo integration is proposed to improve the measurement sensitivity and reduce random error. The partial least squares method is used to solve the nonlinear equations while avoiding the effect of multicollinearity. The recovery rates of blend oils mixed from peanut, soybean and sunflower oils are calculated to verify the accuracy of the method and are higher than those obtained with the linear method commonly used for component concentration measurement.
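
    The partial least squares step referred to above can be sketched as follows with scikit-learn, using synthetic collinear spectral features in place of the fluorescence data; the quasi-Monte Carlo integration of the spectra is not reproduced.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)
    # Synthetic stand-ins: 60 blend-oil samples x 300 collinear spectral features,
    # and 3 response columns (e.g., peanut, soybean, sunflower oil fractions).
    n_samples, n_features = 60, 300
    latent = rng.normal(size=(n_samples, 3))
    X = latent @ rng.normal(size=(3, n_features)) + 0.01 * rng.normal(size=(n_samples, n_features))
    Y = latent @ rng.normal(size=(3, 3))

    pls = PLSRegression(n_components=3).fit(X, Y)   # handles collinear predictors
    Y_hat = pls.predict(X)
    print(np.corrcoef(Y[:, 0], Y_hat[:, 0])[0, 1])  # recovery of the first oil fraction
    ```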

  18. Independent component analysis for the extraction of reliable protein signal profiles from MALDI-TOF mass spectra.

    PubMed

    Mantini, Dante; Petrucci, Francesca; Del Boccio, Piero; Pieragostino, Damiana; Di Nicola, Marta; Lugaresi, Alessandra; Federici, Giorgio; Sacchetta, Paolo; Di Ilio, Carmine; Urbani, Andrea

    2008-01-01

    Independent component analysis (ICA) is a signal processing technique that can be utilized to recover independent signals from a set of their linear mixtures. We propose ICA for the analysis of signals obtained from large proteomics investigations such as clinical multi-subject studies based on MALDI-TOF MS profiling. The method is validated on simulated and experimental data for demonstrating its capability of correctly extracting protein profiles from MALDI-TOF mass spectra. The comparison on peak detection with an open-source and two commercial methods shows its superior reliability in reducing the false discovery rate of protein peak masses. Moreover, the integration of ICA and statistical tests for detecting the differences in peak intensities between experimental groups allows to identify protein peaks that could be indicators of a diseased state. This data-driven approach demonstrates to be a promising tool for biomarker-discovery studies based on MALDI-TOF MS technology. The MATLAB implementation of the method described in the article and both simulated and experimental data are freely available at http://www.unich.it/proteomica/bioinf/.
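
    The core ICA step of such a pipeline can be sketched with scikit-learn's FastICA, as below, on synthetic linearly mixed profiles; the authors' MATLAB implementation, peak detection and statistical testing are not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)
    # Synthetic "spectra": 3 independent source profiles mixed into 10 observed signals.
    n_points = 2000
    sources = np.c_[np.abs(rng.standard_t(3, n_points)),
                    rng.laplace(size=n_points),
                    np.sign(np.sin(np.linspace(0, 40, n_points)))]
    mixing = rng.normal(size=(10, 3))
    observed = sources @ mixing.T + 0.01 * rng.normal(size=(n_points, 10))

    ica = FastICA(n_components=3, random_state=0)
    recovered = ica.fit_transform(observed)  # estimated independent components
    print(recovered.shape)                   # (2000, 3)
    ```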

  19. Effects of Thermal Barrier Coatings on Approaches to Turbine Blade Cooling

    NASA Technical Reports Server (NTRS)

    Boyle, Robert J.

    2007-01-01

    Reliance on Thermal Barrier Coatings (TBC) to reduce the amount of air used for turbine vane cooling is beneficial both from the standpoint of reduced NOx production, and as a means of improving cycle efficiency through improved component efficiency. It is shown that reducing vane cooling from 10 to 5 percent of mainstream air can lead to NOx reductions of nearly 25 percent while maintaining the same rotor inlet temperature. An analysis is given which shows that, when a TBC is relied upon in the vane thermal design process, significantly less coolant is required using internal cooling alone compared to film cooling. This is especially true for small turbines where internal cooling without film cooling permits the surface boundary layer to remain laminar over a significant fraction of the vane surface.

  20. Powerline noise elimination in biomedical signals via blind source separation and wavelet analysis.

    PubMed

    Akwei-Sekyere, Samuel

    2015-01-01

    The distortion of biomedical signals by powerline noise from recording biomedical devices has the potential to reduce the quality and convolute the interpretations of the data. Usually, powerline noise in biomedical recordings are extinguished via band-stop filters. However, due to the instability of biomedical signals, the distribution of signals filtered out may not be centered at 50/60 Hz. As a result, self-correction methods are needed to optimize the performance of these filters. Since powerline noise is additive in nature, it is intuitive to model powerline noise in a raw recording and subtract it from the raw data in order to obtain a relatively clean signal. This paper proposes a method that utilizes this approach by decomposing the recorded signal and extracting powerline noise via blind source separation and wavelet analysis. The performance of this algorithm was compared with that of a 4th order band-stop Butterworth filter, empirical mode decomposition, independent component analysis and, a combination of empirical mode decomposition with independent component analysis. The proposed method was able to expel sinusoidal signals within powerline noise frequency range with higher fidelity in comparison with the mentioned techniques, especially at low signal-to-noise ratio.
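
    The baseline that the proposed method is compared against, a 4th-order band-stop Butterworth filter around the powerline frequency, can be sketched as follows with SciPy; the blind source separation and wavelet stages themselves are not reproduced, and the signals are synthetic.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 500.0                                  # sampling rate, Hz (illustrative)
    t = np.arange(0, 4.0, 1.0 / fs)
    rng = np.random.default_rng(6)

    # Synthetic biomedical-like signal contaminated by 50 Hz powerline interference.
    clean = np.sin(2 * np.pi * 1.3 * t) + 0.3 * rng.normal(size=t.size)
    contaminated = clean + 0.8 * np.sin(2 * np.pi * 50 * t)

    # 4th-order band-stop Butterworth, 48-52 Hz, applied forward-backward (zero phase).
    b, a = butter(4, [48.0, 52.0], btype='bandstop', fs=fs)
    filtered = filtfilt(b, a, contaminated)

    print(np.std(contaminated - clean), np.std(filtered - clean))  # residual error drops
    ```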

  1. A climatology of total ozone mapping spectrometer data using rotated principal component analysis

    NASA Astrophysics Data System (ADS)

    Eder, Brian K.; Leduc, Sharon K.; Sickles, Joseph E.

    1999-02-01

    The spatial and temporal variability of total column ozone (Ω) obtained from the total ozone mapping spectrometer (TOMS version 7.0) during the period 1980-1992 was examined through the use of a multivariate statistical technique called rotated principal component analysis. Utilization of Kaiser's varimax orthogonal rotation led to the identification of 14, mostly contiguous subregions that together accounted for more than 70% of the total Ω variance. Each subregion displayed statistically unique Ω characteristics that were further examined through time series and spectral density analyses, revealing significant periodicities on semiannual, annual, quasi-biennial, and longer term time frames. This analysis facilitated identification of the probable mechanisms responsible for the variability of Ω within the 14 homogeneous subregions. The mechanisms were either dynamical in nature (i.e., advection associated with baroclinic waves, the quasi-biennial oscillation, or El Niño-Southern Oscillation) or photochemical in nature (i.e., production of odd oxygen (O or O3) associated with the annual progression of the Sun). The analysis has also revealed that the influence of a data retrieval artifact, found in equatorial latitudes of version 6.0 of the TOMS data, has been reduced in version 7.0.
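
    The analysis pipeline described here, principal components followed by Kaiser's varimax rotation, can be sketched in plain NumPy as below; the data are random placeholders standing in for the gridded ozone fields, and the rotation routine is a generic varimax implementation rather than the authors' code.

```python
# Sketch: principal components of a gridded field followed by Kaiser's varimax
# rotation, implemented directly in NumPy. Data here are random placeholders.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix (p x k)."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        )
        R = u @ vt
        var_new = np.sum(s)
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ R

rng = np.random.default_rng(1)
X = rng.standard_normal((156, 40))           # e.g. monthly values x grid cells (toy)
X -= X.mean(axis=0)
cov = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
loadings = evecs[:, order[:14]] * np.sqrt(evals[order[:14]])  # retain 14 PCs
rotated = varimax(loadings)                  # varimax-rotated loadings
```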

  2. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    PubMed

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    With the increase in computational power, there have been many recent developments in patient-specific models, which have the potential to provide more information on human pathophysiology. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features of the foot and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  3. Failure Analysis of Sapphire Refractive Secondary Concentrators

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Quinn, George D.

    2009-01-01

    Failure analysis was performed on two sapphire refractive secondary concentrators (RSC) that failed during elevated temperature testing. Both concentrators failed from machining/handling damage on the lens face. The first concentrator, which failed during testing to 1300 C, exhibited a large r-plane twin extending from the lens through much of the cone. The second concentrator, which was an attempt to reduce temperature gradients and failed during testing to 649 C, exhibited a few small twins on the lens face. The twins were not located at the origin, but represent another mode of failure that needs to be considered in the design of sapphire components. In order to estimate the fracture stress from fractographic evidence, branching constants were measured on sapphire strength specimens. The fractographic analysis indicated radial tensile stresses of 44 to 65 MPa on the lens faces near the origins. Finite element analysis indicated similar stresses for the first RSC, but lower stresses for the second RSC. Better machining and handling might have prevented the fractures; however, temperature gradients and the resultant thermal stresses need to be reduced to prevent twinning.

  4. Analysis and design of a genetic circuit for dynamic metabolic engineering.

    PubMed

    Anesiadis, Nikolaos; Kobayashi, Hideki; Cluett, William R; Mahadevan, Radhakrishnan

    2013-08-16

    Recent advances in synthetic biology have equipped us with new tools for bioprocess optimization at the genetic level. Previously, we have presented an integrated in silico design for the dynamic control of gene expression based on a density-sensing unit and a genetic toggle switch. In the present paper, analysis of a serine-producing Escherichia coli mutant shows that an instantaneous ON-OFF switch leads to a maximum theoretical productivity improvement of 29.6% compared to the mutant. To further the design, global sensitivity analysis is applied here to a mathematical model of serine production in E. coli coupled with a genetic circuit. The model of the quorum sensing and the toggle switch involves 13 parameters of which 3 are identified as having a significant effect on serine concentration. Simulations conducted in this reduced parameter space further identified the optimal ranges for these 3 key parameters to achieve productivity values close to the maximum theoretical values. This analysis can now be used to guide the experimental implementation of a dynamic metabolic engineering strategy and reduce the time required to design the genetic circuit components.
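
    A hedged sketch of the global sensitivity analysis step: a Saltelli-style pick-and-freeze estimator of first-order Sobol indices for a toy three-parameter model. The model, parameter names and ranges below are illustrative stand-ins, not the serine-production model of the paper.

```python
# Sketch: variance-based (first-order Sobol) global sensitivity analysis of a toy
# three-parameter model, using a Saltelli-style pick-and-freeze estimator in
# plain NumPy. Model and parameter ranges are illustrative only.
import numpy as np

def toy_titer(params):
    """Toy stand-in for the production model: scalar output per parameter set."""
    k_on, k_off, mu = params.T
    return k_on / (k_off + mu) - 0.1 * mu

rng = np.random.default_rng(0)
N, d = 10000, 3
lo = np.array([0.1, 0.01, 0.05])             # assumed lower bounds
hi = np.array([2.0, 1.00, 0.50])             # assumed upper bounds

A = lo + (hi - lo) * rng.random((N, d))
B = lo + (hi - lo) * rng.random((N, d))
fA, fB = toy_titer(A), toy_titer(B)
var_tot = np.var(np.r_[fA, fB])

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # vary only parameter i
    S_i = np.mean(fB * (toy_titer(ABi) - fA)) / var_tot
    print(f"parameter {i}: first-order Sobol index ~ {S_i:.2f}")
```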

  5. An 81.6 μW FastICA processor for epileptic seizure detection.

    PubMed

    Yang, Chia-Hsiang; Shih, Yi-Hsin; Chiueh, Herming

    2015-02-01

    To improve the performance of epileptic seizure detection, independent component analysis (ICA) is applied to multi-channel signals to separate artifacts from signals of interest. FastICA is an efficient algorithm to compute ICA. To reduce the energy dissipation, eigenvalue decomposition (EVD) is utilized in the preprocessing stage to reduce the convergence time of the iterative calculation of ICA components. EVD is computed efficiently through an array structure of processing elements running in parallel. An area-efficient EVD architecture is realized by leveraging the approximate Jacobi algorithm, leading to a 77.2% area reduction. By choosing a proper memory element and reduced wordlength, the power and area of the storage memory are reduced by 95.6% and 51.7%, respectively. The chip area is minimized through fixed-point implementation and architectural transformations. Given a latency constraint of 0.1 s, an 86.5% area reduction is achieved compared to the direct-mapped architecture. Fabricated in 90 nm CMOS, the core area of the chip is 0.40 mm^2. The FastICA processor, part of an integrated epileptic control SoC, dissipates 81.6 μW at 0.32 V. The computation delay of a frame of 256 samples for 8 channels is 84.2 ms. Compared to prior work, 0.5% power dissipation, 26.7% silicon area, and a 3.4× computation speedup are achieved. The performance of the chip was verified on a human dataset.

  6. Analysis and design of planar and non-planar wings for induced drag minimization

    NASA Technical Reports Server (NTRS)

    Straussfogel, Dennis M.; Maughmer, Mark D.

    1991-01-01

    Improvements in the aerodynamic efficiency of commercial transport aircraft will reduce fuel usage, with a subsequent reduction in both monetary and environmental cost. To this end, the current research is aimed at reducing the overall drag of these aircraft, with specific emphasis on reducing the drag generated by the lifting surfaces. The ultimate goal of this program is to create a wing design methodology which optimizes the geometry of the wing for lowest total drag within the constraints of a particular design specification. The components of drag which must be considered include profile drag, induced drag, and wave drag. Profile drag is dependent upon, among other things, the airfoil section and the total wetted area. Induced drag, which is manifested as energy left in the wake by the trailing vortex system, is mostly a function of wing span, but also depends on other geometric wing parameters. Wave drag of the wing, important in the transonic flight regime, is largely affected by the airfoil section, wing sweep, and so forth. The optimization problem is that of assessing the various parameters which contribute to the different components of wing drag, and determining the wing geometry which generates the best overall performance for a given aircraft mission. The primary thrust of the research effort to date has been the study of induced drag. Results from the study are presented.
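
    The classical lifting-line relation behind the induced-drag discussion, C_Di = C_L^2 / (pi * AR * e), can be written as a small helper; the numbers in the example are illustrative, not results from the study.

```python
# The classical relation behind the induced-drag discussion:
# C_Di = C_L^2 / (pi * AR * e), where AR is the aspect ratio and e the span
# efficiency factor. Numbers below are illustrative, not from the study.
import math

def induced_drag_coefficient(cl: float, aspect_ratio: float, e: float = 0.85) -> float:
    """Induced drag coefficient for a planar wing (lifting-line estimate)."""
    return cl ** 2 / (math.pi * aspect_ratio * e)

# Example: doubling aspect ratio at fixed lift roughly halves induced drag.
print(induced_drag_coefficient(cl=0.5, aspect_ratio=8))    # ~0.0117
print(induced_drag_coefficient(cl=0.5, aspect_ratio=16))   # ~0.0059
```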

  7. Different Approaches for Ensuring Performance/Reliability of Plastic Encapsulated Microcircuits (PEMs) in Space Applications

    NASA Technical Reports Server (NTRS)

    Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.

    2000-01-01

    Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecraft due to their lower cost, lower weight and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art, devices has become a significant portion of the job for the parts engineer. Assembling a reliable high-performance electronic system which includes COTS components requires that the end user assume a risk. To minimize the risk involved, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk to the total system. Currently, there are no industry-standard procedures for accomplishing this risk mitigation. This paper presents the approaches for reducing the risk of using PEMs devices in space flight systems as developed by two independent laboratories. The JPL procedure primarily involves tailored screening with an accelerated-stress philosophy, while the APL procedure is primarily a lot qualification procedure. Both laboratories have successfully reduced the risk of using the particular devices for their respective systems and mission requirements.

  8. Rapid characterization of chemical markers for discrimination of Moutan Cortex and its processed products by direct injection-based mass spectrometry profiling and metabolomic method.

    PubMed

    Li, Chao-Ran; Li, Meng-Ning; Yang, Hua; Li, Ping; Gao, Wen

    2018-06-01

    Processing of herbal medicines is a characteristic pharmaceutical technique in Traditional Chinese Medicine, which can reduce toxicity and side effects, improve the flavor and efficacy, and even change the pharmacological action entirely. It is therefore important to establish a method for finding chemical markers that differentiate herbal medicines at different degrees of processing. The aim of this study was to develop a rapid and practical method to discriminate Moutan Cortex and its processed products, and to reveal the characteristics of their chemical components based on chemical markers. Thirty batches of Moutan Cortex and its processed products, including 11 batches of Raw Moutan Cortex (RMC), 9 batches of Moutan Cortex Tostus (MCT) and 10 batches of Moutan Cortex Carbonisatus (MCC), were directly injected into an electrospray ionization quadrupole time-of-flight mass spectrometer (ESI-QTOF MS) for rapid analysis in positive and negative mode. Without chromatographic separation, each run was completed within 3 min. The raw MS data were automatically extracted by background deduction and a molecular feature (MF) extraction algorithm. In negative mode, a total of 452 MFs were obtained and then pretreated by data filtration and differential analysis. After that, the 85 filtered MFs were treated by principal component analysis (PCA) to reduce the dimensions. Subsequently, a partial least squares discriminant analysis (PLS-DA) model was constructed for differentiation and chemical-marker detection of Moutan Cortex at different degrees of processing. The positive-mode data were treated in the same way as the negative-mode data. RMC, MCT and MCC were successfully classified. Moreover, 14 and 3 chemical markers from negative and positive mode, respectively, were screened by combining their relative peak areas and the variable importance in projection (VIP) values in the PLS-DA model. The content changes of these chemical markers were used to illustrate the chemical changes of Moutan Cortex after processing. These results show that the proposed method, which combines non-targeted metabolomics analysis with multivariate statistical analysis, is reasonable and effective. It can be applied not only to discriminate herbal medicines and their processed products, but also to reveal the characteristics of chemical components during processing. Copyright © 2018. Published by Elsevier GmbH.
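
    A minimal sketch of the multivariate step (PCA for dimension reduction followed by a PLS-DA style classifier built on scikit-learn's PLSRegression with one-hot class labels); the feature matrix and class split below are random placeholders, not the Moutan Cortex data.

```python
# Sketch of the multivariate step described above: PCA for dimension reduction
# followed by PLS discriminant analysis (PLSRegression on one-hot class labels).
# Data are random placeholders standing in for molecular-feature intensities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.random((30, 85))              # 30 batches x 85 filtered features (toy)
y = np.repeat([0, 1, 2], 10)          # RMC / MCT / MCC labels (illustrative split)

X_pc = PCA(n_components=10).fit_transform(X)     # reduce dimensions first

Y = np.eye(3)[y]                      # one-hot encode the three classes
pls = PLSRegression(n_components=2).fit(X_pc, Y)
pred = pls.predict(X_pc).argmax(axis=1)
print("training accuracy:", (pred == y).mean())

# VIP-style importance scores can then be derived from pls.x_weights_ and
# pls.x_scores_, following the usual PLS-DA formulas.
```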

  9. Layered Composite Analysis Capability

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Cole, J. G.

    1985-01-01

    Laminated composite material construction is gaining popularity within industry as an attractive alternative to metallic designs where high strength at reduced weights is of prime consideration. This has necessitated the development of an effective analysis capability for the static, dynamic and buckling analyses of structural components constructed of layered composites. Theoretical and user aspects of layered composite analysis and its incorporation into CSA/NASTRAN are discussed. The availability of stress and strain based failure criteria is described which aids the user in reviewing the voluminous output normally produced in such analyses. Simple strategies to obtain minimum weight designs of composite structures are discussed. Several example problems are presented to demonstrate the accuracy and user convenient features of the capability.

  10. Integrated Multi-process Microfluidic Systems for Automating Analysis

    PubMed Central

    Yang, Weichun; Woolley, Adam T.

    2010-01-01

    Microfluidic technologies have been applied extensively in rapid sample analysis. Some current challenges for standard microfluidic systems are relatively high detection limits, and reduced resolving power and peak capacity compared to conventional approaches. The integration of multiple functions and components onto a single platform can overcome these separation and detection limitations of microfluidics. Multiplexed systems can greatly increase peak capacity in multidimensional separations and can increase sample throughput by analyzing many samples simultaneously. On-chip sample preparation, including labeling, preconcentration, cleanup and amplification, can all serve to speed up and automate processes in integrated microfluidic systems. This paper summarizes advances in integrated multi-process microfluidic systems for automated analysis, their benefits and areas for needed improvement. PMID:20514343

  11. Nonlinear, non-stationary image processing technique for eddy current NDE

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Dib, Gerges; Kim, Jaejoon; Zhang, Lu; Xin, Junjun; Udpa, Lalita

    2012-05-01

    Automatic analysis of eddy current (EC) data has facilitated the analysis of the large volumes of data generated in the inspection of steam generator tubes in nuclear power plants. The traditional procedure for analysis of EC data includes data calibration, pre-processing, region of interest (ROI) detection, feature extraction and classification. Accurate ROI detection is enhanced by pre-processing, which involves reducing noise and other undesirable components as well as enhancing defect indications in the raw measurement. This paper presents the Hilbert-Huang Transform (HHT) for feature extraction and a support vector machine (SVM) for classification. The performance is shown to be significantly better than that of the existing rule-based classification approach used in industry.
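
    A minimal sketch of the classification stage only: an SVM trained on feature vectors, as one might do after HHT-based feature extraction. The features and labels below are random placeholders; the paper's actual feature extraction is not reproduced.

```python
# Sketch of the classification stage only: an SVM trained on feature vectors
# extracted from eddy-current ROIs. Features here are random placeholders;
# the HHT-based feature extraction in the paper is not reproduced.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 12))            # 400 ROIs x 12 HHT-style features (toy)
y = rng.integers(0, 2, size=400)              # defect / no-defect labels (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))   # ~0.5 on these random toy labels
```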

  12. Use of Multiscale Entropy to Facilitate Artifact Detection in Electroencephalographic Signals

    PubMed Central

    Mariani, Sara; Borges, Ana F. T.; Henriques, Teresa; Goldberger, Ary L.; Costa, Madalena D.

    2016-01-01

    Electroencephalographic (EEG) signals present a myriad of challenges to analysis, beginning with the detection of artifacts. Prior approaches to noise detection have utilized multiple techniques, including visual methods, independent component analysis and wavelets. However, no single method is broadly accepted, inviting alternative ways to address this problem. Here, we introduce a novel approach based on a statistical physics method, multiscale entropy (MSE) analysis, which quantifies the complexity of a signal. We postulate that noise-corrupted EEG signals have lower information content, and, therefore, reduced complexity compared with their noise-free counterparts. We test the new method on an open-access database of EEG signals with and without added artifacts due to electrode motion. PMID:26738116
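
    A minimal sketch of the MSE computation, coarse-graining the signal at several scales and computing sample entropy at each; the parameter choices (m = 2, r = 0.15 x SD) follow common practice and are not necessarily those of the paper.

```python
# Minimal sketch of multiscale entropy (MSE): coarse-grain the signal at several
# scales and compute sample entropy at each. Parameters (m=2, r=0.15*SD) follow
# common practice, not necessarily the paper's settings.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                           # exclude self-match
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    mse = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = np.asarray(x[:n]).reshape(-1, tau).mean(axis=1)     # coarse-graining
        mse.append(sample_entropy(coarse))
    return np.array(mse)

rng = np.random.default_rng(0)
toy_eeg = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.3 * rng.standard_normal(2000)
print(multiscale_entropy(toy_eeg, max_scale=5))
```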

  13. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

    In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use PCA to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation, with simulations on the order of picoseconds.

  14. Effectiveness of hand hygiene interventions in reducing illness absence among children in educational settings: a systematic review and meta-analysis

    PubMed Central

    Willmott, Micky; Nicholson, Alexandra; Busse, Heide; MacArthur, Georgina J; Brookes, Sara; Campbell, Rona

    2016-01-01

    Objective To undertake a systematic review and meta-analysis to establish the effectiveness of handwashing in reducing absence and/or the spread of respiratory tract (RT) and/or gastrointestinal (GI) infection among school-aged children and/or staff in educational settings. Design Randomised-controlled trials (RCTs). Setting Schools and other settings with a formal educational component in any country. Patients Children aged 3–11 years, and/or staff working with them. Intervention Interventions with a hand hygiene component. Main outcome measures Incidence of RT or GI infections or symptoms related to such infections; absenteeism; laboratory results of RT and/or GI infections. Results Eighteen cluster RCTs were identified; 13 school-based, 5 in child day care facilities or preschools. Studies were heterogeneous and had significant quality issues including small numbers of clusters and participants and inadequate randomisation. Individual study results suggest interventions may reduce children's absence, RT infection incidence and symptoms, and laboratory confirmed influenza-like illness. Evidence of impact on GI infection or symptoms was equivocal. Conclusions Studies are generally not well executed or reported. Despite updating existing systematic reviews and identifying new studies, evidence of the effect of hand hygiene interventions on infection incidence in educational settings is mostly equivocal but they may decrease RT infection among children. These results update and add to knowledge about this crucial public health issue in key settings with a vulnerable population. More robust, well reported cluster RCTs which learn from existing studies, are required. PMID:26471110

  15. Leukocyte-reduced blood components: patient benefits and practical applications.

    PubMed

    Higgins, V L

    1996-05-01

    To review the various types of filters used for red blood cell and platelet transfusions and to explain the trend in the use of leukocyte removal filters, practical information about their use, considerations in the selection of a filtration method, and cost-effectiveness issues. Published articles, books, and the author's experience. Leukocyte removal filters are used to reduce complications associated with transfused white blood cells that are contained in units of red blood cells and platelets. These complications include nonhemolytic febrile transfusion reactions (NHFTRs), alloimmunization and refractoriness to platelet transfusion, transfusion-transmitted cytomegalovirus (CMV), and immunomodulation. Leukocyte removal filters may be used at the bedside, in a hospital blood bank, or in a blood collection center. Factors that affect the flow rate of these filters include the variations in the blood component, the equipment used, and filter priming. Studies on the cost-effectiveness of using leukocyte-reduced blood components demonstrate savings based on the reduction of NHFTRs, reduction in the number of blood components used, and the use of filtered blood components as the equivalent of CMV seronegative-screened products. The use of leukocyte-reduced blood components significantly diminishes or prevents many of the adverse transfusion reactions associated with donor white blood cells. Leukocyte removal filters are cost-effective, and filters should be selected based on their ability to consistently achieve low leukocyte residual levels as well as their ease of use. Physicians may order leukocyte-reduced blood components for specific patients, or the components may be used because of an established institutional transfusion policy. Nurses often participate in deciding on a filtration method, primarily based on ease of use. Understanding the considerations in selecting a filtration method will help nurses make appropriate decisions to ensure quality patient care.

  16. Dual Fan Separator within the Universal Waste Management System

    NASA Technical Reports Server (NTRS)

    Stapleton, Tom; Converse, Dave; Broyan, James Lee, Jr.

    2014-01-01

    Because NASA's new spacecraft in development for both LEO and deep space capability have considerably less crew volume than the Space Shuttle, the need became apparent for a smaller commode. In response, the Universal Waste Management System (UWMS) was designed, resulting in an 80% volume reduction from the last US commode while enhancing performance. The ISS WMS and previous Shuttle commodes have a fan supplying air flow to capture feces and a separator to capture urine and separate air from the captured air/urine mixture. The UWMS combines both rotating equipment components into a single unit, referred to as the Dual Fan Separator (DFS). The combination of these components results in considerable packaging efficiency and weight reduction, removing inter-component plumbing and individual mounting configurations and requiring only a single motor and motor controller. In some of the intended UWMS platform applications the urine is pumped to the ISS Urine Processor Assembly (UPA) system, which requires the DFS to deliver urine with less than 2.00% air inclusion by volume. The rotational speed needs to be kept as low as possible in centrifugal urine separators to reduce air inclusion in the pumped fluid, while fans depend on rotational speed to develop delivered head. To satisfy these conflicting requirements, a gear reducer was included, allowing the fans to rotate at a much higher speed than the separator. This paper outlines the studies and analysis performed to develop the DFS configuration. The studies included a configuration trade study, a dynamic stability analysis of the rotating bodies and a performance analysis of the included labyrinth seals. NASA is considering a program to fly the UWMS aboard the ISS as a flight experiment. The goal of this activity is to advance the Technical Readiness Level (TRL) of the DFS and determine if the concept is ready to be included as part of the flight experiment deliverable.

  17. Automotive Stirling Engine Development Program

    NASA Technical Reports Server (NTRS)

    Nightingale, N.; Ernst, W.; Richey, A.; Simetkosky, M.; Smith, G.; Antonelli, M. (Editor)

    1983-01-01

    Mod I engine testing and test results, the test of a Mod I engine in the United States, Mod I engine characterization and analysis, Mod I Transient Test Bed fuel economy, and Mod I-A engine performance are discussed. Stirling engine reference engine manufacturing and reduced-size studies, components and subsystems, and the study and test of low-cost casting alloys are also covered. The overall program philosophy is outlined, and data and results are presented.

  18. Infrastructure, Components and System Level Testing and Analysis of Electric Vehicles: Cooperative Research and Development Final Report, CRADA Number CRD-09-353

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neubauer, J.

    2013-05-01

    Battery technology is critical for the development of innovative electric vehicle networks, which can enhance transportation sustainability and reduce dependence on petroleum. This cooperative research proposed by Better Place and NREL will focus on predicting the life-cycle economics of batteries, characterizing battery technologies under various operating and usage conditions, and designing optimal usage profiles for battery recharging and use.

  19. An Analysis of Information Systems Technology Initiatives and Small Businesses in the DoD Small Business Innovation Research (SBIR) Program

    DTIC Science & Technology

    2012-09-01

    ...and reduce program lifecycle costs by expanding the pool of vendors and incorporating small, innovative high-tech businesses in defense IT acquisition. Particularly within the high-tech IT sector, small businesses have been consistently recognized as exceptional resources for the research and...

  20. Homomorphic filtering textural analysis technique to reduce multiplicative noise in the 11Oba nano-doped liquid crystalline compounds

    NASA Astrophysics Data System (ADS)

    Madhav, B. T. P.; Pardhasaradhi, P.; Manepalli, R. K. N. R.; Pisipati, V. G. K. M.

    2015-07-01

    The compound undecyloxy benzoic acid (11Oba) exhibits nematic and smectic-C phases, while the ZnO nano-doped undecyloxy benzoic acid exhibits the same nematic and smectic-C phases with a reduced clearing temperature, as expected. The doping is done with 0.5% and 1% ZnO. The clearing temperatures are reduced by approximately 4° and 6°, respectively (differential scanning calorimetry data). When collecting images from a polarizing microscope fitted with a hot stage and camera, the illumination and reflectance combine multiplicatively, and the degraded image quality makes it difficult to identify the exact phase of the compound. A homomorphic filtering technique is used in this manuscript, through which the multiplicative noise components of the image are separated linearly in the frequency domain. This technique provides a frequency-domain procedure to improve the appearance of an image by gray-level range compression and contrast enhancement.
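
    A minimal homomorphic-filtering sketch of the kind described: log-transform the image so the multiplicative illumination/reflectance model becomes additive, apply a high-emphasis Gaussian filter in the frequency domain, and exponentiate back. The filter constants below are illustrative assumptions, not the paper's settings.

```python
# Minimal homomorphic filtering sketch: take the log of the image (turning the
# multiplicative illumination/reflectance model into an additive one), apply a
# Gaussian high-emphasis filter in the frequency domain, and exponentiate back.
# Filter constants are illustrative assumptions.
import numpy as np

def homomorphic_filter(img, sigma=30.0, gamma_low=0.5, gamma_high=1.5):
    img = np.asarray(img, dtype=float) + 1.0        # avoid log(0)
    log_img = np.log(img)

    rows, cols = img.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1) * rows
    v = np.fft.fftfreq(cols).reshape(1, -1) * cols
    dist2 = u ** 2 + v ** 2
    # High-emphasis transfer function: attenuate low frequencies (illumination),
    # boost high frequencies (reflectance detail).
    H = gamma_low + (gamma_high - gamma_low) * (1.0 - np.exp(-dist2 / (2 * sigma ** 2)))

    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * H))
    out = np.exp(filtered) - 1.0
    return (out - out.min()) / (out.max() - out.min())   # rescale to [0, 1]

texture = np.random.default_rng(0).random((256, 256))    # placeholder micrograph
enhanced = homomorphic_filter(texture)
```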

  1. Inventory of File gfs.t06z.pgrb2.0p25.anl

    Science.gov Websites

    GRIB2 inventory listing of analysis fields, including the U-component (UGRD) and V-component (VGRD) of wind [m/s] and absolute vorticity (ABSV) at the 10, 20 and 30 mb pressure levels.

  2. Inventory of File gfs.t06z.pgrb2.0p50.anl

    Science.gov Websites

    GRIB2 inventory listing of analysis fields, including the U-component (UGRD) and V-component (VGRD) of wind [m/s] and absolute vorticity (ABSV) at the 10, 20 and 30 mb pressure levels.

  3. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external-event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that, even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
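
    A hedged sketch of the surrogate-model idea: train a Gaussian-process regressor on a limited number of expensive simulation runs and then query the cheap surrogate instead of the code. The one-input "simulation" below is a toy placeholder, not a RISMC model.

```python
# Sketch of the surrogate-model idea: fit a Gaussian-process regressor to a
# limited number of expensive simulation runs, then query the cheap surrogate
# instead of the code itself. The "simulation" below is a toy placeholder.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    """Toy stand-in for a simulation code: scalar output vs. one input."""
    return 600 + 150 * np.sin(3 * x) + 40 * x

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2, size=(20, 1))            # 20 affordable code runs
y_train = expensive_simulation(X_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_query = np.linspace(0, 2, 200).reshape(-1, 1)      # cheap surrogate queries
y_pred, y_std = gp.predict(X_query, return_std=True) # prediction + uncertainty
```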

  4. High frequency oscillations evoked by peripheral magnetic stimulation.

    PubMed

    Biller, S; Simon, L; Fiedler, P; Strohmeier, D; Haueisen, J

    2011-01-01

    The analysis of somatosensory evoked potentials (SEP) and/or fields (SEF) is a well-established and important tool for investigating the functioning of the peripheral and central human nervous system. A standard technique to evoke SEPs/SEFs is the stimulation of the median nerve using a bipolar electrical stimulus. We aim at an alternative stimulation technique enabling stimulation of deep nerve structures while reducing patient stress and error susceptibility. In the current study, we apply a commercial transcranial magnetic stimulation system for peripheral magnetic stimulation of the median nerve. We compare the results of simultaneously recorded EEG signals to prove the applicability of our technique to evoke SEPs including low frequency components (LFC) as well as high frequency oscillations (HFO). To this end, we compare the amplitude, latency and time-frequency characteristics of the SEP of 14 healthy volunteers after electric and magnetic stimulation. Both low frequency components and high frequency oscillations were detected. The HFOs were superimposed onto the primary cortical response N20. Statistical analysis revealed significantly lower amplitudes and increased latencies for LFC and HFO components after magnetic stimulation. The differences indicate the inability of magnetic stimulation to elicit supramaximal responses. A psycho-perceptual evaluation showed that magnetic stimulation was less unpleasant for 12 of the 14 volunteers. In conclusion, we showed that LFC and HFO components related to median nerve stimulation can be evoked by peripheral magnetic stimulation.

  5. Development of qualitative and quantitative analysis methods in pharmaceutical application with new selective signal excitation methods for 13C solid-state nuclear magnetic resonance using 1H T1rho relaxation time.

    PubMed

    Nasu, Mamiko; Nemoto, Takayuki; Mimura, Hisashi; Sako, Kazuhiro

    2013-01-01

    Most pharmaceutical drug substances and excipients in formulations exist in a crystalline or amorphous form, and an understanding of their state during manufacture and storage is critically important, particularly in formulated products. Carbon-13 (13C) solid-state nuclear magnetic resonance (NMR) spectroscopy is useful for studying the chemical and physical state of pharmaceutical solids in a formulated product. We developed two new selective signal excitation methods in 13C solid-state NMR to extract the spectrum of a target component from such a mixture. These methods were based on equalization of the proton relaxation time in a single domain via rapid intra-proton spin diffusion and on the difference in the proton spin-lattice relaxation time in the rotating frame (1H T1rho) of the individual components in the mixture. Introduction of simple pulse sequences to one-dimensional experiments reduced data acquisition time and increased flexibility. We then demonstrated these methods on a commercially available drug and on a mixture of two saccharides, in which the 13C signals of the target components were selectively excited, and showed them to be applicable to the quantitative analysis of individual components in solid mixtures, such as formulated products, polymorphic mixtures, or mixtures of crystalline and amorphous phases. Copyright © 2012 Wiley Periodicals, Inc.

  6. Design Performance of Front Steering-Type Electron Cyclotron Launcher for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, K.; Imai, T.; Kobayashi, N.

    2005-01-15

    The performance of a front steering (FS)-type electron cyclotron launcher designed for the International Thermonuclear Experimental Reactor (ITER) is evaluated with a thermal, electromagnetic, and nuclear analysis of the components; a mechanical test of a spiral tube for the steering mirror; and a rotational test of bearings. The launcher consists of a front shield and a launcher plug in which three movable optic mirrors to steer the incident multimegawatt radio-frequency beam power, waveguide components, nuclear shields, and vacuum windows are installed. The windows are located behind a closure plate to isolate the transmission lines from the radioactivated environment (the vacuum vessel). The waveguide lines of the launcher are doglegged to reduce the direct neutron streaming toward the vacuum windows and other components. The maximum stresses on the critical components, such as the steering mirror, its cooling tube, and the front shield, are less than their allowable stresses. It was also identified that the stress on the launcher resulting from the electromagnetic force caused by a plasma disruption was slightly larger than the criterion, and a modification of the launcher plug structure was necessary. The nuclear analysis result shows that the neutron shield capability of the launcher satisfies the shield criteria of the ITER. It is concluded that the design of the FS launcher is generally suitable for application to the ITER.

  7. Does the Assessment of Recovery Capital scale reflect a single or multiple domains?

    PubMed

    Arndt, Stephan; Sahker, Ethan; Hedden, Suzy

    2017-01-01

    The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10-domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.

  8. Home treatment for mental health problems: a systematic review.

    PubMed

    Burns, T; Knapp, M; Catty, J; Healey, A; Henderson, J; Watt, H; Wright, C

    2001-01-01

    This review investigates the effectiveness of 'home treatment' for mental health problems in terms of hospitalisation and cost-effectiveness. For the purposes of this review, 'home treatment' is defined as a service that enables the patient to be treated outside hospital as far as possible and remain in their usual place of residence. METHODS - SYSTEMATIC LITERATURE SEARCH: 'Home treatment' excluded studies focused on day, residential and foster care. The review was based on Cochrane methodology, but non-randomised studies were included if they compared two services; these were only analysed if they provided evidence of the groups' baseline clinical comparability. METHODS - REVIEW OF ECONOMIC EVALUATIONS: Economic evaluations among the studies found were reviewed against established criteria. METHODS - IDENTIFICATION OF SERVICE COMPONENTS: A three-round Delphi exercise ascertained the degree of consensus among expert psychiatrists concerning the important components of community-based services that enable them to treat patients outside hospital. The identified components were used to construct the follow-up questionnaire. METHODS - FOLLOW-UP OF AUTHORS: As a supplement to the information available in the papers, authors of all the studies were followed up for data on service components, sustainability of programmes and service utilisation. METHODS - DATA ANALYSIS: The outcome measure was mean days in hospital per patient per month over the follow-up period. (1) Comparative analysis - compared experimental to control services. It analysed all studies with available data, divided into 'inpatient-control' and 'community-control' studies, and tested for associations between service components and difference in hospital days. (2) Experimental services analysis - analysed only experimental service data and tested for associations between service components and hospital days. RESULTS - SYSTEMATIC LITERATURE SEARCH: A total of 91 studies were found, conducted over a 30-year period. The majority (87) focused on people with psychotic disorders. RESULTS - REVIEW OF ECONOMIC EVALUATIONS: Only 22 studies included economic evaluations. They provided little conclusive evidence about cost-effectiveness because of problems with the heterogeneity of services, sample size, outcome measures and quality of analysis. RESULTS - DELPHI EXERCISE: In all, 16 items were rated as 'essential', falling into six categories: home environment; skill-mix; psychiatrist involvement; service management; caseload size; and health/social care integration. There was consensus that caseloads under 25 and flexible working hours over 7 days were important, but little support for caseloads under 15 or for 24-hour services, and consensus that home visiting was essential, but not on teams being 'explicitly dedicated' to home treatment. RESULTS - RESPONSE TO FOLLOW-UP: A total of 60% of authors responded, supplying data on service components and hospital days in most cases. Other service utilisation data were far less readily available. RESULTS - SERVICE CHARACTERISATION AND CLASSIFICATION: The services were homogeneous in terms of 'home treatment function' but fairly heterogeneous in terms of other components. There was some evidence for a group of services that were multidisciplinary, had psychiatrists as integrated team members, had smaller caseloads, visited patients at home regularly and took responsibility for both health and social care. This was not a cohesive group, however. 
RESULTS - SUSTAINABILITY OF SERVICES: The sustainability of home treatment services was modest: less than half the services whose authors responded were still identifiable. Services were more likely to be operational if the study had found them to reduce hospitalisation significantly. RESULTS - META-ANALYSIS: Meta-analysis with heterogeneous studies is problematic. The evidence base for the effectiveness of services identifiable as 'home treatment' was not strong. Within the 'inpatient-control' study group, the mean reduction in hospitalisation was 5 days per patient per month (for 1-year studies only). No statistical significance could be measured for this result. For 'community-control' studies, the reduction in hospitalisation was negligible. Moreover, the heterogeneity of control services, the wide range of outcome measures and the limited availability of data might have confounded the analysis. Regularly visiting at home and dual responsibility for health and social care were associated with reduced hospitalisation. Evidence for other components was inconclusive. Few conclusions could be drawn from the analysis of service utilisation data. RESULTS - LOCATION: Studies were predominately from the USA and UK, more of them being from the USA. North American studies found a reduction in hospitalisation of 1 day per patient per month more than European studies. North American and European services differed on some service components, but this was unlikely to account for this finding, particularly as no difference was found in their experimental service results. CONCLUSIONS - STATE OF RESEARCH: There is a clear need for further studies, particularly in the UK. The benefit of home treatment over admission in terms of days in hospital was clear, but over other community-based alternatives was inconclusive. CONCLUSIONS - NON-RANDOMISED STUDIES: Difficulties in systematically searching for non-randomised studies may have contributed to the smaller number of such studies found (35, compared with 56 randomised controlled trials). This imbalance was compounded by a relatively poor response rate from non-randomised controlled trial authors. Including them in the analysis had little effect. CONCLUSIONS - LIMITATIONS OF THIS REVIEW: A broad area was reviewed in order to avoid the problem of analysing by service label. While reviews of narrower areas may risk implying a homogeneity of the services that is unwarranted, the current strategy has the drawback that the studies cover a range of heterogeneous services. The poor definition of control services, however, is ubiquitous in this field, however reviewed areas are defined. Inclusion of mean data for which no standard deviations were available was problematic in that it prevented measuring the significance of the main findings. The lack of availability of this data, however, is an important finding, demonstrating the difficulty in seeking certainty in this area. Only days in hospital and cost-effectiveness were analysed here. The range and lack of uniformity of measures used in this field made meta-analysis of other outcomes impossible. It should be noted, however, that the findings pertain to these aspects alone. The Delphi exercise reported here was limited in being conducted only with psychiatrists, rather than a multidisciplinary panel. Its findings were used as a framework for the follow-up and analysis. Their possible bias should be borne in mind when considering them as findings in themselves. 
CONCLUSIONS - IMPLICATIONS FOR CLINICIANS: The evidence base for home treatment compared with other community-based services is not strong, although it does show that home treatment reduces days spent in hospital compared with inpatient treatment. There is evidence that visiting patients at home regularly and taking responsibility for both health and social care each reduce days in hospital. CONCLUSIONS - IMPLICATIONS FOR CONSUMERS: Services that visit patients at home regularly and those that take responsibility for both health and social care are likely to reduce time spent in hospital. Psychiatrists surveyed in this review also considered support for carers to be essential. The evidence from this review, however, was that few services currently have protocols for meeting carers' needs. CONCLUSIONS - RECOMMENDATIONS FOR RESEARCH AND COMMISSIONERS: A centrally coordinated research strategy, with attention to study design, is recommended. Studies should include economic evaluations that report health and social service utilisation. Service components should be collected and reported for both experimental and control services. Studies should be designed with adequate power and longer durations of follow-up and use comparable outcome measures to facilitate meta-analysis. Research protocols should be adhered to throughout the studies. It may be advisable that independent researchers conduct studies in future. It is no longer recommended that home treatment be tested against inpatient care, or that small, localised studies replicate existing, more highly powered studies.

  9. Hot isostatically pressed manufacture of high strength MERL 76 disk and seal shapes

    NASA Technical Reports Server (NTRS)

    Eng, R. D.; Evans, D. J.

    1982-01-01

    The feasibility of using MERL 76, an advanced high-strength, direct hot isostatic pressed powder metallurgy superalloy, as a full-scale component in a high-technology, long-life commercial turbine engine was demonstrated. The component was a JT9D first-stage turbine disk. The JT9D disk rim temperature capability was increased by at least 22 C, and the weight of the JT9D high-pressure turbine rotating components was reduced by at least 35 pounds, by replacing forged Superwaspaloy components with hot isostatic pressed (HIP) MERL 76 components. The process control plan and acceptance criteria for the manufacture of MERL 76 HIP-consolidated components were generated. Disk components were manufactured for spin/burst rig tests, experimental engine tests, and design data generation, which established lower design properties including tensile, stress-rupture, 0.2% creep and notched (Kt = 2.5) low cycle fatigue properties, Sonntag, fatigue crack propagation, and low cycle fatigue crack threshold data. Direct HIP MERL 76, when compared to conventionally forged Superwaspaloy, demonstrated superior mechanical properties, increased rim temperature capability, reduced component weight, and a material cost reduced by at least 30% based on 1980 costs.

  10. Variations in Kinematics during Clinical Gait Analysis in Stroke Patients

    PubMed Central

    Boudarham, Julien; Roche, Nicolas; Pradon, Didier; Bonnyaud, Céline; Bensmail, Djamel; Zory, Raphael

    2013-01-01

    In addition to changes in spatio-temporal and kinematic parameters, patients with stroke exhibit fear of falling as well as fatigability during gait. These changes could compromise the interpretation of data from gait analysis. The aim of this study was to determine if the gait of hemiplegic patients changes significantly over successive gait trials. Forty-two stroke patients and twenty healthy subjects performed 9 gait trials during a gait analysis session. The mean and variability of spatio-temporal and kinematic joint parameters were analyzed during 3 groups of consecutive gait trials (1–3, 4–6 and 7–9). Principal component analysis was used to reduce the number of variables from the joint kinematic waveforms and to identify the parts of the gait cycle which changed during the gait analysis session. The results showed that i) spontaneous gait velocity and the other spatio-temporal parameters significantly increased, and ii) gait variability decreased, over the last 6 gait trials compared to the first 3, for hemiplegic patients but not healthy subjects. Principal component analysis revealed changes in the sagittal waveforms of the hip, knee and ankle for hemiplegic patients after the first 3 gait trials. These results suggest that at the beginning of the gait analysis session, stroke patients exhibited a phase of adaptation, characterized by a "cautious gait", but no fatigue was observed. PMID:23799100

  11. Reducing Cross-Polarized Radiation From A Microstrip Antenna

    NASA Technical Reports Server (NTRS)

    Huang, John

    1991-01-01

    A change in the feed configuration of a nominally linearly polarized microstrip-patch transmitting array antenna reduces the cross-polarized component of its radiation. The patches are fed on opposing sides, in opposite phases. The combination of spatial symmetry and temporal asymmetry causes the copolarized components of radiation from the fundamental modes of the patches to reinforce each other and the cross-polarized components of radiation from higher-order modes to cancel each other.

  12. A component modes projection and assembly model reduction methodology for articulated, multi-flexible body structures

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Tsuha, Walter S.

    1993-01-01

    A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
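
    For orientation, a generic fixed-interface component-mode (Craig-Bampton type) reduction is sketched below in NumPy/SciPy on a toy spring-mass chain; this is not the MacNeal-Rubin or EP&A formulation used in the paper, but it illustrates the same idea of projecting a component onto boundary static modes plus a truncated set of interior normal modes.

```python
# Generic sketch of a fixed-interface component-mode (Craig-Bampton type)
# reduction: keep the boundary DOFs plus a truncated set of interior normal
# modes. Toy system only; not the paper's MacNeal-Rubin / EP&A formulation.
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Reduce (K, M) to boundary DOFs plus n_modes fixed-interface modes."""
    all_dofs = np.arange(K.shape[0])
    interior = np.setdiff1d(all_dofs, boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]

    # Static constraint modes and truncated fixed-interface normal modes
    Psi = -np.linalg.solve(Kii, Kib)
    w2, Phi = eigh(Kii, Mii)                  # generalized eigenproblem, ascending
    Phi = Phi[:, :n_modes]

    # Transformation: physical DOFs (boundary first) -> [boundary ; modal coords]
    nb, ni = len(boundary), len(interior)
    T = np.zeros((nb + ni, nb + n_modes))
    T[:nb, :nb] = np.eye(nb)
    T[nb:, :nb] = Psi
    T[nb:, nb:] = Phi

    order = np.r_[boundary, interior]         # reorder so boundary DOFs come first
    Kr = T.T @ K[np.ix_(order, order)] @ T
    Mr = T.T @ M[np.ix_(order, order)] @ T
    return Kr, Mr

# Toy 10-DOF spring-mass chain, reduced to 2 boundary DOFs + 3 interior modes
n = 10
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr = craig_bampton(K, M, boundary=np.array([0, n - 1]), n_modes=3)
print(Kr.shape)    # (5, 5) reduced stiffness matrix
```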

  13. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
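
    A hedged sketch of the last two stages only ("components prediction" and "ensemble"): each already-decomposed component is forecast one step ahead by an RBF-kernel regressor, and a linear model recombines the component forecasts. The EEMD decomposition itself is replaced by placeholder components, and the regressors stand in for the paper's RBFNN and LNN.

```python
# Sketch of the "components prediction" and "ensemble" stages: forecast each
# (already decomposed) component with an RBF-kernel regressor, then recombine
# the component forecasts with a linear model. EEMD is replaced by placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression

def lagged(x, n_lags=4):
    """Build a lag matrix / target pair for one-step-ahead forecasting."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

rng = np.random.default_rng(0)
t = np.arange(600)
components = [np.sin(2 * np.pi * t / p) for p in (12, 50, 200)]   # placeholder "IMFs"
components.append(0.002 * t)                                      # placeholder residual trend
series = np.sum(components, axis=0) + 0.05 * rng.standard_normal(len(t))

# Forecast each component separately with an RBF-kernel regressor
component_preds = []
for c in components:
    Xc, yc = lagged(c)
    model = KernelRidge(kernel='rbf', alpha=1e-3, gamma=0.5).fit(Xc[:500], yc[:500])
    component_preds.append(model.predict(Xc[500:]))

# Ensemble stage: linear recombination of the component forecasts
P = np.column_stack(component_preds)
_, y_true = lagged(series)
ens = LinearRegression().fit(P, y_true[500:])
forecast = ens.predict(P)
```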

  14. Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.

    PubMed

    Beauducel, André

    2018-02-01

    The misallocation of treatment variance onto the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation are proposed. After Varimax pre-rotation, the first method identifies very small slopes of successive loadings; the corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes (EPP) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax loadings and performs orthogonal Procrustes rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes (GEP) rotation and of EPP rotation are based on Promax rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate the misallocation of treatment variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax rotation reduced the variance misallocation slightly more than EPP Promax rotation. Comparison with existing methods: Varimax and conventional Promax rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap, whereas misallocation was substantially reduced with the EPP, EPP Promax, GEP, and GEP Promax rotations. Misallocation of variance can thus be minimized by means of the new rotation methods: making use of information on the temporal order of the loadings may allow for improvements in the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
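
    The common building block of the proposed rotations, an orthogonal Procrustes rotation of a loading matrix toward a target, can be sketched with SciPy's generic solver as below; the toy target construction is illustrative and is not the EPP or GEP procedure itself.

```python
# The underlying building block of the proposed rotations: an orthogonal
# Procrustes rotation of a loading matrix toward a target matrix. This sketch
# uses SciPy's generic solver; the target construction below is a toy stand-in,
# not the EPP/GEP procedure described in the paper.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
loadings = rng.standard_normal((128, 4))      # time points x components (toy)

# Toy target: keep each column's peak region, zero elsewhere (a rough stand-in
# for the event-related target built in the paper)
target = np.zeros_like(loadings)
for j in range(loadings.shape[1]):
    peak = np.argmax(np.abs(loadings[:, j]))
    lo, hi = max(0, peak - 10), min(loadings.shape[0], peak + 10)
    target[lo:hi, j] = loadings[lo:hi, j]

# Orthogonal R minimizing ||loadings @ R - target|| in the Frobenius norm
R, _ = orthogonal_procrustes(loadings, target)
rotated = loadings @ R
```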

  15. Head repositioning accuracy in patients with whiplash-associated disorders.

    PubMed

    Feipel, Veronique; Salvia, Patrick; Klein, Helene; Rooze, Marcel

    2006-01-15

    Controlled study measuring head repositioning error (HRE) using an electrogoniometric device. The objective was to compare HRE in neutral position, axial rotation and complex postures of patients with whiplash-associated disorders (WAD) to that of control subjects. The presence of kinesthetic alterations in patients with WAD is controversial. In 26 control subjects and 29 patients with WAD (aged 22-74 years), head kinematics was sampled using a 3-dimensional electrogoniometer mounted using a harness and a helmet. All tasks were performed in the seated position. The repositioning tasks included neutral repositioning after maximal flexion-extension, eyes open and blindfolded, repositioning at 50 degrees of axial rotation, and repositioning at 50 degrees of axial rotation combined with 20 degrees of ipsilateral bending. The flexion-extension, ipsilateral bending, and axial rotation components of HRE were considered. A multiple-way repeated-measures analysis of variance was used to compare tasks and groups. The WAD group displayed a reduced flexion-extension range (P = 1.9 x 10^-4), and larger HRE during flexion-extension and repositioning tasks (P = 0.009), than controls. Neither group nor task affected maximal motion velocity. Neutral HRE of the flexion-extension component was larger in the blindfolded condition (P = 0.03). The ipsilateral bending and axial rotation HRE components were smaller than the flexion-extension component (P = 7.1 x 10^-23). For pure rotation repositioning, axial rotation HRE was significantly larger than the flexion-extension and ipsilateral bending repositioning errors (P = 3.0 x 10^-23). The ipsilateral bending component of HRE was significantly larger for combined tasks than for pure rotation tasks (P = 0.004). In patients with WAD, range of motion and head repositioning accuracy were reduced. However, the differences were small. Vision suppression and task type influenced HRE.

  16. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To address this difficulty and improve prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages: denoising, decomposition, component prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is used to reduce the noise in the hydrological time series. Then, an improved variant of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, a radial basis function neural network (RBFNN) is adopted to predict the trends of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasts of all of the IMF and residual components from the third stage are combined to generate the final prediction using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, hybrid models without denoising or decomposition, and hybrid models based on other methods, such as wavelet analysis (WA). In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulty of forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This forecast model is an extension of nonlinear prediction models.
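
    The sketch below compresses the four stages into a few lines, assuming the third-party PyEMD package (pip install EMD-signal) for EMD/EEMD, a Gaussian-kernel ridge regressor in place of the RBF neural network, and ordinary least squares in place of the linear neural network; the series is synthetic and all parameters are illustrative.

    import numpy as np
    from PyEMD import EMD, EEMD                      # assumed third-party package
    from sklearn.kernel_ridge import KernelRidge     # RBF-kernel stand-in for the RBFNN
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    t = np.linspace(0, 20, 600)
    series = np.sin(t) + 0.5 * np.sin(5 * t) + 0.2 * rng.standard_normal(t.size)  # synthetic "runoff"

    # Stage 1 -- denoising: run plain EMD and discard the highest-frequency IMF,
    # a common (simplified) EMD-based noise-reduction step.
    imfs = EMD().emd(series)
    denoised = series - imfs[0]

    # Stage 2 -- decomposition: EEMD splits the denoised series into IMF components
    # (the lowest-frequency one playing the role of the residual trend here).
    components = EEMD(trials=50).eemd(denoised)

    def lagged(x, p=5):
        """Build a one-step-ahead regression set from p lagged values."""
        X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
        return X, x[p:]

    # Stage 3 -- component prediction: one Gaussian-kernel ridge model per component.
    split = int(0.8 * len(denoised))
    component_preds = []
    for comp in components:
        X, y = lagged(comp)
        model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5).fit(X[:split], y[:split])
        component_preds.append(model.predict(X[split:]))

    # Stage 4 -- ensemble: a linear model combines the component forecasts
    # (fitted on the same hold-out forecasts it is evaluated on, for brevity).
    P = np.column_stack(component_preds)
    y_true = denoised[5:][split:]
    ensemble = LinearRegression().fit(P, y_true)
    print("ensemble RMSE: %.4f" % np.sqrt(np.mean((ensemble.predict(P) - y_true) ** 2)))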

  17. Ways to reduce patient turnaround time and improve service quality in emergency departments.

    PubMed

    Sinreich, David; Marmor, Yariv

    2005-01-01

    Recent years have witnessed a fundamental change in the function of emergency departments (EDs). The emphasis of the ED is shifting from triage to saving lives in shock-trauma rooms equipped with state-of-the-art equipment, while walk-in clinics are being set up to treat ambulatory patients. At the same time, ED overcrowding has become a common sight in many large urban hospitals. This paper recognises that in order to provide quality treatment to all these patient types, ED process operations have to be flexible and efficient. The paper examines one major benchmark for measuring service quality, patient turnaround time, claiming that in order to provide the quality treatment to which EDs aspire, this time needs to be reduced. The study starts by separating the process each patient type goes through when treated in the ED into unique components. Next, using a simple model, the impact each of these components has on total patient turnaround time is determined. This, in turn, identifies the components that need to be addressed if patient turnaround time is to be streamlined. The model was tested using data gathered through a comprehensive time study in six major hospitals. The analysis reveals that waiting time comprises 51-63 per cent of total patient turnaround time in the ED. Its major components are time away for an x-ray examination, waiting time for the first physician's examination, and waiting time for blood work. The study covers several hospitals and analyses over 20,000 process components; as such, the common findings may serve as guidelines for other hospitals addressing this issue.
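
    A small, hypothetical sketch of the kind of decomposition the study performs: each visit is split into process components, and the share of turnaround time spent waiting is computed per patient and per component. The column names, durations and pandas layout are assumptions, not the authors' data.

    import pandas as pd

    # Hypothetical event log: each row is one process component of an ED visit,
    # labelled as active treatment or waiting.
    events = pd.DataFrame([
        {"patient": 1, "component": "triage",               "minutes": 10, "waiting": False},
        {"patient": 1, "component": "wait_first_physician", "minutes": 55, "waiting": True},
        {"patient": 1, "component": "physician_exam",       "minutes": 20, "waiting": False},
        {"patient": 1, "component": "away_for_xray",        "minutes": 70, "waiting": True},
        {"patient": 1, "component": "wait_blood_work",      "minutes": 45, "waiting": True},
        {"patient": 1, "component": "disposition",          "minutes": 15, "waiting": False},
    ])

    turnaround = events.groupby("patient")["minutes"].sum()
    waiting_share = events[events["waiting"]].groupby("patient")["minutes"].sum() / turnaround
    print(turnaround)       # total turnaround time per patient
    print(waiting_share)    # fraction of turnaround spent waiting

    # Rank the waiting components by their average contribution across patients.
    print(events[events["waiting"]]
          .groupby("component")["minutes"].mean()
          .sort_values(ascending=False))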

  18. Degradation of organophosphate esters in sewage sludge: Effects of aerobic/anaerobic treatments and bacterial community compositions.

    PubMed

    Pang, Long; Ge, Liming; Yang, Peijie; He, Han; Zhang, Hongzhong

    2018-05-01

    In this study, the degradation of organophosphate esters (OPEs) in sewage sludge under aerobic composting and anaerobic digestion was investigated. The reduction in the total concentration of six OPEs (ΣOPEs) over the whole treatment process followed the order: anaerobic digestion combined with pig manure (T3) > aerobic composting combined with pig manure (T1) > aerobic composting (T2) > anaerobic digestion (T4). The addition of pig manure significantly enhanced the removal rate of OPEs in both aerobic and anaerobic treatments. The abundance and diversity of the bacterial community decreased after the treatment process. The Shannon index, principal component analysis, network analysis, and heat maps further confirmed the variation in bacterial community composition among the different treatments. Five taxa (i.e., Flavobacterium, Bacillus, Alcaligenes, Pseudomonas, and Bacillus megaterium) might be responsible for the degradation of OPE compounds in sewage sludge. Copyright © 2018 Elsevier Ltd. All rights reserved.
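
    Two of the community-composition analyses mentioned above, the Shannon diversity index and a principal component analysis of genus abundances, can be sketched as follows; the abundance table is invented and scikit-learn is assumed for the PCA.

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical genus-level read counts: rows = sludge samples (e.g. treatments T1-T4),
    # columns = bacterial genera.
    counts = np.array([
        [120,  80,  40,  10,  5],
        [ 90,  60,  70,  30, 10],
        [ 40, 150,  20,  60, 25],
        [ 30,  40, 110,  90, 35],
    ], dtype=float)

    # Shannon diversity index H = -sum(p_i * ln(p_i)) per sample.
    p = counts / counts.sum(axis=1, keepdims=True)
    shannon = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=1)
    print("Shannon H per sample:", np.round(shannon, 3))

    # PCA on relative abundances to visualize shifts in community composition between treatments.
    scores = PCA(n_components=2).fit_transform(p)
    print("PC scores:\n", np.round(scores, 3))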

  19. [Preventing maternal and child malnutrition: the nutrition component of the Mesoamerican Health Initiative 2015].

    PubMed

    Rivera, Juan A; Martorell, Reynaldo; González, Wendy; Lutter, Chessa; Cossío, Teresa González de; Flores-Ayala, Rafael; Uauy, Ricardo; Delgado, Hernán

    2011-01-01

    To describe the regional nutrition master plan developed by the Nutrition Technical Group to address maternal and child malnutrition over a 5-year period. The Nutrition Technical Group carried out a situation analysis describing the main nutrition problems, policies and programs in Mesoamerica. The situation analysis and a literature review of effective interventions to address malnutrition were used to develop the nutrition master plan. The Nutrition Technical Group held various meetings to develop, discuss and validate the master plan. A theory of change identified the problems and barriers, the actions to be developed, and the changes and impacts expected. A package of interventions useful under different epidemiological contexts is proposed to reduce undernutrition and micronutrient deficiencies. The nutrition master plan provides a guideline of best practices that can be used for evidence-informed decision making and for the development of national policies and programs to reduce malnutrition.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction that combines feature selection with feature extraction was proposed. The feature selection step used eight feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as the feature extraction method, reducing the dimension of the feature space to 12. Classification of the Chinese liquors was performed using a back propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than that of LDA and BP-ANN. Finally, classification of the Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.
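
    A minimal numpy sketch of kernel entropy component analysis in its commonly described form: kernel eigenpairs are ranked by their contribution to a Rényi quadratic entropy estimate rather than by eigenvalue size, and the data are projected onto the top-ranked axes. The Gaussian kernel width, the synthetic feature matrix and the component count are placeholders; this is not the authors' e-nose pipeline.

    import numpy as np

    def keca(X, n_components=2, sigma=1.0):
        """Kernel entropy component analysis with a Gaussian kernel (illustrative sketch)."""
        # Gaussian (RBF) kernel matrix.
        sq = np.sum(X**2, axis=1)
        K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))

        # Eigendecomposition of the (symmetric) kernel matrix.
        eigvals, eigvecs = np.linalg.eigh(K)

        # Entropy contribution of each eigenpair: lambda_i * (sum of e_i)^2,
        # proportional to its share of the Renyi quadratic entropy estimate.
        contrib = eigvals * (eigvecs.sum(axis=0) ** 2)
        top = np.argsort(contrib)[::-1][:n_components]

        # Project onto the selected axes: columns e_i scaled by sqrt(lambda_i).
        return eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0.0, None))

    # Hypothetical e-nose feature matrix: rows = liquor samples, columns = sensor features.
    rng = np.random.default_rng(42)
    X = rng.standard_normal((60, 41))
    Z = keca(X, n_components=12, sigma=3.0)
    print(Z.shape)   # (60, 12) reduced representation for a downstream classifier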
